2302.01119
Fractal aggregates of sub-micron-sized grains in the young planet-forming disk around IM Lup
Despite rapidly growing disk observations, it remains a mystery what primordial dust aggregates look like and what the physical and chemical properties of their constituent grains (monomers) are in young planet-forming disks. Confrontation of models with observations to answer this mystery has been a notorious task because we have to abandon a commonly used assumption, perfectly spherical grains, and take into account particles with complex morphology. In this Letter, we present the first thorough comparison between near-infrared scattered light of the young planet-forming disk around IM Lup and the light-scattering properties of complex-shaped dust particles. The availability of scattering observations at multiple wavelengths and over a significant range of scattering angles allows for the first determination of the monomer size, fractal dimension, and size of dust aggregates in a planet-forming disk. We show that the observations are best explained by fractal aggregates with a fractal dimension of 1.5 and a characteristic radius larger than $\sim2~\mu$m. We also determined the radius of the monomer to be $\sim200$ nm, and monomers much smaller than this size can be ruled out on the premise that the fractal dimension is less than 2. Furthermore, dust composition comprising amorphous carbon is found to be favorable to simultaneously account for the faint scattered light and the flared disk morphology. Our results support that planet formation begins with fractal coagulation of sub-micron-sized grains. All the optical properties of complex dust particles computed in this study are publicly available.
Ryo Tazaki, Christian Ginski, Carsten Dominik
2023-02-02T14:24:35Z
http://arxiv.org/abs/2302.01119v2
# Fractal aggregates of sub-micron-sized grains in the young planet-forming disk around IM Lup ###### Abstract Despite rapidly growing disk observations, it remains a mystery what primordial dust aggregates look like and what the physical and chemical properties of their constituent grains (monomers) are in young planet-forming disks. Confrontation of models with observations to answer this mystery has been a notorious task because we have to abandon a commonly used assumption, perfectly spherical grains, and take into account particles with complex morphology. In this Letter, we present the first thorough comparison between near-infrared scattered light of the young planet-forming disk around IM Lup and the light-scattering properties of complex-shaped dust particles. The availability of scattering observations at multiple wavelengths and over a significant range of scattering angles allows for the first determination of the monomer size, fractal dimension, and size of dust aggregates in a planet-forming disk. We show that the observations are best explained by fractal aggregates with a fractal dimension of 1.5 and a characteristic radius larger than \(\sim 2\)\(\mu\)m. We also determined the radius of the monomer to be \(\sim 200\) nm, and monomers much smaller than this size can be ruled out on the premise that the fractal dimension is less than 2. Furthermore, dust composition comprising amorphous carbon is found to be favorable to simultaneously account for the faint scattered light and the flared disk morphology. Our results support that planet formation begins with fractal coagulation of sub-micron-sized grains. All the optical properties of complex dust particles computed in this study are publicly available. Ryo Tazaki, Christian Ginski, Carsten Dominik ## 1 Introduction Collisional growth of dust aggregates by Brownian motion is the very first step in planet formation. Dust coagulation in this step results in the formation of an aggregate whose fractal dimension is less than 2 (Kempf et al., 1999; Blum et al., 2000; Krause and Blum, 2004; Paszun and Dominik, 2006), where the fractal dimension \(D_{\rm f}\) is defined by \(m\propto a_{\rm c}^{D_{\rm f}}\); \(m\) and \(a_{\rm c}\) being the mass and the characteristic radius of an aggregate, respectively. Fractal aggregates have been found in the comet 67P/Churyumov-Gerasimenko (Bentley et al., 2016; Mannel et al., 2016). However, despite the anticipation that they form in young planet-forming disks, there has been no observational evidence that convincingly shows their presence in such disks. Another key issue that needs observational clarification is the size of constituent grains of an aggregate, called monomers. Since the radius of the monomer affects the impact strength of aggregates (Dominik and Tielens, 1997; Wada et al., 2009), it is an important quantity directly affecting the collisional growth of aggregates. Tazaki and Dominik (2022) recently estimated the monomer radius based on the degree of polarization of optical/near-infrared (IR) scattered light of planet-forming disks and placed the upper limit at 0.4 \(\mu\)m. However, a firm determination of the monomer radius requires a detailed comparison of models with disk observations for each object, which is still an untapped task. IM Lup is a K5 Class II T Tauri star (Alcala et al., 2017) with the age of \(\sim\)1 Myr (Mawet et al., 2012; Avenhaus et al., 2018). 
The large planet-forming disk surrounding the star has been extensively studied at optical/near-IR and millimeter wavelengths (Pinte et al., 2008; Panic et al., 2009; Cleeves et al., 2016; Avenhaus et al., 2018; Andrews et al., 2018; Oberg et al., 2021). Based on optical/near-IR disk-scattered light, Pinte et al. (2008) suggested the presence of fluffy aggregates and/or ice-mantled grains. However, the detailed dust properties remain largely unknown. Our aim in this Letter is therefore to unveil the detailed dust properties in the surface region of the IM Lup disk through near-IR polarimetric observations (Figure 1). To this end, we perform radiative transfer simulations of the disk based on a detailed numerical study of the optical properties of complex-shaped dust particles (Figure 2). The state-of-the-art approach enables us for the first time to show that dust particles in the IM Lup disk surface have already grown beyond a few microns in size, and they have a fractal structure with monomers of a radius of \(\sim 200\) nm. Our results support that planet formation begins with fractal coagulation of sub-micron-sized grains. We also developed the AggScatVIR database1, where all the optical properties calculated in this study and Tazaki and Dominik (2022) are publicly available. The database will be useful for retrieving the detailed dust properties in other planet-forming disks. Footnote 1: AggScatVIR repository [https://github.com/trazaki1205/AggScatVIR](https://github.com/trazaki1205/AggScatVIR). v1.0.0: [https://doi.org/10.5281/zenodo.7547601](https://doi.org/10.5281/zenodo.7547601) ## 2 Disk-Scattered Light Around Im Lup Figure 1 shows the observed near-IR polarized scattered-light image of the IM Lup disk taken by the Very Large Telescope (VLT)/Spectro-Polarimetric High-contrast Exoplanet REsearch (SPHERE) (Avenhaus et al., 2018). The disk is visible in reflected light by dust particles in the disk surface. Each dust particle scatters the incoming unpolarized stellar light and produces linearly polarized light with an efficiency dependent on dust size, structure, and composition. Therefore, a detailed analysis of the polarized scattered light allows us to diagnose the properties of dust particles. We focus on two observational quantities. The first quantity is _the disk polarization flux and its color_. Avenhaus et al. (2018) measured the ratio of polarized disk flux to total flux (stellar and disk flux) to be \(0.53\%\pm 0.06\%\) and \(0.66\%\pm 0.05\%\) in the \(J\) and \(H\) bands, respectively. Their ratio gives \(J/H=0.81\pm 0.12\); thus, the disk polarization color is reddish (\(J/H<1\)). This result already suggests that dust particles are micron sized or even larger because the color would become bluish (\(J/H>1\)) otherwise due to Rayleigh scattering (Bohren and Huffman, 1983; Mulders et al., 2013). The second quantity is _the polarization phase function_, which describes the amount of polarized disk-scattered light at each scattering angle. This quantity is another key to narrowing down the properties of dust particles (Ginski et al., 2016; Stolker et al., 2016; Milli et al., 2019; Olofsson et al., 2022; Engler et al., 2022; Ginski et al., 2023). Ginski et al. (2023) extracted the polarization phase functions of the IM Lup disk based on the VLT/SPHERE images at the \(J\) and \(H\) bands taken by Avenhaus et al. (2018). The phase functions were extracted at three different disk radii centered at 90 au, 150 au, and 237 au, as shown in Figure 1 (see Ginski et al. 
(2023) and Appendix A for more detailed extraction procedures). The polarization phase functions for 90 au and 150 au are similar to each other, whereas the one for 237 au deviates from the other two at a scattering angle below 80\({}^{\circ}\). However, the one for 237 au has large error bars, and it is unclear if this deviation is real. Because of its larger errors, we do not focus on the phase function for 237 au. Also, given the similarity between the two, it is reasonable to assume that the 90 au and 150 au phase functions probe similar dust particles (see Appendix C.3 for more details). We will therefore focus on the polarization phase function at 90 au as representative of this disk. It is worth bearing in mind that the polarization phase function extracted from a disk image does not necessarily coincide with the scattering matrix element of each dust particle (e.g., \(-S_{12}\) in Bohren and Huffman (1983)). This is because the disk is optically thick, and the emergent intensity could be affected by multiple scattering and limb brightening (see Appendix C for details). To fully account for these effects, we first create a model image using a radiative transfer calculation, then extract the phase function from the model image using the same technique as we did for the observed image, and compare it to the observation.
Figure 1: Observed polarized scattered-light image for the IM Lup disk at the \(H\) band (scaled with the square of the distance from the central star) overlaid with the extraction regions of the polarization phase function (left), the computed scattering angles at the disk surface (middle), and the extracted polarization phase functions (right). The phase functions were extracted at deprojected radii centered at 90 au, 150 au, and 237 au with the 40 au width for each.
## 3 Models and Methods To model the two observational quantities of the IM Lup disk, we perform 3D Monte Carlo radiative transfer simulations by using RADMC-3D v2.0 (Dullemond et al., 2012). We constructed a disk model for IM Lup so as to reproduce the observed disk geometry reported in Avenhaus et al. (2018). The disk model and the data reduction, including the phase function extraction from each model image, are described in Appendix B.1. In the radiative transfer simulations, we consider a diverse set of dust particle morphology, as shown in Figure 2. In what follows, the term _particle_ is used to loosely refer to either _grain_ (a monolith-like solid) or _aggregate_ (a cluster of grains). We consider three types of dust particles, and the methods to generate them are described in Appendix B.2. The first type is solid irregular grains, which correspond to the case where no coagulation has been taking place in the disk surface. The second type is fractal aggregates; they are labeled by FA1.1, FA1.3, FA1.5, FA1.9 in Figure 2. We consider four different fractal dimensions ranging from 1.1 to 1.9, which is indicated by the number that follows FA. These fractal aggregates are expected to be formed by primordial dust coagulation, such as the one driven by Brownian motion (Kempf et al., 1999; Blum et al., 2000; Krause & Blum, 2004; Paszun & Dominik, 2006). The third type is compact aggregates2; they are labeled by CA-HP, CA-MP, CA-LP in Figure 2. The letters following CA represent the amount of porosity: high porosity (HP), moderate porosity (MP), and low porosity (LP). 
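To make the size bookkeeping for these aggregate models concrete, the following minimal Python sketch relates the number of monomers \(N\), the monomer radius \(a_{\rm m}\), and the characteristic radius \(a_{\rm c}\) through the fractal scaling implied by \(m\propto a_{\rm c}^{D_{\rm f}}\), and evaluates the porosity definition \(\mathcal{P}=1-N(a_{\rm m}/a_{\rm c})^{3}\) used in Table 1. The order-unity prefactor `k0` is an illustrative assumption (the values actually adopted in this study are the tabulated ones in Table 1), and the monomer radius of 200 nm is chosen only as an example.

```python
import numpy as np

def characteristic_radius(N, a_m, d_f, k0=1.0):
    """Characteristic radius a_c of an aggregate of N monomers of radius a_m,
    assuming the fractal scaling N = k0 * (a_c / a_m)**d_f with an order-unity
    prefactor k0 (an assumption; exact values are tabulated in Table 1)."""
    return a_m * (N / k0) ** (1.0 / d_f)

def porosity(N, a_m, a_c):
    """Porosity P = 1 - N * (a_m / a_c)**3, as defined in Table 1."""
    return 1.0 - N * (a_m / a_c) ** 3

a_m = 0.2  # monomer radius in microns (e.g., the amc200 model)
for label, d_f in [("FA1.1", 1.1), ("FA1.5", 1.5), ("FA1.9", 1.9)]:
    for N in (32, 256):
        a_c = characteristic_radius(N, a_m, d_f)
        print(f"{label}  N={N:4d}  a_c = {a_c:7.2f} um  porosity = {porosity(N, a_m, a_c):.4f}")
```

With these assumptions, FA1.5 with \(N=256\) gives \(a_{\rm c}\approx 8\ \mu\)m and a porosity above 99%, in line with the order of magnitude listed in Table 1.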
The detection of (sub)millimeter-wave scattering polarization for the IM Lup disk (Hull et al., 2018; Stephens et al., 2020) suggests that dust particles are likely compact aggregates at least in the midplane (Tazaki et al., 2019; Brunngraber and Wolf, 2021), although dust particles in the disk surface may not necessarily be. We assume monodisperse and spherical monomer grains with a radius of \(a_{\rm m}=100\), 200, and 400 nm for compact aggregates and \(a_{\rm m}=100\), 150, 200, 300, and 400 nm for fractal aggregates. The summary of the monomer and aggregate radii is given in Tables 1 and 2. Irregular grains and monomers are made of a mixture of water ice (Warren and Brandt, 2008), pyroxene silicate (Mg\({}_{0.7}\)Fe\({}_{0.3}\)SiO\({}_{3}\)) (Dorschner et al., 1995), carbonaceous material, and troilite (Henning and Stognienko, 1996) with the mass ratios proposed by Birnstiel et al. (2018). Since the actual form of carbonaceous material is highly uncertain, we consider two possibilities: organics (Henning and Stognienko, 1996) or amorphous carbon (Zubko et al., 1996). The refractive indices of the mixtures are calculated by using the Bruggeman mixing rule, and the results are summarized in Table 3. To represent the monomer model, we will use the following format: COMPXXX, where COMP and XXX specify the monomer composition and the monomer radius in units of nanometers, respectively. COMP has either org or amc, where the former and the latter correspond to the mixture containing organics and amorphous carbon, respectively. For example, amc200 represents a monomer grain of a radius of 200 nm and is made of a mixture containing amorphous carbon. \begin{table} \begin{tabular}{c c c c c c c c c c c c c c c} \hline \hline \(N\) & \(a_{\rm v}/a_{\rm m}\) & \multicolumn{8}{c}{Characteristic Radius \(a_{\rm c}/a_{\rm m}\)} & \multicolumn{8}{c}{Porosity \(\mathcal{P}=1-N(a_{\rm m}/a_{\rm c})^{3}\) (\%)} \\ \cline{3-14} & & FA1.1 & FA1.3 & FA1.5 & FA1.9 & CA-HP & CA-MP & CA-LP & FA1.1 & FA1.3 & FA1.5 & FA1.9 & CA-HP & CA-MP & CA-LP \\ \hline 8 & 2.00 & 5.12 & 4.50 & 3.92 & 3.59 & 3.25 & 2.57 & 2.23 & 94.03 & 91.21 & 86.71 & 82.74 & 76.73 & 52.91 & 27.78 \\ 16 & 2.52 & 9.83 & 7.87 & 6.42 & 5.54 & 4.45 & 3.42 & 2.97 & 98.31 & 96.72 & 93.96 & 90.57 & 81.82 & 60.12 & 38.68 \\ 32 & 3.17 & 18.6 & 13.5 & 10.3 & 7.68 & 6.00 & 4.68 & 3.92 & 99.50 & 98.71 & 97.09 & 92.94 & 85.17 & 68.87 & 46.73 \\ 64 & 4.00 & 34.9 & 23.1 & 16.5 & 10.5 & 7.79 & 6.17 & 5.21 & 99.85 & 99.48 & 98.56 & 94.55 & 86.47 & 72.78 & 54.86 \\ 128 & 5.04 & 65.6 & 39.5 & 26.2 & 15.7 & 9.87 & 7.95 & 6.75 & 99.95 & 99.79 & 99.29 & 96.68 & 86.68 & 74.57 & 58.37 \\ 256 & 6.35 & 123.2 & 67.3 & 41.6 & 21.8 & 12.4 & 10.2 & 8.72 & 99.99 & 99.92 & 99.64 & 97.53 & 86.55 & 75.93 & 61.41 \\ 512 & 8.00 & & 114.7 & 66.0 & 32.4 & 15.6 & 12.9 & 11.2 & 99.97 & 99.82 & 98.49 & 86.41 & 76.23 & 63.74 \\ 1024 & 10.1 & & & 104.8 & 49.5 & 19.6 & 16.5 & 14.3 & & & 99.91 & 99.16 & 86.43 & 77.12 & 65.11 \\ 2048 & 12.7 & & & & 67.2 & 24.8 & 20.9 & 18.3 & & & 99.33 & 86.58 & 77.63 & 66.51 \\ 4096 & 16.0 & & & & 100.4 & 31.4 & 26.4 & 23.4 & & & 99.60 & 86.82 & 77.81 & 68.05 \\ \hline \end{tabular} \end{table} Table 1: Volume-equivalent and characteristic radii (normalized to monomer radius) and porosities of dust aggregates \begin{table} \begin{tabular}{c c c c c} \hline \hline Composition Model & \multicolumn{2}{c}{org} & \multicolumn{2}{c}{amc} \\ \cline{2-5} \(\lambda\) (\(\mu\)m) & \(n\) & \(k\) & \(n\) & \(k\) \\ \hline 3.78 & 1.53 & 0.0219 & 2.13 & 0.393 \\ 
2.18 & 1.47 & 0.0134 & 1.98 & 0.385 \\ 1.63 & 1.48 & 0.0138 & 1.92 & 0.404 \\ 1.25 & 1.49 & 0.0104 & 1.86 & 0.420 \\ 1.04 & 1.49 & 0.0108 & 1.81 & 0.434 \\ 0.735 & 1.50 & 0.0119 & 1.70 & 0.468 \\ 0.554 & 1.51 & 0.0138 & 1.59 & 0.472 \\ \hline \end{tabular} Note. – The material densities of the mixtures are 1.6487 g cm\({}^{-3}\) (\(\rm{\tt erg}\)) and 1.7779 g cm\({}^{-3}\) (\(\rm{\tt amc}\)). \end{table} Table 3: Real (\(n\)) and imaginary (\(k\)) parts of the refractive index of the two mixtures. We compute the optical properties of dust particles by using the \(T\)-Matrix method (Mackowski & Mishchenko, 1996, and references therein) for aggregates and discrete dipole approximation (DDA) for irregular grains (Purcell & Pennypacker, 1973; Draine & Goodman, 1993). Both techniques are known as exact numerical techniques to calculate the optical properties of nonspherical particles. For the \(T\)-Matrix calculations, we use MSTM-v3.0 with analytical orientation averaging and four-realization averaging for each model (Mackowski & Mishchenko, 2011). For the DDA calculations, we use ADDA (Yurkin & Hoekstra, 2011) with averaging of 58 orientations and 10 realizations. The obtained optical properties are then averaged by considering a particle-size distribution obeying \[n(a_{\rm V})da_{\rm V}\propto a_{\rm V}^{-3.5}da_{\rm V}\ (a_{\rm V}^{\rm min} \leq a_{\rm V}\leq a_{\rm V}^{\rm max}), \tag{1}\] where \(n(a_{\rm V})da_{\rm V}\) is the number density of particles within a radius range [\(a_{\rm V}\), \(a_{\rm V}+da_{\rm V}\)], \(a_{\rm V}=(3V/4\pi)^{1/3}\) is the volume-equivalent radius; \(V\) being its material volume, and \(a_{\rm V}^{\rm min}\) and \(a_{\rm V}^{\rm max}\) are the minimum and maximum volume-equivalent radii, respectively. The discrete sampling of \(a_{\rm V}\) is shown in Table 1. The minimum size of the distribution is set as \(a_{\rm V}^{\rm min}=2a_{\rm m}\) for aggregates and \(a_{\rm V}^{\rm min}=0.2\ \mu\)m for irregular grains. The maximum particle radius \(a_{\rm V}^{\rm max}\) is a parameter of this study. Since the volume-equivalent radius is not a good indicator of the apparent size of an aggregate, we also use the characteristic radius \(a_{\rm c}\)(Mukai et al., 1992), and let \(N_{\rm max}\) and \(a_{\rm c}^{\rm max}\) denote the number of monomers and the characteristic radius of the maximum aggregate in the size distribution, respectively. We investigated 360 sets of dust particle models: 20 irregular grain models (10 maximum grain radii; 2 compositions), 126 compact aggregate models (3 porosity models; 2 compositions; 3 monomer radii; on average 7 maximum aggregate radii), and 214 fractal aggregate models (4 fractal dimensions, 2 compositions, 5 monomer radii; on average 5.35 maximum aggregate radii). ## 4 Results ### Disk polarization flux and color Figure 3 compares the disk-scattered light properties for fractal aggregates having a similar structure and radius, but different monomer models (FA1.9 with \(a_{\rm c}^{\rm max}=6.5\pm 0.22\ \mu\)m; see top panels in Figure 3). We found that the properties of monomers strongly affect the polarized scattered-light flux, from which we can constrain the monomer properties. The composition of monomers has a significant impact on the absolute level of disk flux. The org models produce higher polarized fluxes than the amc models because the former composition has a higher scattering albedo at optical/near-IR wavelengths. If we reduce the disk mass, the disk flux could be reduced as well, as shown in Figure 4. 
However, such a reduction results in lowering the scattering surface, and these models fail to explain the aspect ratio of the disk regardless of aggregate size and fractal dimension. Moreover, we arrived at the same conclusion even considering compact aggregates. All compact aggregate models with the org composition fail to reconcile the disk flaring and polarized flux. Therefore, given the relatively faint scattered light and its flared disk geometry of the IM Lup disk, a highly absorbing monomer composition (e.g., the amc composition) is favorable. The observed reddish disk color favors a not-too-small monomer size. Figure 3 (bottom left) shows that models with \(a_{\rm m}=100\) nm produce bluish dependencies (\(J/H>1\)) regardless of composition and are inconsistent with the observation. On the premise that \(D_{\rm f}<2\), which we think is likely (see Sections 4.2), the presence of even smaller monomers can be ruled out by the following argument. A _total-intensity_ disk color turned out to be nearly gray for all models shown in Figure 3. This tendency remains unchanged no matter how large the aggregate is unless we consider compact aggregates (Tazaki et al., 2019). The observed reddish-polarization color thereby needs to be explained by a decrease in the degree of polarization for shorter wavelengths. The degree of polarization of an aggregate is predominantly determined by the properties of each monomer (Tazaki et al., 2016; Tazaki & Dominik, 2022). Therefore, the degree of polarization of _each monomer_ has to decrease for shorter wavelengths. Since such a property is never achieved by Rayleigh-scattering monomers such that \(x_{\rm m}=2\pi a_{\rm m}/\lambda\ll 1\), we can rule out \(a_{\rm m}\ll\lambda/2\pi\sim 200\) nm, where we substituted \(\lambda=1.25\ \mu\)m (the \(J\) band). Contrary to the case of polarized flux, the impact of the monomer model on the polarization phase function is minor (Figure 3 bottom right). Nevertheless, we can exclude some of the monomer models from it. For example, the amc400 model gives rise to a too-steep phase function. Also, the curves for the aggregates with the org composition show a turnaround at a scattering angle of \(\sim 40^{\circ}\), while it is absent for the cases of amc. This turnaround is caused by multiple scattering in the optically thick disk surface (see Figure 9 in Appendix C). The absence of the turnaround in the observed data also favors the amc composition. Figure 5 shows the polarization phase functions of various particle models, from which we can constrain the aggregation models. In the plots, we excluded models that have an inconsistent disk color, which basically resulted in rejecting large fractal aggregates with small monomers and small particles obeying Rayleigh scattering. For fractal and compact aggregates, we also excluded the org composition and amc400, as these parameters are already shown to be inconsistent with the observation (Section 4.1). As a result, the number of models plotted is 9, 27, and 36 for irregular grains, compact aggregates, and fractal aggregates, respectively. For each model, we assess the quality of fit by calculating a reduced \(\chi^{2}\) including both data points of the \(J\)- and \(H\)-band phase functions. The best-fit model among irregular grains is \(a_{\mathrm{V}}^{\mathrm{max}}=0.504~{}\mu\)m with amc, which yields a reduced \(\chi^{2}=6.8\). 
This model shows a rapid drop in polarized intensity at a scattering angle below \(\sim 50^{\circ}\) at the \(H\) band and is inconsistent with the observation. The drop stems from two competing functions overlapping in the polarization phase function: one is the total-intensity phase function, which peaks at smaller scattering angles, and the other one is the degree of polarization, which decreases at these angles. For all irregular grain models, the latter function is dominant.
Figure 3: Comparison of polarized scattered light of the IM Lup disk and simulated one for FA1.9 with \(a_{\rm c}^{\rm max}=6.5\pm 0.22\ \mu\)m but consisting of different monomer radii and compositions. The top images show the largest aggregate in the size distribution of each model. Adopted aggregate parameters are (\(N_{\rm max}\), \(a_{\rm m}\))=(2048, 100 nm) (dotted-dashed), (512, 200 nm) (solid), (256, 300 nm) (dashed), (128, 400 nm) (dotted). The bottom left and right panels show the polarized flux and the polarization phase function (normalized to \(90^{\circ}\)) at the \(H\) band, respectively. Blue and yellow colors represent the amc and org composition models, respectively.
Polarized forward scattering by compact aggregates is much stronger than that of irregular grains. This is because the aggregates exhibit stronger forward scattering in total intensity, as they have larger sizes than irregular grains with the same mass. We also found that there is a large variation in polarization phase functions (see gray lines), and most of them overestimate the observed polarized forward scattering amplitude. The best-fit model among compact aggregates is CA-HP with \(N_{\rm max}=512\) and amc200 (\(a_{\rm V}^{\rm max}=1.6\)\(\mu\)m and \(a_{\rm c}^{\rm max}=3.1\)\(\mu\)m), which yields a reduced \(\chi^{2}=5.0\). However, this model underestimates the polarized flux. Although it could be enhanced by increasing the disk mass and thereby raising the scattering surface, the geometrically thicker disk would make the phase function even steeper (see Appendix C). Finally, we found that fractal models outperform the other two types of particle models. The best-fit model is FA1.5 with \(N_{\rm max}=32,64,128,256\) and amc200. All of them yield a reduced \(\chi^{2}=3.6\) and are therefore the best fits among all particle models. These aggregates have sizes of \(a_{\rm V}^{\rm max}=0.63\)-1.3 \(\mu\)m and \(a_{\rm c}^{\rm max}=2.1\)-8.3 \(\mu\)m. The upper bound of the aggregate size is ill-constrained as it corresponds to the larger end of the parameter range we studied. We think \(a_{\rm c}^{\rm max}>8.3\)\(\mu\)m is also a possible solution because the results are insensitive to the aggregate radius. These models can successfully reproduce the observed disk-polarized flux, its color, disk aspect ratio, and polarization phase functions at the two bands simultaneously (see also the dashed line in Figure 4). In what follows, we select the largest one (FA1.5 with \(N_{\rm max}=256\) and amc200) as representative of the best fractal models. The polarization phase functions of the fractal models tend to be shallower than those of the compact aggregates, despite the fact that the fractal aggregates are much larger than the compact ones. This is due to a different number density of monomers. In a compact aggregate, the monomers are packed quite closely, and hence, scattered waves from the monomers are efficiently amplified by constructive interference. 
In a fractal aggregate, each monomer is relatively isolated so that fewer monomer pairs will be involved with constructive interference. As a result, forward scattering is less prominent in fractal models. We also found, more importantly, that once fractal aggregates become micron sized, the quality of fit is insensitive to the aggregate radius and weakly dependent on the fractal dimension. As long as the monomer model is amc200, there are a number of models that fit the observations almost equally well: FA1.1 with \(a_{\rm c}^{\rm max}\geq 3.7\)\(\mu\)m (\(\chi^{2}=3.9-4.1\)), FA1.3 with \(a_{\rm c}^{\rm max}\geq 2.7\)\(\mu\)m (\(\chi^{2}=4.1-4.5\)), FA1.5 with \(a_{\rm c}^{\rm max}\geq 2.1\)\(\mu\)m (\(\chi^{2}=3.6\)), and FA1.9 with \(a_{\rm c}^{\rm max}\geq 1.1\)\(\mu\)m (\(\chi^{2}=4.0-5.0\)). This explains why there is less variation in gray lines compared to compact aggregate models. This property is a direct consequence of the suppression of multiple scattering, that is, monomer-monomer electromagnetic interaction, due to its highly fluffy structure (Tazaki et al., 2016). Without multiple scattering, two scattered waves emanating from a monomer pair that is separated by much more than the wavelength are independent (incoherent) of each other. Scattered waves from monomers in a close neighborhood can still provide coherent scattering, and it is this component that dominantly contributes to scattered-light intensity. As a result, except for a scattering angle very close to \(0^{\circ}\), the properties of scattered light will only be determined by what a local structure within an aggregate looks like and not by how large the aggregate is (Berry & Percival, 1986; Tazaki et al., 2019). There are some outliers in the \(J\)-band phase functions of fractal models. They correspond to aggregates with amc300. Thus, the monomer radius of 300 nm is unfavorable as well.
Figure 4: Effect of dust disk mass on the polarized flux and the aspect ratio of the disk. The aspect ratio of each dust disk model was measured at a disk height where the optical depth measured from the star becomes unity. The blue, orange, and green lines represent the results for all fractal models with org300 (18 models in total, see Table 2) with the dust disk mass \(M_{\rm dust}=2\times 10^{-5}M_{\odot}\) (fiducial), \(6\times 10^{-6}M_{\odot}\) and \(2\times 10^{-6}M_{\odot}\), respectively. The dust disk mass does not include the contribution of pebble-sized grains that settle into the midplane. The black dashed line represents the results for the fiducial disk mass with FA1.5 with \(N_{\rm max}=256\) with the monomer model of amc200 (see Section 4.2). The observed faint scattered light and large aspect ratio of the disk surface point toward a highly absorbing composition of each monomer.
Figure 5: Best dust models for the polarization phase function of the IM Lup disk. The top, middle, and bottom panels represent the results for irregular grains, compact aggregates, and fractal aggregates, respectively. From left to right panels, the disk-polarized flux and the polarization phase functions (normalized to a scattering angle of \(90^{\circ}\)) at the \(J\) and \(H\) bands are shown. 
The thick lines represent the best fit among each particle type: the best irregular grain model (\(a_{\rm v}^{\rm max}=0.504\)\(\mu\)m with amc: a reduced \(\chi^{2}=6.8\)), the best compact aggregate model (CA-HP with \(N_{\rm max}=512\) and amc200: a reduced \(\chi^{2}=5.0\)), the best fractal model (FA1.5 with \(N_{\rm max}=256\) and amc200: a reduced \(\chi^{2}=3.6\)). Thin gray lines in each panel are models that are consistent with the disk polarization color (\(J/H=0.81\pm 0.12\)). Although the difference in the reduced \(\chi^{2}\) values between the best compact and fractal aggregate models is not significant, the fractal model is favorable because of its robust fitting behavior. Since the phase functions of compact aggregate models are sensitive to aggregate parameters, any fit requires significant fine-tuning. In contrast, those of fractal aggregates are very similar to each other, and most models that have a consistent disk color come close to the observed phase functions. Such properties allow us to model the observed scattered light without performing fine-tuning. ## 5 Discussion ### Are we observing primordial dust coagulation? Our best-fit model suggests the presence of fractal aggregates with \(D_{\rm f}=1.5\). These low-\(D_{\rm f}\) aggregates naturally form via dust coagulation driven by Brownian motion (Kempf et al., 1999; Blum et al., 2000; Krause and Blum, 2004; Paszun and Dominik, 2006), which is supposed to occur at the earliest step in planet formation. For example, Paszun and Dominik (2006) obtained \(D_{\rm f}=1.46\) for the Brownian coagulation in the ballistic collision limit. Here we discuss to what extent such aggregates can grow and survive in the disk surface after the disk age of \(\sim 1\) Myr. Growth of aggregates by Brownian motion as a function of time \(t\) can be approximated by (Blum, 2004; Krause and Blum, 2004) \[N(t)=\left[(1-\gamma)\left(a\frac{t}{\tau_{0}}+c\right)\right]^{1/(1-\gamma)}, \tag{2}\] where \(\tau_{0}=(n_{0}\sigma_{0}v_{0})^{-1}\); \(n_{0}\) is the number density of the monomers, \(\sigma_{0}=4\pi a_{\rm m}^{2}\) is the collision cross section, and \(v_{0}=4\sqrt{kT/\pi m_{\rm m}}\) is the relative velocity between them, \(m_{\rm m}\) is the mass of each monomer, \(k\) is the Boltzmann constant, and \(T\) is the gas temperature. We adopt \(a=1.28\), \(c=2.05\), \(1/(1-\gamma)=1.71\), as suggested by microgravity experiments (Krause and Blum, 2004). To estimate the size of aggregates using Equation (2), we assume the amc200 model, which is the best monomer model found in Section 4.1. The number density of monomers \(n_{0}\) is estimated by using the gas density model of the IM Lup disk in Zhang et al. (2021) with a dust-to-gas ratio of 0.01 (after scaling the new GAIA distance). We also assume a temperature of 25 K. By substituting these values into Equation (2), we found \(N(t=1\ {\rm Myr})\sim 10^{3}\) and \(6\times 10\) at the 90-au and 150-au disk surface (\(z\sim 2H_{\rm g}\)). These sizes are larger than the smallest aggregate we need to explain the scattered light (\(N_{\rm max}\simeq 32\)). Therefore, coagulation driven by Brownian motion is a viable mechanism for forming the inferred fractal aggregates. Suppose the vertical mixing of dust particles is active. In that case, particles in the disk surface region could be a mixture of those formed in situ and those stirred up from the midplane; the latter may have a nonfractal structure. 
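As a rough numerical cross-check of the Brownian growth estimate above, the short sketch below evaluates Equation (2) for the amc200 monomer model (radius 200 nm, material density from Table 3). The monomer number density `n0` and the temperature are illustrative assumptions; in the paper, \(n_{0}\) follows from the gas density model of Zhang et al. (2021) with a dust-to-gas ratio of 0.01, which is not reproduced here, so the printed value only indicates the order of magnitude.

```python
import numpy as np

k_B = 1.380649e-16          # Boltzmann constant (erg / K)
yr  = 3.156e7               # seconds per year

# amc200 monomer: radius 200 nm, material density 1.7779 g cm^-3 (Table 3 note)
a_m   = 200e-7              # cm
rho_m = 1.7779              # g cm^-3
m_m   = 4.0 / 3.0 * np.pi * a_m**3 * rho_m

# Assumed local conditions (illustrative; not the Zhang et al. 2021 model)
T  = 25.0                   # K
n0 = 1.0e-3                 # monomer number density (cm^-3)

sigma0 = 4.0 * np.pi * a_m**2                     # collision cross section
v0     = 4.0 * np.sqrt(k_B * T / (np.pi * m_m))   # relative Brownian velocity
tau0   = 1.0 / (n0 * sigma0 * v0)

a, c, expo = 1.28, 2.05, 1.71                     # Krause & Blum (2004); expo = 1/(1-gamma)

def N_brownian(t_sec):
    """Number of monomers per aggregate after time t (Equation (2))."""
    return ((1.0 / expo) * (a * t_sec / tau0 + c)) ** expo

print(f"tau0 ~ {tau0 / yr:.2e} yr;  N(1 Myr) ~ {N_brownian(1.0e6 * yr):.1e}")
```

With these assumed inputs, the sketch gives \(N\) of order \(10^{3}\) after 1 Myr, comparable to the estimate quoted above for the 90 au disk surface.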
To estimate whether this is the case or not, we calculate the vertical stirring timescale (Dullemond and Dominik, 2004) \[t_{\rm stir}=\frac{1}{\alpha\Omega}\frac{z^{2}}{H_{\rm g}^{2}}, \tag{3}\] where \(z\) is the disk height, \(H_{\rm g}\) is the gas pressure scale height, \(\Omega\) is the Kepler angular frequency, and \(\alpha\) is a nondimensional parameter for the vertical diffusion coefficient. With the gas scale height model of Zhang et al. (2021), the scattering surface derived in Avenhaus et al. (2018) corresponds to \(z\sim 2H_{\rm g}\). Since a typical disk model has a scattering surface at \(3\)-\(4H_{\rm g}\), the dust disk around IM Lup is relatively flat, as also argued in Rich et al. (2021). At \(r=150\) au, we have \(\Omega^{-1}\approx 300\) yr. In order to have mixing during the age of the disk, we need \(t_{\rm stir}<t_{\rm age}\), leading to \(\alpha>(z/H_{\rm g})^{2}/(t_{\rm age}\Omega)\approx 10^{-3}(t_{\rm age}/1\ {\rm Myr })^{-1}\). However, there has been no evidence suggesting such a level of turbulence by molecular line observations toward the IM Lup disk. There are some indirect measurements of the turbulent level in the IM Lup disk. Powell et al. (2022) suggested \(\alpha=1.5\times 10^{-2}\) to explain the observed CO depletion within a timescale of \(t_{\rm age}\approx 1\) Myr. However, a fast CO depletion within \(\sim 1\) Myr is feasible without invoking strong turbulence if the cosmic-ray ionization rate is as high as \(10^{-16}\) with the help of other sequestration mechanisms (Krijt et al., 2020). This rate is still a reasonable value at the outer surface region (Indriolo et al., 2015; Fujii and Kimura, 2022). Franceschi et al. (2022) estimated the \(\alpha\) parameter to be \(3\times 10^{-3}\) to explain the near-IR disk thickness. Since they assumed compact spherical grains, the inferred \(\alpha\) value could even be reduced by a factor of a few for fractal aggregates. Therefore, these indirect measurements do not necessarily require a turbulent level much larger than \(10^{-3}\). In a weakly turbulent disk (\(\alpha<10^{-3}\)), vertical settling must be slow enough to explain the observed disk thickness in IR scattered light. We then calculate the settling timescale defined by (Dullemond and Dominik, 2004) \[t_{\rm sett}=\frac{4}{3\sqrt{2\pi}}\frac{A}{m}\frac{\Sigma_{\rm g}}{\Omega} \exp\left[-\frac{z^{2}}{2H_{\rm g}^{2}}\right], \tag{4}\] where \(A/m\) is the area-to-mass ratio of a fractal aggregate, and \(\Sigma_{\rm g}\) is the gas surface density. At around the second ring (\(r=150\) au), the gas surface density is \(\Sigma_{\rm g}\approx 4\ {\rm g\ cm^{-2}}\)(Zhang et al., 2021). A frac tal aggregate of FA1.5 has an area-to-mass ratio of \(A/m\simeq 0.7(A/m)_{\rm mono}\) at \(N=256\), where \((A/m)_{\rm mono}\) is the area-to-mass ratio of each monomer grain (Tazaki, 2021). Thus, we have \[\frac{4}{3}\frac{A}{m}\Sigma_{\rm g}\approx 8\times 10^{4}\left(\frac{\rho_{\rm m }}{1.78\ {\rm g\ cm^{-3}}}\right)^{-1}\left(\frac{a_{\rm m}}{200\ {\rm nm}}\right)^{-1}. \tag{5}\] By substituting Equation (5) into (4), we obtain \(t_{\rm sett}\approx 1\ {\rm Myr}\) at \(z=2H_{\rm g}\). The settling timescale at the scattering surface is thereby comparable to the age of the disk. The above estimate is also consistent with the findings of Verrios et al. (2022). 
Based on their hydrodynamic calculations without turbulence, the authors found that the model can reproduce the scattered-light morphology when the area-to-mass ratio of the smallest grains is \(A/m=2.5\times 10^{4}\ {\rm cm^{2}\ g^{-1}}\), but not when \(A/m=2.5\times 10^{3}\ {\rm cm^{2}\ g^{-1}}\) because settling occurs rapidly. Our best fractal aggregates have an area-to-mass ratio of \(A/m\simeq 1.5\times 10^{4}\ {\rm cm^{2}\ g^{-1}}\), which agrees with their estimate within by a factor of two. In summary, Brownian motion can form fractal aggregates with \(D_{\rm f}=1.5\) large enough to explain the disk-scattered light. In the outer surface region of the disk, contamination of dust particles stirred up from the midplane would be limited unless the turbulent strength is \(\alpha>10^{-3}\). Meanwhile, supported by the tight dynamical coupling of the fractal aggregates to gas, they settle down rather slowly to the disk midplane, which reasonably explains the observed disk thickness. ### The monomer radius for other planet-forming disks Tazaki & Dominik (2022) argued that the degree of polarization is another key quantity for diagnosing the properties of the monomers. Figure 6 summarizes the maximum degree of polarization extracted from our radiative transfer simulations. Each colored region represents a range of the maximum degree of polarization that fractal or compact aggregates with a specific monomer radius can produce. In other words, it corresponds to the uncertainty range for various aggregate radii, composition, and porosity (fractal dimension). As a general trend, a larger monomer radius leads to a lower degree of polarization. A wider colored region in compact models is due to the degree of polarization being dependent on not only the monomer properties but also the aggregate parameters (porosity, aggregate size), whereas, in fractal models, it is predominantly determined by the monomer properties. The composition of monomers is another important factor for the degree of polarization. There is a tendency for an aggregate with amc to exhibit a higher degree of polarization than org because a higher scattering albedo of the latter composition facilitates multiple scattering at the disk atmosphere, which in turn reduces the degree of polarization (e.g., Ma & Schmid, 2022). The upper and lower bounds of the colored region, therefore, tend to be set by aggregates with the amc and org models, respectively. The amount of amorphous carbon in each monomer also has a strong impact on optical polarization when \(a_{\rm m}=300\) and 400 nm, leading to a wider colored range for those models at optical wavelengths (Tazaki & Dominik, 2022). The great advantage of using the maximum polarization as a diagnostic is that it is less sensitive to the disk structure and the inclination angle so that we can use it to infer the monomer properties in other disks. In Figure 6, we also plotted the maximum polarization fractions of several planet-forming disks, noting that the polarization fraction for the IM Lup disk has not yet been measured. It turns out that the observed maximum polarization for the disks lies in the range of \(a_{\rm m}=100\)-400 nm. Fractal models seem to favor a monomer radius of 200-400 nm, whereas compact models favor 100-200 nm. For HL Tau (nebula region), HD 163296, and HD 142527, the monomer radius of 100 nm appears to be unfavorable as none of the aggregate models can reproduce their relatively low degree of polarization. 
For HD 142527, the observed maximum polarization fraction is relatively low, and its wavelength dependence is shallow (Hunziker et al., 2021). Such a tendency seems to be better explained by compact aggregates, supporting an earlier implication that aggregates are likely compact in HD 142527 (Tazaki et al., 2021; Tazaki & Dominik, 2022). To summarize, the monomer radius of other disks seems not to be too different from 200 nm, i.e., the same to within a factor of \(\sim 2\). We speculate that the monomer radius derived for the IM Lup disk, 200 nm, might be common among planet-forming disks at least for the outer surface regions. ## 6 Conclusions We have shown that the near-IR polarized scattered-light observations of the IM Lup disk can be explained by fractal aggregates with a characteristic radius larger than \(\sim 2\ \mu\)m and a fractal dimension of 1.5. The monomer radius of \(\sim 200\) nm is favorable to explain the observed polarized scattered-light flux, and a monomer radius much smaller than 200 nm can be ruled out as long as the aggregates have a fractal dimension less than 2. Also, the faint polarized scattered light and the flared disk geometry suggest that each monomer is made of a highly absorbing composition (the amc model). These low-\(D_{\rm f}\) fractal aggregates of sub-micron-sized monomers are shown to form by Brownian motion on a reasonable timescale in the IM Lup disk surface without suffering rapid settling. Our results support the idea that planet formation begins with fractal coagulation of sub-micron-sized grains via Brownian motion, as anticipated from laboratory experiments and numerical studies (Kempf et al., 1999; Blum et al., 2000; Krause and Blum, 2004; Paszun and Dominik, 2006). For the IM Lup disk, millimeter-wave scattering polarization has been detected (Hull et al., 2018; Stephens et al., 2020), which indicates the presence of less porous (sub)millimeter-sized aggregates (Tazaki et al., 2019). This result does not conflict with our conclusion because near-IR and millimeter wavelength observations probe different regions in the disk, and thereby different evolutionary stages of dust coagulation, at least if perfect vertical mixing is not present. Combining these results, we speculate that dust coagulation initially proceeds in a fractal manner, but before reaching (sub)millimeter size, aggregates are compacted. In this way, multiwavelength disk scattered-light observations will shed light on the porosity evolution of dust aggregates in disks. ###### Acknowledgements. The authors thank the anonymous referee for his/her helpful comments. R.T. acknowledges the JSPS overseas research fellowship. We thank Daniel Mackowski, Maxim A. Yurkin, and Cornelis Dullemond for making the MSTM, ADDA, RADMC-3D codes publicly available, respectively. We would also like to thank Bruce Draine for the availability of particle data of BA, BAM1, and BAM2, Yasuhiko Okada for providing a generation code for BCCA, and Tomas Stolker for providing the diskmap tool. R.T. also thanks Julien Milli and Daniel Price for useful discussions. This work has made use of data from the European Space Agency (ESA) mission _Gaia_ ([https://www.cosmos.esa.int/gaia](https://www.cosmos.esa.int/gaia)), processed by the _Gaia_ Data Processing and Analysis Consortium (DPAC, [https://www.cosmos.esa.int/web/gaia/dpac/consortium](https://www.cosmos.esa.int/web/gaia/dpac/consortium)). 
Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the _Gaia_ Multilateral Agreement. _Software:_ numpy (Harris et al., 2020), matplotlib (Hunter, 2007), RADMC-3D v2.0 (Dullemond et al., 2012), MSTM v3.0 (Mackowski & Mishchenko, 2011), ADDA (Yurkin & Hoekstra, 2011), diskmap (Stolker et al., 2016), aggregate_gen (Moteki, 2019).
Figure 6: Maximum degree of polarization of disk-scattered light for fractal aggregates (left) and compact aggregates (right). Each color represents the maximum polarization of aggregates with a specific monomer radius. For the left panel, the colored region shows the values of the maximum polarization for various aggregate radii, four fractal dimensions (FA1.1, FA1.3, FA1.5, FA1.9), and two monomer compositions (org and amc). For the right panel, the colored region shows the values for various aggregate radii, three porosity models (CA-HP, CA-MP, CA-LP), and two monomer compositions (org and amc). For comparison, we overlaid the observed maximum polarization of various planet-forming disks. References: HD 142527 (Hunziker et al., 2021), HD 169142 (Tschudi and Schmid, 2021), HD 34700 A (Monnier et al., 2019), GG Tau (Silber et al., 2000), AB Aur (Perrin et al., 2009), UX Tau A (Tanii et al., 2012), TW Hya (Poteet et al., 2018), and HL Tau nebula region (Murakawa et al., 2008).
2308.14229
Replica Symmetry Broken States of some Glass Models
We have studied in detail the $M$-$p$ balanced spin glass model, especially the case $p=4$. These types of model have relevance to structural glasses. The models possess two kinds of broken replica states; those with one-step replica symmetry breaking (1RSB) and those with full replica symmetry breaking (FRSB). To determine which arises requires studying the Landau expansion to quintic order. There are 9 quintic order coefficients, and 5 quartic order coefficients, whose values we determine for this model. We show that it is only for $2 \leq M < 2.4714 \cdots$ that the transition at mean-field level is to a state with FRSB, while for larger $M$ values there is either a continuous transition to a state with 1RSB (when $ M \leq 3$) or a discontinuous transition for $M > 3$. The Gardner transition from a 1RSB state at low temperatures to a state with FRSB also requires the Landau expansion to be taken to quintic order. Our result for the form of FRSB in the Gardner phase is similar to that found when $2 \leq M < 2.4714\cdots$, but differs from that given in the early paper of Gross et al. [Phys. Rev. Lett. 55, 304 (1985)]. Finally we discuss the effects of fluctuations on our mean-field solutions using the scheme of H\"{o}ller and Read [Phys. Rev. E 101, 042114 (2020)}] and argue that such fluctuations will remove both the continuous 1RSB transition and discontinuous 1RSB transitions when $8 >d \geq 6$ leaving just the FRSB continuous transition. We suggest values for $M$ and $p$ which might be used in simulations to confirm whether fluctuation corrections do indeed remove the 1RSB transitions.
J. Yeo, M. A. Moore
2023-08-27T23:24:02Z
http://arxiv.org/abs/2308.14229v2
# Replica Symmetry Broken States of some Glass Models ###### Abstract We have studied in detail the \(M\)-\(p\) balanced spin glass model, especially the case \(p=4\). These types of model have relevance to structural glasses. The models possess two kinds of broken replica states; those with one-step replica symmetry breaking (1RSB) and those with full replica symmetry breaking (FRSB). To determine which arises requires studying the Landau expansion to quintic order. There are 9 quintic order coefficients, and 5 quartic order coefficients, whose values we determine for this model. We show that it is only for \(2\leq M<2.4714\cdots\) that the transition at mean-field level is to a state with FRSB, while for larger \(M\) values there is either a continuous transition to a state with 1RSB (when \(M\leq 3\)) or a discontinuous transition for \(M>3\). The Gardner transition from a 1RSB state at low temperatures to a state with FRSB also requires the Landau expansion to be taken to quintic order. Our result for the form of FRSB in the Gardner phase is similar to that found when \(2\leq M<2.4714\cdots\), but differs from that given in the early paper of Gross et al. [Phys. Rev. Lett. **55**, 304 (1985)]. Finally we discuss the effects of fluctuations on our mean-field solutions using the scheme of Holler and Read [Phys. Rev. E **101**, 042114 (2020)] and argue that such fluctuations will remove both the continuous 1RSB transition and discontinuous 1RSB transitions when \(8>d\geq 6\) leaving just the FRSB continuous transition. We suggest values for \(M\) and \(p\) which might be used in simulations to confirm whether fluctuation corrections do indeed remove the 1RSB transitions. ## I Introduction Spin models of the \(p\)-spin or Potts glass variety [1; 2] played an important role in the development of one of the current theories of structural glasses, the Random First Order Transition (RFOT) picture [3; 4; 5; 6; 7]. These models have been primarily studied in the infinite dimensionality limit, which is equivalent to mean-field theory. Of course what is really wanted is an understanding of what happens in the physical realm of two and three dimensions, and for these dimensions simulations [8; 9] of models of the type studied in this paper have revealed that they behave completely differently from what is predicted by the mean-field calculations. In particular in the simulations there is no sign of the random first-order transition which is one of the central features of RFOT theory. Below the ideal glass transition there is supposed to exist the ideal glass state, a state of low configurational entropy but with a high stability due to the assumed paucity of glass states. This state in replica language has one-step replica symmetry breaking (1RSB). The transition temperature to this state is identified as the Kauzmann temperature in RFOT theory, which is the temperature at which the entropy of the glass state becomes equal to that of the crystalline state [10]. While a discontinuous transition was not seen in the simulations, evidence was found for the existence of long correlation lengths, which is also the behavior found in real-space renormalization group (RG) calculations [11; 12] of \(p\)-spin models in three dimensions. That simulations in three dimensions lead to a picture quite different to that which arises from mean-field calculations has largely been ignored: Work has continued apace using the large \(d\) limit and mean-field techniques. 
We have therefore begun a program of trying to understand why the mean-field picture does not extend to three dimensions [13]. For one particular \(p\)-spin model, the \(M\)-\(p\) spin glass model with \(p=6\), we were able to give an argument that the 1RSB state of that model was unstable in any finite dimension due to the excitation of droplets of flipped spins whose interface free energy are very small [14]. That argument is specific to glass models with a particular form of time reversal symmetry which gives rise to a field theory in which the cubic term \(w_{2}\) is zero (see Eq. (25)). Unfortunately the generic field theories thought relevant to glasses have \(w_{2}\) non-zero and it is these which we study in this paper. Most of our work will be focussed on the case of \(p=4\). The 1RSB phase for \(p=6\) spin glasses is destroyed by non-perturbative droplet excitations. For generic glass models with \(w_{2}\) non-zero, we can only find perturbative arguments. They are strong enough to lead us to the conclusion that the continuous phase transition to a state with 1RSB will not exist for dimensions \(d\) less than 8 and will be replaced by a continuous transition to a state with FRSB. We shall suggest that fluctuation corrections to the coupling terms in Eq. (25) might also drive the system away from having a discontinuous transition to a 1RSB state to a continuous transition to a state with full replica symmetry breaking (FRSB), but we do not know whether the fluctuation corrections are large enough to bring that about. We suspect that this question will only be resolved by simulations and values of \(p\) and \(M\) which might be appropriate for such simulations are suggested in Sec. III. Our procedure is based upon the old idea [15] of using the renormalization group recursion relations for the coupling constants of the field theory to map the coefficients of the critical field theory into a region where the correlation lengths are small and Landau theory (i.e. mean-field theory) with small fluctuation corrections can be employed. This program has also been used by Holler and Read [16] on the problem of the de Almeida-Thouless transition of the Ising spin glass in a field [17]. It has a field theory identical to that of the \(M\)-\(p\)-spin glass models discussed in this paper, i.e. that of Eq. (25), but with different numerical values for the coefficients. (To discuss finite dimensions a gradient term of the form \(\int d^{d}r\sum_{a,b}(\nabla q_{ab}(r))^{2}\) would need to be included in Eq. (25).) The program therefore requires us to understand in detail the stationary solutions i.e. mean-field solutions of Eq. (25), and the bulk of this paper is devoted to this task. Because Holler and Read discussed the RG aspects of the calculations in great detail, we shall treat those briefly, just focussing on the implications of numerical studies which were carried out after their paper was written [18]. In Sec. II we introduce the balanced \(M\)-\(p\) models and the replica procedure which was used to average their free energy over disorder. The balanced \(M\)-\(p\) spin models are very convenient to study with simulations as they are readily extended to finite dimensions on a \(d\)-dimensional lattice. When this is done the resulting field theory acquires the already mentioned gradient squared term. One of the attractions of the balanced version of these models is the absence of "hard modes", which are just usually cast aside (as in the paper of Caltagirone et al. 
[19]), but this leaves the subsequent calculations of uncertain accuracy. We shall focus on the case \(p=4\) and regard the number of types of Ising spins \(M\) as a variable which can take non-integer values. The simulations of Campellone et al. [9] which failed to find a discontinuous 1RSB transition were in fact done for a closely related model with \(p=4\) and \(M=4\) in three dimensions. At cubic order there are two coupling constants, \(w_{1}\) and \(w_{2}\), at quartic order, there are five coupling constants, \(y_{1},\cdots,y_{5}\) and at quintic order, there are nine coupling constants, \(z_{1},\cdots,z_{9}\). The quadratic term \(\tau\) vanishes as usual at the mean-field transition temperature \(T_{c}\) and is negative when \(T<T_{c}\). We calculate the "bare" value of all these coefficients in Appendix A for the case \(p=4\). Fluctuation corrections will modify the bare values. In studying the model at non-integer values of \(M\) we are anticipating that the fluctuation corrections can modify the bare coefficients. Studying the field theory of Eq. (25) for general values of the coefficients would be a good idea, but there are so many of these coefficients that we have limited our study to those values which can be reached by varying \(M\) in the bare values. In Sec. III we discuss what we believe will be the likely consequences of fluctuation effects on the coupling constants. In Sec. II.1 we determine the free energy of the system in the high-temperature or paramagnetic phase where the order parameter \(q_{ab}\) is independent of \(a\) and \(b\), that is, replica symmetric. At mean-field level \(q_{ab}=0\), (but fluctuation corrections would leave it replica symmetric but non-zero). If the transition is continuous, so that \(q_{ab}\) is small just below the transition, then the expansion of the Landau-Ginzburg free energy functional in powers of \(q_{ab}\) should be useful and we give its form in Sec. II.2. Most workers have stopped at the quartic terms, but we have continued up to the quintic terms. This is necessary for two reasons. The difference in free energy between the 1RSB free energy and the FRSB free energy is of \(O(\tau^{5})\), (see for example, Ref. [20]). Thus one needs to worry about the quintic terms when working out whether the state which forms at the continuous transition is of 1RSB type or is of FRSB type. Fortunately, we can show that the borderline value of \(M\), \(M^{**}\approx 2.47140\) between these types is not dependent on the quintic terms. (For \(2\leq M<M^{**}\) the continuous transition is to a state with FRSB, while for \(M^{**}<M<3\), the continuous transition is to a state with 1RSB.) The second reason relates to studies of the Gardner transition [1; 2]. The Gardner transition is the transition from a state with 1RSB to a state with FRSB as the temperature is lowered. Right from the beginning it was realized that the quintic terms are needed for its study [1]. We shall find though that our actual FRSB solution is quite different to that of Ref. [1]. This is discussed in Sec. II.5. A feature of the FRSB solutions is a singularity first noticed by Goldbart and Elderfield [21]. They found that the FRSB solution for \(q(x)\) at quartic level could have an unphysical singularity in the interval \(0<x<1\) which would imply that the probability of two states having an overlap \(q\) would be negative, which is impossible. This problem was studied in some detail by Janis and colleagues using a non-standard approach to replica symmetry breaking [22]. 
We find in Sec. II.5 that the singularity at quartic level in fact determines the value of \(M^{**}\) and that one avoids the singularity at \(M>M^{**}\) by simply being in the state with 1RSB. At the Gardner transition the quintic terms remove the quartic level singularities. However, similar singularities are to be found also at quintic level. Right at the Gardner transition temperature \(T_{G}\), just where the free energies of the FRSB state and the 1RSB state are equal, the Goldbart-Elderfield singularity is at the lower breakpoint \(x_{1}\). This causes the derivative of \(q(x)\) at \(x=x_{1}\) to be infinite. However for temperatures \(T\) less than \(T_{G}\), the singularity is below \(x_{1}\) and the derivative stays finite. In Sec. II.3 we derive the free energy at mean-field level for the 1RSB state. For \(M>3\), when \(w_{2}/w_{1}>1\), the transition from the high temperature normal phase to a state with 1RSB is a discontinuous transition which takes place at a transition temperature above \(T_{c}\). We suspect that this behavior would be seen for all values of \(M>3\). However, if one truncates the free energy to quartic level terms, as is commonly done, the 1RSB state only exists in the interval \(3<M\lesssim 6.64\). With the inclusion of the quintic terms, the 1RSB forms at a discontinuous transition when \(14.41\gtrsim M\gtrsim 3.98\) and \(3.27\gtrsim M>3\). Thus with the quintic form the 1RSB state persists up to larger values of \(M\). We believe that if all terms were kept then the discontinuous transition to the 1RSB state would exist for all \(M>3\). In Sec. II.4 we describe the simplifications which arise in the large \(M\) limit. Truncation leads to spurious features as the Landau expansion cannot be expected to be accurate when \(q_{ab}\) is not small. Another spurious feature of truncation is the apparent phase transition at low temperatures from the 1RSB state to the replica symmetric state with \(q_{ab}\) non-zero. In the large \(M\) limit we can solve without truncation and such a transition does not arise (see Sec. II.3). The form of the FRSB solutions at both quartic and quintic level, together with the Gardner transitions, is in Sec. II.5. In Sec. III we discuss how fluctuation corrections to the coupling constants used in the mean-field solution will change the continuous 1RSB transition into the continuous FRSB solution, using extensions of the approach of Holler and Read [16]. We suspect that the discontinuous 1RSB transition might also suffer the same fate, based on the results of simulations in low dimensions [8; 9], but we cannot support this possibility with analytical arguments. We finally conclude with suggestions of the kinds of model which could be studied numerically to resolve these issues, and also to resolve the question of whether the FRSB state can exist for dimensions \(d<6\). ## II The balanced \(M\)-\(p\) model in the fully connected limit In this section, we study the \(M\)-\(p\) spin glass model in the fully connected limit, where one has \(M\) different types of Ising spins, \(S_{i}(x)\), \(i=1,2,\cdots,M\) at each site \(x\) coupled with spins on other sites via \(p\)-body interactions. Here we focus on the so-called balanced model introduced in Ref. [13] for even \(p\), where only the coupling between two sets of \(p/2\) spins on two different sites is considered. 
It amounts to considering only the soft mode in a more general \(M\)-\(p\) model, where all the couplings between \(k\) spins and \(p-k\) spins are included for \(k=1,2,\cdots,p-1\). In this paper, we focus on the \(p=4\) case. For \(p=4\), the balanced model is given by four-spin interactions coupling a pair of spins on one site to a pair of spins on another site. Each site has \(\binom{M}{2}\) different two-spin combinations. Therefore, for a given pair of sites, there are \(\binom{M}{2}^{2}\) terms in the Hamiltonian. The Hamiltonian is given by \[H=-\frac{1}{2}\sum_{x\neq y} \Big{[}\sum_{i_{1}<i_{2}}^{M}\sum_{j_{1}<j_{2}}^{M}J_{x,y}^{(i_{1},i_{2}),(j_{1},j_{2})}\] \[\times S_{i_{1}}(x)S_{i_{2}}(x)S_{j_{1}}(y)S_{j_{2}}(y)\Big{]}, \tag{1}\] where each \(J_{x,y}^{(i_{1},i_{2}),(j_{1},j_{2})}\) is drawn from the Gaussian distribution with zero mean and the variance \[\frac{J^{2}}{NM^{p-1}}=\frac{J^{2}}{NM^{3}}. \tag{2}\] We will set \(J=1\) for convenience. After neglecting the terms of subleading order in \(N\), we can write the replicated partition function averaged over the disorder as \[\overline{Z^{n}}= \mathrm{Tr}\exp\Big{[}\frac{\beta^{2}}{4NM^{3}} \tag{3}\] \[\times\sum_{a,b}^{n}\Big{\{}\sum_{x}^{N}\sum_{i_{1}<i_{2}}^{M}S_{i_{1}}^{a}(x)S_{i_{2}}^{a}(x)S_{i_{1}}^{b}(x)S_{i_{2}}^{b}(x)\Big{\}}^{2}\Big{]}.\] The diagonal terms (\(a=b\)) in the replica indices give a factor \(\exp[nN\beta^{2}C]\) where \[C=\frac{1}{4M^{3}}\binom{M}{2}^{2}=\frac{(M-1)^{2}}{16M}. \tag{4}\] For \(a\neq b\), following the convention used in Ref. [19], we introduce the delta functions enforcing \[q_{ab}=\frac{1}{NM^{2}}\sum_{x}^{N}\sum_{i_{1}<i_{2}}^{M}S_{i_{1}}^{a}(x)S_{i_{2}}^{a}(x)S_{i_{1}}^{b}(x)S_{i_{2}}^{b}(x) \tag{5}\] in the replicated partition function. Using the integral representation of the delta function, we can write \[\overline{Z^{n}}=e^{nN\beta^{2}C}\int\prod_{a<b}dq_{ab}d\mu_{ab}\;\exp[-NG(\underline{q},\underline{\mu})], \tag{6}\] where \[G(\underline{q},\underline{\mu})=-\frac{M}{4}\beta^{2}\sum_{a\neq b}q_{ab}^{2}+\frac{M}{2}\sum_{a\neq b}\mu_{ab}q_{ab}-\ln L(\underline{\mu}) \tag{7}\] and \[L(\underline{\mu})=\underset{\{S_{i}^{a}\}}{\mathrm{Tr}}\exp\Big{[}\frac{1}{2M}\sum_{a\neq b}\mu_{ab}\sum_{i<j}^{M}S_{i}^{a}S_{j}^{a}S_{i}^{b}S_{j}^{b}\Big{]}. \tag{8}\] In the large-\(N\) limit, the integral is dominated by the saddle points which are determined by \[\mu_{ab}=\beta^{2}q_{ab} \tag{9}\] and \[q_{ab}=\frac{1}{M^{2}}\left\langle\sum_{i<j}^{M}S_{i}^{a}S_{j}^{a}S_{i}^{b}S_{j}^{b}\right\rangle_{L}, \tag{10}\] where \(\left\langle\cdots\right\rangle_{L}\) is evaluated with respect to \(L\) in Eq. (8). The free energy \(F\) is then given by \[\frac{\beta F}{N}=-\frac{1}{N}\lim_{n\to 0}\frac{1}{n}\ln\overline{Z^{n}}=-C\beta^{2}+\lim_{n\to 0}\frac{1}{n}G(\underline{q},\underline{\mu}). \tag{11}\] ### Replica Symmetric Solution We first look for the saddle point solutions in the replica symmetric (RS) form \(q_{ab}=q\) and \(\mu_{ab}=\mu\) for all \(a\neq b\). We have \[\lim_{n\to 0}\frac{1}{n}G(q,\mu)=\frac{M}{4}\beta^{2}q^{2}-\frac{M}{2}\mu q-\lim_{n\to 0}\frac{1}{n}\ln L(\mu). \tag{12}\] Using \[\sum_{a\neq b}S_{i}^{a}S_{j}^{a}S_{i}^{b}S_{j}^{b}=\left(\sum_{a}S_{i}^{a}S_{j}^{a}\right)^{2}-n \tag{13}\] in Eq. (8) and the Hubbard-Stratonovich transformation on the first term, we can rewrite Eq. 
(8) as \[L(\mu)= e^{-n\mu(K/2M)}\underset{\{S_{i}^{a}\}}{\mathrm{Tr}}\,\int D^{K}\mathbf{y}\] \[\times\exp\left[\sqrt{\frac{\mu}{M}}\sum_{a}\sum_{i<j}^{M}y_{(i,j)}S_{i}^{a}S_{j}^{a}\right], \tag{14}\] where \[K\equiv\binom{M}{2} \tag{15}\] and the integral over the \(K\)-dimensional vector \(\mathbf{y}=(y_{1},y_{2},\cdots,y_{K})\equiv(y_{(1,2)},y_{(1,3)},\cdots,y_{(M-1,M)})\) is defined as \[\int D^{K}\mathbf{y}\equiv\prod_{\alpha=1}^{K}\left(\int_{-\infty}^{\infty}\frac{dy_{\alpha}}{\sqrt{2\pi}}e^{-y_{\alpha}^{2}/2}\right). \tag{16}\] We therefore have \[\lim_{n\to 0}\frac{1}{n}\ln L(\mu)=-\frac{K}{2M}\mu+M\ln 2+\int D^{K}\mathbf{y}\ \ln\zeta(\mathbf{y},\mu), \tag{17}\] where \[\zeta(\mathbf{y},\mu)\equiv\frac{1}{2^{M}}\underset{\{S_{i}\}}{\mathrm{Tr}}\ \exp\left[\sqrt{\frac{\mu}{M}}\mathbf{y}\cdot\mathbf{\Psi}\right] \tag{18}\] with the \(K\)-dimensional vector \(\mathbf{\Psi}=(\Psi_{1},\Psi_{2},\cdots,\Psi_{K})=(S_{1}S_{2},S_{1}S_{3},\cdots,S_{M-1}S_{M})\). The RS free energy is then given by \[\frac{\beta F_{\mathrm{RS}}}{N}= -C\beta^{2}+\frac{M}{4}\beta^{2}q^{2}-\frac{M}{2}\mu q+\frac{K}{2M}\mu\] \[-M\ln 2-\int D^{K}\mathbf{y}\ \ln\zeta(\mathbf{y},\mu). \tag{19}\] By varying the free energy with respect to \(q\) and \(\mu\), respectively, we have saddle point equations, \[\mu=\beta^{2}q \tag{20}\] and \[q= \frac{1}{M^{2}}\int D^{K}\mathbf{y}\ \frac{1}{\zeta^{2}(\mathbf{y},\mu)}\] \[\times\sum_{\alpha=1}^{K}\left\{\frac{1}{2^{M}}\underset{\{S_{i}\}}{\mathrm{Tr}}\,\Psi_{\alpha}\exp\left[\sqrt{\frac{\mu}{M}}\mathbf{y}\cdot\mathbf{\Psi}\right]\right\}^{2}. \tag{21}\] At high temperatures, the RS solutions are given by \(q=\mu=0\). In that case, \(\zeta=1\) and the corresponding free energy is \[\frac{\beta F_{\mathrm{RS}}}{N}=-C\beta^{2}-M\ln 2. \tag{22}\] The entropy \(S=-\partial F/\partial T\) for this phase is \[\frac{S_{\mathrm{RS}}}{N}=-C\beta^{2}+M\ln 2. \tag{23}\] This becomes negative below \[T_{*}=\sqrt{\frac{C}{M\ln 2}}=\frac{M-1}{4M\sqrt{\ln 2}}. \tag{24}\] Some values of \(T_{*}\) are 0.20019 for \(M=3\), 0.22521 for \(M=4\), 0.25023 for \(M=6\) and 0.25738 for \(M=7\). It keeps increasing with \(M\) and approaches 0.30028 in the \(M\to\infty\) limit. ### Landau Expansion of Free Energy In order to study a possible continuous transition, we expand the free energy, Eq. (11), for small values of the order parameter. We first expand Eq. (8) to \(O(\mu^{5})\) and take the trace over the spins. The detailed steps are given in Appendix A. Now using Eqs. (7), (9) and (11), we can write the free energy as \[\frac{\beta F}{N} =-C\beta^{2}-M\ln 2+\lim_{n\to 0}\frac{1}{n}\Big{[}\tau\sum_{a,b}q_{ab}^{2} \tag{25}\] \[-w_{1}\sum_{a,b,c}q_{ab}q_{bc}q_{ca}-w_{2}\sum_{a,b}q_{ab}^{3}-y_{1}\sum_{a,b}q_{ab}^{4}\] \[-y_{2}\sum_{a,b,c}q_{ab}^{2}q_{bc}^{2}-y_{3}\sum_{a,b,c}q_{ab}^{2}q_{bc}q_{ca}-y_{5}\sum_{a,b,c,d}q_{ab}q_{bc}q_{cd}q_{da}\] \[-z_{1}\sum_{a,b}q_{ab}^{5}-z_{2}\sum_{a,b,c}q_{ab}^{3}q_{bc}^{2}-z_{3}\sum_{a,b,c}q_{ab}^{3}q_{bc}q_{ca}\] \[-z_{4}\sum_{a,b,c}q_{ab}^{2}q_{bc}^{2}q_{ca}-z_{5}\sum_{a,b,c,d}q_{ab}^{2}q_{bc}q_{cd}q_{da}\] \[-z_{6}\sum_{a,b,c,d}q_{ab}^{2}q_{bc}q_{cd}q_{db}-z_{7}\sum_{a,b,c,d}q_{ab}^{2}q_{bc}q_{cd}^{2}\] \[-z_{8}\sum_{a,b,c,d}q_{ab}q_{bc}q_{cd}q_{da}q_{ac}-z_{9}\sum_{a,b,c,d,e}q_{ab}q_{bc}q_{cd}q_{de}q_{ea}\Big{]},\] where \(q_{aa}=0\), \(q_{ab}=q_{ba}\), and all the sums over replica indices are without any restriction. 
The coefficient of the quadratic term is given by \[\tau=\frac{M}{4}\beta^{2}\left(1-\frac{K}{M^{3}}\beta^{2}\right)=\frac{M}{4} \beta^{4}\left(T^{2}-T_{c}^{2}\right), \tag{26}\] where \[T_{c}\equiv\sqrt{\frac{K}{M^{3}}}=\frac{1}{M}\sqrt{\frac{M-1}{2}}. \tag{27}\] This expression coincides with Eq. (27) of Ref. [19]. Some values of \(T_{c}\) are \(0.33333\) for \(M=3\), \(0.30619\) for \(M=4\), \(0.26352\) for \(M=6\) and \(0.24744\) for \(M=7\). Note that \(T_{c}\) decreases with \(M\) and becomes zero in the \(M\to\infty\) limit. Note also that \(T_{c}>T_{*}\) for \(M=2,3,\cdots,6\) and \(T_{*}>T_{c}\) for \(M\geq 7\). The coefficients of the cubic terms are given by \[w_{1}=\frac{\beta^{6}K}{6M^{3}},\ \ w_{2}=\frac{\beta^{6}K}{6M^{3}}(M-2). \tag{28}\] The quartic and quintic coefficients are given in Appendix A as functions of \(M\). It is known [19; 1] that if the ratio of the cubic terms \(w_{2}/w_{1}\), which in our model is equal to \(M-2\), is greater than one, a discontinuous transition to the one-step replica symmetry breaking phase (1RSB) occurs. When \(M=2\), our model reduces to the Ising spin glass and we can check that the cubic and quartic coefficients coincide with those for the Ising spin glass except for the multiplicity factor of \(2^{3}\) for \(w_{i}\) and \(2^{4}\) for \(y_{i}\). ### The 1RSB Solution We now consider the case where \(q_{ab}\) and \(\mu_{ab}\) take the one step replica symmetry breaking (1RSB) form taking values \(q_{1}\) and \(\mu_{1}\) on \(n/m_{1}\) diagonal blocks (labelled by \(B_{k}\), \(k=1,2,\cdots,n/m_{1}\) of size \(m_{1}\) and \(q_{0}\) and \(\mu_{0}\) outside the blocks. We then have the terms in Eq. (7) as \[\sum_{a\neq b}q_{ab}^{2}=n[(m_{1}-1)q_{1}^{2}+(n-m_{1})q_{0}^{2}], \tag{29}\] \[\sum_{a\neq b}\mu_{ab}q_{ab}=n[(m_{1}-1)\mu_{1}q_{1}+(n-m_{1})\mu _{0}q_{0}]. \tag{30}\] We will focus on the 1RSB solutions with \(q_{0}=\mu_{0}=0\). By writing \[\frac{1}{2M}\sum_{i<j}^{M}\sum_{a\neq b}\mu_{ab}S_{i}^{a}S_{j}^{a }S_{i}^{b}S_{j}^{b}\] \[=\frac{\mu_{1}}{2M}\sum_{k=1}^{n/m_{1}}\sum_{i<j}^{M}\left\{\left[ \sum_{a\in B_{k}}S_{i}^{a}S_{j}^{a}\right]^{2}-m_{1}\right\} \tag{31}\] in Eq. (8) and by using the Hubbard-Stratonovich transformation, we have \[\underset{\{S_{i}^{x}\}}{\mathrm{Tr}}\,\exp\left[\frac{1}{2M} \sum_{i<j}^{M}\sum_{a\neq b}\mu_{ab}S_{i}^{a}S_{j}^{a}S_{i}^{b}S_{j}^{b}\right] \tag{32}\] \[= \exp\left[-n\frac{\mu_{1}K}{2M}\right]\] \[\times\Big{[}\int D^{K}\mathbf{y}\;\Big{\{}\underset{\{S_{i}\}}{ \mathrm{Tr}}\,\exp\left[\sqrt{\frac{\mu_{1}}{M}}\sum_{i<j}^{M}y_{(i,j)}S_{i}S_ {j}\right]\Big{\}}^{m_{1}}\Big{]}^{n/m_{1}}\!.\] Therefore we have \[\underset{n\to 0}{\lim}\,\frac{1}{n}\ln L(\underline{\mu})= -\frac{K}{2M}\mu_{1}+M\ln 2\] \[+\frac{1}{m_{1}}\ln\int D^{K}\mathbf{y}\;\zeta^{m_{1}}(\mathbf{y},\mu_{1 }), \tag{33}\] where \(\zeta\) is defined in Eq. (18). Using Eqs. (29), (30) and (33) in Eq. (11), \[\frac{\beta F_{\mathrm{1RSB}}}{N}= -C\beta^{2}-\frac{M}{4}\beta^{2}(m_{1}-1)q_{1}^{2}\] \[+\frac{M}{2}(m_{1}-1)\mu_{1}q_{1}+\frac{K}{2M}\mu_{1}-M\ln 2\] \[-\frac{1}{m_{1}}\ln\int D^{K}\mathbf{y}\;\zeta^{m_{1}}(\mathbf{y},\mu_{1 }). \tag{34}\] Varying the free energy with respect to \(q_{1}\) and \(\mu_{1}\), respectively, we have \[\mu_{1}=\beta^{2}q_{1}. 
\tag{35}\] and \[q_{1} =\frac{1}{M^{2}}\frac{1}{\int D^{K}\mathbf{y}\;\zeta^{m_{1}}(\mathbf{y}, \mu_{1})} \tag{36}\] \[\times\int D^{K}\mathbf{y}\;\zeta^{m_{1}-2}\sum_{\alpha=1}^{K}\left\{ \frac{1}{2^{M}}\underset{\{S_{i}\}}{\mathrm{Tr}}\;\Psi_{\alpha}\exp[\sqrt{ \frac{\mu_{1}}{M}}\mathbf{y}\cdot\mathbf{\Psi}]\right\}^{2},\] Now varying the free energy with respect to \(m_{1}\), we have \[\frac{M}{4}\beta^{2}q_{1}^{2}+\frac{1}{m_{1}^{2}}\ln\int D^{K} \mathbf{y}\;\zeta^{m_{1}}(\mathbf{y},\mu_{1})\] \[-\frac{1}{m_{1}}\frac{\int D^{K}\mathbf{y}\;\zeta^{m_{1}}(\mathbf{y},\mu_ {1})\ln\zeta(\mathbf{y},\mu_{1})}{\int D^{K}\mathbf{y}\;\zeta^{m_{1}}(\mathbf{y},\mu_{1})}=0. \tag{37}\] In summary, Eqs. (35), (36), and (37) are the saddle point equations one has to solve for the 1RSB state. Note that when \(m_{1}=1\), we can explicitly evaluate \[\int D^{K}\mathbf{y}\;\zeta(\mathbf{y},\mu_{1})=\exp\left[\frac{K}{2M}\mu_{1}\right]. \tag{38}\] From Eq. (34), we see that when \(m_{1}=1\), the 1RSB free energy is equal to the RS one: \[\frac{\beta F_{\mathrm{1RSB}}}{N}\underset{m_{1}\to 1}{\rightarrow}-C\beta^{2}-M\ln 2 =\frac{\beta F_{\mathrm{RS}}}{N}. \tag{39}\] To determine the transition temperature \(T_{c}^{\mathrm{1RSB}}\) to the 1RSB state, we set \(m_{1}=1\) in Eqs. (35), (36) and (37) and solve for \(\beta\). For \(m_{1}=1\), we can combine these three equations into one equation, \(f_{M}(\sigma)=0\) for the parameter \[\sigma\equiv\sqrt{\frac{\mu_{1}}{M}}, \tag{40}\] where \[f_{M}(\sigma) \equiv e^{-K\sigma^{2}/2}\int D^{K}\mathbf{y}\;\Big{[}\zeta(\mathbf{y},\mu_{1}) \ln\zeta(\mathbf{y},\mu_{1}) \tag{41}\] \[-\frac{\sigma^{2}}{4}\frac{\sum_{\alpha=1}^{K}\left\{2^{-M} \mathrm{Tr}\;\Psi_{\alpha}\exp\left[\sigma\mathbf{y}\cdot\mathbf{\Psi}\right]\right\}^{2}} {\zeta(\mathbf{y},\mu_{1})}\Big{]}-\frac{K}{2}\sigma^{2}.\] Note that \(\zeta(\mathbf{y},\mu_{1})\) is a function of \(\sigma\). If there exists a nonzero solution \(\sigma\) to \(f_{M}(\sigma)=0\), one can obtain nonzero \(q_{1}\) from Eq. (36) and the transition temperature \(T_{c}^{1\text{RSB}}\) from Eq. (35). We solve this equation by numerically evaluating multi-dimensional integrals in Eq. (41). In Figs. 1 and 2, \(f_{M}\) is plotted as a function of \(\sigma\) for \(M=3\) and \(M=4\). As we can see from the figures, \(f_{M}(\sigma)\) starts off very flat and increases monotonically for large values of \(\sigma\). For \(M=3\), Fig. 1 clearly shows a monotonic increase as a function of \(\sigma\), thus we can conclude that the only solution to \(f_{3}(\sigma)=0\) is \(\sigma=0\). From Eq. (36), we then have \(q_{1}=0\) thus no discontinuous transition in this case. For \(M=4\), we have to evaluate six-dimensional (\(K=6\)) integrals in Eq. (41). For that, we use Monte Carlo methods, and the results are shown in Fig. 2. The error bars come from sampling random points in the integrands within the Monte Carlo evaluation of the integrals. We have averaged over 30 trials for each data point. Since \(f_{4}(\sigma)\) stays very flat for small \(\sigma\) before increasing to large positive values, it is quite difficult to determine, if any, nonzero solution \(\sigma\) from this plot alone. To understand the situation more clearly, we study the behavior of \(f_{M}(\sigma)\) for small \(\sigma\). We can show (see Appendix B for details) that for small \(\sigma\), the leading order in the small-\(\sigma\) expansion of \(f_{M}(\sigma)\) is \(O(\sigma^{6})\). 
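Although not essential for what follows, it may help to record how the Monte Carlo evaluation of the multi-dimensional integrals in Eq. (41) can be organized in practice. The sketch below (in Python; the function name, the sampling count and the example values of \(\sigma\) are illustrative only, and are far smaller than what is needed to resolve the flat region of Figs. 1 and 2) enumerates the \(2^{M}\) single-site spin states exactly and samples the \(K\)-dimensional Gaussian measure; it is a minimal illustration, not the code used to produce the figures.

```python
import itertools
import numpy as np

def f_M(sigma, M, n_samples=200000, seed=0):
    """Monte Carlo estimate of f_M(sigma) as written in Eq. (41).

    The trace over the 2**M spin states is done exactly; the Gaussian
    integral over the K = M(M-1)/2 dimensional vector y is sampled.
    """
    pairs = list(itertools.combinations(range(M), 2))
    K = len(pairs)
    configs = np.array(list(itertools.product([-1, 1], repeat=M)))
    # Psi[c, alpha] = S_i S_j for configuration c and pair alpha = (i, j)
    Psi = np.array([[s[i] * s[j] for (i, j) in pairs] for s in configs])
    rng = np.random.default_rng(seed)
    y = rng.standard_normal((n_samples, K))        # samples of the measure D^K y
    w = np.exp(sigma * (y @ Psi.T))                # exp(sigma y.Psi) for each spin state
    zeta = w.mean(axis=1)                          # 2^{-M} Tr exp(sigma y.Psi)
    t = (w @ Psi) / Psi.shape[0]                   # 2^{-M} Tr Psi_alpha exp(sigma y.Psi)
    integrand = zeta * np.log(zeta) - 0.25 * sigma**2 * (t**2).sum(axis=1) / zeta
    return np.exp(-0.5 * K * sigma**2) * integrand.mean() - 0.5 * K * sigma**2

# Example: scan sigma for M = 4 and look for a sign change of f_M
for s in [0.2, 0.4, 0.6, 0.8, 1.0]:
    print(s, f_M(s, M=4))
```

The statistical errors of such a direct sampling are largest precisely in the flat small-\(\sigma\) region of Fig. 2, which is one reason the small-\(\sigma\) expansion developed next is the more useful tool there.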
In fact, if we write \(f_{M}(\sigma)=\sum_{i=0}^{\infty}c_{i}(M)\sigma^{i}\), we find that \(c_{i}=0\) for \(i\) odd, \(c_{0}=c_{2}=c_{4}=0\) and \[c_{6}(M)=-\frac{M}{24}(M-1)(M-3) \tag{42}\] for \(M=3,4,5,\cdots\). Therefore, for \(M=3\), the leading order is actually \(O(\sigma^{8})\). The next-order coefficient is given by \[c_{8}(M)=-\frac{M}{48}(M-1)(3M^{2}-27M+47), \tag{43}\] for \(M\geq 3\). Some steps needed to obtain these are given in Appendix B. We note that \(c_{8}(M=3)>0\). This is consistent with the monotonic increase of \(f_{3}(\sigma)\) shown in Fig. 1. For \(M>3\), \(c_{6}\) becomes negative. Combining this fact with the monotonic increase for large \(\sigma\), we can conclude that there exists a nonzero solution to \(f_{M}(\sigma)=0\) and that a discontinuous transition for \(M>3\) is expected. From Eq. (43), we find that \(c_{8}(M)>0\) for \(M\lesssim 6.64\), therefore for these values of \(M\), we can estimate the solution as \(\sigma\simeq\sqrt{-c_{6}(M)/c_{8}(M)}\). This program, however, fails when \(c_{8}(M)<0\) for \(M\gtrsim 6.64\). (\(c_{6}<0\) for \(M>3\).) We need to go to higher order to study the 1RSB transition beyond this value of \(M\). We find, however, that the method in Appendix B becomes too cumbersome to get \(c_{10}\). The Landau expansion of the free energy given in Eq.(25) provides a more useful tool. Since \(\sigma^{2}\sim\mu_{1}\sim q_{1}\), \(O(\sigma^{6})\) and \(O(\sigma^{8})\) correspond to the cubic and quartic orders in \(q_{ab}\), respectively, and we need quintic order terms in \(q_{ab}\) to evaluate \(c_{10}\). In Appendix C, we apply the 1RSB form directly to \(q_{ab}\) in Eq. (25). When \(m_{1}=1\), the saddle point equations can be combined into a form \[-\frac{1}{2}(w_{2}-w_{1})q_{1}^{3}-(y_{1}-y_{3}+y_{5})q_{1}^{4}-\frac{3}{2}z_{ 1}^{\text{eff}}q_{1}^{5}=0, \tag{44}\] where \[z_{1}^{\text{eff}}\equiv z_{1}-z_{3}-z_{4}+z_{5}+z_{8}-z_{9}. \tag{45}\] Recalling that \(q_{1}=\mu_{1}/\beta^{2}=M\sigma^{2}/\beta^{2}\) and using the values of \(w_{i}\) and \(y_{i}\) given in Appendix A, we can identify the first two terms in Eq. (44) as the small-\(\sigma\) expansion of \(f_{M}(\sigma)\), since we can rewrite \[c_{6}(M)=-\frac{M^{3}}{2\beta^{6}}(w_{2}-w_{1}), \tag{46}\] and \[c_{8}(M)=-\frac{M^{4}}{\beta^{8}}(y_{1}-y_{3}+y_{5}). \tag{47}\] It follows that the last term in Eq. (44) gives \[c_{10}(M)=-\frac{3M^{5}}{2\beta^{10}}z_{1}^{\text{eff}}. \tag{48}\] Figure 1: \(f_{M}(\sigma)\) defined in Eq. (41) for \(M=3\). A nonzero solution \(\sigma\) of \(f_{M}(\sigma)=0\) would signal a discontinuous transition into the 1RSB state. Figure 2: Same as Fig. 1 with \(M=4\). The explicit expression as a function of \(M\) is given in Eqs. (105) and (110) in Appendix C. In Fig. 3, \((y_{1}-y_{3}+y_{5})/\beta^{8}\) and \(z_{1}^{\rm eff}/\beta^{10}\) are displayed as functions of \(M\). We note that \(y_{1}-y_{3}+y_{5}\) is negative (and \(c_{8}\) is positive) for \(2.35\lesssim M\lesssim 6.64\). Therefore, as we mentioned above, we can find the 1RSB solution for \(m_{1}=1\) for \(3<M\lesssim 6.64\) within the quartic theory. The result for the 1RSB transition temperature obtained in this way is shown as a solid red line in Fig. 4 (a). We note, however, that the result becomes unreliable as we approach the boundary value \(M\simeq 6.64\) as it shows a fictitious diverging behavior. We now study how the quintic theory may improve this result. 
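Before doing so, the quartic-level numbers just quoted can be checked directly from Eqs. (42) and (43). The short sketch below (illustrative Python only, not the calculation behind Fig. 4) reproduces the estimate \(\sigma\simeq\sqrt{-c_{6}(M)/c_{8}(M)}\) and the boundary value \(M\simeq 6.64\) at which \(c_{8}\) changes sign.

```python
import numpy as np

def c6(M):
    return -M * (M - 1) * (M - 3) / 24.0                    # Eq. (42)

def c8(M):
    return -M * (M - 1) * (3 * M**2 - 27 * M + 47) / 48.0   # Eq. (43)

# c8 changes sign at the roots of 3 M^2 - 27 M + 47 = 0 (about 2.36 and 6.64)
print(np.roots([3.0, -27.0, 47.0]))

# Quartic-level estimate of the nonzero root of f_M(sigma) = 0 for 3 < M < 6.64
for M in [3.5, 4.0, 5.0, 6.0, 6.5]:
    if c6(M) < 0.0 and c8(M) > 0.0:
        print(M, np.sqrt(-c6(M) / c8(M)))
    else:
        print(M, "no quartic-level estimate: c8(M) <= 0")
```

The same coefficients also make clear why the quartic truncation must fail near \(M\simeq 6.64\): the estimated root diverges as \(c_{8}\to 0\).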
The quintic contribution can be summarized by \(z_{1}^{\rm eff}\), which is negative for \(4.37\lesssim M\lesssim 12.46\) (and for the narrow region \(2\leq M\lesssim 2.12\)). Since \(c_{10}\) is positive in that interval, we have a chance to extend the result of the quartic theory to larger values of \(M\). As one can see in Fig. 4 (a), the 1RSB transition line calculated within the quintic theory indeed extends to large values of \(M\). But, since Eq. (44) for \(q_{1}\neq 0\) becomes a quadratic equation for \(q_{1}\), there are intervals of \(M\) where no real solution exists. We find that for \(3.27\lesssim M\lesssim 3.98\) and for \(M\gtrsim 14.41\), solutions to this equation become complex and no 1RSB solution can be obtained. This can be seen in Fig. 4 (b), where one can see a segment of the 1RSB transition line is missing. Also as in the quartic theory, the transition line displays an apparent divergent behavior as we approach the boundary value \(M\simeq 14.41\). Therefore, we can conclude that it is possible to obtain the 1RSB transition line using truncated models, but the truncation of the free energy to a specific order produces some unphysical features. Comparing the results of the quartic and quintic theories in Fig. 4 (a), we expect that a systematic improvement may occur if we go to even higher orders. We also note that the 1RSB transition temperatures obtained in this way always stay above \(T_{*}\). The 1RSB transition line discussed above is obtained by setting \(m_{1}=1\) where the 1RSB free energy coincides with that of the high-temperature RS phase (with \(q=0\)). Using the results in Appendix C, we can obtain 1RSB solutions for general values of \(0\leq m_{1}\leq 1\) for the truncated model. Rather unexpectedly, we find that for given \(M\), the 1RSB solution ceases to exist below a certain finite temperature for which \(m_{1}=0\). We note that if \(m_{1}=0\), the 1RSB free energy becomes that of the RS phase with nonzero \(q\) (see Eq. (104)). Therefore, below that temperature, we only have the RS solution with nonzero \(q\). This is illustrated in Fig. 5, where we plot the free energies of both 1RSB and RS solutions calculated within a truncated model. One can clearly see that the 1RSB solution exists only in a finite temperature interval. Within that interval, the system is in the 1RSB phase which has a higher free energy than the RS one with nonzero \(q\). However, below that interval, there is no 1RSB solution, so the system returns to the RS phase. We believe that this rather unusual behavior is caused by the truncation of the model in an arbitrary order. In the large-\(M\) limit considered in Sec. II.4, where one can find the 1RSB solutions without truncation, we find that the 1RSB solution continues down to zero temperature and has a higher free energy than the RS one. ### The Large-\(M\) Limit In this subsection, we consider the situation where we take the limit \(M\to\infty\) from the start. In the large-\(M\) limit, Eq. (8) can be rewritten as \[L(\underline{\mu}) =\mathop{\mathrm{Tr}}_{\{S_{i}^{a}\}}\exp\left[\frac{1}{4M}\sum _{a\neq b}\mu_{ab}\left\{\left(\sum_{i}^{M}S_{i}^{a}S_{i}^{b}\right)^{2}-M \right\}\right]\] \[\simeq\mathop{\mathrm{Tr}}_{\{S_{i}^{a}\}}\exp\left[\frac{M}{4} \sum_{a\neq b}\mu_{ab}\left(\frac{1}{M}\sum_{i}^{M}S_{i}^{a}S_{i}^{b}\right)^{ 2}\right], \tag{49}\] where we have neglected the subleading terms in the large-\(M\) limit. 
We now introduce the delta function \(\delta(MQ_{ab}-\sum_{i}^{M}S_{i}^{a}S_{i}^{b})\) using the integral representation with the variable \(\lambda_{ab}\). Then we have from Eq. (6) \[\overline{Z^{n}}= e^{nN\beta^{2}C}\int\prod_{a<b}dq_{ab}d\mu_{ab}dQ_{ab}d\lambda_{ab}\] \[\times \exp\Big{[}-NM\Big{\{}-\frac{1}{4}\beta^{2}\sum_{a\neq b}q_{ab}^{2}+\frac{1}{2}\sum_{a\neq b}\mu_{ab}q_{ab}\] \[-\frac{1}{4}\sum_{a\neq b}\mu_{ab}Q_{ab}^{2}+\frac{1}{2}\sum_{a\neq b}\lambda_{ab}Q_{ab}-\ln\tilde{L}(\underline{\lambda})\Big{\}}\Big{]} \tag{50}\] where \[\widetilde{L}(\underline{\lambda})=\mathop{\mathrm{Tr}}_{\{S^{a}\}}\exp\left[\frac{1}{2}\sum_{a\neq b}\lambda_{ab}S^{a}S^{b}\right]. \tag{51}\] Figure 3: \((y_{1}-y_{3}+y_{5})/\beta^{8}\) (dashed line) and \(z_{1}^{\rm eff}/\beta^{10}\) (solid line) as functions of \(M\). In the large-\(M\) limit, they approach 1/16 and 1/20, respectively. In the large-\(M\) limit, the integral is dominated by the saddle points. In particular, the saddle point equations obtained by varying \(q_{ab}\) and \(\mu_{ab}\) are, respectively, \[\mu_{ab}=\beta^{2}q_{ab} \tag{52}\] and \[q_{ab}=\frac{1}{2}Q_{ab}^{2}. \tag{53}\] Inserting this into the above equation, we can rewrite Eq. (50) as \[\overline{Z^{n}}=e^{nN(\beta J)^{2}C}\int\prod_{a<b}dQ_{ab}d\lambda_{ab}\;\exp[-NM\widetilde{G}(\underline{Q},\underline{\lambda})], \tag{54}\] with \[\widetilde{G}(\underline{Q},\underline{\lambda})=-\frac{1}{16}(\beta J)^{2}\sum_{a\neq b}Q_{ab}^{4}+\frac{1}{2}\sum_{a\neq b}\lambda_{ab}Q_{ab}-\ln\widetilde{L}(\underline{\lambda}). \tag{55}\] The free energy in the large-\(M\) limit is then given by \[\frac{\beta F}{NM}=-(\beta J)^{2}C_{\infty}+\lim_{n\to 0}\frac{1}{n}\widetilde{G}(\underline{Q},\underline{\lambda}), \tag{56}\] where \[C_{\infty}=\lim_{M\to\infty}\frac{C}{M}=\frac{1}{16}. \tag{57}\] Note that we have restored \(J^{2}\), which sets the variance in Eq. (2), explicitly. This free energy is exactly the same as the one for the fully connected \(p\)-spin glass model with \(p=4\), which is given by the Hamiltonian \[H=-\sum_{1\leq x_{1}<\cdots<x_{p}\leq N}J_{x_{1},x_{2},\cdots,x_{p}}S(x_{1})S(x_{2})\cdots S(x_{p}), \tag{58}\] for the Ising spin \(S(x)\) at site \(x\). The bonds \(J_{x_{1},x_{2},\cdots,x_{p}}\) are independent random variables satisfying the Gaussian distribution with zero mean and variance \[\frac{p!\tilde{J}^{2}}{2N^{p-1}}. \tag{59}\] The free energy for this model is exactly the same as Eq. (56) with \(\tilde{J}^{2}=J^{2}/4\). (The formula for this correspondence for general \(p\) is \(\tilde{J}^{2}=4C_{\infty}J^{2}\).) We can readily use the known results for this model. The replica symmetric phase with \(\lambda=Q=0\) has the free energy per site \[\frac{\beta F_{\rm RS}}{N}=-\frac{(\beta\tilde{J})^{2}}{4}-\ln 2. \tag{60}\] Figure 5: Dimensionless free energies per spin of the 1RSB solution (solid line) and the RS solution with \(q\neq 0\) (dashed line) as functions of temperature calculated for the quartic \(M=4\) model. For each case, the free energy difference (\(\Delta F\)) from that of the high-temperature RS solution (\(q=0\), Eq. (22)) is plotted. The 1RSB solution exists only in the temperature interval \(0.212\leq T\leq 0.311\). Figure 4: (a) Red and blue solid lines are the 1RSB transition temperatures \(T_{c}^{\rm 1RSB}\) as functions of \(M\) for the \(p=4\) balanced \(M\)-\(p\) model expanded up to quartic (red) and to quintic (blue) orders in the order parameter. Dashed and dot-dashed lines are \(T_{*}\) (Eq. 
(24)) and \(T_{c}\) (Eq. (27)), respectively. Two closely-spaced horizontal lines are the large-\(M\) limits of \(T_{c}\) (lower one, Eq. (62)) and \(T_{c}^{\rm 1RSB}\) (upper one, Eq. (70)). (b) Close-up of the same plot for \(3\leq M\leq 4\). There is a gap in the solid blue line in the interval \(3.27\lesssim M\lesssim 3.98\), where no 1RSB solution exists at \(m_{1}=1\) for the quintic theory. The red line corresponds to the quartic theory, which has no gap. The dot-dashed line is \(T_{c}\). The entropy per site is then given by \[\frac{S_{\rm RS}}{N}=\ln 2-\frac{(\beta\tilde{J})^{2}}{4}, \tag{61}\] which becomes negative for temperature \(T/\tilde{J}<T_{*}^{\infty}/\tilde{J}\equiv 1/(2\sqrt{\ln 2})\). Therefore in the original unit \[T_{*}^{\infty}/J=\frac{1}{4\sqrt{\ln 2}}\simeq 0.30028. \tag{62}\] This is the same value as that obtained in the \(M\to\infty\) limit of Eq. (24). If we use the 1RSB form for \(Q_{ab}\) and \(\lambda_{ab}\) in Eq. (56), the free energy becomes \[\frac{\beta F_{\rm 1RSB}^{\infty}}{N}= -\frac{(\beta\tilde{J})^{2}}{4}[1+(m_{1}-1)Q_{1}^{p}]+\frac{1}{2} (m_{1}-1)\lambda_{1}Q_{1}\] \[+ \frac{\lambda_{1}}{2}-\ln 2-\frac{1}{m_{1}}\ln\int Dy\ \cosh^{m_{1}}( \sqrt{\lambda_{1}}y).\] The saddle point equations are as follows: \[\lambda_{1}=\frac{(\beta\tilde{J})^{2}}{2}pQ_{1}^{p-1}, \tag{64}\] and \[Q_{1}=\frac{\int Dy\ \cosh^{m_{1}}(\sqrt{\lambda_{1}}y)\tanh^{2}(\sqrt{ \lambda_{1}}y)}{\int Dy\ \cosh^{m_{1}}(\sqrt{\lambda_{1}}y)}. \tag{65}\] There is another saddle point equation which is obtained by varying the free energy with respect to \(m_{1}\): \[\frac{(\beta\tilde{J})^{2}}{4}Q_{1}^{p}(p-1)+\frac{1}{m_{1}^{2}} \ln\int Dy\ \cosh^{m_{1}}(\sqrt{\lambda_{1}}y)\] \[-\frac{1}{m_{1}}\frac{\int Dy\ \cosh^{m_{1}}(\sqrt{\lambda_{1}}y) \ln(\cosh(\sqrt{\lambda_{1}}y))}{\int Dy\ \cosh^{m_{1}}(\sqrt{\lambda_{1}}y)}=0. \tag{66}\] Again, when \(m_{1}=1\), \(F_{\rm 1RSB}\) becomes equal to \(F_{\rm RS}\). We determine the temperature \(T_{\rm 1RSB}^{\infty}\) by setting \(m_{1}=1\). Using \(\int Dy\ \cosh(\sqrt{\lambda_{1}}y)=e^{\lambda_{1}/2}\), we can combine Eqs. (65), (66) and (64) to get \[e^{-\lambda_{1}/2} \int Dy\ \cosh(\sqrt{\lambda_{1}}y)\Big{[}\ln\cosh(\sqrt{\lambda_{1}}y)\] \[-\frac{p-1}{2p}\lambda_{1}\tanh^{2}(\sqrt{\lambda_{1}}y)\Big{]}- \frac{\lambda_{1}}{2}=0. \tag{67}\] If we define \[\nu\equiv\sqrt{\lambda_{1}}, \tag{68}\] then the above equation can be rewritten as \(f_{\infty}(\nu)=0\) where \[f_{\infty}(\nu)\equiv e^{-\nu^{2}/2}\int Dy\ \Big{[}\cosh(\nu y)\ln\cosh( \nu y)\] \[-\frac{p-1}{2p}\nu^{2}\frac{\sinh^{2}(\nu y)}{\cosh(\nu y)}\Big{]} -\frac{\nu^{2}}{2}. \tag{69}\] This is to be compared with the corresponding Eq. (41) for finite \(M\). In Fig. 6, \(f_{\infty}(\nu)\) is plotted for \(p=4\). From the nonzero solution and from the corresponding \(Q_{1}\) in Eq. (65) and the relation Eq. (64), we obtain \(T_{\rm 1RSB}^{\infty}/\tilde{J}\simeq 0.61688\) or in the original unit \[T_{\rm 1RSB}^{\infty}/J\simeq 0.30844>T_{\infty}^{*}. \tag{70}\] For \(f(\nu)\), the small-\(\nu\) expansion yields \[f_{\infty}(\nu)= \left(\frac{2-p}{4p}\right)\nu^{4}+\left(\frac{2p-3}{6p}\right) \nu^{6}\] \[+\left(\frac{5(4-3p)}{24p}\right)\nu^{8}+O(\nu^{10}). \tag{71}\] We can see that for \(p>2\), \(f_{\infty}(\nu)\) has a negative slope near the origin. For \(p=2\), the leading order term is \(\nu^{6}\) with a positive coefficient. ### The FRSB Solution Here we consider the FRSB solutions. 
We first write the free energy in terms of the Parisi function \(q(x)\) for \(0\leq x\leq 1\). It is given by \[\frac{\beta F_{\rm FRSB}}{N}=-C\beta^{2}-M\ln 2-\tau\langle q^{2}\rangle\] \[-w_{1}\int_{0}^{1}dx\ \left\{xq^{3}(x)+3q(x)\int_{0}^{x}dy\;q^{2}(y)\right\}+w_{2}\langle q^{3}\rangle\] \[+y_{1}\langle q^{4}\rangle+y_{2}\Big{\{}\langle q^{4}\rangle-2\langle q^{2}\rangle^{2}\] \[\qquad\qquad-\int_{0}^{1}dx\int_{0}^{x}dy\;(q^{2}(x)-q^{2}(y))^{2}\Big{\}}\] \[-y_{3}\Big{\{}2\langle q\rangle\langle q^{3}\rangle+\int_{0}^{1}dx\;q^{2}(x)\int_{0}^{x}dy\;(q(x)-q(y))^{2}\Big{\}}\] \[-y_{5}\Big{\{}\langle q^{2}\rangle^{2}-4\langle q\rangle^{2}\langle q^{2}\rangle\] \[\qquad-4\langle q\rangle\int_{0}^{1}dx\;q(x)\int_{0}^{x}dy\;(q(x)-q(y))^{2}\] \[\qquad-\int_{0}^{1}dx\int_{0}^{x}dy\int_{0}^{x}dz\;(q(x)-q(y))^{2}(q(x)-q(z))^{2}\Big{\}}\] \[+z_{1}\langle q^{5}\rangle, \tag{72}\] where \[\langle q^{k}\rangle=\int_{0}^{1}q^{k}(x)dx \tag{73}\] and we have only kept the first quintic term. The FRSB expressions for the rest of the quintic terms are given in Appendix E. Because the stationarity equations for the FRSB functional are so cumbersome, we have relegated them to Appendices D and E. We can only make progress in solving these equations at the quintic level by making simplifications. The full set of quintic terms is given in Appendix E, but in Eq. (72) we have reduced them from 9 terms to just one. A similar device was used by Parisi [23] at quartic level when he retained only the \(y_{1}\) term. Subsequent studies have shown that the physics was hardly changed by such an approximation, but numerical values do get modified. We choose the numerical value of that single \(z_{1}\) to equal \(z_{1}^{\rm eff}\) in Eq. (109). A second simplification was to set \(y_{5}=0\). When this is done the differential equation of Eq. (108) can be solved analytically. With \(y_{5}\) set to zero we do not think this does much harm to the physics of the problem. For example, the Goldbart-Elderfield singularity [21] still arises. But without the approximations of retaining only the \(z_{1}\) term and setting \(y_{5}\) to zero, the numerical work required for a solution would have been much harder. Fortunately at quartic level, that is, if we set \(z_{1}=0\), one can solve the differential equation for \(q(x)\), Eq. (108), analytically. There is no need to set \(y_{5}\) to zero when just working at quartic level. Because it is a first order differential equation, its solution depends on one adjustable constant \(x_{0}\). The result is \[q(x)=\frac{w_{1}y_{3}-2w_{2}y_{5}-\frac{2(y_{3}-2xy_{5})(y_{3}^{2}-4y_{1}y_{5})x_{0}}{\sqrt{y_{1}-xy_{3}+x^{2}y_{5}}}}{2(-y_{3}^{2}+4y_{1}y_{5})}. \tag{74}\] Physical requirements on the choice of \(x_{0}\) are that for some interval \(0<x_{1}<x<x_{2}<1\), \(q(x)\) is real, an increasing function of \(x\), and positive. For the solutions discussed in this paper, \(x_{1}\) is the point where \(q(x_{1})=0\), and solving this equation gives us \(x_{1}\) as a function of \(x_{0}\). The upper breakpoint, \(x_{2}\), is where \(q(x)\) takes the constant value \(q(x_{2})\) in the interval \(1>x>x_{2}\). Its value as a function of \(x_{0}\) is determined by solving Eq. (107) at the value \(x=x_{2}\); this relates the value of \(x_{2}\) to \(x_{0}\). The value of \(x_{0}\) itself is determined by choosing it so that the right-hand side of Eq. (104) vanishes, which can be imposed at any value of \(x>x_{1}\). 
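It may help to note explicitly that the combination \(y_{1}-xy_{3}+x^{2}y_{5}\) appearing under the square root in Eq. (74) controls where the solution can become singular. The short numerical check below (a sketch in Python using the bare quartic coefficients of Appendix A for \(2\leq M\leq 3\), with the common \(\beta^{8}\) factor dropped since it cancels in the quantities computed; the function names are illustrative only) locates that singularity and recovers the value \(M^{**}=2+\sqrt{2}/3\approx 2.47140\) obtained below by letting the \(\tau\to 0\) upper breakpoint \(x_{2}\to w_{2}/w_{1}=M-2\) collide with it.

```python
import numpy as np

def bare_quartic(M):
    # Bare quartic couplings y1, y3, y5 of Appendix A for 2 <= M <= 3,
    # with the overall beta**8 prefactor dropped (it cancels below).
    K = M * (M - 1) / 2.0
    y1 = K / (12.0 * M**4)
    y3 = K * (M - 2.0) / (2.0 * M**4)
    y5 = K / (8.0 * M**4)
    return y1, y3, y5

def Y(x, M):
    y1, y3, y5 = bare_quartic(M)
    return y1 - x * y3 + x**2 * y5   # the combination under the square root in Eq. (74)

def x_singular(M):
    # Smaller root of Y(x) = 0: the Goldbart-Elderfield singularity x_s
    y1, y3, y5 = bare_quartic(M)
    disc = y3**2 - 4.0 * y1 * y5
    return (y3 - np.sqrt(disc)) / (2.0 * y5) if disc >= 0.0 else np.inf

# compare the tau -> 0 breakpoint x2 = M - 2 with x_s for two values of M
for M in (2.3, 2.6):
    print(M, M - 2.0, x_singular(M))

# M** is where Y(M - 2) = 0; simple bisection on 2 < M < 3
lo, hi = 2.05, 2.99
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if Y(mid - 2.0, mid) > 0.0 else (lo, mid)
print(0.5 * (lo + hi), 2.0 + np.sqrt(2.0) / 3.0)   # both ~ 2.47140
```

For \(M\) above this value the singularity falls inside the would-be FRSB interval, and, as discussed below, the system instead takes the 1RSB form there.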
The FRSB solution for the case \(M=2.25\) at a value of \(\tau=-0.001\) is shown in Fig. 7. It is contrasted with the form of \(q(x)\) for the 1RSB case at the same values of \(M\) and \(\tau\). Figure 7: Plots of \(q(x)\) for the FRSB solution (red) and the 1RSB solution (black) at \(M=2.25\) at \(\tau=-0.001\). The FRSB state is the equilibrium state as it has the higher free energy. These plots are for the quartic theory. \(x_{1}\) for the FRSB solution is where \(q(x)\) goes to zero, \(x_{1}\approx 0.24437\), while the upper breakpoint \(x_{2}\approx 0.25870\). Figure 8: Plots of \(q(x)\) for the FRSB solution (red) and the 1RSB solution (black) at \(M=2.50\) for \(\tau=-0.01\). This calculation has been done at quintic level, with just one quintic coefficient, with \(z_{1}=z_{1}^{\rm eff}\) and with \(y_{5}=0\), for both the FRSB and 1RSB solutions, in order to simplify the numerical work in the FRSB case. At this value of \(M\), the first transition is to the 1RSB state at \(\tau=0\), but below the Gardner transition temperature \(T_{G}\) (which corresponds to a value of \(\tau_{G}\approx-0.0078\)) there is a transition to a state with FRSB. Below \(T_{G}\), this FRSB state has a higher free energy than the corresponding 1RSB state. Note that there is an inverse square root singularity in \(q(x)\) when \(x=x_{s}\), where \(y_{1}-x_{s}y_{3}+x_{s}^{2}y_{5}=0\), but this singularity, the Goldbart-Elderfield singularity [21], causes no problem so long as it occurs at a value of \(x_{s}\) which is greater than \(x_{2}\) or less than \(x_{1}\). In the limit \(\tau\to 0\), \(q(x)\) also goes to zero (\(\sim|\tau|\)) so Eq. (10) fixes \(x_{2}\to w_{2}/w_{1}=(M-2)\). Hence a FRSB solution can only exist if \(x_{2}<x_{s}\), which translates to \(M<M^{**}=2+\sqrt{2}/3\approx 2.47140\). The free energies of the FRSB and the 1RSB states differ at order \(\tau^{5}\), and we have found numerically that the coefficient of this term goes towards zero as \(M\to M^{**}\). One might have thought that one could not ignore the quintic terms when determining \(M^{**}\) as they too give a contribution of \(O(\tau^{5})\). However, in the limit when \(\tau\to 0\), both the 1RSB and the FRSB solutions have their upper breakpoints at \(w_{2}/w_{1}\) and at small \(\tau\) the value of \(q(x)\) on the plateau is the same for both solutions (see Fig. 7). The form of \(q(x)\) for the two solutions only differs in the interval between \(x_{2}\) and \(x_{1}\), and \(x_{2}-x_{1}\sim|\tau|\) itself, so in the integrals for the free energy, Eq. (11), the plateau regions give the contribution of \(O(|\tau|^{5})\), which is the same for both solutions, and the region of \(x\) where the solutions differ only contributes to the higher order terms in \(\tau\). For \(3>M>M^{**}\) the continuous transition is to the 1RSB state. For \(M>3\), that is for \(w_{2}/w_{1}>1\), the transition is discontinuous and is to the 1RSB state. We were unable to find a solution with FRSB which had a higher free energy than the 1RSB solution at the discontinuous transition itself. While the quintic terms are not needed to determine the value of \(M^{**}\), it was pointed out years ago that they are needed to obtain the Gardner transition [1]. This is the transition which arises in the 1RSB state and it is to a state with FRSB. 
Provided we set \(y_{5}\) to zero and just retain one of the quintic terms, \(z_{1}\), MATHEMATICA can analytically solve the first order differential equation, but its explicit form is so long that we have not included it in this paper. In Fig. 8 we show the resulting FRSB solution and the 1RSB solution with the same parameters when \(M=2.50\) at a temperature below the Gardner transition temperature, so that the FRSB state has a higher free energy than the 1RSB state. Curiously the form of the FRSB solution is nothing like that given in Ref. [1]. They claimed that the continuously varying feature of \(q(x)\) grew from the upper plateau. However, our solution is very similar to the FRSB solution for \(M<M^{**}\), and it seems natural to us that at low enough temperature that solution should smoothly extend into the region \(M>M^{**}\) as \(M\) is increased. A feature of the Gardner solution is that right at the critical temperature \(T_{G}\), where the Gardner state has a free energy just equal to that of the 1RSB state, its \(q(x)\) is such that its derivative \(dq(x)/dx\) is infinite right at the lower break point \(x_{1}\). This is because at \(T_{G}\) the Goldbart-Elderfield singularity of the quintic order solution is just at \(x_{1}\). As the temperature is reduced below \(T_{G}\), this singularity occurs below \(x_{1}\), and \(dq(x)/dx\) is finite at \(x_{1}\) (as in Fig. 8). For \(T>T_{G}\), the FRSB solution ceases to exist. Figure 9 is a schematic phase diagram showing the phases which we have found in the balanced \(M\)-\(p\) model as a function of \(M\). To find the Gardner phase we had to use the Landau expansion to quintic order. In the next section we shall discuss the effects of the fluctuation corrections to the mean-field theory and argue that in dimensions \(d<8\) the phase diagram becomes radically different to its mean-field form. ## III Discussion of fluctuation corrections and behavior in finite dimensions Most of this paper has been concerned with calculations at mean-field level. Our motivation for studying these was that we wished to move towards the inclusion of fluctuations about the mean-field solutions by using RG equations to renormalize the numerous coupling constants (\(\tau\), \(w_{1}\), \(w_{2}\), \(y_{1}\), \(\cdots\), \(y_{5}\), \(z_{1}\), \(\cdots\), \(z_{9}\)) until they lie in the region where fluctuations have become small and mean-field theory becomes accurate. This is the same program as followed by Holler and Read [16] for the de Almeida-Thouless (AT) transition [17]. This is the transition of the Ising spin glass in a field \(h\): in the \(h-T\) phase diagram there is a line, the de Almeida-Thouless line, which separates the high-temperature paramagnetic (replica symmetric) phase from a state with some version of replica symmetry breaking. The field theory of our problem, Eq. (25), is identical to theirs and the reader should consult their paper for details. Figure 9: A schematic plot of the phase diagram as a function of \(T\) and \(M\), within the mean-field approximation. Phase boundaries associated with a continuous transition are drawn with colored dashed lines, while a solid line denotes a discontinuous transition. The FRSB transition for \(M>M^{**}\) is the Gardner transition. However, since their paper was written new simulations have suggested a possible extension of their approach, which we describe. We begin by briefly summarizing some of their results and procedures. 
Below \(d=8\) the quartic coefficients \(y_{1}\), \(y_{2}\), \(y_{3}\), \(y_{4}\) and \(y_{5}\) are dominated by the "box" diagrams: for dimensions \(8>d>6\) their bare values become negligible compared to the contribution of the box diagrams, which can be expressed in terms of the values of \(w_{1}\) and \(w_{2}\). For \(d>8\), a good approximation to their values is provided by the bare values of these coefficients. The important combination of coefficients \[\tilde{y}(x)=Y(x)=y_{1}-xy_{3}+x^{2}y_{5}, \tag{75}\] at the value of \(x\) corresponding to the upper break point \(x_{2}\) (which in the limit \(\tau\to 0\) has the value \(w_{2}/w_{1}\)) plays a key role in determining the nature of the state below the transition. When \(\tilde{y}(\rho)\) is positive (where \(\rho=w_{2}/w_{1}\)), the transition is to a state with FRSB, but if it is negative the transition is to a state with 1RSB. (This is how the value of \(M^{**}\) was determined in the mean-field calculations, by setting \(x=\rho=M-2\) and solving \(Y(x)=0\) for \(M\).) Holler and Read found from the box diagrams that \[\tilde{y}(\rho)=K_{d}w_{1}^{4}\rho^{2}(22-48\rho-32\rho^{2}-8\rho^{3}+\rho^{4})/(8-d), \tag{76}\] where \(K_{d}=2/(\Gamma(d/2)(4\pi)^{d/2})\) (provided \(\rho<1\)). Holler and Read studied in particular the RG flow equations in dimensions \(d=6+\epsilon\), where they could employ the Bray and Roberts [24] RG recursion relations. Using these recursion relations, one finds that under the RG transforms \(w_{1}\) and \(w_{2}\) scale down towards zero as \(\exp[-\frac{1}{2}\epsilon l]\). Both \(w_{1}\) and \(w_{2}\) approach their fixed point value (which is 0), but their ratio \(\rho=w_{2}/w_{1}\) approaches a constant as the RG scale parameter \(l\) goes to infinity. The Bray-Roberts recursion relations are only valid if \(w_{1}\) and \(w_{2}\) are of \(O(\sqrt{\epsilon})\) and lie for \(d>6\) within the basin of attraction of the Gaussian fixed point at \(w_{1}=w_{2}=0\). The bare values of \(w_{1}\) and \(w_{2}\) are of \(O(1)\) and so do not lie within the basin of attraction. The fluctuation corrections must somehow first modify the values of \(w_{1}\) and \(w_{2}\) so that the RG calculation can proceed. It is the numerical value of \(\rho\) in the large \(l\) limit which determines whether \(\tilde{y}(\rho)\) is positive or negative. The polynomial in Eq. (76) is such that \(\tilde{y}(\rho)\) is positive provided \(\rho<0.8418\). Holler and Read did not determine the ratio \(\rho\). We shall argue that its value is universal at least for values of \(d<8\) and that \(\rho=0.5\). Then as \(0.5<0.8418\), the state formed will have FRSB and so is in the universality class of the Ising spin glass in a field. The key to understanding this is the real space RG calculation of Angelini and Biroli [25]. This suggested that the transition at the AT line in high dimensions might be controlled by a zero-temperature fixed point. They found, in a simple real-space RG approximation, that in high enough dimensions the RG flows of \(h\) and \(J\) (the standard deviation of the bond distribution), which start close to their values on the AT line at some non-zero temperature, first flow close to their values on the AT line at zero temperature, but then veer away up the \(h\)-axis at \(T=0\). Then the flow is away from the fixed point at \(T=0\) and \(h=h_{AT}\), where \(h_{AT}\) is the value of the field \(h\) on the AT line at \(T=0\). 
In other words the RG flow is controlled by a zero temperature fixed point. Because their RG procedure (the Migdal-Kadanoff approximation) works well only in low dimensions it was uncertain whether their zero-temperature fixed point scenario in high dimensions should be trusted. However, we believe that the recent simulations in six dimensions of Ref. [18] strongly suggest that it should be. These simulations showed that in six dimensions the renormalized vertices related to the "bare" couplings \(w_{1}\) and \(w_{2}\) were such that their ratio was close to \(1/2\). But this is the _same_ value (i.e. \(1/2\)) as was found at \(T=0\) in the mean-field like Bethe lattice calculation of the same renormalized vertices in Ref. [26]. We shall therefore take it that the renormalized value of \(\rho\) which should be inserted into Eq. (76) is \(1/2\). As a consequence the continuous transition from the high-temperature phase should be to a state with FRSB, and for \(d<8\) the continuous 1RSB transition should no longer occur. The same line of argument will also apply to the AT transition of spin glasses in a field. This is a transition from a paramagnetic high-temperature phase to a state with FRSB at lower temperatures. Such transitions have been extensively studied by simulations, the most recent being that of Bharadwaj et al. [27]. They found numerical evidence that the AT line might not exist below six dimensions. The absence of the AT line below six dimensions was argued for in Ref. [28], where it was suggested that as \(d\to 6\), \(h_{AT}^{2}\sim(d-6)\), where \(h_{AT}\) is the AT field at \(T=0\). If this is correct then in three dimensions there would be no phase transition to a state with replica symmetry breaking, but there could be long length scales according to the droplet picture of spin glasses in a field [29; 30; 31] and the Imry-Ma argument [32], especially if the field is small. That structural glasses might behave as the Ising spin glass in a field was suggested many years ago [33]. RG calculations are only useful when there exist long correlation length scales. At mean-field level, when \(\rho=w_{2}/w_{1}>1\) the transition to the 1RSB state is via a discontinuous transition at which there are no long correlation length scales. How do the fluctuation corrections affect such a transition? Our belief is that the effect of the fluctuations is to drive the value of the ratio \(w_{2}/w_{1}\) into the region where the transition is continuous. Certainly there is no sign of a discontinuous transition in real space RG calculations such as that of Ref. [12]. Nor was there any sign of a discontinuous transition in the AT line simulations of Ref. [27]. At present we cannot really exclude the possibility of a discontinuous transition in physical dimensions, but we note once more that the simulations of Ref. [9] found no evidence for such a transition at \(M=4\) in three dimensions. Fig. 10 summarizes our expected form of the phase diagram, first for \(6<d<8\) and then for \(d<6\). The chief omission of our work is therefore a firm conclusion on the possible existence of a discontinuous transition and on its dependence on the dimensionality \(d\) of the system. The only way forward for investigating this question, especially in high dimensions close to or above \(d=6\), would seem to be simulations of the one-dimensional proxy models. In these proxy models the form of the long range interactions between the spins can be tuned to mimic behavior in \(d\) dimensions. 
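As an illustration of the kind of construction meant here, the sketch below sets up couplings of the balanced \(p=4\) form of Eq. (1) on a one-dimensional ring in which a pair of sites interacts with a probability that decays as a power of their separation. The decay exponent \(\alpha\) that would mimic a given dimension \(d\), the normalization conventions, and the function names are assumptions of this sketch rather than prescriptions taken from the proxy-model literature.

```python
import numpy as np

def proxy_couplings(N, M, alpha, J=1.0, seed=0):
    """Sketch of a one-dimensional long-range proxy for the balanced p=4 model.

    Sites sit on a ring of length N.  A pair of sites (x, y) interacts with
    probability min(1, r**(-alpha)), r being the ring distance; retained
    couplings are Gaussian with zero mean and variance J**2/(z*M**3), where z
    is the mean number of interacting partners, so that the fully connected
    variance of Eq. (2) is recovered (up to 1/N corrections) when alpha = 0.
    The exponent alpha mimicking a given dimension d is an input assumption.
    """
    rng = np.random.default_rng(seed)
    pairs = [(i, j) for i in range(M) for j in range(i + 1, M)]  # K spin pairs per site
    K = len(pairs)

    def p_keep(x, y):
        r = min(abs(x - y), N - abs(x - y))
        return min(1.0, float(r) ** (-alpha))

    z = sum(p_keep(0, y) for y in range(1, N))     # mean coordination number
    sigma = J / np.sqrt(z * M**3)
    couplings = {}                                 # (x, y, a, b) -> coupling value
    for x in range(N):
        for y in range(x + 1, N):
            if rng.random() < p_keep(x, y):
                for a in range(K):
                    for b in range(K):
                        couplings[(x, y, a, b)] = rng.normal(0.0, sigma)
    return pairs, couplings

def energy(S, pairs, couplings):
    # S has shape (N, M) with entries +-1; evaluates the retained part of Eq. (1).
    E = 0.0
    for (x, y, a, b), Jxy in couplings.items():
        i1, i2 = pairs[a]
        j1, j2 = pairs[b]
        E -= Jxy * S[x, i1] * S[x, i2] * S[y, j1] * S[y, j2]
    return E
```

The couplings are stored sparsely, since once the interaction is diluted most pairs of sites carry no bond at all.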
Indeed, for the case \(p=3\), \(M=2\) that has already been done [34]. Alas, at mean-field level this model has \(w_{2}/w_{1}<1\), so it would not be expected to have a discontinuous transition, and indeed there was no sign of one in the simulations. The case when \(p=3\) and \(M=3\) has \(w_{2}/w_{1}=2\) [19] and so might be a good model to simulate as it should have a clear discontinuous transition. The model of the type studied in this paper, \(p=4\) but with \(M=4\), could also be a good model to simulate using the one-dimensional proxy construction: it also has \(w_{2}/w_{1}=2\). ###### Acknowledgements. We would like to thank Jairo de Almeida for sharing his notes dating from the seventies on the quintic terms in the presence of FRSB for the Ising spin glass. ## Appendix A Expansion of the free energy to quintic order in the order parameter We expand Eq. (8) to \(O(\mu^{5})\). We first write \(L\equiv 2^{nM}L^{\prime}\), where \[L^{\prime}\equiv\mathrm{Tr}^{\prime}_{\{S^{a}_{i}\}}\exp\left[\frac{1}{2M}\sum_{(a,b)}\mu_{ab}f_{ab}\right]. \tag{10}\] Here \(\mathrm{Tr}^{\prime}\equiv 2^{-nM}\mathrm{Tr}\) satisfies \(\mathrm{Tr}^{\prime}_{\{S^{a}_{i}\}}1=1\), and we define \[f_{ab}\equiv\sum_{\alpha=1}^{K}\Psi^{a}_{\alpha}\Psi^{b}_{\alpha}, \tag{11}\] where \(\mathbf{\Psi}^{a}=(S^{a}_{1}S^{a}_{2},S^{a}_{1}S^{a}_{3},\cdots,S^{a}_{M-1}S^{a}_{M})\) is a \(K\)-dimensional vector for each replica index \(a\) with components \(\Psi^{a}_{\alpha}\), \(\alpha=1,2,\cdots,K\equiv M(M-1)/2\). The expansion of \(L^{\prime}\) to \(O(\mu^{5})\) has the following structure: \[L^{\prime}=1+\tilde{t}_{2}\sum_{(a,b)}\mu_{ab}^{2}+\tilde{w}_{1}\sum_{(a,b,c)}\mu_{ab}\mu_{bc}\mu_{ca}+\tilde{w}_{2}\sum_{(a,b)}\mu_{ab}^{3}\] \[+\tilde{y}_{1}\sum_{(a,b)}\mu_{ab}^{4}+\tilde{y}_{2}\sum_{(a,b,c)}\mu_{ab}^{2}\mu_{bc}^{2}+\tilde{y}_{3}\sum_{(a,b,c)}\mu_{ab}^{2}\mu_{bc}\mu_{ca}\] \[+\tilde{y}_{5}\sum_{(a,b,c,d)}\mu_{ab}\mu_{bc}\mu_{cd}\mu_{da}+\tilde{d}_{1}\sum_{(a,b,c,d)}\mu_{ab}^{2}\mu_{cd}^{2}\] \[+\tilde{z}_{1}\sum_{(a,b)}\mu_{ab}^{5}+\tilde{z}_{2}\sum_{(a,b,c)}\mu_{ab}^{3}\mu_{bc}^{2}+\tilde{z}_{3}\sum_{(a,b,c)}\mu_{ab}^{3}\mu_{bc}\mu_{ca}\] \[+\tilde{z}_{4}\sum_{(a,b,c)}\mu_{ab}^{2}\mu_{bc}^{2}\mu_{ca}+\tilde{z}_{5}\sum_{(a,b,c,d)}\mu_{ab}^{2}\mu_{bc}\mu_{cd}\mu_{da}\] \[+\tilde{z}_{6}\sum_{(a,b,c,d)}\mu_{ab}^{2}\mu_{bc}\mu_{cd}\mu_{db}+\tilde{z}_{7}\sum_{(a,b,c,d)}\mu_{ab}^{2}\mu_{bc}\mu_{cd}^{2}\] \[+\tilde{z}_{8}\sum_{(a,b,c,d)}\mu_{ab}\mu_{bc}\mu_{cd}\mu_{da}\mu_{ac}+\tilde{z}_{9}\sum_{(a,b,c,d,e)}\mu_{ab}\mu_{bc}\mu_{cd}\mu_{de}\mu_{ea}\] \[+\tilde{d}_{2}\sum_{(a,b,c,d)}\mu_{ab}^{3}\mu_{cd}^{2}+\tilde{d}_{3}\sum_{(a,b,c,d,e)}\mu_{ab}^{2}\mu_{cd}\mu_{de}\mu_{ec}. \tag{12}\] Here \((a,b)\), \((a,b,c)\), \((a,b,c,d)\) etc. indicate that the sums are over all _distinct_ replica indices. The coefficients are obtained by taking the trace over the spins as we explain below. In order to calculate the free energy, we have to take the logarithm of \(L^{\prime}\) and expand \(\ln(1+x)\) to \(O(\mu^{5})\). There are three contributions at this order coming from the \(-(1/2)x^{2}\) part. 
They are \[-\frac{1}{2}\tilde{t}_{2}^{2}\sum_{(a,b)}\mu_{ab}^{2}\sum_{(c,d) }\mu_{cd}^{2} \tag{13}\] \[= -\frac{1}{2}\tilde{t}_{2}^{2}\left[2\sum_{(a,b)}\mu_{ab}^{4}+4 \sum_{(a,b,c)}\mu_{ab}^{2}\mu_{bc}^{2}+\sum_{(a,b,c,d)}\mu_{ab}^{2}\mu_{cd}^{ 2}\right],\] \[-\frac{1}{2}\cdot 2\tilde{t}_{2}\tilde{w}_{1}\sum_{(a,b)}\mu_{ab}^{2} \sum_{(c,d,e)}\mu_{cd}\mu_{de}\mu_{ec} \tag{14}\] \[= -\tilde{t}_{2}\tilde{w}_{1}\Big{[}6\sum_{(a,b,c)}\mu_{ab}^{3}\mu_ {bc}\mu_{ca}+6\sum_{(a,b,c,d)}\mu_{ab}^{2}\mu_{bc}\mu_{cd}\mu_{db}\] \[+\sum_{(a,b,c,d)}\mu_{ab}^{2}\mu_{cd}\mu_{de}\mu_{ce}\Big{]},\] Figure 10: Schematic phase diagrams after allowing for the effect of fluctuation corrections to the mean-field phase diagram of Fig. 9 for (a) \(6<d<8\) and (b) \(d<6\). For \(d<6\) it is hypothesized that there is only one phase present, the high-temperature paramagnetic phase. In the region \(6<d<8\) there is a continuous transition from the paramagnetic phase to a state with FRSB. and \[-\frac{1}{2}\cdot 2\tilde{t}_{2}\tilde{w}_{2}\sum_{(a,b)}\mu_{ab}^{2} \sum_{(c,d)}\mu_{cd}^{3} \tag{10}\] \[= -\tilde{t}_{2}\tilde{w}_{2}\Big{[}2\sum_{(a,b)}\mu_{ab}^{5}+4\sum_ {(a,b,c)}\mu_{ab}^{2}\mu_{bc}^{3}+\sum_{(a,b,c,d)}\mu_{ab}^{2}\mu_{cd}^{3}\Big{]}.\] Note that the last terms in Eqs. (11), (12) and (10) as well as the terms in Eq. (10) with coefficients, \(\tilde{d}_{i}\), \(i=1,2,3\) have disconnected parts. When we take the trace over the spins, we have to keep in mind that the Ising spins must be paired to give nonvanishing contribution. For example, we have \(\text{Tr}^{f}f_{ab}=0\) for \(a\neq b\). We evaluate the first few sets of coefficients as follows. \[\tilde{t}_{2} =\frac{1}{2!}\frac{2}{(2M)^{2}}\ \text{Tr}^{f}f_{ab}^{2}=\frac{1}{2!} \frac{1}{(2M)^{2}}2K=\frac{K}{4M^{2}}, \tag{11}\] \[\tilde{w}_{1} =\frac{1}{3!}\frac{8}{(2M)^{3}}\ \text{Tr}^{f}f_{ab}f_{bc}f_{ca}= \frac{1}{3!}\frac{1}{(2M)^{3}}8K=\frac{K}{6M^{3}},\] (12) \[\tilde{w}_{2} =\frac{1}{3!}\frac{4}{(2M)^{3}}\ \text{Tr}^{f}f_{ab}^{3}=\frac{1}{3!} \frac{1}{(2M)^{3}}4M(M-1)(M-2)\] \[=\frac{K}{6M^{3}}(M-2), \tag{13}\] and \[\tilde{d}_{1} =\frac{1}{4!}\frac{12}{(2M)^{4}}\ \text{Tr}^{f}f_{ab}^{2}f_{cd}^{2}= \frac{1}{4!}\frac{1}{(2M)^{4}}12K^{2}, \tag{14}\] \[\tilde{d}_{2} =\frac{1}{5!}\frac{80}{(2M)^{5}}\ \text{Tr}^{f}f_{ab}^{3}f_{cd}^{2}\] \[=\frac{1}{5!}\frac{1}{(2M)^{5}}80KM(M-1)(M-2),\] (15) \[\tilde{d}_{3} =\frac{1}{5!}\frac{160}{(2M)^{5}}\ \text{Tr}^{f}f_{ab}f_{bc}f_{ca}f_{de}^{2}= \frac{1}{5!}\frac{1}{(2M)^{5}}160K^{2}, \tag{16}\] Here all replica indices are distinct. One can see that \(\tilde{d}_{1}=\tilde{t}_{2}^{2}/2\), \(\tilde{d}_{2}=\tilde{t}_{2}\tilde{w}_{2}\) and \(\tilde{d}_{3}=\tilde{t}_{2}\tilde{w}_{1}\). Therefore all the disconnected terms in \(\ln L^{\prime}\) vanish. 
We therefore have \[\ln L^{\prime} =\tilde{t}_{2}\sum_{(a,b)}\mu_{ab}^{2}+\tilde{w}_{1}\sum_{(a,b,c) }\mu_{ab}\mu_{bc}\mu_{ca}+\tilde{w}_{2}\sum_{(a,b)}\mu_{ab}^{3}\] \[+\big{(}\tilde{y}_{1}-\tilde{t}_{2}^{2}\big{)}\sum_{a,b}\mu_{ab}^ {4}+\big{(}\tilde{y}_{2}-2\tilde{t}_{2}^{2}\big{)}\sum_{(a,b,c)}\mu_{ab}^{2} \mu_{bc}^{2}+\tilde{y}_{3}\sum_{(a,b,c)}\mu_{ab}^{2}\mu_{bc}\mu_{ca}+\tilde{ y}_{5}\sum_{(a,b,c,d)}\mu_{ab}\mu_{bc}\mu_{cd}\mu_{da}\] \[+\big{(}\tilde{z}_{1}-2\tilde{t}_{2}\tilde{w}_{2}\big{)}\sum_{(a, b)}\mu_{ab}^{5}+\big{(}\tilde{z}_{2}-4\tilde{t}_{2}\tilde{w}_{2}\big{)}\sum_{(a,b,c)} \mu_{ab}^{3}\mu_{bc}^{2}+\big{(}\tilde{z}_{3}-6\tilde{t}_{2}\tilde{w}_{1} \big{)}\sum_{(a,b,c)}\mu_{ab}^{3}\mu_{bc}\mu_{ca}+\tilde{z}_{4}\sum_{(a,b,c)} \mu_{ab}^{2}\mu_{bc}^{2}\mu_{ca}\] \[+\tilde{z}_{5}\sum_{(a,b,c,d)}\mu_{ab}^{2}\mu_{bc}\mu_{cd}\mu_{da} +\big{(}\tilde{z}_{6}-6\tilde{t}_{2}\tilde{w}_{1}\big{)}\sum_{(a,b,c,d)}\mu_{ ab}^{2}\mu_{bc}\mu_{cd}\mu_{db}+\tilde{z}_{7}\sum_{(a,b,c,d)}\mu_{ab}^{2}\mu_{bc} \mu_{cd}^{2}\] \[+\tilde{z}_{8}\sum_{(a,b,c,d)}\mu_{ab}\mu_{bc}\mu_{cd}\mu_{da} +\tilde{z}_{9}\sum_{(a,b,c,d,e)}\mu_{ab}\mu_{bc}\mu_{cd}\mu_{de}\mu_{ea}. \tag{17}\] The first quartic coefficient is given by \[\tilde{y}_{1} =\frac{1}{4!}\frac{8}{(2M)^{4}}\ \text{Tr}^{f}f_{ab}^{4}\] \[=\frac{1}{4!}\frac{8}{(2M)^{4}}\Big{[}K+3K(K-1)\] \[\qquad\qquad\qquad+3M(M-1)(M-2)(M-3)\Big{]}. \tag{18}\] This is valid for \(M\geq 3\). For \(2\leq M\leq 3\), there are not enough spins whose combination makes the second term in the square bracket. Therefore, the square bracket must be just \(K+3K(K-1)\) for \(2\leq M\leq 3\). The rest of them are \[\tilde{y}_{2} =\frac{1}{4!}\frac{48}{(2M)^{4}}\ \text{Tr}^{f}f_{ab}^{2}f_{bc}^{2}= \frac{1}{4!}\frac{48}{(2M)^{4}}K^{2}, \tag{19}\] \[\tilde{y}_{3} =\frac{1}{4!}\frac{96}{(2M)^{4}}\ \text{Tr}^{f}f_{ab}^{2}f_{bc}f_{ca}\] \[=\frac{1}{4!}\frac{96}{(2M)^{4}}M(M-1)(M-2), \tag{20}\] and \[\tilde{y}_{5} =\frac{1}{4!}\frac{48}{(2M)^{4}}\ \text{Tr}^{f}f_{ab}f_{bc}f_{cd}f_{ da}=\frac{1}{4!}\frac{48}{(2M)^{4}}K. \tag{21}\] These are valid for \(M\geq 2\). We obtain the first quintic coefficient as \[\tilde{z}_{1} = \frac{1}{5!}\frac{16}{(2M)^{5}}\ {\rm Tr}^{\prime}f_{ab}^{5}\] \[= \frac{1}{5!}\frac{16}{(2M)^{5}}\Big{[}10M(M-1)(M-2)K\] \[\qquad\qquad+12M(M-1)(M-2)(M-3)(M-4)\Big{]}.\] This is valid for \(M\geq 4\). For \(2\leq M\leq 4\), the second term in the square bracket should be dropped for the same reason as given for \(\tilde{y}_{1}\). The next coefficient is given for \(M\geq 2\) as \[\tilde{z}_{2} = \frac{1}{5!}\frac{320}{(2M)^{5}}\ {\rm Tr}^{\prime}f_{ab}^{3}f_{bc}^{2} \tag{101}\] \[= \frac{1}{5!}\frac{320}{(2M)^{5}}M(M-1)(M-2)K,\] The third and fourth quintic coefficients are given by \[\tilde{z}_{3} = \frac{1}{5!}\frac{320}{(2M)^{5}}\ {\rm Tr}^{\prime}f_{ab}^{3}f_{bc}f_{ca} \tag{102}\] \[= \frac{1}{5!}\frac{320}{(2M)^{5}}\Big{[}K+3K(K-1)\] \[\qquad\qquad+3M(M-1)(M-2)(M-3)\Big{]},\] and \[\tilde{z}_{4} = \frac{1}{5!}\frac{480}{(2M)^{5}}\ {\rm Tr}^{\prime}f_{ab}^{2}f_{bc}^{2}f_{ca} \tag{103}\] \[= \frac{1}{5!}\frac{480}{(2M)^{5}}\Big{[}2M(M-1)(M-2)\] \[\qquad\qquad+2M(M-1)(M-2)(M-3)\Big{]}.\] Again these expressions are valid only for \(M\geq 3\). For \(2\leq M\leq 3\), the second terms in the square brackets in Eqs. (102) and (103) do not appear. 
The remaining quintic coefficients are given by \[\tilde{z}_{5} = \frac{1}{5!}\frac{960}{(2M)^{5}}\ {\rm Tr}^{\prime}f_{ab}^{2}f_{bc}f_{cd}f_{da} \tag{104}\] \[= \frac{1}{5!}\frac{960}{(2M)^{5}}M(M-1)(M-2),\] \[\tilde{z}_{6} = \frac{1}{5!}\frac{960}{(2M)^{5}}\ {\rm Tr}^{\prime}f_{ab}^{2}f_{bc}f_{cd}f_{ db}=\frac{1}{5!}\frac{960}{(2M)^{5}}K^{2}, \tag{105}\] \[\tilde{z}_{7} = \frac{1}{5!}\frac{480}{(2M)^{5}}\ {\rm Tr}^{\prime}f_{ab}^{2}f_{bc}f_{ cd}^{2}=0, \tag{106}\] \[\tilde{z}_{8} = \frac{1}{5!}\frac{960}{(2M)^{5}}\ {\rm Tr}^{\prime}f_{ab}f_{bc}f_{ cd}f_{da}f_{ac} \tag{107}\] \[= \frac{1}{5!}\frac{960}{(2M)^{5}}M(M-1)(M-2),\] and \[\tilde{z}_{9} = \frac{1}{5!}\frac{384}{(2M)^{5}}\ {\rm Tr}^{\prime}f_{ab}f_{bc}f_{ cd}f_{de}f_{ea}=\frac{1}{5!}\frac{384}{(2M)^{5}}K. \tag{108}\] These expressions are valid for all \(M\geq 2\). We now convert the summations over replica indices in Eq. (A) into those without any restriction. We obtain \[\ln L^{\prime}=t_{2}^{\prime}\sum_{a,b}\mu_{ab}^{2}+w_{1}^{\prime }\sum_{a,b,c}\mu_{ab}\mu_{bc}\mu_{ca}+w_{2}^{\prime}\sum_{a,b}\mu_{ab}^{3} \tag{109}\] \[+ y_{1}^{\prime}\sum_{a,b}\mu_{ab}^{4}+y_{2}^{\prime}\sum_{a,b,c} \mu_{ab}^{2}\mu_{bc}^{2}+y_{3}^{\prime}\sum_{a,b,c}\mu_{ab}^{2}\mu_{bc}\mu_{ca}\] \[+ y_{5}^{\prime}\sum_{a,b,c,d}\mu_{ab}\mu_{bc}\mu_{cd}\mu_{da}+z_ {1}^{\prime}\sum_{a,b}\mu_{ab}^{5}+z_{2}^{\prime}\sum_{a,b,c}\mu_{ab}^{3}\mu_{ bc}^{2}\] \[+ z_{3}^{\prime}\sum_{a,b,c}\mu_{ab}^{3}\mu_{bc}\mu_{ca}+z_{4}^{ \prime}\sum_{a,b,c}\mu_{ab}^{2}\mu_{bc}^{2}\mu_{ca}\] \[+ z_{5}^{\prime}\sum_{a,b,c,d}\mu_{ab}^{2}\mu_{bc}\mu_{cd}\mu_{da} +z_{6}^{\prime}\sum_{a,b,c,d}\mu_{ab}^{2}\mu_{bc}\mu_{cd}\mu_{db}\] \[+ z_{7}^{\prime}\sum_{a,b,c,d}\mu_{ab}^{2}\mu_{bc}\mu_{cd}^{2}+z_ {8}^{\prime}\sum_{a,b,c,d}\mu_{ab}\mu_{bc}\mu_{cd}\mu_{da}\mu_{ac}\] \[+ z_{9}^{\prime}\sum_{a,b,c,d,e}\mu_{ab}\mu_{bc}\mu_{cd}\mu_{de} \mu_{ea},\] where \(t_{2}^{\prime}=\tilde{t}_{2}\), \(w_{1}^{\prime}=\tilde{w}_{1}\) and \(w_{2}^{\prime}=\tilde{w}_{2}\). The first two quartic coefficients are \[y_{1}^{\prime} = \tilde{y}_{1}-\tilde{t}_{2}^{2}-\left(\tilde{y}_{2}-2\tilde{t}_{2 }^{2}\right)+\tilde{y}_{5}\] \[= \left(\frac{K}{24M^{4}}\right)\begin{cases}2,&\text{if $2\leq M\leq 3$}\\ (3M^{2}-15M+20),&\text{if $M\geq 3$}\end{cases}\] and \[y_{2}^{\prime}=\tilde{y}_{2}-2\tilde{t}_{2}^{2}-2\tilde{y}_{5}=-\left(\frac{K}{4 M^{4}}\right). \tag{110}\] The rest of them are the same as when the summations are restricted. \[y_{3}^{\prime}=\tilde{y}_{3},\quad y_{5}^{\prime}=\tilde{y}_{5}. 
\tag{111}\] The quintic coefficients are given by \[z_{1}^{\prime} = \tilde{z}_{1}-2\tilde{t}_{2}\tilde{w}_{2}-\left(\tilde{z}_{2}-4 \tilde{t}_{2}\tilde{w}_{2}\right)+\tilde{z}_{5}+\tilde{z}_{7} \tag{112}\] \[= \left(\frac{K}{10M^{5}}\right)\begin{cases}5(M-2),&\text{if $2\leq M \leq 4$}\\ (M-2)(M^{2}-7M+17),&\text{if $M\geq 4$}\end{cases}\] \[z_{2}^{\prime} = \tilde{z}_{2}-4\tilde{t}_{2}\tilde{w}_{2}-2\tilde{z}_{5}-2\tilde {z}_{7}\] (113) \[= -\left(\frac{K}{M^{5}}\right)(M-2),\] \[z_{3}^{\prime} = \tilde{z}_{3}-6\tilde{t}_{2}\tilde{w}_{1}-2\left(\tilde{z}_{6}-6 \tilde{t}_{2}\tilde{w}_{1}\right)+5\tilde{z}_{9}\] \[= \left(\frac{K}{6M^{5}}\right)\begin{cases}2,&\text{if $2\leq M\leq 3$}\\ (3M^{2}-15M+20),&\text{if $M\geq 3$}\end{cases}\] \[z_{4}^{\prime} =\tilde{z}_{4}-\tilde{z}_{7}-\tilde{z}_{8} \tag{101}\] \[=\left(\frac{K}{2M^{5}}\right)\begin{cases}0,&\text{if }2\leq M \leq 3\\ (M-2)(M-3),&\text{if }M\geq 3\end{cases}\] and \[z_{6}^{\prime}=\tilde{z}_{6}-6\tilde{t}_{2}\tilde{w}_{1}-5\tilde{z}_{9}=- \left(\frac{K}{2M^{5}}\right). \tag{102}\] The other coefficients are unchanged, namely, \[z_{i}^{\prime}=\tilde{z}_{i} \tag{103}\] for \(i=5,7,8\) and \(9\). Finally, the free energy is now given by Eq. (11) with Eq. (7). One of the saddle point equations gives \(\mu_{ab}=\beta^{2}q_{ab}\). Inserting this relation into Eq. (11), we obtain the free energy in the form given in Eq. (25) with \[w_{i}\equiv\beta^{6}w_{i}^{\prime},\ \ \ \ \ y_{j}\equiv\beta^{8}y_{j}^{\prime}, \ \ \ \ z_{k}\equiv\beta^{10}z_{k}^{\prime}, \tag{104}\] for \(i=1,2\), \(j=1,2,3,5\) and \(k=1,2,\cdots,9\). ## Appendix B Small-\(\sigma\) behavior of \(f_{M}(\sigma)\) Here we present some steps leading to the small-\(\sigma\) expansion of \(f_{M}(\sigma)\) defined in Eq. (41). As mentioned in the main text, we expand \(f_{M}(\sigma)\) up to \(O(\sigma^{8})\). There are numerous terms to be evaluated. In the following, for brevity, we only list the quantities needed for the calculation of the \(O(\sigma^{6})\)-coefficient. We first write \[\zeta(\mathbf{y},\mu_{1})\equiv\frac{1}{2^{M}}\underset{\{S_{i}\}}{\mathrm{Tr}}\ \exp\left[\sigma\mathbf{y}\cdot\mathbf{\Psi}\right]=\sum_{j=0}^{\infty}\frac{\sigma^{ j}}{j!}\zeta_{j}(\mathbf{y}), \tag{105}\] where \(\sigma\equiv\sqrt{\mu_{1}/M}\). We immediately see that \(\zeta_{1}(\mathbf{y})=0\) since \(\mathrm{Tr}\ \Psi_{\alpha}=0\). Using the fact that \(\mathrm{Tr}\Psi_{\alpha}\Psi_{\beta}=0\) for \(\alpha\neq\beta\), we find that \(\zeta_{2}(\mathbf{y})=\sum_{\alpha}^{K}y_{\alpha}^{2}\) and \(\zeta_{3}(\mathbf{y})=\sum_{(\alpha,\beta,\gamma)}^{K}y_{\alpha}y_{\beta}y_{\gamma }\ \frac{1}{2^{M}}\mathrm{Tr}\Psi_{\alpha}\Psi_{\beta}\Psi_{\gamma}\). 
Higher order contributions are \[\zeta_{4}(\mathbf{y}) =\sum_{\alpha}^{K}y_{\alpha}^{4}+3\sum_{\alpha\neq\beta}^{K}y_{ \alpha}^{2}y_{\beta}^{2}\] \[+\sum_{(\alpha,\beta,\gamma,\delta)}^{K}y_{\alpha}y_{\beta}y_{ \gamma}y_{\delta}\ \frac{1}{2^{M}}\mathrm{Tr}\Psi_{\alpha}\Psi_{\beta}\Psi_{\gamma}\Psi_{\delta}, \tag{106}\] \[\zeta_{5}(\mathbf{y}) =10\sum_{(\alpha,\beta,\gamma)}^{K}y_{\alpha}^{3}y_{\beta}y_{ \gamma}\ \frac{1}{2^{M}}\mathrm{Tr}\Psi_{\alpha}\Psi_{\beta}\Psi_{\gamma}\] \[+10\sum_{(\alpha,\beta,\gamma,\delta)}^{K}y_{\alpha}^{2}y_{\beta} y_{\gamma}y_{\delta}\ \frac{1}{2^{M}}\mathrm{Tr}\Psi_{\beta}\Psi_{\gamma}\Psi_{\delta} \tag{107}\] \[+\sum_{(\alpha,\beta,\gamma,\delta,\sigma)}^{K}y_{\alpha}y_{\beta }y_{\gamma}y_{\delta}y_{\sigma}\ \frac{1}{2^{M}}\mathrm{Tr}\Psi_{\alpha}\Psi_{\beta}\Psi_{\gamma}\Psi_{\delta} \Psi_{\sigma},\] and \[\zeta_{6}(\mathbf{y}) =\sum_{\alpha}^{K}y_{\alpha}^{6}+15\sum_{\alpha\neq\beta}^{K}y_{ \alpha}^{4}y_{\beta}^{2}+15\sum_{(\alpha,\beta,\gamma)}^{K}y_{\alpha}^{2}y_{ \beta}^{2}y_{\gamma}^{2} \tag{108}\] \[+20\sum_{(\alpha,\beta,\gamma,\delta)}^{K}y_{\alpha}^{3}y_{\beta }y_{\gamma}y_{\delta}\ \frac{1}{2^{M}}\mathrm{Tr}\Psi_{\alpha}\Psi_{\beta}\Psi_{\gamma}\Psi_{\delta}\] \[+15\sum_{(\alpha,\beta,\gamma,\delta,\sigma)}^{K}y_{\alpha}^{2}y_{ \beta}y_{\gamma}y_{\delta}y_{\sigma}\ \frac{1}{2^{M}}\mathrm{Tr}\Psi_{\beta}\Psi_{\gamma}\Psi_{\delta}\Psi_{\sigma}\] \[+\sum_{(\alpha,\beta,\gamma,\delta,\sigma,\mu)}^{K}y_{\alpha}y_{ \beta}y_{\gamma}y_{\delta}y_{\sigma}y_{\mu}\ \frac{1}{2^{M}}\mathrm{Tr}\Psi_{\alpha}\Psi_{\beta}\Psi_{\gamma}\Psi_{\delta} \Psi_{\sigma}\Psi_{\mu}.\] Here \((\alpha,\beta,\gamma)\), etc indicate the summation is over all distinct indices and \(K\equiv{M\choose 2}\). Performing the Gaussian integrals, we have \(\int D^{K}\mathbf{y}\ \zeta_{j}(\mathbf{y})=0\) for \(j\) odd, \(\int D^{K}\mathbf{y}\ \zeta_{2}(\mathbf{y})=K\), \(\int D^{K}\mathbf{y}\ \zeta_{4}(\mathbf{y})=3K+3K(K-1)\), and \[\int D^{K}\mathbf{y}\ \zeta_{6}(\mathbf{y}) =15K+45K(K-1)\] \[+15K(K-1)(K-2). \tag{109}\] For the calculation up to \(O(\sigma^{6})\), we also need the following quantities: \[\int D^{K}\mathbf{y}\ \zeta_{2}^{2}(\mathbf{y})= 3K+K(K-1), \tag{110}\] \[\int D^{K}\mathbf{y}\ \zeta_{2}^{3}(\mathbf{y})= 15K+9K(K-1)\] \[+K(K-1)(K-2),\] (111) \[\int D^{K}\mathbf{y}\ \zeta_{2}(\mathbf{y})\zeta_{4}(\mathbf{y})= 15K+21K(K-1)\] \[+3K(K-1)(K-2),\] (112) \[\int D^{K}\mathbf{y}\ \zeta_{3}^{2}(\mathbf{y})= 6M(M-1)(M-2). \tag{113}\] These expressions are valid when \(K\geq 2\) or \(M=3,4,5,\cdots\). Now in the second term inside the integral in Eq. (41), we can write by symmetry \[\sum_{\alpha=1}^{K}\left[\frac{1}{2^{M}}\mathrm{Tr}\ \Psi_{\alpha}\exp\left[\sigma\mathbf{y}\cdot\mathbf{\Psi}\right]\right]^{2}\] \[= K\left[\frac{1}{2^{M}}\mathrm{Tr}\ \Psi_{1}\exp\left[\sigma\mathbf{y}\cdot\mathbf{\Psi} \right]\right]^{2}. \tag{114}\] We then define \[\frac{1}{2^{M}}\mathrm{Tr}\ \Psi_{1}\exp\left[\sigma\mathbf{y}\cdot\mathbf{\Psi} \right]\equiv\sum_{j=1}^{\infty}\frac{\sigma^{j}}{j!}\eta_{j}(\mathbf{y}). \tag{115}\] We find that \(\eta_{1}(\mathbf{y})=y_{1}\), \[\eta_{2}(\mathbf{y})=\sum_{(\alpha,\beta)}y_{\alpha}y_{\beta}2^{-M}\mathrm{Tr} \Psi_{1}\Psi_{\alpha}\Psi_{\beta}, \tag{116}\] \[\eta_{3}(\mathbf{y}) =y_{1}+3y_{1}\sum_{\alpha\neq 1}y_{\alpha}^{2}\] \[+\sum_{(\alpha,\beta,\gamma)}^{K}y_{\alpha}y_{\beta}y_{\gamma}\; \frac{1}{2^{M}}\text{Tr}\Psi_{1}\Psi_{\alpha}\Psi_{\beta}\Psi_{\gamma}. 
\tag{101}\] For the calculation up to \(O(\sigma^{6})\), we need \[\int D^{K}\mathbf{y}\;\eta_{2}^{2}(\mathbf{y})=4(M-2), \tag{102}\] \[\int D^{K}\mathbf{y}\;\eta_{1}(\mathbf{y})\eta_{3}(\mathbf{y})=3K,\] (103) \[\int D^{K}\mathbf{y}\;\eta_{1}^{2}(\mathbf{y})\zeta_{2}(\mathbf{y})=K+2. \tag{104}\] It is now a matter of Taylor expanding the functions inside the integral in Eq. (41) and using the above results to get the expansion coefficients in \(f_{M}(\sigma)=\sum_{j=0}^{\infty}c_{2j}(M)\sigma^{2j}\). We find that \(c_{0}=c_{2}=c_{4}=0\) and the leading order term is \(O(\sigma^{6})\). We obtain \[c_{6}(M)=-\frac{M}{24}(M-1)(M-3). \tag{105}\] As mentioned in the main text, it becomes negative for \(M>3\). To go up to \(O(\sigma^{8})\), we need results of more Gaussian integrals similar to Eqs. (100)-(101) and to Eqs. (102)-(104). After a rather long calculation with the help of symbolic algebra packages in MATHEMATICA, we obtain \[c_{8}(M)=-\frac{M}{48}(M-1)(3M^{2}-27M+47), \tag{106}\] which is valid for \(K\geq 3\) or \(M=3,4,5,\cdots\). We note that \(c_{8}(M=3)=7/8>0\). ## Appendix C The 1RSB equations for the quintic Landau free energy Here we consider the 1RSB saddle point equations corresponding to the free energy expanded up to quintic order as given in Eq. (25). Let us assume that \(q_{ab}\) takes the 1RSB form having values \(q_{1}\) on \(n/m_{1}\) diagonal blocks of size \(m_{1}\) and \(q_{0}=0\) outside the blocks. We can then express the cubic, quartic, and quintic terms in \(q_{ab}\) in terms of \(q_{1}\) and \(m_{1}\) as we have done in Eqs. (29) and (30) for the quadratic terms. We obtain \[\frac{\beta F_{1\text{RSB}}}{N}= -C\beta^{2}-M\ln 2+\tau(m_{1}-1)q_{1}^{2}-w_{1}(m_{1}-1)(m_{1}-2)q_ {1}^{3}-w_{2}(m_{1}-1)q_{1}^{3}\] \[-y_{1}(m_{1}-1)q_{1}^{4}-y_{2}(m_{1}-1)^{2}q_{1}^{4}-y_{3}(m_{1}- 1)(m_{1}-2)q_{1}^{4}-y_{5}(m_{1}-1)(m_{1}^{2}-3m_{1}+3)q_{1}^{4}\] \[-z_{1}(m_{1}-1)q_{1}^{5}-z_{2}(m_{1}-1)^{2}q_{1}^{5}-z_{3}(m_{1}- 1)(m_{1}-2)q_{1}^{5}-z_{4}(m_{1}-1)(m_{1}-2)q_{1}^{5}\] \[-z_{5}(m_{1}-1)(m_{1}^{2}-3m_{1}+3)q_{1}^{5}-z_{6}(m_{1}-1)^{2}(m_ {1}-2)q_{1}^{5}-z_{7}(m_{1}-1)^{3}q_{1}^{5}\] \[-z_{8}(m_{1}-1)(m_{1}-2)^{2}q_{1}^{5}-z_{9}(m_{1}-1)(m_{1}-2)(m_{1 }^{2}-2m_{1}+2)q_{1}^{5}. \tag{107}\] The saddle point equations are obtained by varying the free energy with respect to \(q_{1}\) and \(m_{1}\). 
They are given by \[2\tau q_{1}= 3\Big{[}w_{1}(m_{1}-2)+w_{2}\Big{]}q_{1}^{2}+4\Big{[}y_{1}+y_{2 }(m_{1}-1)\] \[+y_{3}(m_{1}-2)+y_{5}(m_{1}^{2}-3m_{1}+3)\Big{]}q_{1}^{3}\] \[+5\Big{[}z_{1}+z_{2}(m_{1}-1)+z_{3}(m_{1}-2)+z_{4}(m_{1}-2)\] \[+z_{5}(m_{1}^{2}-3m_{1}+3)+z_{6}(m_{1}-1)(m_{1}-2)\] \[+z_{7}(m_{1}-1)^{2}+z_{8}(m_{1}-2)^{2}\] \[+z_{9}(m_{1}-2)(m_{1}^{2}-2m_{1}+2)\Big{]}q_{1}^{4} \tag{108}\] and \[\tau q_{1}^{2}= \Big{[}w_{1}(2m_{1}-3)+w_{2}\Big{]}q_{1}^{3}+\Big{[}y_{1}+2y_{2 }(m_{1}-1)\] \[+y_{3}(2m_{1}-3)+y_{5}(3m_{1}^{2}-8m_{1}+6)\Big{]}q_{1}^{4}\] \[+\Big{[}z_{1}+2z_{2}(m_{1}-1)+z_{3}(2m_{1}-3)+z_{4}(2m_{1}-3)\] \[+z_{5}(3m_{1}^{2}-8m_{1}+6)+z_{6}(3m_{1}^{2}-8m_{1}+5)\] \[+3z_{7}(m_{1}-1)^{2}+z_{8}(3m_{1}^{2}-10m_{1}+8)\] \[+z_{9}(4m_{1}^{3}-15m_{1}^{2}+20m_{1}-10)\Big{]}q_{1}^{5} \tag{109}\] Combining the above equations with the condition \(q_{1}\neq 0\), we have \[0= \Big{[}-m_{1}w_{1}+w_{2}\Big{]}+2\Big{[}y_{1}-y_{3}+y_{5}m_{1}(2-m_{1 })\Big{]}q_{1}\] \[+ \Big{[}3z_{1}+z_{2}(m_{1}-1)+z_{3}(m_{1}-4)+z_{4}(m_{1}-4)\] \[+ z_{5}(-m_{1}^{2}+m_{1}+3)+z_{6}m_{1}(1-m_{1})-z_{7}(m_{1}-1)^{2}\] \[+ z_{8}(4-m_{1}^{2})+z_{9}m_{1}(-3m_{1}^{2}+10m_{1}-10)\Big{]}q_{1 }^{2} \tag{101}\] The 1RSB transition temperature is determined by setting \(m_{1}=1\) in the above equation. We obtain \[(w_{2}-w_{1})+2(y_{1}-y_{3}+y_{5})q_{1}\] \[+3(z_{1}-z_{3}-z_{4}+z_{5}+z_{8}-z_{9})q_{1}^{2}=0. \tag{102}\] Equivalently, we have an equation without factors of \(\beta\) as \[(w_{2}^{\prime}-w_{1}^{\prime})+2(y_{1}^{\prime}-y_{3}^{\prime}+ y_{5}^{\prime})\mu_{1}\] \[+3(z_{1}^{\prime}-z_{3}^{\prime}-z_{4}^{\prime}+z_{5}^{\prime}+ z_{8}^{\prime}-z_{9}^{\prime})\mu_{1}^{2}=0. \tag{103}\] From Appendix A, the coefficients are given by \[w_{2}-w_{1}=\frac{\beta^{6}}{12M^{2}}(M-1)(M-3), \tag{104}\] and \[y_{1}-y_{3}+y_{5} \tag{105}\] \[= \begin{cases}-\frac{\beta^{8}}{48M^{3}}(M-1)(12M-29),&\text{if $2 \leq M\leq 3$}\\ \frac{\beta^{8}}{48M^{3}}(M-1)(3M^{2}-27M+47),&\text{if $M\geq 3$}.\end{cases}\] In Sec. II.5, we have defined the effective quintic coefficient \(z_{1}^{\text{eff}}\) as the one that appears in the above equation, which can be calculated from the results in Appendix A as \[z_{1}^{\text{eff}} \equiv z_{1}-z_{3}-z_{4}+z_{5}+z_{8}-z_{9} \tag{106}\] \[=\begin{cases}\frac{\beta^{10}}{60M^{4}}(M-1)(45M-103),\\ -\frac{\beta^{10}}{60M^{4}}(M-1)(30M^{2}-195M+283),\\ \frac{\beta^{10}}{60M^{4}}(M-1)(3M^{3}-57M^{2}+273M-355),\end{cases}\] In the above equation, the three cases from top to bottom correspond to the regions, \(2\leq M\leq 3\), \(3\leq M\leq 4\) and \(M\geq 4\), respectively. This is related to the small-\(\sigma\) expansion of \(f_{M}(\sigma)\) discussed in Sec. II.3 as follows. If we multiply Eq. (102) by \(-q_{1}^{3}/2\) and use \(q_{1}=\mu_{1}/\beta^{2}=M\sigma^{2}/\beta^{2}\), Eq. (102) becomes \[c_{6}(M)\sigma^{6}+c_{8}(M)\sigma^{8}+c_{10}(M)\sigma^{10}=0, \tag{107}\] where \[c_{6}(M) =-\frac{M^{3}}{2\beta^{6}}(w_{2}-w_{1}), \tag{108}\] \[c_{8}(M) =-\frac{M^{4}}{\beta^{8}}(y_{1}-y_{3}+y_{5}), \tag{109}\] and \[c_{10}(M)=-\frac{3M^{5}}{2\beta^{10}}(z_{1}-z_{3}-z_{4}+z_{5}+z_{8}-z_{9}). \tag{110}\] ## Appendix D FRSB equations for the free energy with one quintic term Taking a functional derivative of the free energy in Eq. 
(72) with respect to \(q(x)\), we have \[0= \frac{\delta}{\delta q(x)}\left(\frac{\beta F_{\text{FRSB}}}{N} \right)=-2\tau q(x)-w_{1}\left\{3xq^{2}(x)+3\int_{0}^{x}dy\;q^{2}(y)+6q(x) \int_{x}^{1}dy\;q(y)\right\}+3w_{2}q^{2}(x)\] \[+4y_{1}q^{3}(x)-4y_{2}\langle q^{2}\rangle q(x)-y_{3}\left\{2 \langle q^{3}\rangle+6\langle q\rangle q^{2}(x)+2\langle q^{2}\rangle q(x)+4 xq^{3}(x)-6q^{2}(x)\int_{0}^{x}dy\;q(y)-2\int_{x}^{1}dyq^{3}(y)\right\}\] \[-y_{5}\Bigg{\{}4\langle q^{2}\rangle q(x)-8\langle q\rangle^{2} q(x)-8\langle q^{2}\rangle-4\int_{0}^{1}dx^{\prime}\;q(x^{\prime})\int_{0}^{x^{ \prime}}dy\;(q(x^{\prime})-q(y))^{2}\] \[-4\langle q\rangle\Big{[}3xq^{2}(x)-4q(x)\int_{0}^{x}dy\;q(y)-2 \int_{x}^{1}dy\;q^{2}(y)+\int_{0}^{x}dy\;q^{2}(y)+2q(x)\int_{x}^{1}dy\;q(y) \Big{]}\] \[-\Bigg{[}4x^{2}q^{3}(x)-12xq^{2}(x)\int_{0}^{x}dy\;q(y)-4\int_{x}^ {1}dy\;yq^{3}(y)+4xq(x)\int_{0}^{x}dy\;q^{2}(y)+4q(x)\int_{x}^{1}dy\;yq^{2}(y)\] \[-4\int_{0}^{x}dyq(y)\int_{0}^{x}dz\;q^{2}(z)-4\int_{x}^{1}dy\;q(y) \int_{0}^{y}dz\;q^{2}(z)-8q(x)\int_{x}^{1}dy\;q(y)\int_{0}^{y}dz\;q(z)+8q(x) \left[\int_{0}^{x}dy\;q(y)\right]^{2}\] \[+8\int_{x}^{1}dy\;q^{2}(y)\int_{0}^{y}dz\;q(z)+4q(x)\int_{x}^{1}dy \;\int_{0}^{y}dz\;q^{2}(z)\Bigg{]}\Bigg{\}}+5z_{1}q^{4}(x). \tag{111}\] For \(0\leq x\leq 1\) where \(q^{\prime}(x)\neq 0\), we can take a derivative of the above equation and have \[\left(\frac{1}{q^{\prime}(x)}\frac{d}{dx}\right)\left[\frac{\delta}{\delta q(x)} \left(\frac{\beta F_{\text{FRSB}}}{N}\right)\right]=0. \tag{46}\] This gives us \[0= -2\tau-w_{1}\left\{6xq(x)+6\int_{x}^{1}dy\;q(y)\right\}+6w_{2}q(x)\] \[+12y_{1}q^{2}(x)-4y_{2}\langle q^{2}\rangle-y_{3}\Bigg{\{}12 \langle q\rangle q(x)+12xq^{2}(x)\] \[-12q(x)\int_{0}^{x}dy\;q(y)+2\langle q^{2}\rangle\Bigg{\}}-y_{5} \Bigg{\{}4\langle q^{2}\rangle-8\langle q\rangle^{2}\] \[-4\langle q\rangle\Big{[}6xq(x)-4\int_{0}^{x}dy\;q(y)+2\int_{x}^ {1}dy\;q(y)\Big{]}\] \[-\Bigg{[}12x^{2}q^{2}(x)-24xq(x)\int_{0}^{x}dy\;q(y)+4x\int_{0}^ {x}dy\;q^{2}(y)\] \[+4\int_{x}^{1}dy\;yq^{2}(y)-8\int_{x}^{1}dy\;q(y)\int_{0}^{y}dz\; q(z)\] \[+8\left[\int_{0}^{x}dyq(y)\right]^{2}+4\int_{x}^{1}dy\int_{0}^{y} dz\;q^{2}(z)\Bigg{]}\Bigg{\}}\] \[+20z_{1}q^{3}(x) \tag{47}\] Taking one more derivative with respect to \(x\) and divide by \(q^{\prime}(x)\), we have for \(x\) with \(q^{\prime}(x)\neq 0\) \[\left(\frac{1}{q^{\prime}(x)}\frac{d}{dx}\right)\left(\frac{1}{q^{\prime}(x)} \frac{d}{dx}\right)\left[\frac{\delta}{\delta q(x)}\left(\frac{\beta F_{\text{ FRSB}}}{N}\right)\right]=0. \tag{48}\] This is given by \[0= -6(w_{1}x-w_{2})+24Y(x)q(x)\] \[+12Y^{\prime}(x)\int_{x}^{1}dy\;q(y)+60z_{1}q^{2}(x), \tag{49}\] where \[Y(x)\equiv y_{1}-xy_{3}+x^{2}y_{5}. \tag{50}\] Taking a derivative of the above equation with respect to \(x\) once again, we have \[\frac{d}{dx}\left(\frac{1}{q^{\prime}(x)}\frac{d}{dx}\right) \left(\frac{1}{q^{\prime}(x)}\frac{d}{dx}\right)\left[\frac{\delta}{\delta q( x)}\left(\frac{\beta F_{\text{FRSB}}}{N}\right)\right]=0, \tag{51}\] This can be written as \[0= -6w_{1}+24Y(x)q^{\prime}(x)+12Y^{\prime}(x)q(x)\] \[+24y_{5}\int_{x}^{1}dy\;q(y)+120z_{1}q^{\prime}(x)q(x). \tag{52}\] Eliminating \(\int_{x_{0}}^{1}dy\;q(y)\) from Eqs. (49) and (52), we have \[q^{\prime}(x)\] \[= \frac{-y_{3}w_{1}+2y_{5}w_{2}+2(-y_{3}^{2}+4y_{1}y_{5})q(x)+20z_{1 }y_{5}q^{2}(x)}{4Y^{\prime}(x)(Y(x)+5z_{1}q(x))}. 
\tag{53}\] ## Appendix E FRSB expressions for all quintic terms Here we present the expressions in terms of the Parisi function \(q(x)\) for the quintic contributions to the free energy, which is denoted by \(F_{\text{FRSB}}^{(5)}\). We have \[\frac{\beta F_{\text{FRSB}}^{(5)}}{N}= z_{1}\langle q^{5}\rangle-z_{2}\Big{[}-\langle q^{5}\rangle+2 \langle q^{3}\rangle\langle q^{2}\rangle+\int_{0}^{1}dx\;\int_{0}^{x}dy\;(q ^{3}(y)-q^{3}(x))(q^{2}(y)-q^{2}(x))\Big{]}\] \[-z_{3}\left[2\langle q\rangle\langle q^{4}\rangle+\int_{0}^{1} dx\;q^{3}(x)\int_{0}^{x}dy\;(q(y)-q(x))^{2}\right]-z_{4}\Big{[}2\langle q^{2} \rangle\langle q^{3}\rangle+\int_{0}^{1}dx\;q(x)\int_{0}^{x}dy\;\left(q^{2}(y) -q^{2}(x)\right)^{2}\Big{]}\] \[-z_{5}\Big{[}-4\langle q\rangle^{2}\langle q^{3}\rangle+\langle q ^{2}\rangle\langle q^{3}\rangle-3\langle q\rangle\langle q^{2}h\rangle- \langle q^{3}\rangle\langle h\rangle-\int_{0}^{1}dx\;q^{2}(x)\int_{0}^{x}dy \;(q(y)-q(x))(h(y)-h(x))\Big{]},\] \[-z_{6}\left[-2\langle q\rangle\langle q^{2}\rangle^{2}-\langle q^ {2}\rangle\langleqh\rangle\right]-z_{7}\Big{[}2\langle q^{2}\rangle\langle q^ {3}\rangle+\langle q\rangle\langle q^{4}\rangle-4\langle q\rangle\langle q^{2} \rangle^{2}-3\langle q^{2}\rangle\langle g\rangle+\langle q^{2}g\rangle\] \[-\langle q\rangle\int_{0}^{1}dx\int_{0}^{x}dy\;\left(q^{2}(y)-q^{2 }(x)\right)^{2}-\int_{0}^{1}dx\int_{0}^{x}dy\;(g(y)-g(x))(q^{2}(y)-q^{2}(x)) \Big{]},\] \[-z_{8}\left[-4\langle q^{2}\rangle\langle q^{3}\rangle-4\langle q \rangle\langle q^{2}h\rangle-\langleqh^{2}\rangle\right]-z_{9}\Big{[}8\langle q \rangle^{3}\langle q^{2}\rangle-4\langle q^{2}\rangle^{2}\langle q\rangle+10 \langle q\rangle^{2}\langleqh\rangle-2\langle q^{2}\rangle\langleqh\rangle\] \[+2\langle q\rangle\langle q^{2}\rangle\langle h\rangle+3\langle q \rangle\langle h^{2}\rangle+\langle h\rangle\langleqh\rangle+2\langle q \rangle\int_{0}^{1}dx\;q(x)\int_{0}^{x}dy\;(q(y)-q(x))(h(y)-h(x))\] \[+\int_{0}^{1}dx\;h(x)\int_{0}^{x}dy\;dy(q(y)-q(x))(h(y)-h(x))\Big{]}, \tag{54}\] where \[h(x) =\int_{0}^{x}dy\;(q(y)-q(x))^{2} \tag{120}\] \[g(x) =\int_{0}^{x}dy\;\left(q^{2}(y)-q^{2}(x)\right)(q(y)-q(x)) \tag{121}\] Stationary conditions for the free energy obtained from the quintic contributions are quite complicated. In this Appendix, we only present \[0=\left(\frac{1}{q^{\prime}(x)}\frac{d}{dx}\right)\left(\frac{1}{q^{\prime}(x)} \frac{d}{dx}\right)\left[\frac{\delta}{\delta q(x)}\left(\beta F_{\text{FRSB}}^ {(5)}/N\right)\right]. 
\tag{122}\] This is given by \[0 =60z_{1}q^{2}(x)-6z_{2}\langle q^{2}\rangle-z_{3}\left[6\int_{0}^{x}dy\;q^{2}(y)+48q(x)\int_{x}^{1}dy\;q(y)+60xq^{2}(x)\right]\] \[-z_{4}\left[12\int_{x}^{1}dy\;q^{2}(y)+24q(x)\int_{x}^{1}dy\;q(y)+60xq^{2}(x)\right]-z_{5}\Big{[}-24\langle q\rangle^{2}+6\langle q^{2}\rangle-6\langle h\rangle-72\langle q\rangle xq(x)\] \[\qquad+36\langle q\rangle\int_{0}^{x}dy\;q(y)-6x\int_{x}^{1}dy\;q^{2}(y)+60xq(x)\int_{0}^{x}dy\;q(y)-12\left(\int_{0}^{x}dy\;q(y)\right)^{2}\] \[\qquad-54x^{2}q^{2}(x)+6\int_{0}^{x}dy\;h(y)-6xh(x)\Big{]}\] \[-z_{6}\left[-6\langle q^{2}\rangle x\right]-z_{8}\Big{[}-24\langle q^{2}\rangle-96\langle q\rangle xq(x)+48\langle q\rangle\int_{0}^{x}dy\;q(y)+96xq(x)\int_{0}^{x}dy\;q(y)\] \[\qquad-24\left(\int_{0}^{x}dy\;q(y)\right)^{2}-60x^{2}q^{2}(x)-12x\int_{0}^{x}dy\;q^{2}(y)\Big{]}\] \[-z_{9}\Big{[}12x\langle q\rangle^{2}+48x\langle q\rangle\{xq(x)-\int_{0}^{x}dy\;q(y)\}+12x^{2}h(x)+6x\left(\langle h\rangle-2\int_{0}^{x}dy\;h(y)\right)\] \[\qquad+48x\left(xq(x)-\int_{0}^{x}dy\;q(y)\right)^{2}+6x\langle h\rangle+72x\langle q\rangle\int_{0}^{x}dy\;(q(x)-q(y))-12x\langle q^{2}\rangle+60x\langle q\rangle^{2}\Big{]}. \tag{123}\]
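As a consistency check on the coefficient algebra collected in these appendices, the small-\(\sigma\) coefficients of Appendix B can be re-derived from the combinations of Landau coefficients given in Appendix C. The following minimal sketch (assuming sympy; the symbol names are ours and not part of the original derivation) confirms that both routes agree and reproduces \(c_{8}(M=3)=7/8\):

```python
import sympy as sp

M, beta = sp.symbols("M beta", positive=True)

# Small-sigma coefficients as quoted in Appendix B (M >= 3 branch)
c6_B = -M * (M - 1) * (M - 3) / 24
c8_B = -M * (M - 1) * (3 * M**2 - 27 * M + 47) / 48

# Combinations of Landau coefficients from Appendix C (M >= 3 branch)
w2_minus_w1 = beta**6 / (12 * M**2) * (M - 1) * (M - 3)
y1_minus_y3_plus_y5 = beta**8 / (48 * M**3) * (M - 1) * (3 * M**2 - 27 * M + 47)

# c6 = -(M^3 / (2 beta^6)) (w2 - w1)  and  c8 = -(M^4 / beta^8) (y1 - y3 + y5)
print(sp.simplify(c6_B + M**3 / (2 * beta**6) * w2_minus_w1))     # -> 0
print(sp.simplify(c8_B + M**4 / beta**8 * y1_minus_y3_plus_y5))   # -> 0
print(c8_B.subs(M, 3))                                            # -> 7/8, as noted above
```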
2303.17446
PMMA Pyrolysis Simulation -- from Micro- to Real-Scale
In fire spread simulations, heat transfer and pyrolysis are processes to describe the thermal degradation of solid material. In general, the necessary material parameters cannot be directly measured. They are implicitly deduced from micro- and bench-scale experiments, i.e. thermogravimetric analysis (TGA), micro-combustion (MCC) and cone calorimetry. Using a complex fire model, an inverse modelling process (IMP) is capable to find parameter sets, which are able to reproduce the experimental results. In the real-scale, however, difficulties arise predicting the fire behaviour using the deduced parameter sets. Here, we show an improved model to fit data of multiple small scale experiment types. Primarily, a gas mixture is used to model an average heat of combustion for the surrogate fuel. The pyrolysis scheme is using multiple reactions to match the mass loss (TGA), as well as the energy release (MCC). Additionally, a radiative heat flux map, based on higher resolution simulations, is used in the cone calorimeter setup. With this method, polymethylmetacrylate (PMMA) micro-scale data can be reproduced well. For the bench-scale, IMP setups are used differing in cell size and targets, which all lead to similar and good results. Yet, they show significantly different performance in the real-scale parallel panel setup.
Tristan Hehnen, Lukas Arnold
2023-03-30T15:19:47Z
http://arxiv.org/abs/2303.17446v2
# PMMA Pyrolysis Simulation - from Micro- to Real-Scale ###### Abstract In fire spread simulations, heat transfer and pyrolysis are processes to describe the thermal degradation of solid material. In general, the necessary material parameters cannot be directly measured. They are implicitly deduced from micro- and bench-scale experiments, i.e. thermogravimetric analysis (TGA), micro-combustion (MCC) and cone calorimetry. Using a complex fire model, an inverse modelling process (IMP) is capable to find parameter sets, which are able to reproduce the experimental results. In the real-scale, however, difficulties arise predicting the fire behaviour using the deduced parameter sets. Here, we show an improved model to fit data of multiple small scale experiment types. Primarily, a gas mixture is used to model an average heat of combustion for the surrogate fuel. The pyrolysis scheme is using multiple reactions to match the mass loss (TGA), as well as the energy release (MCC). Additionally, a radiative heat flux map, based on higher resolution simulations, is used in the cone calorimeter setup. With this method, polymethyllmetacrylate (PMMA) micro-scale data can be reproduced well. For the bench-scale, IMP setups are used differing in cell size and targets, which all lead to similar and good results. Yet, they show significantly different performance in the real-scale parallel panel setup. Fire Dynamics Simulator (FDS) Inverse Modelling Pyrolysis Arrhenius Equation Polymethyllmetacrylate (PMMA) Thermogravimetric Analysis (TGA) Micro-Combustion Calorimetry (MCC) Cone Calorimeter Parallel Panel Test MaCFP Materials Database ## 1 Introduction The simulation of fire propagation is of great interest for the fire safety engineering community. It could lead to reduced costs for mitigation measures, since the fire scenario could be less over-predicting and fire protection measures could be better evaluated. It could even make certain types of assessments possible, for instance when the release of (radioactive) combustion products is to be determined, and not prescribed within a design fire. Much research is performed in this direction internationally [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12]. This approach requires material parameter sets, which allow meaningful reaction to changed physical conditions near the fire. For example, reduced oxygen should lead to less energy release from the flame, which in turn reduces the heat transfer to the sample, impacts the release of combustible gas and ultimately leads to a smaller flame. The performance of these parameter sets needs to be assessed over all involved length scales, not only in the micro- and bench-scale. Specifically, the transition from the bench- to the real-scale is important. Assessment of the parameter set's performance is only meaningful in the real-scale, i.e. in terms of validation. Thus, the real-scale should not be part of the estimation of the parameter set itself. We present here a general strategy to estimate material parameter sets, which is built on existing approaches. These approaches are based on micro- and bench-scale tests, here using cast black PMMA as an example. The parameter sets are applied for fire spread simulations, using the Fire Dynamics Simulator (FDS) [13]. Although the material parameter sets lead to similarly good representation in the micro- and bench-scale, they lead to significantly different results in the real-scale. It is highlighted, that the experiments at this scale introduce further modelling parameters, e.g. 
the characteristics of the ignition source. Thus, the system is not only dependent on the performance of the material parameters alone. For the parameter estimation, an optimisation algorithm is employed in an inverse modelling process (IMP). In this work, we deliberately assume no information on pyrolysis and combustion parameters of PMMA. With this, the modelled system gains many degrees of freedom to represent, e.g., the intermediate states and structural changes, which are in general not measurable from the virgin material. Additionally, these parameters may not be available in practical scenarios, and the presented approach aims for a general applicability. As a general strategy, the process presented here is divided into three major steps, in an effort to reduce the otherwise significant computational demand. In the first step, reaction kinetics of the material decomposition (pyrolysis) and the energy release in the gas phase (combustion) are determined. This is based on micro-scale tests, thermogravimetric analysis (TGA) and micro-combustion calorimetry (MCC). A large number of parameters (33) is used to define the PMMA decomposition scheme. By using an extremely simplified micro-scale simulation setup the computational demand can be kept low - in the order of days. In the second step, the thermophysical and optical parameters are determined in the bench-scale. The simplified cone calorimeter setup used in this step is computationally much more expensive. Looking at fewer parameters (15), the computational demand can be kept relatively low as well. Still, the needed computing time is in the order of months on a high performance computing cluster to complete a full IMP, including multiple sampling limit adjustments. Finally, the performance of the parameter sets is assessed in a real-scale simulation setup of a parallel panel test. This is considered here as validation step, since the goal is to determine parameter sets in the small-scale that lead to the appropriate behaviour to predict the fire development in the real-scale. The here proposed method is built on state-of-the-art strategies of using FDS for pyrolysis simulation, e.g. [8, 9, 10, 14]. The primary changes are the use of multiple superimposed pyrolysis reactions, using a gas mixture as surrogate fuel and conducting simplified cone calorimeter simulations with higher fluid cell resolution during the inverse modelling. The surrogate fuel consists of combustible and non-combustible primitive species. During the IMP, their fractions are adjusted such that the average heat of combustion (HOC) of this mixture fits to the experiment data. With this, FDS can use the mass loss rate from the solid directly as input for the gas phase without the usual scaling. Furthermore, the radiative flux from the cone heater to the sample surface is not uniform [15, 16], which is taken into account here. A high resolution simulation with a conical heater geometry was conducted and the resulting radiative heat flux determined. This result is then baked into a flux map for a simplified cone calorimeter setup. This setup has a higher fluid cell resolution than is typically used. With the higher resolution and the inhomogeneous heat flux it is possible to capture an uneven sample consumption. The inverse modelling is conducted with PROPTI [17, 18, 19], an in-house developed publicly available inverse modelling framework. 
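To illustrate the kind of optimisation loop that such a framework automates, the sketch below sets up a shuffled complex evolution run directly with the SPOTPY package that PROPTI builds on (see below). The parameter names, sampling ranges and the FDS call are placeholders and do not reflect the actual PROPTI interface or the parameters estimated in this work.

```python
import spotpy

class ImpSetup:
    """Toy SPOTPY setup: two hypothetical parameters fitted against one measured curve."""

    def __init__(self, target_curve):
        self.target = target_curve
        self.params = [
            spotpy.parameter.Uniform("reference_temperature", 300.0, 450.0),  # placeholder range
            spotpy.parameter.Uniform("heat_of_reaction", 5.0e5, 2.0e6),       # placeholder range
        ]

    def parameters(self):
        return spotpy.parameter.generate(self.params)

    def simulation(self, vector):
        # In the actual IMP this would write an FDS input file, run FDS and read back
        # the simulated curve (e.g. HRR over time); here it is only a placeholder.
        return run_fds_and_extract_curve(*vector)  # hypothetical helper

    def evaluation(self):
        return self.target

    def objectivefunction(self, simulation, evaluation):
        # Root mean square error, minimised by the SCE sampler.
        return spotpy.objectivefunctions.rmse(evaluation, simulation)

# sampler = spotpy.algorithms.sceua(ImpSetup(target_curve), dbname="imp_run", dbformat="csv")
# sampler.sample(repetitions=46500, ngs=15)  # e.g. 100 generations of size 465, 15 complexes
```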
New dependencies were implemented to determine the gas mixtures (Git commit hash: 3a05366), more recent versions of PROPTI should support this directly. In this work, PROPTI used the shuffled complex evolutionary algorithm from SPOTPY [20], version 1.5.14. With FDS6.7.6-810-ge5f90f-HEAD we use a self-compiled FDS version, to incorporate a fix when FDS computes the stoichiometry for gas mixture combustion, see issue "Fuel FORMULA for SIMPLE_CHEMISTRY #9862" in the FDS GitHub repository. Thus, one may encounter errors when trying to reproduce this work with the stable FDS 6.7.6 or earlier versions, in general it should work with newer versions. Throughout this work, FDS is used with default settings if not stated otherwise. The experimental data for the micro- and bench-scale tests are taken from the MaCFP materials database [7] (Git commit hash: 7f89fd8). The result of the inverse modelling is an effective material parameter set. Its performance is compared against real-scale experiment data of parallel panel tests, taken from the MaCFP database [21] (Git commit hash: 25614bd). Comparisons with the FDS validation suite are provided as reference [22]. This article is accompanied by a publicly available data repository on Zenodo [23], containing the simulation data and analysis scripts, as well as a video series on YouTube explaining how they are set up [24]. ## 2 Materials and Methods At first, in this section, a brief introduction to the pyrolysis and combustion basics for PMMA is provided. Then the micro-scale experiment and simulation setups are presented, which are the basis for the pyrolysis parameter estimation. The fuel mixture for the gas phase combustion is introduced afterwards. Then the bench-scale experiment and simulation setups for the estimation of the thermophysical parameters are discussed. Finally, an overview of the inverse modelling process and a description of the real-scale setup are presented. ### Pyrolysis and Combustion Basics Fire spread on solid materials involves the transformation of the material into a combustible gas. This transformation, controlled by the temperature of the solid, is called pyrolysis. The long molecule chains of a polymer are split into smaller molecules. In the case of PMMA, this is mostly its monomer methylmetacrylate (MMA), more than 90 %, and small amounts of carbon dioxide [14, 25, 26, 27]. However, the MMA is not directly involved in the combustion [27]. It further decomposes into even smaller chemical compounds, among which are methane, acetylene, ethylene, carbon monoxide, carbon dioxide, and hydrogen [27]. These smaller molecules are then taking part in the gas phase combustion (flame). The combustion reaction involves many intermediate species and reactions. Already for simple hydrocarbons like methane, reaction mechanisms are proposed [28, 29, 30] that involve 30 to over 1200 intermediate reactions and over 200 intermediate species. Reaction mechanisms for longer carbon chains contain the reactions of the shorter molecules [31]. Concentrations of the individual species also change across the combustion reaction zone [32]. It seems that the limiting factor to the fidelity of the reaction models is primarily the available computing power [13, 33]. Reaction schemes also differ by which PMMA decomposes, depending on the polymerisation method and molecular weight [26, 27, 34], as well as with different test apparatus designs [private communication with Karen De Lannoye]. 
The material decomposition and combustion models used within this work using FDS is strongly simplified, yet reflects common practise in scientific and engineering applications. First, using a model based on Arrhenius equations, the sample is transformed into a gas (NU_SPEC) and a solid inert residue (NU_MATL), see section 2.2. This is solely controlled by the sample temperature. The released gas is directly involved in the combustion reaction, see section 2.3. Intermediate reaction steps of further decomposition of the MMA are neglected here. This is regarded as an intermediate approach, located between a single surrogate or many intermediate chemical reactions and species. It still maintains the benefit of the surrogate: the reduced computational cost, because fewer species and reactions need to be tracked. ### Micro-Scale Setup The focus of the micro-scale simulations is to determine the temperature-dependent material decomposition reactions (pyrolysis). The experimental data, that is used as target during the parameter estimation, is taken from the open-access MaCFP git repository [7]. Two data sets are used in two different IMP setups. First, TGA data recorded with a heating rate of 10 K/min, and MCC data at 60 K/min, provided by the National Institute of Standards and Technology (NIST). Secondly, TGA data from Sandia National Laboratories (Sandia) recorded at a higher heating rate of 50 K/min. This is used in an effort to get close to matching heating rates for TGA and MCC. As of writing, no 60 K/min TGA data set is available from the MaCFP repository. All the experimental data series, see figure 1, are averaged within each group, to be used as target during the IMP. The TGA data is normalised to get the normalised residual sample mass over temperature, and the final residue amount is determined (figure 0(a)). The amount of residue produced is very small, less than one percent of the starting sample mass. This value is used directly as residue production for each decomposition reaction in FDS (NU_MATL). From the processed MCC experimental data the average heat of combustion is determined. Two different IMP setups are designed (table 1), the first uses the TGA with a heating rate of 10 K/min and the MCC at 60 K/min. The second uses the TGA with a heating rate of 50 K/min and the MCC at 60 K/min. With this, the first setup has a relatively large difference in the heating rate and the second setup a smaller one. An FDS functionality (TGA_ANALYSIS) is used to evaluate an extremely simplified micro-scale simulation setup. The pyrolysis is simulated by employing Arrhenius equations for the decomposition reactions. It is used as provided with \begin{table} \begin{tabular}{l l} \hline \hline IMP Setup & Target Details \\ \hline MCCCTG\_01 & TGA at 10 K/min, MCC at 60 K/min \\ MCCCTG\_02 & TGA at 50 K/min, MCC at 60 K/min \\ \hline \hline \end{tabular} \end{table} Table 1: Overview over the different micro-scale IMP setups. FDS. The goal is to use as few reactions as possible, yet approximate the experiment to a high precision. The overall process starts out from the MCC experiment data. Multiple pyrolysis reactions, eight in total, are manually positioned, such that they roughly approximate the experiment data and form the first guess for the parameter set. The fine-tuning of the reaction parameters using an IMP concludes the first step of the procedure. In FDS, the decomposition reactions are associated with the material definitions (MATL). 
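The superposition of such temperature-controlled reactions can be illustrated with the following minimal sketch. It is not the FDS (TGA_ANALYSIS) solver itself, and the kinetic parameters, mass fractions and residue yield are placeholders rather than the fitted values; it only demonstrates how several first-order Arrhenius reactions combine into one normalised residual-mass curve at a constant heating rate, which is the quantity compared against the TGA target.

```python
import numpy as np
from scipy.integrate import solve_ivp

R = 8.314            # gas constant, J/(mol K)
HR = 50.0 / 60.0     # heating rate, K/s (50 K/min)
NU_RESIDUE = 0.01    # residue yield per reaction (order of the measured TGA residue)

# Placeholder kinetics for three of the superimposed reactions (not the fitted values):
# (pre-exponential factor A in 1/s, activation energy E in J/mol, sample mass fraction)
reactions = [
    (1.0e13, 1.60e5, 0.10),
    (5.0e13, 1.90e5, 0.60),
    (1.0e14, 2.10e5, 0.30),
]

def dy_dT(T, y, A, E):
    """First-order Arrhenius consumption per Kelvin at a constant heating rate."""
    return -(A / HR) * y * np.exp(-E / (R * T))

T = np.linspace(300.0, 800.0, 501)  # sample temperature, K
residual = np.zeros_like(T)
for A, E, frac in reactions:
    sol = solve_ivp(dy_dT, (T[0], T[-1]), [1.0], t_eval=T, args=(A, E), rtol=1e-8)
    residual += frac * (NU_RESIDUE + (1.0 - NU_RESIDUE) * sol.y[0])

# 'residual' is the normalised residual mass curve to be compared with the TGA target.
```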
A sample can consist of multiple material components. These are combined into a boundary condition (SURF). The materials defined here get mostly the same parameters and only differ with respect to the parameters of the Arrhenius equations. This leads to a homogeneous material which decomposes differently depending on its temperature. The reference values, REFERENCE_TEMPERATURE and PYROLYSIS_RANGE, are chosen for being more human-readable, compared to the pre-exponential factor and activation energy. They assume the reaction order to be unity. The reaction order essentially skews the peak, which can be reproduced by multiple peaks/reactions in superposition. The reference values define the shape of the peak, its area is controlled by the fraction of the sample mass associated to this reaction in the surface definition (MATL_MASS_FRACTION). Thus, the energy release is controlled by the mass fractions. This allows to create a uniform gas mixture to be used for all decomposition reactions. It also means that each reaction has access to a predefined amount of sample mass. Each of the reactions releases the same fractions of residue and gas mixture. Furthermore, each Arrhenius reaction is assigned a heat of reaction (HOR), which is also determined during the IMP. In total, four parameters describe a single reaction and are determined: the two reference values, the sample mass fraction per reaction and the respective HOR. ### Gas Phase Combustion In the here proposed method, the complete energy release is assumed to take place as a gas phase reaction in the flame - no oxidation at the solid surface. Furthermore, it is assumed that the involved materials, the combustible gases and Figure 1: Data from different repetitions (Rep.) of micro-scale experiments, MaCFP [7]. Average (Avrg.) used as IMP target. the polymer, are hydrocarbons. Thus, for different materials the strategy might need to be adjusted, but should be transferable in principle. Since PMMA mostly decomposes into its monomer MMA when heated [14, 25, 26, 27], it could be considered as the gaseous fuel. This is also commonly done in practice, see for example the "NIST/NRC Parallel Panel Experiments" validation case for FDS [22]. Due to neglecting all intermediate reaction steps and species, this fuel is considered a surrogate, i.e. a surrogate fuel. In FDS, the surrogate is often chosen to be a pure, primitive species, like propane or the aforementioned MMA. This might lead to difficulties connecting it to the gas released in an experiment, of which the heat of combustion is likely different. FDS deals with this situation, by scaling the mass of combustible species introduced into the gas domain based on the energy release [13]. Appendix section C provides an exemplary description of this concept. In the proposed method here, a simple gas mixture is used with the goal to get an average HOC so that the mass loss rates match for the solid and gas phase. With "simple" meaning only a few primitive components. It is built, using the lumped species concept in FDS. Here, components are chosen that are already implemented in FDS and are also part of the overall combustion reaction mechanism [27]. They differ in their respective molecular weight and heat of combustion. No specific emphasis is given to the radiative fraction (RADIATIVE_FRACTION), thus it uses the default of 0.35 for unknown species [13]. The chosen species are: methane, ethylene and carbon dioxide, see section 2.1. 
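How a mixture of these components translates into an average heat of combustion can be sketched as follows. The mass fractions are purely illustrative (the actual values are an outcome of the IMP, as described next), and the per-species heats of combustion are rounded literature values rather than FDS-internal numbers.

```python
# Heats of combustion in MJ/kg (rounded literature values); CO2 is inert.
HOC = {"methane": 50.0, "ethylene": 50.4, "carbon dioxide": 0.0}

# Illustrative mass fractions of the surrogate fuel mixture (must sum to one).
fractions = {"methane": 0.30, "ethylene": 0.20, "carbon dioxide": 0.50}

hoc_mix = sum(fractions[s] * HOC[s] for s in HOC)
print(f"average heat of combustion of the surrogate fuel: {hoc_mix:.1f} MJ/kg")  # 25.1
```

With such an average heat of combustion, the mass loss rate computed for the solid can be handed to the gas phase without the rescaling described above.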
The fractions of methane and ethylene are directly adjusted during the IMP. Carbon dioxide is used to account the remaining difference. Using three adjustable components, the degrees of freedom for the mixture and therefore the computational demand is kept low. They are also part of a computationally inexpensive simulation setup, compare table 3, but still connect to the simulations with gas phase combustion. Since the combustible species are hydrocarbons, the "simple chemistry" approach of FDS 6.7.6 can be used, with a soot yield of 0.022 g/g taken from [35], table 8.1. During the real-scale validation simulations, two different gas phase reaction definitions are used: the gas burner (propane) and the fuel mixture for PMMA. This requires the "complex chemistry" approach in FDS 6.7.6 and individual gas phase reactions. The stoichiometry of the gas mixture is extracted from the best parameter set of the first IMP step, for details see appendix B. Two different gas phase reactions enable, for example, the assignment of different values for the radiative fraction and soot yield. ### Bench-Scale Setup The cone calorimeter experiment data is taken from MaCFP [7], provided by Aalto University. The experiments have been conducted at a radiative heat flux of 65 kW/m2, without a retainer frame. Square samples of cast black PMMA with an edge length of 10 cm and a thickness of 0.6 cm were used. Heat release rates and back side temperatures, are processed similarly as described in section 2.2. The experiments show good repeatability, as demonstrated by the energy release in figure 1(a). It is assumed here that the back side temperature is measured at the sample centre. However, it is reported that in the experiment the temperature was measured at three points: the centre (Temp_1) and 1.5 cm to the sides (Temp_2/3). With a cell size of 3.3 cm, which is the main grid resolution as explained below, and a cell at the sample centre, all of these locations fall into this centre cell. Regardless of their location, most thermocouples show very similar temperature development, see figure 1(b). Few diverge and are rejected here. In the bench-scale, the cone calorimeter apparatus is simulated in a simplified way. The simulation mode in FDS is set to Large Eddy Simulation (LES) to facilitate the transfer to the real-scale setup. The computati Figure 2: Cone calorimeter experiment results, for 65 kW/m2 radiative flux condition (MaCFP [7], Aalto). “Temp_1” values are recorded at centre, “Temp_2/3” are recorded 1.5 cm to the sides. Averages are used as IMP target. mesh and computed by a single computing core. The fluid cell sizes are based on the sample dimensions. It is assumed to be a square with an edge length of 10 cm. The domain extents 30 cm in the x- and y-directions, with the sample centred. From the top of the sample, the domain extents 60 cm in the positive and two cells in the negative z-direction. Throughout this document, the fluid mesh resolutions are referred to by the number of fluid cells dividing a sample edge. For example, consider that each edge is divided by 3, thus \(3\times 3\) cells cover the sample surface. This is referred to as "C3". It results in an edge length for the cells of 3.33 cm. Consequently, for C5 the cell edge length is 2.0 cm and \(5\times 5\) cells are covering the sample surface. Different IMP setups are used to determine the material parameter sets. 
They vary with respect to the temperature-dependent specific heat and thermal conductivity definitions, IMP targets, as well as fluid cell sizes. The material parameter set is built on a base case, labelled "Cone_01" (table 2). For it, the PMMA density is computed to about 1201.72 kg/m\({}^{3}\), based on the reported sample mass and dimensions. The remaining thermophysical and optical parameters are: emissivity, absorption coefficient, refractive index, specific heat and thermal conductivity. They are solely determined during the IMP. Their initial sampling ranges are guessed and changed with successive limit adjustments. Parameters for the pyrolysis and combustion are taken from the micro-scale IMP (MCCTGA_02), see section 3.1. Thermal conductivity and specific heat for the sample material are represented as temperature dependent values (RAMP). The parameter values are adjusted during the IMP, while the temperature points are fixed. In Cone_01 and Cone_03 to 05 the three temperature points are arbitrarily chosen to be 150 \({}^{\circ}\)C, 480 \({}^{\circ}\)C and 800 \({}^{\circ}\)C. Cone_02 uses the definitions from the NIST parallel panel validation case (dashed lines in figure 31 and figure 32), which are physically informed. In Cone_06 and Cone_07 the temperature values are determined based on the significant temperature interval of the MCC measurement, see stars in figure 3. The chosen values are 150 \({}^{\circ}\)C, 300 \({}^{\circ}\)C and 450 \({}^{\circ}\)C to represent this interval. They are also used for Cone_08, but here the conductivity for PMMA and the backing material are in addition using the low temperature data reported by DBI/Lund from the MaCFP materials database [7]. In general, the backing material and the residue are treated as unknown. The density of the backing material is set to 65 kg/m\({}^{3}\), taken from the Aalto contribution [7]. The density of the residue is chosen arbitrarily to be 2500 kg/m\({}^{3}\). For backing and residue, their individual emissivity, thermal conductivity and specific heat are adjusted during the IMP. The respective sampling ranges are guessed. This is intended to provide some freedom, in an attempt to separate the sample behaviour from the boundary conditions of the experiment. The back face temperature is used as IMP target to achieve this separation. For one IMP setup (Cone_04) only the energy release is used as target to serve as comparison. With respect to fluid cell sizes, the default is C3. However, Cone_05 uses the C5 and Cone_07 the C2 setup. An overview of all investigated setups is outlined in table 2. In the boundary definition of the PMMA layer, the solid mesh is set to be uniform with a stretch factor of 1. The number of solid cells was increased by a factor of 10 (cell size factor of 0.1). The radiative heat flux of the heater is imprinted to the sample surface in the low resolution setups, such that a heater model can be neglected. This radiative heat flux is determined by employing a high resolution simulation (C12) containing a geometrical model of the heater, see appendix section A. The model is designed based on information in the literature [36, 37]. The resulting heat flux distribution on the sample surface is recorded (GAUGE HEAT FLUX), see figure 4. It is observable, that the radiative flux is not uniform across the sample surface, as was also reported earlier [15, 16, 38]. 
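The mapping of this non-uniform flux onto the coarse IMP grids, described next, can be approximated by a simple block average when the grid ratio is an integer. The sketch below is illustrative only: the flux values are made up, the exact processing used to build figure 5 may differ, and a non-integer ratio such as C12 to C5 would require an area-weighted interpolation instead.

```python
import numpy as np

def block_average(flux_hi: np.ndarray, n_lo: int) -> np.ndarray:
    """Average a square high-resolution gauge heat flux map onto an n_lo x n_lo grid."""
    n_hi = flux_hi.shape[0]
    f = n_hi // n_lo                       # high-res cells per coarse cell, e.g. 12 // 3 = 4
    assert n_lo * f == n_hi, "grid ratio must be an integer for a plain block average"
    return flux_hi.reshape(n_lo, f, n_lo, f).mean(axis=(1, 3))

# Toy stand-in for the C12 result: a radially decaying flux over the 0.1 m sample.
x = np.linspace(-0.05, 0.05, 12)
xx, yy = np.meshgrid(x, x)
flux_c12 = 65.0 - 2.0e3 * (xx**2 + yy**2)   # kW/m2, purely illustrative values

flux_c3 = block_average(flux_c12, 3)        # one EXTERNAL_FLUX value per C3 surface cell
print(np.round(flux_c3, 1))
```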
Based on the fluid cell size of the respective the IMP (C2, C3 and C5), low resolution maps are Figure 3: Example of a temperature dependence of the specific heat, realised as a ramp in FDS (Cone_06 and 07). The chosen temperature values are based on the significant temperature interval in the MCC experiment. The same temperature references are used for the heat conductivity. computed, see figure 5. These maps are implemented, using multiple surface definitions with different heat flux values (EXTERNAL_FLUX) for the individual sample surface cells. The low fluid cell resolutions allow to conduct the IMP in a manageable time frame (table 3) and still incorporate gas phase combustion. Just adding the flame heat flux to the imposed flux from the virtual heater is not sufficient. The flame formation is based on the combustible mass released from the sample, which in turn is based on the heat flux to the sample. If the contribution of a flame to the radiative flux is prescribed, it effectively defines a static imaginary flame with no connection to the gas phase model and should therefore be avoided. ### Inverse Modelling Process The inverse modelling is controlled using the framework PROPTI [17, 18, 19]. It uses a shuffled complex evolutionary (SCE) algorithm [39], implemented in Python in the SPOTPY package [20]. The initial sampling limits of the individual parameters are chosen as best guess. If during the IMP parameters get stuck at one of their limits, their sampling space is expanded in the respective direction. Thus, the sampling space gets only larger and the same parameter combinations of the previous runs can still be reached. Typically, after a couple generations it is clear which parameter approaches a limit and a new IMP run can be set up. These changes in sampling limits are denoted with a capital "L" followed by a number, for example "L0" is the original sampling space, "L1" would be the first expansion and so on. See appendix D for an example. This leads to a staggering of the IMP runs. The overall parameter estimation is divided into two steps. In the first IMP step, the reaction kinetics parameters for the material pyrolysis are determined. In the second step, the thermophysical and optical parameters are estimated using the reaction kinetics of the first step. The goal is to distribute the workload for the parameter estimation over two different simulation setups. This allows to move a large amount of parameters to a simulation that can be conducted quite fast \begin{table} \begin{tabular}{l l} \hline IMP Setup & Details \\ \hline Cone\_01 & Base case \\ Cone\_02 & Temperatures for the specific heat and conductivity RAPPs \\ & based on FDS parallel panel validation case \\ Cone\_03 & PMMA slab thickness set to 6.1 mm \\ Cone\_04 & Only HRR as IMP target \\ Cone\_05 & With 2 cm fluid cell resolution (C5) \\ Cone\_06 & Temperatures of conductivity and spec. heat RAPPs based on MCC plot (figure 3) \\ Cone\_07 & Like Cone\_06, with 5 cm fluid cell resolution (C2) \\ Cone\_08 & Like Cone\_06, lower temperature data added to RAPPs for conductivity of \\ & PMMA and backing, from DBI [7] \\ \hline \end{tabular} \end{table} Table 2: Overview over the different simplified cone calorimeter IMP setups. Density is 1201.72 kg/m\({}^{3}\) for all cases. Fluid cell size is C3, if not stated otherwise. Figure 4: Gauge heat flux from simulation with geometrical heater model (GEOM) for a target of 65 kW/m\({}^{2}\). Sample with an edge length of 0.1 m (C12). 
in the matter of seconds rather than minutes. Given the structure of the shuffled complex evolutionary algorithm, this is beneficial. With increasing amount of parameters, the generation size grows more than quadratic. The number of simulations to be conducted per generation \(\Phi\) depends on the amount of parameters to be considered. In the work presented here, the number of complexes \(n_{\text{complex}}\) is chosen to equal the number of parameters \(n_{\text{parameter}}\), see equation 1. \[\Phi=(2\cdot n_{\text{parameter}}+1)\cdot n_{\text{complex}} \tag{1}\] The number of generations was chosen to be about 150 for the micro-scale simulations and about 100 generations for the simplified cone calorimeter. In general, this is a sufficient number of generations to reach convergence. As cost function a root mean square error (RMSE) is used. It is computed between the simulation response of a given parameter set and the experiment data used as target. The RMSE takes the whole data series into account and yields a single value as result. During the inverse modelling the optimiser minimises the RMSE, thus the lowest value is associated to the best parameter set. Separating the reaction kinetics from the thermophysical parameters, is beneficial in two aspects. For one, it reduces the complexity of the inverse modelling itself (\(n_{a+b}^{2}>n_{a}^{2}+n_{b}^{2}\)). Furthermore, about two thirds of the parameters can be determined in a less costly setup. The computing time for the IMP massively depends on the fidelity of the employed simulation. Even though the number of simulations in a single IMP run is about 6 times larger for the micro-scale than the simplified cone calorimeter setup, the latter takes a good 30 times longer for the base case, as summarised in table 3. The simple cone setup with C2 resolution can be completed in about 2 weeks, while the C5 is estimated to take 10 months to over a year Figure 5: Coarse gauge heat fluxes mappings for the IMP, constructed from a high resolution simulation (C12, figure 4). Sample with an edge length of 0.1 m. ### Real-Scale Setup As a validation step, the material parameter sets are used in a real-scale simulation setup of a parallel panel test. The results are compared to the energy release measured in the experiment, see figure 6. The parallel panel test consists of two 0.61 m wide panels facing each other with a separation of 0.3 m. In between both is a gas burner located with a width of 0.3 m and a length of 0.61 m, see validation guide [22] cases "FM Parallel Panel Experiments" and "NIST/NRC Parallel Panel Experiments", as well as the MaCFP data base [21] -- the data set used here is "Test_7_PMMA_R6", labelled "PMMA R6" further on. The combustible sample, attached to the panels, extents 2.44 m above the burner surface. The burner is fed with propane gas. It reaches a quasi-steady energy release of about 60 kW, about 80 s after its ignition. After the sample is confirmed burning (sustained flaming across the panel walls), the burner is shut off. This happens about 120 s after the start of the experiment. This slows down the fire development for about half a minute. The sample material is the same cast black PMMA used throughout the MaCFP test campaign. The simulations are conducted for the three different fluid cell resolutions introduced in section 2.4. The computational domain spans a volume of 1.2 m \(\times\) 0.8 m \(\times\) 4.8 m and is divided into multiple sub-domains (MESH). 
The number and dimensions of the sub-domains were adjusted compared to the FDS validation setups, from (4, 2, 12), to (3, 1, 12). Thus, the individual mesh dimensions are multiples of 10 cm and can be nicely divided following the scheme outlined in section 2.4. Furthermore, the number of meshes is reduced and the simulation can be run on a single computing node with its 64 cores. The surface definitions are taken from the "NIST/NRC Parallel Panel Experiments" case. The sample definitions are built from the parameter sets created within this work. The simulation mode is set to LES. Propane is used as fuel species for the gas burner in the simulation. This differs to the FDS validation setup "NIST/NRC Parallel Panel Experiments", where the combustion reaction for MMA is used for the burner and the sample. In contrast to the experiments, during the simulation the gas burner is kept at a continuous energy release of 60 kW throughout, releasing a mass flux of about 0.00732 kg/(m\({}^{2}\) s). Shutting the burner off earlier leads to fire extinction relatively fast, see discussion in section 4.3. \begin{table} \begin{tabular}{l r r} \hline \hline & Micro-scale & Bench-scale \\ \hline Number of parameters & 33 & 15 \\ Generation size & 2211 & 465 \\ Number of generations & 150 & 100 \\ IMP run time (approx.) & 3.5 days & \textgreater{} 110 days \\ \hline \hline \end{tabular} \end{table} Table 3: Overview of number of optimisation parameters during different steps of the IMP. The amount of CPU cores (MPI) used equals the number of parameters. Time necessary in the bench-scale setup massively depends on the fluid cell number and size, listed here is the base case (C3). Counting of ”IMP run time” begins with the start of L0 and ends with the stop of L3. Figure 6: Fire development over PMMA panels in the parallel panel experiment. ## 3 Results ### Micro-Scale Simulation The first step of the parameter estimation focuses on an inverse modelling process in the micro-scale setup. Two IMP setups have been run (table 1). Both converge to their best fitness values within the first 40 generations, see figure 7. The overall fitness value is a combination of the performance in the MCC and in the TGA simulations. The responses of the best parameter sets for both IMPs are compared against the target data for both heating rates, see figure 8. The "(IMP)" marks which IMP used the respective target. The mass loss for a heating rate of 10 K/min happens at lower temperatures in the simulation for both IMPs, with MCCTG_01 being slightly closer to its target (figure 7(a)). For a heating rate of 50 K/min, both yield a very similar response (figure 7(b)). Both IMP setups are able to reproduce the target of the MCC at 60 K/min well, see figure 9. Figure 9(a) shows the eight predefined decomposition reactions, labeled "PMMA 1" to "PMMA 8", after being fine tuned through the IMP. Reactions "PMMA 4" and "PMMA 8" provide significantly lower contributions compared to the other reactions (figure 9(b)). Due to its better fitness value, focus is shifted to MCCTG_02 and no further sampling limit adjustment is conducted for MCCTG_01. MCCTG_02 is used in the following cone calorimeter simulations. ### Bench-Scale Simulation Here, an overview of the simple cone calorimeter IMP results of the best parameter sets is presented. The full data is provided in appendix F for completeness. The best fitness values of the IMP runs are summarised in table 4. 
Figure 8: Normalised residual mass from TGA, for heating rates of 10 K/min and 50 K/min. Comparison of the responses for the best parameter sets; (IMP) indicates the respective target.

Figure 7: Fitness development of the micro-scale IMP setups. MCCTGA_02, L1, cut short due to good performance.

For each IMP run, 100 generations have been completed, see figure 28. Cone_04-L3 has the lowest fitness value, but its target is only the energy release, thus it is not directly comparable to the remaining IMPs. Figure 11 shows the responses of the best parameter sets of all the different IMP setups. For all cases, the optimiser is able to find a parameter set that reproduces the experiment data relatively well. This is emphasised by drawing all their responses without distinction, including different fluid cell resolutions (C2, C3 and C5). Cone_04 is highlighted, because it uses only the energy release as target. With respect to the energy release, difficulties exist in reproducing the first bump at around 20 s to 50 s, the final peak at about 190 s and the following decay phase. In some cases, pronounced steps are visible towards the end of the simulations, see for example figure 29 for Cone_06. These steps are associated with the burn-out of the individual cells.

\begin{table} \begin{tabular}{l l l l} \hline \hline IMP Setup & Limits & Repetition & Fitness Value \\ \hline Cone\_01 & L2 & 43718 & 0.361 \\ Cone\_02 & L2 & 38608 & 0.402 \\ Cone\_03 & L3 & 17857 & 0.383 \\ Cone\_04 & L3 & 39571 & 0.096 \\ Cone\_05 & L3 & 13568 & 0.288 \\ Cone\_06 & L3 & 45893 & 0.314 \\ Cone\_07 & L2 & 35532 & 0.364 \\ Cone\_08 & L3 & 32262 & 0.290 \\ \hline \hline \end{tabular} \end{table} Table 4: Best parameter sets of the different simple cone calorimeter IMP setups. Note: Cone_04 is lower, since it only uses the energy release as target, while all others also solve for the back face temperature.

Figure 10: Individual reaction steps (MCCTGA_02, L1); residue production excluded.

Figure 9: Heat release rate from MCC for a heating rate of 60 K/min. Comparison between experiment and best parameter set from the IMP.

Furthermore, in all cases a small peak is visible at the beginning of the cone calorimeter simulations, at about 15 s. Cone_04 and Cone_08 capture the energy release profile best (figure 29). Reproducing the temperature recorded at the back face of the sample proves to be challenging, see figure 11(c) and figure 30. For cases where it is a target, the temperature development during the first about 160 s can be reproduced. Between about 160 s and about 270 s, departures are visible, with a pronounced step around 200 s. A peak can be observed towards the end, yet less pronounced than in the experiment. Cone_04 does not have the temperature development target and is not able to reproduce a similar behaviour on its own. The residual sample masses during the simulation are close to the experiment data, see figure 11(d) and figure 33.

### Real-Scale Simulation

The real-scale simulations are used as a validation step of the inverse modelling. As above, only selected data is shown here and the simulation results of all best parameter sets are provided in appendix H. Heat release rates in the parallel panel simulation setups are presented in figure 12. For the smallest fluid cells (C5, figure 12(a)), the peaks are overall narrower and taller compared to the largest cells (C2, figure 12(b)). This is emphasised by comparing the peak energy release, see figure 13(b).
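For reference, the scalar metrics compared in figure 13 (peak energy release and time to reach 1 MW) can be extracted from an HRR time series with a few lines of Python. The following sketch uses a synthetic HRR curve; it is not the post-processing used to create the figures.

```python
import numpy as np

def peak_hrr(time_s: np.ndarray, hrr_kw: np.ndarray):
    """Return the peak HRR (kW) and the time (s) at which it occurs."""
    i = int(np.argmax(hrr_kw))
    return float(hrr_kw[i]), float(time_s[i])

def time_to_threshold(time_s: np.ndarray, hrr_kw: np.ndarray,
                      threshold_kw: float = 1000.0):
    """First time the HRR reaches the threshold (e.g. 1 MW); None if never."""
    above = np.nonzero(hrr_kw >= threshold_kw)[0]
    return float(time_s[above[0]]) if above.size else None

# Synthetic HRR curve of a parallel panel simulation (kW versus s).
t = np.linspace(0.0, 600.0, 601)
hrr = 2000.0 * np.exp(-0.5 * ((t - 250.0) / 80.0) ** 2)

print(peak_hrr(t, hrr))           # about (2000.0, 250.0)
print(time_to_threshold(t, hrr))  # first crossing of 1 MW, about 156 s here
```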
In the simulation the fire develops overall faster compared to the experiment, see figure 13(a). Larger cells slow the development down slightly. In general, faster fire development leads to higher energy release, see figure 14. The total energy release (TER) of all parallel panel simulations is provided in figure 36 in the appendix.

Figure 11: Condensed results of the IMPs. Cone_04 uses only the energy release as target. Parameter set of the FDS parallel panel validation case for reference (Vali. PP). Full data provided in appendix F.

## 4 Discussion

### Micro-Scale

Both IMP setups are able to reproduce the MCC data well (figure 9). The TGA data for a heating rate of 50 K/min is better reproduced than for 10 K/min (figure 8). The deviation between simulation and experiment for 50 K/min is attributed to the non-linear heating rate in the experiment, see figure 27d. Larger differences between experiment and simulation are observable for a heating rate of 10 K/min. Here, MCCTGA_01 gets slightly closer to its target. Otherwise, the results are similar, yet occur at lower sample temperatures compared to the experiment. With the lower heating rate in the TGA (MCCTGA_01), the algorithm is not able to find a parameter set suitable for both conditions, i.e. 10 K/min in the TGA and 60 K/min in the MCC. It comes as some surprise that the IMP favours the MCC and does not position the fit somewhere in between both targets. Some bias may have been built into the setup by manually positioning the first-guess reactions based on the MCC data, or by the chosen cost function. Possibly, some aspects of the apparatus are not captured well enough in the highly simplified micro-scale model. We know, from private communication with Karen De Lannoye, that the design of the TGA apparatus has an observable impact on the results. The divergence could also be related to the released gas mixture, which in this contribution assumes an average heat of combustion over the course of the experiment. It is argued [34] that the first peak at about 187 \({}^{\circ}\)C (figure 1b) could be attributed to residual solvent within the polymer. Furthermore, radically polymerised PMMA is somewhat unstable and starts decomposition at about 220 \({}^{\circ}\)C, due to unsaturated end groups [27]. Even though the primary decomposition products of PMMA are MMA and carbon dioxide, the MMA is not directly involved in the gas phase combustion [27]. Some major compounds involved in the combustion of PMMA are methane, methanol, formaldehyde and acetylene, with ethylene being involved during the acetylene combustion. As an example, for pure compounds the heats of combustion are tabulated in the literature [35] with 50.0 MJ/kg for methane, 50.4 MJ/kg for ethylene, 19.8 MJ/kg for methanol and 24.2 MJ/kg for PMMA.

Figure 12: Comparison between experiment and simulation of the parallel panel setup for different fluid cell sizes. Gas burner fuel is propane in the simulation. Data for fluid cell resolution C3 provided in appendix H.

In the work presented here, a variable mixture of methane, ethylene and carbon dioxide is used for the PMMA pyrolysis, which leads to an average heat of combustion of the PMMA pyrolysis products. This is an additional degree of freedom in the IMP to match the MCC and TGA data. However, fixing this mixture over all reactions could be too rigid. In future work, individual mixtures could be investigated.
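To illustrate how the average heat of combustion of such a surrogate fuel mixture follows from its composition, the sketch below converts the volume fractions quoted in appendix C (74 % methane, 26 % carbon dioxide) into mass fractions and the resulting mixture HOC, and also shows the mass scaling FDS applies when the heat of combustion given in the material definition differs from that of the released fuel. The molar masses and heats of combustion are standard literature values; the sketch is an illustration only, not the input generator used in this work.

```python
# Mass-weighted average heat of combustion of the surrogate fuel mixture
# used for "Cone 03" in appendix C: 74 vol% CH4 and 26 vol% CO2.
M = {"CH4": 16.04, "CO2": 44.01}      # molar masses in g/mol
HOC = {"CH4": 50.0, "CO2": 0.0}       # heats of combustion in MJ/kg
vol = {"CH4": 0.74, "CO2": 0.26}      # volume (= mole) fractions

mass = {k: vol[k] * M[k] for k in vol}                     # relative masses
w = {k: m / sum(mass.values()) for k, m in mass.items()}   # mass fractions
hoc_mix = sum(w[k] * HOC[k] for k in w)
print(f"mixture HOC = {hoc_mix:.1f} MJ/kg")                # about 25.5 MJ/kg

# Scaling of the released fuel mass applied by FDS when the material
# definition specifies 25 MJ/kg while pure methane (50 MJ/kg) is the
# surrogate fuel: the gas-phase mass flux is scaled by the HOC ratio,
# i.e. about one half ("Cone 01" versus "Cone 02" in appendix C).
hoc_material, hoc_fuel = 25.0, 50.0
print(f"mass scaling factor = {hoc_material / hoc_fuel:.2f}")
```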
The difference for the TGA test at 10 K/min, see figure 8, could be a manifestation of this rigidity, as low and high heating rates need to be reproduced simultaneously. A possible solution would be to use multiple gas mixtures. In an effort to reduce complexity downstream, i.e. the definition of multiple chemical reactions and solving more transport equations, the mixture could be generated on-the-fly by releasing a single species per reaction. It might also be sufficient to mix only methane and carbon dioxide. Each decomposition reaction in figure 10 could be doubled. One reaction would release methane and the other carbon dioxide; the mixture would then be controlled via the mass fractions in the surface definition. This strategy is to be investigated in future work. Here, no IMP target is provided to explicitly match the heats of reaction for the individual decomposition reactions. This could be accomplished by using experiment data from differential scanning calorimetry (DSC). Alonso et al. [5] used TGA and DSC data as targets in their IMP setup, changing the contribution of each to the overall fitness assessment. With this, the target of higher importance is reproduced better to the detriment of the other. The employed decomposition scheme uses two consecutive decomposition reactions forming an intermediate material and a residue. A parallel decomposition scheme, as is proposed here, could be able to capture more gradual changes in the heats of reaction across the temperature range of the experiment. This could improve the overall performance of the parameter sets generated here and should be investigated in future work.

Figure 14: Total energy release of the best parameter set in the parallel panel test for all IMP setups. Comparison between experiment (Exp) and simulation data with different fluid cell sizes. Simulation response with gas burner reaction of propane. Dashed line indicates the theoretical total energy release of the sample in the simulation.

Figure 13: Peak energy release and time to reach 1 MW in the parallel panel simulation setup, using the best parameter sets from the IMP. Comparison between experiment (Exp) and simulation data with different radiative fractions (RF) for the gas burner reaction.

The proposed approach using gas mixtures allows modelling of more sophisticated technical materials. Specifically, the behaviour of fire retardant materials could be reproduced. Non-combustible gas could be released early on, which cannot be captured with a single surrogate fuel. This makes it necessary to take MCC and TGA data into account simultaneously. This could even be expanded further, by adjusting which reaction contributes most to the production of the residue. Different residues could also be considered, for example for intumescent materials. Arguably, the goal to use as few decomposition reactions as possible is not achieved. Looking at figure 10, PMMA 4 (\(0.07\%\)) could be removed, possibly also PMMA 8 (\(1.23\%\)) -- even though it is not too far off from PMMA 6 (\(1.88\%\)). It should be noted that there is an error in the definition of the pyrolysis reaction input for PMMA 4. Its heating rate is set to 80 K/min instead of the desired 60 K/min. Since the contribution of PMMA 4 is negligible, it is regarded as inconsequential here. This is confirmed with a corrected IMP, see the data repository (MCCTGA_2b), which virtually yields the same result.

### Bench-Scale

In terms of the fitness value, Cone_08-L3 and Cone_05-L3 performed best across all IMP setups, see table 4 and figure 28.
This excludes Cone_04-L3, because it neglects the back side temperature. Adjusting the sampling limits mostly leads to better parameter sets. Occasionally, the IMPs do not find better sets within the given number of generations. For example, L1 of Cone_03 shows worse fitness values throughout, compared to L0, see figure 28. The likely reason is that with each adjustment the process starts anew, combined with the randomness in choosing the individual values. With more generations, better parameter sets may be found. The impact of smaller fluid cells during the IMP is not clear. It might lead to better parameters for the Cone_05 series, yet its enormous runtime makes it infeasible to wait for its completion during this work. As of now, it shows fitness values that are just marginally better than Cone_08-L3 (table 4). Compared with previous work [8, 10], higher fluid cell resolutions are used here to cover the cone calorimeter sample. With C3 and above, corner, edge and centre cells become distinguishable -- compare the flux maps for two, three and five cells (figure 5). For the C2 configuration, each cell has essentially the same value -- the average over the whole surface. This is summarised in figure 15.

Figure 15: Gauge heat flux distribution for different resolutions (C2 to C12) across all surface cells.

Higher resolutions better capture the ring of higher heat flux and the substantially lower heat flux in the corners, see figure 4. The steps in the simulations during the decay phase might be a result of this, see figure 29 between about 200 s and 250 s for Cone_06_L0. Two processes control the decay: cells burning out and the local burning behaviour, depending on the material parameters. Combining both can smooth out the decay phase. The C2 cases are primarily controlled by the parameters and show a steep drop at the end (Cone_07), because uneven sample consumption cannot be covered well. In the C3 setups the formation of pronounced steps is visible in some L0 cases, but higher limit adjustments can show a smoother decay. This behaviour is also reflected in the back side temperature. During the decay period a spike is visible (figure 2b), until the temperature settles at a constant value, from around 280 s onward. This constant value primarily shows the influence of the heating element, without the sample. During the experiment, the sample material is consumed and at some point the thermocouples below are exposed, starting from the centre. Thus, they are able to receive the heat radiation from the flame of the surrounding sample directly, in addition to the heater radiation. Due to thermal tension, they may also bend towards the heater. All these aspects together lead to the formation of the spike. The optimiser has difficulties capturing both the heat release rate and the material temperature. With a lower fluid cell resolution (C2, Aalto_6b in figure 30) the temperature peak between 200 s and 250 s cannot be reproduced. This is associated with the near-uniform consumption of the sample material, see above. In the other setups, the peak can be captured for the initial sampling limits, but mostly disappears with further adjustments. Certainly, neglecting three-dimensional heat conduction inside the sample is influencing the outcome as well. On the other hand, the sample shape can change significantly during the experiment, see figure 16. Relatively early on, it creates a foam layer and starts to bend towards the heater.
In experiments performed by Karen De Lannoye, the maximum height of the bump was observed to extend approximately two sample thicknesses above the original surface of a sample with a thickness of 6 mm (figure 16, right), but the behaviour can change depending on the experiment conditions. This relatively symmetrical bump can change its shape significantly during its decay. As the sample material is consumed, its surface retracts further away from the heater, compared to the beginning of the experiment. Thus, in the experiment, the received radiative flux from the heater should change. These deformations are not replicated in the simulation -- the component of the radiative flux of the heater stays constant, by construction. Only the heat flux component from the flame can change. The sample deformation is likely misinterpreted as a change in mass and energy release during the inverse modelling. Given the deformation, it is fundamentally unclear where the back face temperature is actually recorded. Furthermore, changes in the rigidity of the PMMA sample (e.g. melting) such that the thermocouple tip can move into the sample, thermal expansion of the sample holder assembly and associated movement between individual components, mechanical tension on the thermocouples and other effects contribute to the uncertainty. Assuming the thermocouples are tightly attached to the backing material, an air gap could form between them and a bent sample. Such a gap would be interpreted as a lower thermal conductivity during the IMP. As an example, compare Cone_04 with the other IMP results in figure 11. It only uses the energy release as target, not the back side temperature. The temperatures are higher throughout the simulation. Curiously, in the parallel panel simulation setup this behaviour appears to be beneficial, as discussed later in section 4.3. This seems to hint at incorrect temperature readings during the cone calorimeter test. If the sample separates from the thermocouple due to deformation, its recorded temperature should be lower than the actual back face temperature. Consequently, higher back face temperatures are visible in Cone_04 (figure 11), since in the simulation the temperature is recorded at the back face of the sample by construction. From the above, the uncertainty of the recorded temperature increases during the run time of the experiment. Reliable temperatures might only be obtainable for low sample temperatures at the beginning, specifically considering the deformation, melting and consumption of the sample. Therefore, the intended separation between the sample behaviour and the boundary conditions could not be achieved. In the future, it would be interesting to assess the surface deformation of the sample during cone calorimeter tests. Maybe one could leverage methodologies used in assessing the performance of intumescent coatings, e.g. [40]. This could then be used to adjust the prescribed radiative heat flux to the sample surface over time. The conductivity and specific heat change with sample temperature in the simulation (RAMP). This can account for the sample deformation to some degree, when an air gap forms between sample and thermocouple. The temperature points are chosen arbitrarily and used for both parameters, except for Cone_02. At high temperatures, Cone_01 and Cone_03 to Cone_05 get relatively high values assigned by the optimiser, see figures 31 and 32.
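The temperature dependence of the conductivity and specific heat enters the simulation as piecewise-linear ramps of the sample temperature. The following sketch shows how such a ramp is evaluated by linear interpolation between its points; the temperature points and values are hypothetical placeholders and do not correspond to any of the fitted parameter sets.

```python
import numpy as np

# Hypothetical conductivity ramp: temperature points in degC and
# conductivity values in W/(m K); the last point is placed near the
# highest temperature the sample can meaningfully reach.
ramp_T = np.array([20.0, 105.0, 200.0, 400.0])
ramp_k = np.array([0.19, 0.25, 0.20, 0.15])

def conductivity(T_degC: float) -> float:
    """Piecewise-linear evaluation of the ramp; outside the defined range
    the end values are held constant (behaviour of np.interp)."""
    return float(np.interp(T_degC, ramp_T, ramp_k))

for T in (25.0, 150.0, 380.0, 600.0):
    print(T, round(conductivity(T), 3))
```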
Since the material is consumed well before 800 \({}^{\circ}\)C could be reached, only a very small fraction of the ramp piece between the last two points can meaningfully contribute. Thus, the ramp values for 800 \({}^{\circ}\)C are poorly chosen and should be disregarded. The cases where the temperature points are chosen based on the MCC experiment data, see figure 3, lead to more reasonable results for the conductivity, which is not the case for the specific heat.

Figure 16: Formation of a bump during a cone calorimeter test (50 kW/m\({}^{2}\)) of a PMMA sample with 6 mm thickness (side view). Beginning of sample deformation at about 1:51 min after experiment start (left). Peak sample deformation at about 2:41 min after experiment start (right). Images provided by Karen De Lannoye via private communication, modified (cropped) to highlight the deformation.

The final point is at a temperature close to the maximum the sample material can reach. Still, it seems to be a useful approach for unknown materials to align both temperature-dependent values to the micro-scale data. Maybe the highest temperature value of the ramp could be chosen to be about 20 K lower than the highest meaningful value in the experiment data. This would prevent confusing the optimiser with temperatures that are impossible to reach, because the material is consumed. Another strategy could be to run an IMP solely to determine the ramps for some best parameter set. Thus, more parameters could be spent on the ramp alone without the generation sizes becoming too large. With the data provided from DBI/Lund [7], extending the conductivity ramps in Cone_08-L3 seems not to change the temperature development significantly, see figure 31. For most of the IMP setups, values for the lower temperatures are determined that are already in the vicinity of the experiment data. The conductivity ramp of Cone_02 shows a relatively narrow but high peak at 105 \({}^{\circ}\)C, which is used to capture the glass transition temperature of PMMA [34, 41]. It should be noted, however, that the glass transition temperature is reported at about 122 \({}^{\circ}\)C for cast black PMMA, while being 105 \({}^{\circ}\)C for extruded clear PMMA [34]. In the setups presented here, the choice of temperature values prevents finding an accurate representation of the glass transition and should be taken into account in future work. Overall, the sample mass loss during the cone calorimeter simulation is relatively close to the experiment data, see figure 33. This behaviour is an emergent phenomenon of the steps taken with the gas mixture and gives confidence in the proposed method, since it is not an explicit target of the IMP. However, the flame height and gas temperature change with the release of the different surrogate fuel species, see appendix C. This is likely due to dilution of the surrogate fuel with carbon dioxide. Which of the presented centre line gas temperatures in figure 25b is more realistic in the context of the PMMA cone calorimeter experiment studied here is unclear for now. Still, it is worth noting that the difference exists, because higher flames might have an impact on the fire spread in a simulation. For the IMP, the fluid cell resolution of 3.3 cm (C3) seems to be beneficial. It is able to capture the uneven sample consumption, yet it runs relatively fast. However, the question arises how the parameters will perform at higher resolutions, here C5.
For this, figure 17 demonstrates an exemplary comparison: a parameter set determined at a given resolution (labelled "IMP") is used at another resolution (labelled "Check"). Some of the investigated cases show similar behaviour across all limit adjustments, see figure 17c and figure 17d. Others converge towards the higher resolution over the course of multiple limit adjustments, see figure 17a and figure 17b. All limits are provided in appendix G. Given the reduction in computational demand, this is promising.

### Real-Scale

With the real-scale setup, the performance of the best parameter sets of the different IMPs (table 4) is assessed. The fire development in the simulation is faster than in the experiment for all parameter sets and fluid cell sizes, see figure 12. This could be related to the faster ramp-up of the burner in the simulation. It takes 10 s to reach the desired heat release, compared to 80 s in the experiment. Also, the burner is kept burning throughout the simulation, because the fire would extinguish otherwise (see below). Shutting the burner off in the experiment leads to a visible delay in fire development. With larger fluid cells the overall fire development is more drawn out. This could be related to a poorer resolution of the radiation field, since, for example, the radiation angles are not adjusted here. This is also reflected in the TER, indicating that less sample material is consumed than in the experiment, see figure 14. The peak heat release is about 30% to 40% lower in the simulation, depending on fluid cell size and parameter set, see figure 12. A notable exception is the parameter set of Cone_04. Its performance stands out by most closely resembling the shape of experiment "PMMA R6" and reaching a similar peak HRR. This behaviour seems to be associated with neglecting the back face temperature as IMP target. In future work, it is worth looking at Cone_04 in more detail. Removing the back face temperature constraint seems to be beneficial for the real-scale and may be improved with better-chosen temperature values for the ramps. During the work presented here, only the flame heat flux along the vertical centre line of the empty panels is taken into account as starting condition (figure 18). The flame heat flux data across the lower part of the panels is available via MaCFP [21], and it is compared against simulation responses for different fluid cell resolutions in figure 19. All four plots show flux data averaged over 20 s during the steady-state. The dots show the device locations during the experiment. From the simulation, the heat flux is extracted from the solid boundary directly (GAUGE HEAT FLUX). In the experiment, the flux is spread out nearly horizontally along the panels (figure 19a), while in the simulation it is more focused towards the centre line, which coincides with the location of the simulated flame (figure 19b). With larger fluid cells the heat flux is more concentrated at the lower centre line (figure 19d). This indicates that it is not sufficient to simply match the heat flux to the vertical centre line of the panels. To properly assess the performance of the parameter sets, the burner itself needs to be accurately modelled first. For future work it is necessary to develop a more comprehensive representation of the gas burner setup. Further investigations should incorporate the impact of simulation parameters like soot production, cell sizes, parameters of the radiation model and material parameters of the burner top face and empty panels.
Simulations with different burner cut-off times have been conducted, with cut-off times of 120 s and 220 s. In all cases the fire is not able to recover, see figure 37, even though the peak energy release in some cases is in excess of 1 MW. As an example, figure 20 shows an image of the experiment "PMMA R6" [21] and an image series captured in Smokeview, covering 60 s after the burner is shut off. The flame region is flat against the panels, about one to two cells thick. This might interfere with the radiative heat transfer to cells below the lower edge of the flame, as well as to the sides. A closer look at the parameters of the FDS radiation model might be necessary, for example the path length or the number of radiation angles. Smaller fluid cells might be beneficial as well, due to a better resolution of the resulting temperature distribution.

Figure 17: Comparison of the energy release of a best parameter set (IMP) across different fluid cell resolutions (Check) for the simple cone calorimeter setup.

Figure 18: Centre line heat flux, from the propane gas burner to an empty panel.

It should further be noted that the radiative fraction for the combustion reaction of the PMMA pyrolysis products is treated here as unknown and the FDS default is used, i.e. 35 %. Overall, there is a clear need to investigate the conditions necessary for self-sustained fire spread in the real-scale simulation and the parameter transfer from micro- to bench-scale and further from bench- to real-scale. In the given setups, the model seems to struggle to provide meaningful energy transfer to the cells around the reaction zone to sustain the flame.

### General

All IMP results are able to reproduce the cone calorimeter data well, see figure 11. Yet, in the parallel panel simulation, differences become apparent, see figure 12. From the cone calorimeter simulation results alone, it is not obvious how the parameter sets perform in the real-scale. This indicates that individual parameters of the material model may be differently sensitive to the simulation and experiment setup.

Figure 19: Flame heat flux to an empty panel, for different fluid cell resolutions. The dots indicate the locations of the heat flux gauges during the experiment. Simulation data extracted from the boundary (GAUGE HEAT FLUX). All plots show a 20 s average during steady-state flaming.

In the cone calorimeter setup fire spread is negligible, specifically for higher radiative heat fluxes. This might mask the behaviour of some parameters. For example, the emissivity certainly has a high impact on the received energy and therefore on how the sample material heats up. However, in the cone calorimeter the sample receives a constant and high flux, which might be high enough to heat up the sample fast, regardless of which value for the emissivity is chosen. The energy transfer to the sample has a significant impact on the fire development. Specifically for the cone calorimeter setup, the thermal radiation is important. A more detailed investigation of the impact of the radiation model parameters in this setup is necessary. This assessment should also take the performance in the real-scale simulation into account, to ensure that the model for both setups is the same. More care should be taken when setting up the gas burner simulation model. Using individual gas phase combustion reactions for burner and sample should allow simulating the initial sample ignition more precisely, without compromising the overall sample behaviour.
In this work, the energy release is assumed to take place solely as a gas phase combustion reaction. This may be a sufficient model for PMMA. For other materials that show significant surface reactions, like wood, this assumption might not hold. Furthermore, experimental campaigns should incorporate medium-scale setups that focus on fire spread. This allows testing the model specifically on this aspect, which a cone calorimeter cannot provide due to its design and the severity of its exposure conditions. It would also be useful to provide more information on the burners themselves, specifically their surface temperature development and emissivities over the course of the experiment.

## 5 Conclusions

In general, it seems attainable to simulate fire propagation in FDS based on material parameters. The usage of a gas mixture allows capturing the MCC and TGA experiments and prevents FDS from scaling the mass introduced into the fluid domain. Thus, it is a step towards more physical parameter sets. The higher resolution in the simple cone setup can account for uneven radiative heat flux and sample consumption. Overall, it seems clear that many parameters on all levels of this endeavour are important, and their influence needs more cohesive investigation. It is not sufficient to focus on the bench- and micro-scale experiments alone. Despite good performance during these simulations, it is not obvious how the parameters perform in the real-scale. It is necessary to look into the whole chain of setups, to understand how well the parameters eventually translate over to the real-scale. Furthermore, the impact of other model parameters, like the radiative fraction, or the radiation model in general, needs further investigation.

Figure 20: Flame extinction after burner cut-off at 120 s and ramp down over 6 s. Pilot flame of one cell in the centre at the bottom of each panel (Cone_03, L3, propane RF=0.15). Photograph from the experiment (figure 20a) at an HRR of 500 kW, close to the value at burner cut-off in the simulation (figure 20b), cropped out from [21].

Finally, the landscape of experimental data is fractured, specifically for real-scale setups with the same sample material as in the smaller scales. Within these constraints, the parameter set should yield a response close to the observations in the experiments at all scales. This ultimately means the parameter set needs to compensate for the simulation model and experimental shortcomings, which can hardly be accomplished by a "physical" parameter set, thus it needs to be an "effective" representation.

## Data Availability

The experiment data is available from the MaCFP git repositories [7, 21] and the FDS validation suite [22]. The input for the inverse modelling, the results of the IMPs, the scripts used for data processing and the validation simulation results are provided in a Zenodo data repository [23]. A video series on how the data processing and inverse modelling is set up and used is provided on YouTube [24].

## Acknowledgements

The authors thank Karen De Lannoye for discussions on conducting the micro- and small-scale experiments and for the provided images. The authors thank Isaac Leventon for discussions on conducting the parallel panel experiments. We gratefully acknowledge the computing time granted through JARA (project jjsc27) on the supercomputer JURECA [42] at Forschungszentrum Julich and through the project on the CoBra system, funded by the German Federal Ministry of Education and Research with the grant number 13N15497.
This research was partially funded by the German Federal Ministry of Education and Research with the grant number 13N15497.

## CRediT Authorship Contribution Statement

**Tristan Hehnen:** conceptualisation, data curation, formal analysis, investigation, methodology, software, validation, visualisation, writing - original draft preparation, writing - review and editing **Lukas Arnold:** conceptualisation, funding acquisition, methodology, project administration, resources, software, supervision, validation, writing - review and editing

## Appendix A Cone Calorimeter Simulation Setup

Based on Babrauskas' [36] original report on the development of the cone calorimeter, a simplified geometrical representation of the heating element is created. The simplification is primarily focused on the heating coil, which is represented as a smooth conical surface and not as a wound wire, see figure 21. The fluid cell resolution was chosen such that the sample surface (10 cm by 10 cm) is covered with 12 by 12 cells. The geometry itself was built in Blender, using the BlenderFDS addon by Emanuele Gissi. The heater calibration procedure is mimicked in the simulation to determine the parameters of the boundary condition (SURF) of the heater. A device (DEVICE) with the GAUGE HEAT FLUX GAS quantity is located in the centre of where the sample surface is supposed to be during the test. A Python script is used to automatically find an emissivity value for the heater boundary condition that leads to the 65 kW/m\({}^{2}\) at the device. Refer to FindTMP_FRONT.py in the ConeRadiationAssessment directory of the data set [23]. The heater temperature is set based on the temperature reported by Babrauskas, but linearly interpolated between the two enveloping values to get to the desired radiative flux. The simulation includes the gas phase, thus interactions between the radiation and the air are taken into account. The radiative flux is assessed over 20 s, after reaching a quasi-steady state, and averaged over this time span. Afterwards, a simulation is conducted in which an obstruction (OBST) is introduced to represent the sample and its holder. There is a distance of 25 mm between the sample surface and the bottom of the cone heater assembly. From the top boundary of the obstruction the radiative heat flux is recorded (GAUGE HEAT FLUX). Per cell, it is averaged over 20 s after reaching a quasi-steady state, the same as in the previous step.

## Appendix B Complex Chemistry

With simple chemistry, FDS calculates the stoichiometry itself and provides the results in the CHID.out file. This, however, only works if FDS is used regularly, meaning not with TGA_ANALYSIS. A Python script was designed that automatically finds the most recent best parameter set (lowest RMSE, see section 2.5) from the micro-scale IMP and creates a new FDS input file to generate a respective CHID.out file. After manual execution of this simulation, the script can extract the needed stoichiometry information from the CHID.out file, build the appropriate input lines and write them to a new FDS input file. These steps are handled in the GetChemicalReaction.ipynb notebook, which can be found in the data repository [23]. To ensure consistency, the repetition information of the best parameter set is written to the respective FDS input files as well. For the parallel panel simulations this information can be copied over manually.
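The heater calibration described in appendix A amounts to a one-dimensional search for the emissivity value that yields the target gauge heat flux of 65 kW/m\({}^{2}\) at the sample position. The bisection sketch below illustrates this idea; the function standing in for running FDS and reading the device output is a hypothetical placeholder, and the sketch is not the actual FindTMP_FRONT.py script.

```python
def gauge_flux(emissivity: float) -> float:
    """Placeholder for: write an FDS input with this heater emissivity, run
    the simulation and return the time-averaged gauge heat flux (kW/m2) at
    the sample centre. Replaced here by a monotonic dummy model so the
    sketch is runnable."""
    return 80.0 * emissivity

def find_emissivity(target: float = 65.0, lo: float = 0.1, hi: float = 1.0,
                    tol: float = 0.1) -> float:
    """Bisection on the emissivity until the flux matches the target within
    tol (kW/m2); assumes the flux increases monotonically with emissivity."""
    for _ in range(50):
        mid = 0.5 * (lo + hi)
        flux = gauge_flux(mid)
        if abs(flux - target) < tol:
            break
        if flux < target:
            lo = mid
        else:
            hi = mid
    return mid

print(find_emissivity())  # about 0.81 for the dummy model above
```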
Figure 21: Geometrical model of the cone calorimeter heater, using the GEOM namelist. Heater surface idealised as a simple conical shape. Black areas on the sample surface receive a radiative heat flux of about 65 kW/m\({}^{2}\).

Figure 22: Geometrical model of the cone calorimeter heater, using the GEOM namelist. Sample surface resolved as C12.

## Appendix C Mass Losses and Flame Heights

An example parameter set is used to demonstrate the different mass loss rates and energy releases. The cases "Cone 01" to "Cone 03" in figures 23a, 23b and 24 all use the same pyrolysis scheme. "Cone 01" and "Cone 02" use only methane as surrogate fuel. For "Cone 01" the heat of combustion in the material definition is set to 25 MJ/kg, which is about half of the value of pure methane. No heat of combustion value is provided for "Cone 02", thus it is the predefined value of 50 MJ/kg of methane. "Cone 03" uses a surrogate fuel gas mixture that consists of 26 volume percent carbon dioxide and 74 volume percent methane. This leads to roughly the same average heat of combustion as the 25 MJ/kg of "Cone 01". Also, the radiative fraction of the gas mixture was set to 0.20 to match the value of pure methane. Again, no HOC value is provided in the material definition, thus the released mass in the solid is transported directly to the gas domain. This highlights the distinction between the solid and gas phase sides of the FDS simulation. FDS uses the heat of combustion parameter provided in the material definition to scale the mass of fuel that is introduced into the gas domain. Figure 23a shows the mass loss in the solid. In figure 23b it can be observed that the mass introduced into the gas domain is about half for "Cone 01" compared to the others, due to the scaling of the HOC. Consequently, figure 24 shows about double the energy release for case "Cone 02" compared to the others. The different surrogate fuel strategies from above, "Cone 01" and "Cone 03", also lead to different flames. Two simulations are conducted with a constant mass release (HRRPUA), shown in figure 25(a), to mimic both setups. Gas temperatures are recorded on the vertical centre line of the flame and averaged over the second half of the simulation (30 s). This leads to differences in the flame structure, as shown in figure 25(b). No claim is made here as to which one is more "realistic"; only the difference is pointed out.

Figure 23: Mass loss rates for different surrogate fuel species (pyrolysis) at different locations in the same simulation. Visualising the energy-release-based scaling. Different heats of combustion are used as indicated, "Mixture" has a HOC of about 25.5 MJ/kg.

Figure 24: Energy release for different surrogate fuel species. Different heats of combustion are used as indicated, "Mixture" has a HOC of about 25.5 MJ/kg.

## Appendix D Example Limit Adjustments

During the IMP, individual parameters can get stuck at their limits. Figure 26 shows an example of this. During the initial limit definition (L0) the pyrolysis range parameter got stuck at its upper limit. Another IMP run was set up with an expanded range (L1). Note: only the upper limit was adjusted and the lower limit was kept at its original value. Thus, the sampling space only grows larger over multiple adjustments. In the beginning, both developments are different, because not only this parameter's limits are adjusted for this new run, but also those of other parameters that were stuck.
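A minimal sketch of such an adjustment rule is given below: when a parameter's best value sits on one of its sampling limits after an IMP run, that limit is widened for the next run (L0, L1, ...). The expansion factor, tolerance and example numbers are arbitrary illustrations and not the values used in this work.

```python
def adjust_limits(best_value: float, lower: float, upper: float,
                  factor: float = 2.0, rel_tol: float = 0.01):
    """If the best value of a parameter sits (almost) on a sampling limit,
    widen that limit for the next IMP run. The opposite limit is kept, so
    the sampling space only grows over successive adjustments."""
    span = upper - lower
    if abs(best_value - upper) <= rel_tol * span:    # stuck at the upper limit
        upper = lower + factor * span
    elif abs(best_value - lower) <= rel_tol * span:  # stuck at the lower limit
        lower = upper - factor * span
    return lower, upper

# Hypothetical example: a pyrolysis range parameter stuck at its upper limit.
print(adjust_limits(best_value=80.0, lower=20.0, upper=80.0))  # (20.0, 140.0)
```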
Figure 26: Example for sampling limit adjustment.

Figure 25: Simplified cone calorimeter simulation (C15) for different surrogate fuel species to compare flame heights. Mass flux adjusted to get the same energy release. Fuel mixture: methane, ethene and carbon dioxide.

## Appendix E Micro-Scale Tests

Data sets from the micro-scale tests. NIST reported that the equipment for the TGA was a Netzsch F1 Jupiter and a FAA microscale combustion calorimeter for the MCC; Sandia used a Netzsch F3 Jupiter [7].

Figure 27: Different heating rates of micro-scale experiments from the MaCFP data base [7].

## Appendix F Simple Cone Calorimeter Simulation Results

### IMP Fitness Development

Figure 28: IMP fitness development from simplified cone calorimeter simulations at 65 kW/m\({}^{2}\).

### Energy Release Rates

### Back Side Temperatures

Figure 30: Back side temperature in simplified cone calorimeter simulation at 65 kW/m\({}^{2}\).

### Thermal Conductivity

Figure 31: Thermal conductivity in simplified cone calorimeter simulation at 65 kW/m\({}^{2}\).

### Specific Heat

### Residual Sample Mass

## Appendix G Simple Cone Calorimeter Simulation - Fluid Cell Convergence

## Appendix H Parallel Panel Simulation Results

Figure 34: Comparison of the energy release of Cone_04 across different fluid cell resolutions for the simple cone calorimeter setup. Best parameter set of the IMP conducted at 3.3 cm resolution (C3); the same parameter set used at 2.0 cm resolution (C5).

Figure 35: Comparison between experiment and simulation of the parallel panel setup for fluid cell size C3. Gas burner fuel is propane in the simulation.

Figure 36: Total energy release (TER) of best parameter sets in the parallel panel setup. Comparison between different radiative fractions (RF) and fluid cell sizes. Dashed line indicates the theoretical total energy release in the simulation.

Figure 37: Parallel panel simulation with burner cut-off at 120 s and 220 s, ramp down over 6 s. Gas burner fuel is propane.
2308.15123
In-situ Plasma Studies using a Direct Current Microplasma in a Scanning Electron Microscope
Microplasmas can be used for a wide range of technological applications and to improve our understanding of fundamental physics. Scanning electron microscopy, on the other hand, provides insights into the sample morphology and chemistry of materials from the mm-down to the nm-scale. Combining both would provide direct insight into plasma-sample interactions in real-time and at high spatial resolution. Up till now, very few attempts in this direction have been made, and significant challenges remain. This work presents a stable direct current glow discharge microplasma setup built inside a scanning electron microscope. The experimental setup is capable of real-time in-situ imaging of the sample evolution during plasma operation and it demonstrates localized sputtering and sample oxidation. Further, the experimental parameters such as varying gas mixtures, electrode polarity, and field strength are explored and experimental $V$-$I$ curves under various conditions are provided. These results demonstrate the capabilities of this setup in potential investigations of plasma physics, plasma-surface interactions, and materials science and its practical applications. The presented setup shows the potential to have several technological applications, e.g., to locally modify the sample surface (e.g., local oxidation and ion implantation for nanotechnology applications) on the $\mu$m-scale.
Lukas Grünewald, Dmitry Chezganov, Robin De Meyer, Andrey Orekhov, Sandra Van Aert, Annemie Bogaerts, Sara Bals, Jo Verbeeck
2023-08-29T08:41:26Z
http://arxiv.org/abs/2308.15123v1
# In-situ Plasma Studies using a Direct Current Microplasma in a Scanning Electron Microscope

###### Abstract

Microplasmas can be used for a wide range of technological applications and to improve our understanding of fundamental physics. Scanning electron microscopy, on the other hand, provides insights into the sample morphology and chemistry of materials from the mm- down to the nm-scale. Combining both would provide direct insight into plasma-sample interactions in real-time and at high spatial resolution. Up till now, very few attempts in this direction have been made, and significant challenges remain. This work presents a stable direct current glow discharge microplasma setup built inside a scanning electron microscope. The experimental setup is capable of real-time _in-situ_ imaging of the sample evolution during plasma operation and it demonstrates localized sputtering and sample oxidation. Further, the experimental parameters such as varying gas mixtures, electrode polarity, and field strength are explored and experimental \(V\)-\(I\) curves under various conditions are provided. These results demonstrate the capabilities of this setup in potential investigations of plasma physics, plasma-surface interactions, and materials science and its practical applications. The presented setup shows the potential to have several technological applications, e.g., to locally modify the sample surface (e.g., local oxidation and ion implantation for nanotechnology applications) on the \(\upmu\)m-scale.

Microplasma, Plasma, SEM, ESEM, In-situ, EDS, Sputtering

+
Footnote †: Corresponding author

range.[8] Besides the (i) practical aspect of reduced operation cost of microplasma setups compared to large plasma reactors for laboratory-scale experiments and (ii) a general trend toward miniaturization of devices in plasma-application areas, microplasmas also have interesting properties. For example, the large surface-to-volume ratio and short gap distances between the electrodes (typically a few \(100\,\upmu\mathrm{m}\)) lead to a non-equilibrium state where the ion/gas temperature is lower than the electron temperature.[3] This results in a "cold" plasma with gas temperatures close to room temperature,[7, 8, 9] which shows great promise, e.g., in nanomaterial and nanoparticle fabrication.[3] In addition, microplasmas are not confined to vacuum operation. Paschen's law relates the breakdown voltage of a gas with the product \(pd\) of the pressure \(p\) and the gap distance \(d\) between two parallel electrode plates.
For many gases, the smallest breakdown voltages lie in the range of about \(10\,\mathrm{Pa}\,\mathrm{cm}\) to \(1000\,\mathrm{Pa}\,\mathrm{cm}\).[9] Reducing \(d\) to \(100\,\mathrm{\SIUnitSymbolMicro m}\) or less allows plasma operation at or near atmospheric pressure (\(p=101\,\mathrm{kPa}\)). The plasma setup presented in this work is a direct current (DC) microplasma where one of the electrodes is a nozzle with a small orifice through which gas is supplied (see Figure 9a in the Experimental section). Whereas the geometry closely resembles that of a jet, the setup is not technically a plasma jet since the plasma is generated in the gap between the nozzle and the grounded electrode/sample.[12] The interaction of plasmas with flat surfaces or nanoparticles is of interest for technical applications and a better understanding of plasma physics and chemistry. Often, _ex-situ_ structural and chemical investigations on the milli- to nanometer scale are performed after plasma treatment of a material. For these length scales, scanning electron microscopy is a valuable technique for microstructural and chemical investigations (typically using energy-dispersive x-ray spectroscopy, EDS). Recently, the first microplasmas were generated inside scanning electron microscopes (SEMs).[13, 14, 15] A plasma-in-SEM setup not only reduces the time between plasma treatment and subsequent SEM analyses compared to a separate plasma setup, but also prevents exposure of the sample surface to ambient air. The latter aspect enables studies of plasma-treated surfaces where subsequent contact with oxygen, humidity, or contamination must be avoided. Different approaches to generate plasmas in SEMs were demonstrated in earlier studies. For example, local sputter etching was achieved by Mulders and Trompenaars[14] by introducing a small gas nozzle into an SEM and using the electron beam for ionization. In the setup by these authors, the electron beam is scanned in a small slit in the nozzle near the orifice to generate ions in the gas stream. The generated ions flow out of the orifice with the gas flow and are then accelerated toward the sample using an applied voltage between the nozzle and the stage. Modern SEMs often have a built-in option to apply the required negative voltage to the sample stage, typically used for beam-deceleration SEM imaging.[16, 17] This approach does not require reaching the breakdown voltage of the gas, hence leading to a low-energy ion bombardment of the sample. With this setup, low-energy Ar\({}^{+}\) ions with energies ranging from \(20\,\mathrm{eV}\) to \(500\,\mathrm{eV}\) were used to remove amorphous surface layers.[18, 19] Another plasma setup consists of a micro hollow cathode (or anode) DC plasma configuration in an environmental SEM (ESEM).[15] In the latter, the chamber pressure and gas type (in this case Ar) are directly controlled with the ESEM. A supplied high voltage (in this case \(\pm 1\,\mathrm{kV}\)) generates the plasma, and the plasma-surface interaction can subsequently be analyzed within the ESEM. Depending on the electrode polarity, either (i) redeposition of sputtered material from the counter electrode onto the sample surface or (ii) direct sputtering of the sample surface with positive ions was observed. The sputtered area had a relatively large width of about \(2\,\mathrm{mm}\).[15] A benefit of this experimental setup is that the gas-flow controls of the ESEM are used, which reduces the requirements for hardware modifications to an SEM.
However, a drawback is that using the low-vacuum mode reduces the image quality due to electron-beam scattering in the gas, resulting in a so-called electron-beam skirt.[20, 21] This aspect impedes _in-situ_ SEM imaging of the plasma-sample interactions, limiting high-quality imaging to the normal high-vacuum mode of the ESEM. To optimize image quality in gaseous environments, the distance between the end of the microscope's pole piece and the sample, i.e., the gas-path length, is typically minimized to reduce the beam skirt. However, the gas-path length cannot be reduced too much for plasma experiments due to the risk of unwanted arcing to the microscope hardware. Indeed, arcing from the micro hollow cathode to the microscope hardware over a relatively large distance of about 25 mm was reported for this setup using the low-vacuum mode [15]. Matra _et al._[13, 22, 23] demonstrated a working jet-like microplasma setup inside an SEM. This approach combines the properties of a jet (enabling a comparably high pressure in the gas jet compared to its environment) with the small dimensions of a microplasma for local plasma application (typically within a few tens of \(\mathrm{\SIUnitSymbolMicro m}\)). The gas flows from a gas nozzle with a small orifice (nominal diameter of a few tens of \(\mathrm{\SIUnitSymbolMicro m}\)) toward a (flat) sample surface, whereas the chamber is continuously pumped to maintain a low overall pressure. A plasma is generated by applying a voltage, here denoted as source voltage \(V_{\mathrm{S}}\), between the nozzle and the sample, somewhat similar to a plasma reactor with two electrode plates [6]. However, the non-uniform pressure profile between the nozzle and the sample makes this plasma configuration unique, complicating the characterization of the plasma discharge. The gap distance can be adjusted by using SEM imaging for alignment. The pressure profile between the nozzle and the sample can be modified by changing the gas flow, though it will also be heavily affected by the distance between the orifice and the sample. A plasma is generated by applying at least the breakdown voltage between the nozzle and the sample (although the electron beam can be used to aid plasma ignition). Depending on the gas, material removal by Ar\({}^{+}\) sputtering [13] and growth of a C-rich thin film [24] on a Si surface were observed. This proof-of-principle study [13] showed that a microplasma jet can be generated in the evacuated SEM chamber. However, the (desired) DC glow discharge was reported not to be fully stable, resulting in arcing to the sample [13] and a self-pulsing plasma mode for discharge currents in the range of about 3 mA to 30 mA (depending on voltage, gas flow rate, and gap distance) [22]. This arcing led to strong local heating and pronounced damage spots on the sample [13]. Furthermore, these previous studies did not investigate the possibility of "true" _in-situ_ SEM imaging, i.e., live SEM imaging during plasma operation. Instead, SEM images were taken before and after the plasma-treatment steps (also in ref. [15]), which will be denoted as "quasi" _in-situ_ operation in this work. Still, these studies prove that a microplasma can be generated in an SEM and used for surface treatment. This provides the opportunity to observe _in-situ_ changes of a sample's morphology and chemistry on the mm to nm scale during plasma treatment using an SEM, ultimately leading to a better understanding of plasma-surface interactions and fundamental plasma properties.
However, the availability of more studies is hampered by (i) the required non-trivial modifications of an SEM and (ii) the lack of commercial solutions. In this work, a microplasma setup built inside a modern ESEM based on the work of Matra _et al._[13] is presented. A stable operation of a DC discharge without arcing is realized. Further, we present real-time _in-situ_ SEM imaging during plasma operation and show exemplary applications of our plasma-in-SEM setup for sputtering and local surface oxidation. Finally, experimental challenges and potential upgrades of the setup are discussed.

## 2 Results and discussion

The first part of this section shows results related to the microplasma and _in-situ_ SEM imaging. The second part discusses some exemplary results when applying the microplasma to materials. Finally, the third part reviews the limitations of this setup and proposes potential solutions to overcome these limitations.

## 3 Microplasma Characterization

### Gas-Pressure Profile

The plasma setup used here has a non-uniform gas pressure along the plasma gap. The gas density profile can be visualized by SEM imaging (Figure 1a) by using a low primary electron energy (here 2 keV) to increase the electron-scattering probability and secondary electron (SE) generation within the gas cloud [25]. As a result, the SE-SEM image presumably shows higher intensity in regions with higher gas densities (Figure 1a). Here, the gas cloud in Figure 1a flows into the microscope vacuum without obstruction. The contrast variations in the background result from out-of-focus imaging of the sample stage a few mm below the nozzle along the electron-beam direction. The gas density is highest close to the orifice and gradually decreases away from it. This monotonic decrease is in accordance with calculated gas density profiles of restricted gas flows, e.g., in references [26, 27, 28]. More explicitly, Salehi _et al._[29] report an exponential decay of the gas density away from an orifice from a simulation of gas jets for different pressure differences between the inside of the nozzle and the chamber. Experimental measurements of the pressure gradient away from the nozzle by Patel _et al._[30] reveal a continuous pressure decrease away from the nozzle for a distance of about 20 orifice diameters (in their experiment about 20 mm for a 0.8 mm orifice diameter), which would correspond to a continuous pressure decrease away from the orifice of about \(400\,\mathrm{\SIUnitSymbolMicro m}\) for a nominal \(20\,\mathrm{\SIUnitSymbolMicro m}\) orifice diameter. From comparison with these results, we suspect a monotonic decrease in gas density and pressure across the microplasma gap in our experimental setup. However, if the gap distance is reduced by bringing the sample close to the orifice (here about \(120\,\mathrm{\SIUnitSymbolMicro m}\)), an increase in SE signal is visible on the sample surface as well (Figure 1b, dashed arrow). The increased SE signal at the sample indicates an increased gas density at the sample surface. From these observations, it becomes clear that the gas density profile in the gap depends, among other parameters, also on the gap distance. This non-uniform gas pressure impedes predictions and comparison with conventional plasma reactors with a constant pressure between the electrodes. As a beneficial side aspect, the visible gas spot on the sample surface can be used to predict the plasma-spot region. This can be seen by comparing the images before and after plasma operation in Figures 1b and c, respectively, where the pit due to plasma sputtering forms in the region predicted in Figure 1b.
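A rough sense of how strongly the gas density drops across the gap can be obtained from a simple exponential model. The sketch below assumes \(p(z)/p_{0}=\exp(-z/L)\), the exponential form following the simulations cited above, with a decay length \(L\) chosen as an assumption for illustration only, such that the pressure has fallen to a few percent of its near-orifice value after roughly 20 orifice diameters.

```python
import numpy as np

# Illustrative exponential pressure decay away from the orifice, p(z)/p0 = exp(-z/L).
# The 1/e decay length L is an assumption for illustration (chosen so that p/p0 is a
# few percent after ~20 orifice diameters, i.e., ~400 um for a 20 um orifice).
d_orifice = 20e-6            # nominal orifice diameter [m]
L = 5 * d_orifice            # assumed 1/e decay length [m]

for z_um in (0, 50, 75, 100, 125, 400):
    ratio = np.exp(-(z_um * 1e-6) / L)
    print(f"z = {z_um:3d} um  ->  p/p0 ~ {ratio:.3f}")
```

Within such a picture, moving the sample from a \(125\,\mathrm{\SIUnitSymbolMicro m}\) to a \(75\,\mathrm{\SIUnitSymbolMicro m}\) gap noticeably increases the relative pressure at the sample surface, qualitatively consistent with the gap-distance dependence discussed in the following subsection.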
### Voltage-Current Characteristics of the Plasma

Next, the voltage-current characteristics (i.e., the dependence between the discharge voltage \(V_{\mathrm{D}}\) and the discharge current \(I_{\mathrm{D}}\)) of a N\({}_{2}\) microplasma were investigated for three gap distances (\(75\,\mathrm{\SIUnitSymbolMicro m}\), \(100\,\mathrm{\SIUnitSymbolMicro m}\), and \(125\,\mathrm{\SIUnitSymbolMicro m}\)) and three gas flow rates (2.5 sccm, 5.0 sccm, and 7.5 sccm). Nitrogen was chosen over Ar because it resulted in lower chamber pressures for the same gas flow rate, allowing for higher gas flow rates (up to 8 sccm) into the high-vacuum microscope chamber. The pumping speed of different gases is discussed in more detail in the supplementary information. For each gas-flow/gap-distance pair, two measurements were taken for repeatability (denoted in brackets in the figure legends). Figures 2a-c show the values sorted with decreasing gap distance from left to right. The same axis limits were used for easier comparison. In general, a positive slope is visible for all curves, indicative of a so-called abnormal glow discharge plasma [6]. This was also observed by Matra _et al._[22], but not in all of their measurements. After this initial positive increase of discharge current with discharge voltage, nearly all curves show a maximum current followed by a current decrease (cf. arrows in Figures 2b and c). The last aspect is a measurement artifact, probably caused by rapid sputtering of the electrode, and should not be interpreted as an actual voltage-current characteristic of the microplasma. This artifact is discussed in more detail in the supplementary information (Figure S1). Next, the ordinate intercepts of the curves in Figure 2 are discussed. These points correspond to the lowest discharge voltage at which a plasma discharge can be sustained. Note that this is not equal to the breakdown voltage, as the voltage required to initiate a breakdown is often (significantly) higher than the voltage required to sustain one [5]. The actual breakdown voltages were not measured since our setup does not produce the necessary uniform gas pressure for a given gap distance for a controlled measurement [31]. Figures 2a-c show a decreasing minimum discharge voltage for increasing gas flow rates for the same gap distance. Since an increase in gas flow rate for a constant gap distance is assumed to result in an increasing gas density, this decreasing minimum discharge voltage offers an interesting insight into the plasma discharge. As described by the Paschen curve for simple parallel-plate and uniform-pressure DC plasma systems, an increased pressure heavily affects the discharge properties (see also supplementary Figure S2). On the one hand, if the gas density is higher than the optimum (i.e., the point with the lowest minimum discharge voltage, similar to the minimum in the Paschen curve), the electrons undergo many collisions, which limits their ability to gain enough energy to ionize a molecule. This ionization is required to create an avalanche effect, which is needed to sustain a discharge. In this case, a higher voltage is required to sustain the discharge to ensure the electrons can gain sufficient energy to cause subsequent ionization. On the other hand, if the gas density is lower than the optimum, the electrons can easily gain sufficient energy, but they may not collide frequently enough to cause the further ionization required to sustain the discharge. Then, again, a higher voltage is required to ensure that the collisions will cause ionization.
As the minimum voltage required to sustain a discharge decreases with increasing gas density, it is implied that the gas density is overall lower than the optimal case. This is analogous to being on the left side of the minimum in the Paschen curve. It should be noted, though, that given the strong pressure gradient in this setup, the discharge mechanisms are not as straightforward as assumed by the Paschen curve, so a direct comparison is difficult. This behavior of the minimum discharge voltage implies that the plasma could be categorized as a so-called obstructed abnormal glow discharge [5, 6]. When comparing the curves for the same gas flow rate and different gap distances in Figures 2a-c, both the gas density and the gap distance are varied since the former is affected by the latter. Assuming that the gas density at a constant gas flow rate increases for a decreasing gap distance, the changes in minimum discharge voltage in Figures 2a-c indicate that the gas density increases non-linearly and more strongly than the (linearly decreasing) gap distance. An additional complication affecting the interpretation of the data is the setup geometry. The shown setup, with a rounded nozzle with an orifice as one electrode and a possibly textured sample surface as another electrode, is different from earlier publications studying various electrode geometries [31, 32, 33, 34]. Microplasmas are especially sensitive to surface effects due to the small spatial scale in the sub-mm range, as the electric field can be strongly altered by small morphological changes in the electrode surfaces [34]. In addition, due to the high-pressure gradient, it is impossible to accurately control the pressure in the discharge gap using this setup.

Figure 1: Investigation of the gas density profile in the plasma gap. **a** SE-SEM image of the gas flow into vacuum acquired with a primary electron energy of 2 keV. **b** A spot with a slightly increased SE signal is visible on the sample surface (marked with a dashed arrow) when a sample is brought into proximity, probably due to an increased gas density when the gas jet hits the sample surface. **c** After plasma treatment, the bright spot coincides with the plasma-treated region, indicating that the gas spot in **b** can be used for aiming the microplasma at the desired region of interest.

### Plasma Generation and Stability in an SEM

The plasma-in-SEM setup enables studying the interplay between the electron beam of the SEM and the plasma. Different aspects of this interaction are discussed in the following. Firstly, an electron beam can be used to ignite the plasma at lower voltages than required for self-ignition when reaching the breakdown voltage [13] (Figure 3a). For example, in one case a plasma discharge could not be achieved, even when applying a maximum source voltage \(V_{\mathrm{S}}=2\,\mathrm{kV}\) to the nozzle without an electron beam. However, scanning with the electron beam caused a plasma discharge already at \(V_{\mathrm{S}}=920\,\mathrm{V}\) for the same gap distance and gas flow rate. This can be explained by the generation of SEs, backscattered electrons (BSEs), and x-rays upon the interaction of the electron beam with the sample, which then triggers the plasma ignition. Notably, the electron beam ignites the plasma even if not directly scanning in the gap region.
A webcam video comparing plasma ignition by (i) reaching breakdown voltage (the conventional way) or (ii) using the electron beam is found in the supplementary information (_Breakdown-vs-SEM-Plasma.mp4_). In this video, the SEM-triggered plasma shows a less intense plasma cloud than the self-ignited plasma. Therefore, the electron beam can be advantageously employed to ignite a less intense plasma at lower voltages (cf. middle and right images in Figure 3a). In addition, for conditions where a plasma is not self-sustainable, i.e., with a large gap distance and/or low gas flow rate, a plasma discharge was observed that was only active during active electron-beam scanning (see supplementary information Figure S3). Secondly, applying an electric potential to the nozzle will create an electric field that deflects the incoming electron beam, e.g., toward the positive potential on the nozzle (Figure 3b). The deflection depends on the electron energy (less deflection for higher keV) and probably also on the extent of the exposed metal part of the steel nozzle.

Figure 2: Voltage-current characteristics of a \(\mathrm{N_{2}}\) microplasma for different gas flow rates and gap distances. All axis limits are equal for easier comparison. The data is shown for decreasing gap distance from left to right. Two measurements were performed for each gas flow rate and gap distance. No discharge was observed for \(125\,\mathrm{\SIUnitSymbolMicro m}/2.5\,\mathrm{sccm}\). Slight deviations between these measurements are mainly caused by uncertainties in gap distance. The apparent drop in current for higher voltages (marked with arrows in b and c) is a measurement artifact caused by sample-surface sputtering. In general, larger discharge currents are observed for higher gas flow rates and smaller gap distances. A positive slope for all curves indicates a so-called abnormal glow discharge behavior.

In our setup, insulating tape was used to cover most of the steel nozzle, excluding the tip (see black tape in Figure 9). The deflection may be minimized by (i) shielding the open metallic surface of the nozzle tip and (ii) using a higher primary electron energy. However, the deflection can also be used advantageously. For example, the deflection can be strong enough so that the SE-SEM image is formed from the nozzle-tip surface, e.g., at \(V_{\mathrm{S}}=920\,\mathrm{V}\) for a primary beam energy of \(10\,\mathrm{keV}\) (Figure 3b, right). In this way, the tip region of the nozzle can be imaged with the SEM even though it is aligned parallel to the electron beam, i.e., without a direct line of sight. This effect is more pronounced at lower electron energies. A supplementary movie (_SEM_Plasma_Ignition.mp4_) shows correlative imaging of the webcam and SEM images during a gradual increase in the source voltage \(V_{\mathrm{S}}\) and subsequent SEM-induced plasma ignition. The SEM image is increasingly "tilted" toward the nozzle with increasing \(V_{\mathrm{S}}\).

Figure 3: Aspects of microplasma operation in a scanning electron microscope. **a** Webcam images of plasma operation. The electron beam can be used to ignite the plasma at a lower applied source voltage to generate a less intense plasma (right) compared to self-ignition by reaching the breakdown voltage (middle). The shown plasma images correspond to the plasma conditions right after plasma ignition. **b** Top-view SE-SEM images (\(10\,\mathrm{keV}\)) of the nozzle and sample (left) without and (right) with applied voltage on the nozzle (\(V_{\mathrm{S}}=920\,\mathrm{V}\)). In this example, the electrons are attracted to the positive potential on the nozzle, which enables imaging of the orifice area. **c** True _in-situ_ SE-SEM imaging during plasma operation is possible and shows the formation of a pit in the sample due to sputtering.

Thirdly, it was observed that _in-situ_ SEM imaging during plasma operation is indeed possible, opening up the opportunity for time-resolved studies. In SE-SEM imaging, a working plasma leads to an increase in signal (brightness) using the Everhart-Thornley detector (ETD). For imaging, this effect can be compensated by reducing the ETD bias setting. For a CO\({}_{2}\) plasma, this method proved effective for discharge currents up to about \(7\,\mathrm{\SIUnitSymbolMicro A}\), after which the ETD was saturated (i.e., no further reduction in bias possible), and no SE-SEM imaging was possible. It is remarkable that _in-situ_ SE-SEM imaging during plasma operation is feasible, despite several challenges: (i) the electron-beam current used (a few nA) is about a thousand times lower than the measured discharge current (a few \(\mathrm{\SIUnitSymbolMicro A}\)), (ii) many spurious SEs are likely generated in the plasma region [5, 6], and (iii) the positive suction voltage on the ETD of 250 V to attract SEs is comparatively low compared to the nozzle voltage (typically \(>\)1 kV). For example, three SE-SEM images taken during continuous microplasma operation are shown in Figure 3c. The plasma duration increases from left to right, leading to increasing pit diameter and depth due to surface sputtering. The most notable distortion in the SE-SEM image is caused by the applied nozzle voltage, resulting in an electron-beam deflection (Figure 3b). Similarly, BSE-SEM imaging was tested by negatively biasing the ETD with \(-150\) V to suppress (mainly) SEs from the image signal. In contrast to SE-SEM imaging, the BSE-SEM image brightness is not affected by the discharge current during plasma operation, meaning that BSE-SEM imaging is still possible even when the SE signal becomes saturated at high discharge currents (e.g., \(>7\,\mathrm{\SIUnitSymbolMicro A}\) for CO\({}_{2}\)). A video comparing BSE- and SE-SEM imaging is found in the supplementary information (_In-situ-SEM_SE-vs-BSE.mp4_). Since the ETD covers only a relatively small solid angle, it is inefficient for BSE detection. This results in a lower signal yield than for SE-SEM imaging. However, the low BSE signal may be increased by using a more efficient and low-vacuum compatible BSE detector [35, 36], but this was not tested in this work. Since both SE- and BSE-SEM imaging is possible and similar to conventional SEM imaging, the signals can be chosen depending on the experiment, or both signals can be collected with two different detectors. This enables more surface-sensitive imaging with SEs and \(Z\)-dependent imaging with BSEs [21]. An application-relevant observation from the demonstrated setup is the absence of undesired high-current and high-frequency arc discharges, which were reported by Matra _et al._[22] as a self-pulsing plasma mode. Instead, we observed stable DC glow discharges with discharge currents ranging from about \(0.1\,\mathrm{\SIUnitSymbolMicro A}\) to \(175\,\mathrm{\SIUnitSymbolMicro A}\), which can be controlled by adjusting \(V_{\mathrm{S}}\). This corresponds to current densities ranging from \(5\,\mathrm{mA}\,\mathrm{cm}^{-2}\) to \(9\,\mathrm{A}\,\mathrm{cm}^{-2}\) for an assumed plasma-spot diameter of \(50\,\mathrm{\SIUnitSymbolMicro m}\). The latter can vary depending on the gap distance.
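These current densities follow directly from \(J=I_{\mathrm{D}}/A\) for a circular spot; the short sketch below reproduces the quoted range, with the \(50\,\mathrm{\SIUnitSymbolMicro m}\) spot diameter as the only assumed input.

```python
import math

# Current density J = I / A for a circular plasma spot (assumed diameter: 50 um).
spot_diameter_cm = 50e-4                        # 50 um expressed in cm
area_cm2 = math.pi * (spot_diameter_cm / 2) ** 2

for current_A in (0.1e-6, 175e-6):              # 0.1 uA and 175 uA discharge currents
    j = current_A / area_cm2                    # A/cm^2
    print(f"I_D = {current_A * 1e6:6.1f} uA  ->  J ~ {j:.2g} A/cm^2")
```

This gives roughly \(5\times 10^{-3}\,\mathrm{A}\,\mathrm{cm}^{-2}\) and \(9\,\mathrm{A}\,\mathrm{cm}^{-2}\), matching the values stated above; a larger or smaller actual spot would scale these numbers accordingly.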
We did not investigate currents higher than \(175\,\mathrm{\SIUnitSymbolMicro A}\) since the \(30\,\mathrm{\SIUnitSymbolMicro m}\) thick Cu target is sputtered away in a few (tens of) seconds at the plasma spot. Conversely, the plasma could not be sustained below the lower limit of about \(0.1\,\mathrm{\SIUnitSymbolMicro A}\). The absence of arcing may be explained by the lower chamber pressure in our used SEM (about \(2\times 10^{-2}\) Pa) compared to the reported values "below 1 Pa" [22]. Notably, a self-pulsing plasma was observed for the shown setup when powering it in ambient air during prototyping. The high-frequency arcing in this self-pulsing mode (a few tens of kHz) causes significant electromagnetic interference to surrounding electronic devices, including the SEM. In addition, powering the setup in the low-vacuum mode of the SEM at a chamber pressure of 40 Pa leads to undesired discharges in the SEM chamber, similar to the observations by Pardinas [15]. This restricts the plasma operation to the high-vacuum mode (below \(3.3\times 10^{-2}\) Pa for the used SEM). Here, only occasional arcs during plasma operation were observed when non-flat samples with surface protrusions were used. It may be possible to fully mitigate the self-pulsing plasma mode by an optimal choice of electronic components in the circuit. Still, in our case, the reduced chamber pressure (about \(2\times 10^{-2}\) Pa) is the most likely reason for a stable DC plasma operation compared to Matra _et al._[22].

### Microplasma Applications

#### Sputtering and Cone Formation

Sputtering is the process of removing atoms of the target material by impinging ions. Sample material was removed by this process in all experiments, where the sample was used as the cathode. The positively charged ions are accelerated toward the cathode and cause sputtering, as is common in glow discharges. This results in changes in surface morphology in the plasma-spot regions, with diameters ranging from about \(50\,\mathrm{\SIUnitSymbolMicro m}\) to \(150\,\mathrm{\SIUnitSymbolMicro m}\) (depending on the gap distance, pressure, discharge voltage/current, and plasma duration). In the following, results for sputtering on (i) a polished or (ii) a Ni nanoparticle-covered Cu surface are shown. The formation of a pit under CO\({}_{2}\) and Ar-containing plasma was observed for a polished Cu surface. An example is shown in Figure 4a, which was created with Ar plasma. Experimentally, this pit formed after \(1.2\,\mathrm{keV}\) Ar\({}^{+}\) exposure with a discharge current of about \(15\,\mathrm{\SIUnitSymbolMicro A}\) (current density of \(1.2\,\mathrm{A}\,\mathrm{cm}^{-2}\) for a plasma-spot pit diameter of \(40\,\mathrm{\SIUnitSymbolMicro m}\)) for about \(10\,\mathrm{s}\). The rapid pit formation is indicative of the high sputter rates of the setup. The pit surfaces are rougher than the original polished surface. A comparably small conical structure is visible at the edge of the pit, which is magnified in Figure 4b. This may have been an impurity or other contamination present in or on the Cu surface, which deformed into the shown conical structure during sputtering. Its bright appearance in the SE-SEM images may be explained by the penetration depth of primary electrons, here at an energy of \(15\,\mathrm{keV}\). For relatively thin structures such as the shown impurity in Figure 4b, SEs are emitted not only on the entrance surface of the beam but also on the exit surface of the cone (and also the sample material behind the cone).
The additional SE emission from the exit surface (relative to the incoming electron-beam direction) leads to higher SE-SEM image intensity for thinner sample regions. Cone formation under ion bombardment is typically initiated by seed particles on the surface with lower sputter yield. The latter correlates with the melting temperature of a material. Wehner [39] has tested numerous surface/seed combinations of metals with different melting temperatures and found that cone formation requires seed materials with higher melting temperatures than the surface material. This is the case for Ni particles (\(T_{\mathrm{melt}}=1728\,\mathrm{K}\)) on a Cu substrate (\(T_{\mathrm{melt}}=1358\,\mathrm{K}\)) observed in Figure 5. Note that the used nanoparticles are large enough (around \(100\,\mathrm{nm}\)) so that a reduction in melting points is assumed to be negligible [40, 41]. A mean apex angle of \(\Theta=(61.4\pm 11.1)^{\circ}\) (the error being the standard deviation) was measured for 40 cones. According to Stewart and Thompson [42], \(\Theta\) is related to the ion-incidence angle for maximum sputter yield \(\Theta_{\mathrm{m}}\), as \(\Theta_{\mathrm{m}}=\left(180^{\circ}-\Theta\right)/2=(59.3\pm 5.6)^{\circ}\). This experimental value of \(\Theta_{\mathrm{m}}\) is in good agreement with the maximum \(\Theta_{\mathrm{m,sim}}\approx 65^{\circ}\) of a simulation of the angle dependence of the sputter yield of Ar on Ni using SRIM (supplementary information Figure S4). The differences between measured and simulated values can be explained by (i) limited statistics based on only 40 measured cones, (ii) systematic errors in the angle measurement from SEM images, and (iii) uncertainties in the simulation [43].

Figure 4: Cone formation after Ar\({}^{+}\)-ion sputtering for different concentrations of surface particles. The lower row shows higher-magnification SEM images of the upper row. **a** Cone formation is not visible in the shown region of a polished Cu surface. A small cone is visible on the edge (**b**), probably due to a small contaminating particle on the sample surface. **c** and **d** Debris on the Cu surface forms cones under plasma treatment. **e** Ni particles deposited on a Cu substrate show clear cone formation in the plasma-treated region. In the early stages of sputtering, the Ni particles locally agglomerate to form a cone (see the example in **f** marked with a dashed arrow).

#### Local Oxidation

Plasma finds applications in both the oxidation and reduction of materials [44, 45]. Here, we investigate the possibilities of local plasma-induced sample oxidation in the SEM. As a first example, a polished Cu surface was exposed to a \(\mathrm{CO_{2}}\) plasma (Figure 6). The gap distance was approximately \(130\,\mathrm{\SIUnitSymbolMicro m}\) (Figure 6a). In Figure 6a, a sputtered hole from a previous experiment is visible in the top right corner, and the nozzle is visible in the bottom right corner. The applied source voltage was \(V_{\mathrm{S}}=2\,\mathrm{kV}\), and discharge currents between \(70\,\mathrm{\SIUnitSymbolMicro A}\) and \(120\,\mathrm{\SIUnitSymbolMicro A}\) were measured. After \(10\,\mathrm{s}\) of plasma operation (Figure 6b), a pit starts forming with a diameter of about \(70\,\mathrm{\SIUnitSymbolMicro m}\). Chemical analysis by EDS shows increased Cu and decreased O signals in the pit region, indicating a removal of the native Cu oxide by sputtering. This exposes the underlying Cu metal, leading to a higher Cu L\(\alpha\) signal. After 50 s, the pit is widened to about 100 \(\mathrm{\SIUnitSymbolMicro m}\) diameter (Figure 6c).
The sputtered pit area still shows a higher Cu signal than the unaffected Cu surface around it, similar to Figure 6b. The reduction in Cu L\(\alpha\) signal in the top part of the Cu elemental map in Figure 6c results from shadowing of the generated Cu L\(\alpha\) x-rays from the inside of the pit toward the EDS detector. An increase in O K\(\alpha\) signal is visible at the pit's edge (Figure 6c). This observation indicates the oxidation of Cu in this region. The O signal increases under prolonged CO\({}_{2}\)-plasma exposure (not shown here), which we attribute to the continuous growth of this Cu-oxide layer. The sample was investigated again in the SEM and using light microscopy after the _in-situ_ experiments (Figures 6d-h). The top-view BSE-SEM image acquired at 20 keV shows different experimental sites of local CO\({}_{2}\) plasma treatment (Figure 6d). The black areas show regions where the total thickness (about 30 \(\mathrm{\SIUnitSymbolMicro m}\)) of the Cu support was sputtered away, leaving holes behind. One of the holes is also displayed in the SE-SEM image in Figure 6e. The tilted view reveals the high aspect ratio of the sputtering process, resulting in vertical sidewalls. The elemental map of O shows an increased O signal around the plasma spots, similar to Figure 6c, which is decreasing in radial direction away from the spots. For the pits, the removal of the native oxide layer of Cu leads to an increased Cu L\(\alpha\) signal. The increased O concentration around the holes reduces the effective atomic number relative to metallic Cu. This results in a reduced BSE intensity in Figure 6d in the oxidized regions due to the BSE signal's atomic number \(Z\) dependence [21]. Interestingly, the oxidation of the Cu surface reaches a few hundred \(\mathrm{\SIUnitSymbolMicro m}\) away from the initial plasma spots. This phenomenon is more clearly visible in the light-microscopy image (Figure 6h), which shows interference effects related to the gradually changing thickness of the grown Cu-oxide film (Newton rings). In the top left corner of the image, there is an unaffected (i.e., without plasma-induced oxidation) area of the sample (marked with an arrow in Figure 6h). Overall, the polished Cu surface is sputtered away under CO\({}_{2}\) plasma. A local CO\({}_{2}\) plasma causes oxidation around the plasma spot, probably forming a Cu-oxide film with decreasing thickness away from the plasma spot. This oxidation is most likely caused by oxygen species (such as atomic or ionized O) generated in the plasma. These species can be transported out of the plasma (so-called afterglow) by the gas flow, explaining why the oxidation of the Cu is observed away from the plasma spot as well.

Figure 5: Quasi _in-situ_ observation of Ni nanoparticle agglomeration and subsequent cone formation during Ar\({}^{+}\)-ion sputtering (5 \(\mathrm{\SIUnitSymbolMicro A}\), 1.32 keV) for the given duration shown above the SE-SEM images. The latter were taken with a large-field detector (LFD) in the low-vacuum mode (40 Pa) after each plasma operation in high-vacuum mode. A few interesting regions are marked with arrows. Region (1) shows a gradual change from (**a**) a round particle morphology to an increasing distortion toward a conical shape. After reaching the latter in **d**, the cone is then starting to be removed by sputtering, as visible in **e**. Region (2) shows the sudden agglomeration of a few nanoparticles in **c**. A larger cone is forming from this agglomeration (**d** and **e**). Region (3) exemplifies that, after initial formation, the cones are sputtered away under further Ar\({}^{+}\)-ion bombardment (**d** and **e**).

Figure 6: Sputtering and oxidation of a polished Cu surface under CO\({}_{2}\) plasma. **a** SE-SEM image showing the sample surface opposite to the nozzle with a 130 \(\mathrm{\SIUnitSymbolMicro m}\) gap. The hole in the top-right corner is from an earlier experiment. **b and c** Images and O/Cu elemental maps after 10 s and 50 s plasma treatment. A pit forms due to sputtering. A higher Cu signal in the pit indicates the removal of the native oxide in the plasma spot. **c** Enhanced O signal is visible at the pit's edge (marked with a solid arrow). The depletion of Cu signal is due to the shadowing of the x-ray signal toward the detector. **d** Top-view BSE-SEM image of various pits and holes in the Cu foil after plasma treatment. **e** Side-view SE-SEM image of a hole showing vertical side walls. **f and g** The elemental maps reveal enhanced oxidation around the plasma spots and higher Cu signal in the pits similar to **b** and **c**. **h** Light-microscopy image showing interference effects in the oxidized regions around the plasma spots.

Next, similar experiments with CO\({}_{2}\) plasma on Ni nanoparticles were performed (Figure 7a, left column). The Ni particles were deposited on a Cu support film and formed a layer with a (varying) thickness of a few \(\mathrm{\SIUnitSymbolMicro m}\) (Figure 4e). The gap distance was 250 \(\mathrm{\SIUnitSymbolMicro m}\), and the discharge current was 5 \(\mathrm{\SIUnitSymbolMicro A}\). Local oxidation was observed _inside_ the plasma spot, as marked by the arrow in the elemental map acquired after 10 \(\mathrm{s}\) plasma exposure. The O signal increases with increasing plasma duration from 0 \(\mathrm{s}\) to 60 \(\mathrm{s}\). This aspect is not as evident in the noisy elemental maps but more clearly visible in the summed-up and normalized EDS spectra (see Figure S5 for details) from the plasma-spot region as an increasing O \(\mathrm{K}\alpha\) peak (Figure 7b, left). This observation is different from the oxidation _outside_ the plasma spots observed for a flat Cu sample (Figure 6). This may be caused by a more pronounced sputtering of Cu compared to Ni, where any oxidized Cu in the central plasma spot is directly removed by ion bombardment. In addition, the ion dose applied to the Ni nanoparticles (Figure 7, 5 \(\mathrm{\SIUnitSymbolMicro A}\)) was lower than for bare Cu (Figure 6, about 70 \(\mathrm{\SIUnitSymbolMicro A}\) to 120 \(\mathrm{\SIUnitSymbolMicro A}\)), resulting in more sputtering for the latter. Besides oxidation, the sputtering during CO\({}_{2}\) plasma changed the morphology of the Ni particles inside the plasma spot from round shapes toward conical shapes, as discussed earlier (Figure 4). Overall, the EDS signals for Ni and Cu (from the underlying substrate) are nearly unchanged for CO\({}_{2}\) plasma for this ion dose (Figure 7b, right). Besides using CO\({}_{2}\), oxidation and sputtering of Ni nanoparticles were also studied for a 25 % O\({}_{2}\)-75 % Ar gas mixture (denoted as Ar/O\({}_{2}\) in the following). The plasma parameters were kept the same as for CO\({}_{2}\) (gap distance of 250 \(\mathrm{\SIUnitSymbolMicro m}\) and an approximate discharge current of 5 \(\mathrm{\SIUnitSymbolMicro A}\)).
The oxidation of the Ni particles by Ar/O\({}_{2}\) plasma is similar to CO\({}_{2}\) plasma; the oxidation is localized to the plasma region (Figure 7a, right column), and the oxidation gradually increases with plasma duration (see O \(\mathrm{K}\alpha\) signal in Figure 7c, left). It is noteworthy that the oxygen-rich spot at 0 \(\mathrm{s}\) in Figure 7a for Ar/O\({}_{2}\) (marked with a dashed arrow) results from a previous experiment. Overall, the sputter rate of Ni particles for Ar/O\({}_{2}\) plasma is higher than for CO\({}_{2}\). The enhanced sputter yield for Ar/O\({}_{2}\) plasma is evident from the change in Ni and Cu \(\mathrm{K}\alpha\) signals in the right plot in Figure 7c, where the Ni/Cu signal decreases/increases due to the continuous removal of Ni particles and subsequent exposure of the underlying Cu support. This aspect is also slightly visible as a reduction of O signal in the central part of the plasma spot in the O elemental map after 60 \(\mathrm{s}\) (Figure 7a). After the removal of the oxidized Ni particles in this area, the underlying Cu support is not oxidized _inside_ the plasma-spot region, leading to the observed O depletion (cf. with O maps in Figure 6f). This observation qualitatively agrees with simulated sputter yields using SRIM (Table S1 in the supplementary information), where Ar has higher sputter yields \(Y\) than O. However, CO is another typical molecule in CO\({}_{2}\) plasmas[46] that could not be simulated and compared with Ar using SRIM.

Figure 7: Local oxidation of Ni particles under CO\({}_{2}\) and O\({}_{2}\)/Ar plasma treatment. **a** Elemental maps showing the O K\(\alpha\) intensity for increasing plasma duration between 0 s and 60 s (top to bottom) for CO\({}_{2}\) plasma (left column) and O\({}_{2}\)/Ar plasma (right column) for similar discharge current (about 5 μA) and gap distance (about 250 μm). A spot of local oxidation is visible after 10 s (marked with horizontal arrows). The O-rich spot at 0 s for O\({}_{2}\)/Ar is from a previous experiment (dashed vertical arrow). **b** and **c** Comparison of extracted EDS signals in selected energy regions: the O region (left) and the Ni and Cu regions (right). The increase in O signal for increasing plasma duration is visible. **c** For O\({}_{2}\)/Ar plasma, sputtering of Ni particles and subsequent exposure of the underlying Cu support reduces the Ni K\(\alpha\) signal and increases the Cu K\(\alpha\) signal. This effect is absent in **b**, indicating a significantly reduced sputter yield for CO\({}_{2}\) plasma. For comparison, the EDS spectra in **b** and **c** were normalized to the integrated signals in the energy intervals \([2\,\mathrm{keV},5\,\mathrm{keV}]\) and \([10\,\mathrm{keV},14\,\mathrm{keV}]\) containing only bremsstrahlung background signal.

Since the sputtering is primarily caused by the bombardment of the grounded sample surface (relative to a positively biased nozzle) with positively charged ions, switching the polarity between the nozzle and the sample can mitigate sputtering. This aspect was verified experimentally by switching the polarity using another DC-DC converter (XP Power, CA12N) instead of the previously used one (XP Power, CA20P). The experiment was then repeated, again using CO\({}_{2}\) gas and a Cu target. The experimental setup is shown in Figure 8a with the EDS acquisition area marked with a dashed line. The polarity between the nozzle and the sample is reversed compared to all other conducted measurements in this work. Comparison of the O elemental maps before and after plasma treatment (Figure 8b) reveals a pronounced oxidation of the surface in a comparatively wide area (about \(400\,\mathrm{\SIUnitSymbolMicro m}\) diameter), i.e., larger than the actual plasma spot. The latter is not clearly visible in the highly tilted view onto the Cu target's surface in Figure 8b, but it is visible in the top-view BSE-SEM image in Figure 8c. This BSE-SEM image was captured during the investigation of the same sample after the plasma experiments using standard SEM imaging parameters. The top-view BSE-SEM image in Figure 8c reveals the plasma spot with a higher image intensity relative to the surrounding dark area related to the oxidized Cu surface. Note that a low primary electron energy of 5 keV was used for BSE imaging to increase surface sensitivity. The increased BSE-image intensity of the bright plasma spot (Figure 8c) can be explained by mild sputtering in this region by negatively charged ions bombarding the positively charged Cu surface. This removes the oxide layer and reveals metallic Cu, ultimately leading to higher BSE image intensity due to a higher average \(Z\) than the surrounding oxidized Cu surface. Even though mild sputtering is present, no large pit or hole is visible in the plasma-spot region (Figures 8d and e) compared to the initially used negative sample polarity (Figure 6d). The plasma spot area has a diameter of about \(25\,\mathrm{\SIUnitSymbolMicro m}\) (marked with a dashed circle in Figure 8d) and shows the formation of small pits with 200 nm to 300 nm (surface) diameter (Figure 8e). These pits are likely caused by the sputtering process and may show its initial stage. Overall, the sputtering of the sample surface is highly reduced when the sample surface is positively biased relative to the nozzle. In the configuration shown in Figure 8a, the mainly positively charged ions are accelerated toward the negatively biased nozzle, resulting in sputtering of the nozzle surface. Indeed, the orifice diameter increased after these experiments and sputtered material was re-deposited inside the orifice (Figure S6). The sputtered nozzle material is likely also re-deposited onto the opposing sample surface. Since the same nozzle was used throughout all experiments here, previously deposited sample material (mostly Cu) _onto_ the nozzle from earlier experiments is now sputtered and re-deposited _from_ the nozzle onto the sample (see the schematic in Figure S6j). In our case, the orifice area is mostly covered with (oxidized) Cu (Figure S6b) before the experiments shown in Figure 8, meaning that part of the oxidized region is likely caused by re-deposited Cu oxide from the nozzle.

Figure 8: Local oxidation of a polished Cu surface under CO\({}_{2}\) plasma treatment with reversed electrode polarity. **a** Overview SEM image of the plasma gap with the EDS region for (b) marked with a dashed rectangle. Note the reversed nozzle/sample polarities. **b** Quasi _in-situ_ EDS measurements before (upper row) and after (lower row) CO\({}_{2}\) plasma treatment. The increased O signal is caused by oxidation and re-deposition of oxidized Cu from the nozzle. **c** BSE-SEM image (5 keV) of the plasma-treated region after the plasma experiments. **d** Higher-magnification SE-SEM image of the central plasma spot. **e** Pits formed in the central plasma spot, probably caused by negative-ion sputtering.

To test this hypothesis, the gas was switched from CO\({}_{2}\) to N\({}_{2}\), and the plasma-treated area showed a N and O signal (see Figure S7 in the supplementary information).
For a N\({}_{2}\) plasma on a Cu surface, an O signal is unexpected and should not be present without considering the aforementioned re-deposition effects. Our results suggest that part of the O signal in Figure 8b is caused by re-deposited oxidized Cu from the nozzle from previous experiments. Even though this effect is undesired for pure oxidation with plasma-generated radicals, it may be interesting to study film growth during sputtering. In summary, oxidation of Ni nanoparticles was observed for CO\({}_{2}\) and Ar/O\({}_{2}\). Oxidation is limited to the central plasma-spot region. For the same ion dose, Ar/O\({}_{2}\) sputtering of Ni nanoparticles is more pronounced than for CO\({}_{2}\). In contrast, oxidation of a flat Cu surface occurs around the central plasma-spot region, which is mostly sputtered rather than oxidized. Sputtering with positively charged ions causes rapid removal of sample material when the nozzle is used as an anode (positive polarity). This results in pits and holes in the central plasma region. Sputtering of the sample can be strongly reduced by reversing the polarity between the sample and the nozzle, leading to less damage during oxidation. However, sputtering of the nozzle material in this configuration causes damage to the tip of the nozzle and redeposition of this material onto the sample surface. Pure sample oxidation without sputtering or redeposition of material requires other plasma configurations. ### Limitations and outlook The current setup presented here demonstrates significant advances compared to the state-of-the-art, including a stable DC discharge and true _in-situ_ SEM imaging. This enables further research regarding plasma-surface interactions, plasma physics, sputtering, and more. However, certain limitations remain, particularly in terms of expanding the scope of potential research areas. For example, fields such as plasma catalysis or biomedical applications of plasma are growing rapidly, increasing the need for more advanced experimental techniques to study, e.g., plasma-catalyst or plasma-cell interactions [47, 48]. For such research topics, this setup is currently unsuited since sputtering of the sample (or redeposition of material from the nozzle) is undesirable and prevents studying the samples under relevant conditions. In order to study such samples, the sputtering behavior of the plasma should be eliminated. In principle, the current setup could be optimized further to reduce the discharge voltage to decrease the ion energy, lowering the sputtering rates. One potential approach would be to increase the ballast resistor in the system, to limit the current and lower the discharge voltage. Another approach would be to further increase the pressure, as it is expected that the current setup operates below the optimum value. However, increasing the gas flow rate would require an upgrade to the pumping system of the SEM since the current experiments were performed at the limit of the microscope when operating in high-vacuum mode. The pressure could also be increased by decreasing the gap distance, but this would then also increase the probability of unwanted arcing behavior, as was also observed in our experiments. Rather, we believe that in order to expand the research potential of (quasi) _in-situ_ plasma in SEM experiments without sputtering or redeposition effects, a fundamentally different plasma type may be required. 
However, this would require a significant alteration of the plasma setup and a complete redesign of the electronics. A number of plasma types could be of interest, each with their potential applications and limitations, as well as practical drawbacks. A common plasma discharge is the dielectric barrier discharge (DBD) [5]. This alternating current (AC, or pulsed) discharge is characterized by a dielectric layer covering one or both electrodes, limiting the current and thus preventing arc formation. This is a non-thermal plasma which is often used in plasma catalysis and biomedical research. However, DBD plasmas are generally filamentary, where the filaments consist of microdischarges (short duration, high current discharges). These filaments make the plasma treatment of the sample heterogeneous, complicating the analysis, and cause issues with electromagnetic interference. In principle, DBDs can be operated in a uniform mode [49], but this requires precise tuning of all relevant parameters (including the dielectric material, voltage, frequency, discharge gas, and pressure) further impeding rapid development of such an experimental setup. An alternative discharge based on the DBD is the so-called surface discharge. This plasma is similar to the DBD, but one of the electrodes is embedded or below the dielectric, whereas the other electrode is placed on the surface of the dielectric. With this, the discharge will be generated at the surface of the dielectric. This plasma still requires AC or pulsed power, but is generally more convenient to operate in a uniform mode [5]. Another approach could be using a plasma jet. Many geometries exist, either powered by DC, pulsed, or AC power, but they all have in common that the plasma is generated within a device, after which it flows outwards, e.g., to a sample [12]. The main difference with the setup presented here is that in the current setup, the plasma is generated in the gap between the nozzle and the sample rather than in the nozzle and sent to the sample. A main advantage of such a plasma jet could be the elimination of the sputtering behavior, as charged plasma species are not predominant (or even absent in the so-called afterglow). Based on this geometry, an electron beam plasma can be generated [5], of which a variation was previously introduced in an SEM [14]. In such plasmas, a high-energy electron beam is sent through a neutral gas, where the electrons ionize gas molecules. The plasma can then be sent to a sample through a gas flow, or the ions/electrons could be selectively attracted by biasing the sample. An external AC or DC circuit can also be added to further sustain and alter the plasma discharge, depending on the desired properties. Having access to a high-energy electron beam makes an SEM promising to further explore such plasmas. Note that all AC or pulsed-powered plasmas are very likely to interfere with the true _in-situ_ imaging of the SEM since the electron beam will be deflected periodically during scanning, drastically decreasing the image resolution. Depending on the desired experiment, this issue could be overcome by turning off the plasma during image acquisition, though this does limit the _in-situ_ capabilities of the setup. Further, introducing a microplasma may enable very different experiments and applications. 
On the one hand, the _in-situ_ plasma may lead to new analytical techniques in an SEM, such as glow discharge optical emission spectroscopy (GDOES) [50, 51], where the emission from sputtered material in a plasma is studied while ablating the sample material for depth profiling (similar to secondary ion mass spectroscopy in focused ion beam instruments [52]). On the other hand, established (e.g., EDS or wavelength dispersive x-ray spectroscopy, WDS [53]) or more recently available (e.g., electron energy loss spectroscopy, EELS [54]) analytical methods in SEMs may have the potential to probe the ionic species in the plasma cloud. This would provide essential and direct _in-situ_ feedback for plasma simulation codes and holds promise for improved control over plasma setups.

## 4 Conclusions

A custom-built microplasma setup was realized inside an SEM based on the design by Matra _et al._[13]. A nozzle with a small orifice feeds a gas into the evacuated SEM chamber, from which a plasma can be generated by applying a certain electrical potential. Stable DC glow discharge plasmas with Ar, Ar/O\({}_{2}\), CO\({}_{2}\), and N\({}_{2}\) gases were successfully generated in the SEM's vacuum chamber. In general, larger discharge currents were measured for higher gas flow rates and smaller gap distances. A non-uniform gas-pressure profile was observed in the plasma gap, which -- in combination with a non-uniform electric field of the electrode geometry -- complicates a direct comparison of the shown setup with conventional plasma systems. Simultaneous SEM imaging with SEs and BSEs during plasma operation was demonstrated, enabling _in-situ_ studies of sample-plasma interactions in the SEM. A few exemplary plasma-sample interactions were studied. Sputtering of Cu surfaces and Ni nanoparticles under different gases was observed. The lower sputter yield of the Ni particles compared to the Cu support, as well as the incidence-angle dependence of the sputter yield, results in the local formation of cones in the plasma-treated area. The same phenomenon was studied with conventional plasma reactors, which shows that our setup can replicate such experimental conditions on the local scale of several tens of \(\mathrm{\SIUnitSymbolMicro m}\). Local oxidation of Cu and Ni was observed for \(\mathrm{CO_{2}}\) gas and an \(\mathrm{Ar/O_{2}}\) gas mixture. At the same time, however, the sample was either simultaneously sputtered away by ion bombardment, or nozzle material was redeposited on the sample by sputtering of the nozzle. These limitations might be overcome by further optimizations of the setup, though for applications where sputtering is detrimental, other types of plasma are to be considered. In conclusion, we have demonstrated that _in-situ_ studies of plasma-sample interactions in a modern SEM are possible. This approach provides direct insight into morphological and chemical changes (via EDS) of the sample during and after plasma treatment. Overall, this may lead to a better understanding of plasma physics and plasma-surface interactions.

## 5 Experimental

### SEM Operation with the Plasma Setup

Plasma experiments were performed using an FEI Quanta 250 ESEM equipped with an Oxford Instruments X-Max EDS detector (80 mm\({}^{2}\) sensor area). Figure 9a schematically shows the main parts of the plasma setup that was built in-house.
A horizontally aligned steel nozzle with a small orifice (SS-1/8-TUBE-CAL-20, 20 \(\mathrm{\SIUnitSymbolMicro m}\) nominal orifice diameter, Lenox Laser) is fixed opposite to a nearly vertically aligned sample surface. The sample surface is slightly tilted with an angle \(\alpha\approx 10^{\circ}\) toward the electron beam for better SEM imaging conditions. The sample-nozzle distance ("Gap" in Figures 9a-c) determines the plasma gap distance and can be adjusted by moving the sample with SEM microscope stage controls. A gas flows from the nozzle into the gap toward the sample surface. The nozzle can be biased with a DC voltage \(V_{\mathrm{S}}\) in the range of \(-1.25\,\mathrm{kV}\) to \(2\,\mathrm{kV}\), i.e., with a positive or negative polarity relative to the sample. A ballast resistance \(R_{\mathrm{B}}=4.3\,\mathrm{M\SIUnitSymbolO}\) is used to limit the discharge current. The discharge current \(I_{\mathrm{D}}=V_{\mathrm{M}}/R_{\mathrm{M}}\) is measured by the voltage drop \(V_{\mathrm{M}}\) across a \(R_{\mathrm{M}}=1\,\mathrm{k\SIUnitSymbolO}\) resistor. Figure 9b displays the experimental setup with an image taken with the microscope's built-in infrared (IR) camera. A few additional components compared to the schematic in Figure 9a are visible, which are explained from top to bottom in the following. The ETD and the large-field detector (LFD) are used for SEM imaging in high-vacuum and low-vacuum modes, respectively. The shown images in this work are mainly SE-SEM images. Selected BSE-SEM images are mentioned explicitly in the text. A pressure-limiting aperture (PLA) with a 500 \(\mathrm{\SIUnitSymbolMicro m}\) diameter is mounted on the SEM pole piece to restrict gas flow into the microscope column. An IR-USB webcam (Arducam B0205) is mounted in addition to the microscope's built-in IR camera to improve imaging conditions of the plasma and control the gap distance. The sample stage consists of a threaded metal rod that is rigidly fixed with two nuts to a Teflon piece. The Teflon piece isolates the sample from the microscope stage to prevent current flow through the latter and possible damage to the microscope. Instead, the current flows via a cable to the measurement resistor \(R_{\mathrm{M}}\). The sample stage with the threaded metal rod and the Teflon block are fixed on an SEM stub, which itself is fixed on the moveable SEM stage. Two micrometer stages (Thorlabs MS3/M) are used to laterally position the nozzle close to the optical axis (below the SEM pole piece) before closing the SEM chamber. The nozzle and the webcam are mounted on an Al platform that is fixed above the moving microscope stage. The height of the Al platform can be adjusted to change the working distance between the SEM column and the sample (typically 15 mm). The gas line and electrical connections are routed through a custom-made feedthrough flange. A detailed image of the plasma gap is shown in the webcam view (Figure 9c). Commercially available grids or apertures made for transmission electron microscopy (TEM) with 3 mm diameter (Gilder Grids GA50 Cu apertures) were typically used as sample or sample support for nanoparticles. The sample is mounted on an Al wedge with conductive Ag paste (EM-Tec AG15). The Al wedge was ground at an angle \(\alpha\) and fixed to the threaded metal rod's end with conductive Ag paste. The lower image in Figure 9c shows the working setup with a glowing DC microplasma. More details about the experimental setup can be found in the supplementary information (Figure S8).
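To illustrate the role of the ballast resistance, the short sketch below estimates the upper bound that \(R_{\mathrm{B}}\) places on the current and how much of the source voltage is left across the plasma gap at a given discharge current; the \(100\,\mathrm{\SIUnitSymbolMicro A}\) example value is assumed for illustration only.

```python
R_B = 4.3e6    # ballast resistance [Ohm]
R_M = 1e3      # measurement resistor [Ohm]
V_S = 2000.0   # maximum source voltage [V]

# Hard upper bound on the current: the entire source voltage dropped over R_B + R_M.
i_max = V_S / (R_B + R_M)
print(f"Current limit set by the ballast at V_S = 2 kV: {i_max * 1e6:.0f} uA")

# Example (assumed) operating point: at I_D = 100 uA a large fraction of V_S
# already drops across the ballast, leaving V_D = V_S - I_D*(R_B + R_M) for the gap.
i_d = 100e-6
v_d = V_S - i_d * (R_B + R_M)
print(f"At I_D = 100 uA: V_D ~ {v_d:.0f} V ({V_S - v_d:.0f} V across the ballast)")
```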
### Plasma Operation Plasma experiments were performed in the high-vacuum mode of the microscope since undesired discharges in the SEM chamber in low-vacuum mode were observed when applying high voltage between the nozzle and the sample. The high-vacuum mode reached a stable chamber pressure of around \(2\times 10^{-2}\) Pa while providing a gas flow of about 2 sccm to 8 sccm through the nozzle (20 um nominal orifice diameter as per the manufacturer) into the microscope chamber. The gas flow was monitored using an Alicat flow meter (M-200SCCM-D/5M). We used CO\({}_{2}\) (purity 99.995 %), Ar (99.9999 %), and N\({}_{2}\) (99.9999 %) gases, and a 75 %Ar/25 %O\({}_{2}\) gas mixture (measured: 74.88 %/25.12 %) in this work (bought from Air Products). The plasma was operated by applying and controlling the voltage difference on the nozzle relative to the sample. A DC-DC converter with a 1 M\(\Omega\) output resistor (CA20P or CA12N depending on polarity, XP Power) was powered by an RS PRO IPS-3303 power supply. The 1 M\(\Omega\) output resistor limits the output current of the DC-DC converter in standalone usage for user safety. The output resistor is in series with a 3.3 M\(\Omega\) resistor, resulting in a total ballast resistance \(R_{\mathrm{B}}=4.3\) M\(\Omega\). The output high voltage \(V_{\mathrm{S}}\) of the DC-DC converter was adjusted with a control voltage between 0 V to 5 V using a Keysight E36106B power supply. After plasma ignition, the discharge current was regulated by adjusting \(V_{\mathrm{S}}\) with the control voltage. Voltage-current characteristics of the plasma were measured with a Keithley 2400 source measurement unit. The highest source voltage of 2 kV was applied, after which the source voltage was gradually reduced while registering the current until no discharge current was measurable. The discharge voltage of the DC plasma \(V_{\mathrm{D}}\) is calculated as \(V_{\mathrm{D}}=V_{\mathrm{S}}-I_{\mathrm{D}}\left(R_{\mathrm{B}}+R_{\mathrm{ M}}\right)\).[6] Figure 9: Schematics and images of the plasma-in-SEM setup. **a** Schematic showing the experimental setup and the most important components. Gas flows from a nozzle orifice over an adjustable gap distance toward a sample surface. A high voltage \(V_{\mathrm{S}}\) is applied to ignite the plasma. The sample surface is slightly tilted at an angle \(\alpha\) toward the SEM incidence, allowing for _in-situ_ SEM imaging. **b** Image of the setup taken with the built-in infrared camera of the SEM showing the setup. A few additional components are shown compared to **a**, such as a webcam and the electron detectors, ETD and LFD. **c** Higher-magnification side-view of the plasma region using the webcam without plasma (upper) and with ignited plasma (lower, the microplasma is marked with an arrow) ### Sample Preparation A Cu TEM aperture (50 um, Gilder Grids GA50) with a diameter of 3 mm and a thickness of about 30 um was used in most experiments to ensure a well-defined, flat electrode opposing the nozzle. For experiments with nanoparticles, commercial Ni particles (nanopowder, \(<\)100 nm nominal average particle size, \(>\)99 % purity, Sigma-Aldrich, CAS number 7440-02-0) were mixed with acetone and then drop cast on the Cu disc. After solvent evaporation, a thin film of Ni particles is left on the Cu surface. Drop casting was repeated multiple times until the TEM aperture was fully covered with Ni particles. ### Data Processing _Fiji_[55] was used for general image processing. 
Images were stitched together using the "Grid/collection stitching" plugin.[56] Image series were registered using the "Descriptor-based series registration (2d/3d + t)" plugin.[57] The background-corrected x-ray peak intensities (net intensities) for the EDS maps were extracted using the "TruMap" function in the Oxford Instruments _AZtec_ software (version 2.1). Additional analyses of extracted (summed-up) EDS spectra from specific regions were processed with the _HyperSpy_ Python package.[58] ## 6 Data Availability Statement Raw data files and data-treatment scripts are available at Zenodo[59] ([https://doi.org/10.5281/zenodo.8042029](https://doi.org/10.5281/zenodo.8042029)). ## 7 Author Contributions **LG**: Conceptualization, Methodology, Investigation, Software, Validation, Formal Analysis, Data Curation, Visualization, Writing -- Original Draft. **DC**: Conceptualization, Methodology, Investigation, Writing -- Review & Editing. **RDM**: Conceptualization, Methodology, Investigation, Validation, Writing -- Original Draft. **AO**: Conceptualization, Methodology, Writing -- Review & Editing. **SVA**: Conceptualization, Supervision, Project Administration, Funding Acquisition, Writing -- Review & Editing. **AB**: Conceptualization, Supervision, Project Administration, Funding Acquisition, Writing -- Review & Editing. **SB**: Conceptualization, Supervision, Project Administration, Funding Acquisition, Writing -- Review & Editing. **JV**: Conceptualization, Methodology, Supervision, Project Administration, Funding Acquisition, Writing -- Review & Editing. ## 8 Conflicts of Interest There are no conflicts to declare. ## 9 Acknowledgments LG, SB, and JV acknowledge support from the iBOF-21-085 PERsist research fund. DC, SVA, and JV acknowledge funding from a TOP-BOF project of the University of Antwerp (FFB 170366). RDM, AB, and JV acknowledge funding from the Methusalem project of the University of Antwerp (FFB 15001A, FFB 15001C). AO and JV acknowledge funding from the Research Foundation Flanders (FWO, Belgium) project SBO S000121N. ## 10 Supporting Information Available Details of \(V\)-\(I\) measurements, an exemplary Paschen curve for N\({}_{2}\), sputter yield simulations, details about EDS spectrum comparison, SEM/EDS characterization of the orifice and redeposition effects, and more details about the experimental setup are found in the supplementary information. Supplementary Information for "_In-situ_ Plasma Studies using a Direct Current Microplasma in a Scanning Electron Microscope" ### Details about Voltage-Current-Characteristic Measurements The main text shows voltage-current characteristics of the generated microplasma in the scanning electron microscope's (SEM's) chamber. A problem during measurements was the continuous sputtering of the sample surface when using the nozzle as an anode with positive bias. Typical measurements of the voltage drop across the measurement resistor \(R_{\mathrm{M}}\) versus the measurement duration are shown in Figure S1. All measurements were started by first applying the highest possible source voltage \(V_{\mathrm{S}}\) with the DC converter (2 kV, visible as a strong onset in the plots) and then gradually decreasing the source voltage in 40 V steps until no voltage across the measurement resistor \(R_{\mathrm{M}}\) was measurable anymore. A small parasitic offset voltage was measured and subtracted from a reference region (e.g., the shaded area in Figure S1a).
Ideally, the voltage steps in the measured curve should be horizontal plateaus, whereby each step corresponds to a defined step in the applied source voltage (here in steps of 40 V). However, as visible in the inset in Figure S1a, the discharge current was not stable but instead steadily increasing, especially in the first few seconds of plasma operation and at high currents. We attribute this to surface sputtering and rapid change of the electrode geometry, which significantly decreased the resistance of the gap and thus increased the discharge current (given the constant applied voltage). Even though it is, in principle, possible to correct the slope of the \(V_{\mathrm{M}}\)-time curve, we opted to simply calculate the average value of each voltage step by manual extraction (see Jupyter notebook on Zenodo [59]). Figure S1b shows a more extreme example of higher discharge currents, resulting in faster surface sputtering and an even more pronounced discharge-current increase over time. A strong slope is visible in the inset figure for the first few seconds of plasma operation. The voltage steps become horizontal at around 20 s in the plot. A theoretical Paschen curve calculated for N\({}_{2}\) is shown in Figure S2. The vertical line marks the minimum breakdown voltage \(V_{\mathrm{B,min}}=346\) V of the curve at \((pd)_{\mathrm{min}}=142\) Pa cm. The breakdown voltages were calculated according to the equation [6] \[V_{\mathrm{B}}=\frac{Bpd}{\ln(Apd)-\ln\left[\ln\left(1+\frac{1}{\gamma_{\mathrm{see}}}\right)\right]}\quad, \tag{1}\] with the parameters \(A=11.8\) cm\({}^{-1}\) Torr\({}^{-1}=8.85\times 10^{-2}\) cm\({}^{-1}\) Pa\({}^{-1}\), \(B=325\) V cm\({}^{-1}\) Torr\({}^{-1}=2.44\) V cm\({}^{-1}\) Pa\({}^{-1}\), and \(\gamma_{\mathrm{see}}=0.01\) (secondary-electron-emission coefficient) for N\({}_{2}\)[60]. The Python code of the Wikipedia user "Krishnavedala" ([https://commons.wikimedia.org/wiki/File:Paschen_curves.svg](https://commons.wikimedia.org/wiki/File:Paschen_curves.svg)) was used and modified for Figure S2. No continuous plasma discharges were observed for specific combinations of (large) gap distances, (low) gas flow rates, and (low) source voltages. In such cases, the electron beam may be able to ignite the plasma and initiate a continuous discharge. In other cases, a discharge current was only measured when the electron beam was on and immediately vanished after the beam was switched off (Figure S3). Two examples for the latter are shown in Figures S3a and b, where each step in the signal corresponds to the electron beam being switched on or off. A smoothed signal is plotted as well for better visibility of the steps. The signal was smoothed using locally weighted regression (LOWESS) with _HyperSpy_[58] with _smoothing_parameter=0.03_ and _number_of_iterations=1_. The calculated current is in the nA-range, which is typical for SEM measurements, but was not explicitly measured here. This implies that the generated plasma current directly relates to the electron-beam current. The beam conditions were 15 keV, a 30 \(\mu\)m objective aperture, and spot size 5. Interestingly, some steps show an initial current spike in the non-smooth signal (about 40 nA to 100 nA for Figure S3a or 20 nA to 50 nA for Figure S3b) and then the reduction to the actual electron-beam current (presumably, but not measured).
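For reference, the Paschen curve of Figure S2 can be reproduced directly from the equation and N\({}_{2}\) parameters given above. The short Python sketch below is a simplified stand-in for the modified Wikipedia script mentioned in the text; the chosen \(pd\) range is an illustrative assumption.

```python
# Minimal sketch of the Paschen-curve evaluation (equation and N2 parameters as
# quoted above); the pd range is illustrative and starts right of the divergence.
import numpy as np

A = 8.85e-2       # 1/(cm Pa) for N2
B = 2.44          # V/(cm Pa) for N2
gamma_see = 0.01  # secondary-electron-emission coefficient

def breakdown_voltage(pd):
    """Breakdown voltage V_B(pd) from the Paschen relation; pd in Pa cm."""
    return B * pd / (np.log(A * pd) - np.log(np.log(1.0 + 1.0 / gamma_see)))

pd = np.logspace(np.log10(60.0), 4, 2000)   # pd range in Pa cm
V_B = breakdown_voltage(pd)
i_min = np.argmin(V_B)
print(f"(pd)_min = {pd[i_min]:.0f} Pa cm, V_B,min = {V_B[i_min]:.0f} V")  # ~142 Pa cm, ~346 V
```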
### Sputter Yield Simulation Monte Carlo simulations using SRIM 2013 ([http://www.srim.org/](http://www.srim.org/)) were run to investigate sputter yields for different ions and materials (Table S1) and ion-incidence angles (Figure S4). Overall, heavier ions (here Ar) show a higher sputter yield \(Y\) than lighter ions (C and O) on Cu and Ni targets. Note that CO\({}_{2}\) plasma creates mostly CO and O [46] instead of elemental C, meaning that the latter is given here only for completeness. CO sputtering could not be simulated with SRIM. The higher sputter yield of Cu compared to Ni may explain the formation of Ni cones on a Cu substrate under Ar\({}^{+}\) bombardment. Regarding the angle dependence of the sputter yield, Figure S4, Ar\({}^{+}\) ions with an energy of 1.5 keV hitting a Ni target show that the maximum sputter yield is found for an incidence angle of about 65\({}^{\circ}\). This angle is related to the cone shape, and the measured value and its standard deviation for the maximum-sputter-yield angle are shown with a vertical line and the shaded area. The differences between the experiment and the simulation may be explained by limited statistics for the experimental value and the simplified flat geometry used in the SRIM simulations (e.g., neglecting nanoparticle morphology) and their limited accuracy [43]. ### Spectrum Normalization in EDS Figures S5a-c show energy-dispersive x-ray spectroscopy (EDS) spectra without normalization and varying total electron dose, resulting in different total x-ray counts. The upper plot shows the full energy range from \(0\,\mathrm{keV}\) to \(15\,\mathrm{keV}\) (Figure S5a). The insets in the lower row (Figures S5b and c) show selected energy ranges for energy windows containing the C and O signals (Figure S5b), and the Ni and Cu signals (Figure S5c). For these spectra, a direct comparison is impeded by the difference in total x-ray counts, resulting in varying peak heights even without relative changes between spectra. After normalization (Figures S5d-f), the increasing O signal is revealed (Figure S5e), and the signals for Ni and Cu are unchanged (Figure S5f). The shaded areas in Figure S5d mark the regions used for spectrum normalization. These contain no elemental peaks and only bremsstrahlung background. The sum of the x-ray counts in these two areas was used for normalization of the spectra. ### Characterization of the Nozzle Orifice SEM imaging and chemical analysis with EDS were used to inspect the nozzle's orifice between different plasma experiments (Figure S6). The unused, fresh nozzle in the left column of Figure S6 shows some contamination near the orifice (Figure S6d), but the inner walls of the laser-cut orifice are well-defined. The outer diameter is about \(87\,\mu\mathrm{m}\), which is substantially larger than the nominal \(20\,\mu\mathrm{m}\). Since the measured gas flow rate was close to the nominal value, we suspect that the orifice diameter gets gradually smaller toward the inside of the nozzle. The EDS elemental map of Fe K\(\alpha\) resulting from the steel nozzle is used to show the absence of debris (Figure S6g). The lack of signal from the lower left corner of the EDS map is caused by shadowing effects toward the EDS detector (no direct line of sight for emerging x-rays). When the nozzle is used as an anode with a positive bias, most of the positively charged ions are hitting the sample surface, and the sputtered material is redeposited on the nozzle tip.
Since mostly Cu apertures were used as sample material, a pronounced Cu K\(\alpha\) signal is visible around the orifice (Figure S6h). The deposited material reduces the outer diameter of the orifice (here to about \(80\,\mu\mathrm{m}\)). Afterward, the nozzle was used as a cathode, and the ions then mostly bombard the nozzle instead of the sample. Sputtering of the orifice region leads to the removal of the previously-deposited Cu. The sputtered Cu is now deposited on the sample instead. The orifice is widened after sputtering (here about \(130\,\mu\mathrm{m}\) diameter) and filled with redeposited material. The EDS maps reveal a higher signal for Cu at the orifice edges, probably due to pronounced sputtering, which may have removed the native or CO\({}_{2}\)-plasma-induced Cu oxide. Since N\({}_{2}\) was mostly used as a gas with the nozzle as a cathode, N is implanted at the orifice edge. The lower row (Figure S6j) schematically shows the described deposition/removal of material on the nozzle depending on its polarity. ### Redeposition Effects with Nozzle as Cathode (\(-\)) Material from the sample is redeposited onto the nozzle by sputtering when the latter is used as anode. _Vice versa_, if the nozzle is used as a cathode, material from the nozzle is redeposited onto the sample (Figure S6j). Figure S7 shows an analysis of this aspect after exposing a Cu sample to N\({}_{2}\) and CO\({}_{2}\) plasmas with the nozzle used as a cathode. The overview backscattered electron (BSE)-SEM image (taken at 10 keV), Figure S7a, shows reduced image intensity in the plasma-treated regions due to a locally reduced average atomic number \(Z\). Two spots on the right result from CO\({}_{2}\) plasma treatment (marked in blue in Figure S7a), and the left one is from N\({}_{2}\) plasma treatment (marked in yellow). Higher magnification secondary electron (SE)-SEM images of the central plasma spots (marked with dashed circles in Figure S7b for N\({}_{2}\) and Figure S7e for CO\({}_{2}\)) reveal a finely-grained surface structure. An approximate particle size of \((49.9\pm 23.9)\) nm (arithmetic mean and standard deviation) was determined from Figure S7c by analyzing the minimum Feret diameters of the grains (Figure S7d). A similar surface structure is found for N\({}_{2}\) (Figure S7c) and CO\({}_{2}\) (Figure S7f). The latter shows the transition region between the central plasma spot (flat) and the surrounding area (grains). For CO\({}_{2}\), the central area is relatively flat with small pits, which is probably a result of ion sputtering and the longer plasma treatment duration than for N\({}_{2}\). Inspection of the elemental maps for CO\({}_{2}\) reveals apparent oxidation of the plasma-treated region (Figure S7h). Similarly, the N\({}_{2}\) plasma causes a local N signal in EDS (Figure S7g), indicating N implantation into the Cu sample. However, Figure S7g also shows an unexpected O signal for N\({}_{2}\) plasma treatment. We suspect that this is caused by the redeposition of previously-oxidized Cu that is present on the nozzle from earlier experiments (Figure S6j). The layer of redeposited (oxidized) Cu is likely also the cause of the grain structure visible in Figure S7c and Figure S7f. Independent of the gas used for plasma treatment, some material is sputtered from the nozzle when the latter is used as the cathode (negative polarity). Material near the nozzle orifice is then primarily sputtered and redeposited onto the opposing sample.
### Details about the Experimental Setup Figure S8 shows photos of the assembled microplasma setup with opened SEM chamber. The subfigures are explained in detail in the following, and the parts are listed in Table S2. * Figure S8a (top view): Some components (gas nozzle, webcam, DC-DC converter) are mounted on an Al platform to isolate them from the movements of the sample stage of the SEM. The gas nozzle is fixed with a binder clip to two micrometer stages. The high-voltage cable (starting at the BNC connection of the DC-DC converter, here visible below the gas line) is fixed close to the end of the nozzle and here hidden below the black insulating tape. The DC-DC converter was mounted on the side of the Al platform to save space on the top. The sample is positioned in the center of the image opposite to the nozzle (Figures S8b and d). The printed circuit board (PCB, made according to the data sheet of the DC-DC converter, [https://www.xppower.com/portals/0/pdfs/SF_CA_Series.pdf](https://www.xppower.com/portals/0/pdfs/SF_CA_Series.pdf), with an output resistor of 1 M\(\Omega\)) with the measurement resistor \(R_{\mathrm{M}}\) is fixed directly on the moveable part of the SEM stage. Its SMA connector and the cable go directly to the vacuum flange (Figure S8c). * Figure S8b (top view): Close-up view of the nozzle-sample configuration. An adapter made of Teflon is used to isolate the sample from the SEM stage. This choice was made to protect the SEM electronics from possible current bursts (e.g., arcing). The discharge current runs through the red cable, then through \(R_{\mathrm{M}}\) (see Figure S8a), and finally through the flange to the electronics outside the SEM chamber (not shown here). The Cu foil on the top part of this Teflon piece is used to minimize charging. Since a 500 \(\mu\)m diameter pressure-limiting aperture (PLA) was used, the field-of-view (FOV) for SEM imaging is significantly reduced. The actual visible area is around 1 \(\mu\)m and is exemplarily marked in the image with a circle. The center of the circle is given by the optical axis of the SEM as manufacturer-calibrated to the \(x=y=0\) position of the SEM stage. After setting the stage to \(x=y=0\) without a mounted sample, the orifice of the nozzle is positioned as close as possible to the optical axis using the two micrometer stages. The nozzle position is fixed after closing the SEM chamber. In case the nozzle is out of the SEM's FOV after pumping the chamber, its position has to be realigned after venting the chamber. When the nozzle is positioned within the SEM's FOV, the sample can be brought closer to/moved away from the nozzle using the SEM stage controls to change the gap distance. * Figure S8c (side view): The PLA is visible on the bottom of the SEM pole piece. It reduces gas flow into the SEM column to keep it at higher vacuum levels compared to the plasma-gap region. A self-made flange with gas and electronic feedthroughs is mounted on one of the free chamber ports (here visible in the back). Notably, the gas line does a \(90^{\circ}\) bend in the feedthrough to account for x-ray safety. The cable of the DC-DC converter goes on the flange on the opposite side (not visible here, but in the top part of Figure S8d), where a DB9 connector is present. This connector and the corresponding feedthrough are typically used for cooling/heating SEM stages provided by the microscope manufacturer.
The pin layout was measured, and the shown custom cable with a DB9 connector was made to control the DC-DC converter. * Figure S8d (side view): The image from this angle reveals the sample stage made of an "angle adapter" to tilt the sample surface slightly toward the incident electron beam for analyses. Different angle adapters were ground with angles from \(5^{\circ}\) to \(20^{\circ}\). These are fixed to the threaded metal rod with conductive Ag paste. The sample used here is a \(3\,\mathrm{mm}\) Cu disc with a small aperture in the center (\(50\,\mu\mathrm{m}\) diameter, Gilder Grids GA50), which is glued onto the angle adapter with conductive Ag paste. The default vacuum system of the SEM (FEI Quanta 250 FEG) was used, consisting of a pre-vacuum rotary pump, a turbo molecular pump, and ion getter pumps (IGPs) for the electron-gun area. Without gas flow, the SEM-chamber pressure was able to reach the \(4\times 10^{-4}\) Pa range after a few hours of pumping. In particular, the residual air inside the gas line takes this long to be pumped out through the nozzle orifice. The gas lines were flushed with the process gas before experiments to reduce contamination with air. With gas flow, the chamber pressure for a given gas flow rate depends on the gas type. The pressure is typically around \(2\times 10^{-2}\) Pa for gas flow rates of about 5 sccm, which is just below the threshold value of the microscope software for the high-vacuum mode (about \(3.3\times 10^{-2}\) Pa). If the chamber pressure exceeds the threshold value, the gas flow rate must be reduced to allow for microscope and plasma operation in high-vacuum mode. It was observed that the use of Ar leads to higher chamber pressures than for N\({}_{2}\) or CO\({}_{2}\) and, thus, Ar-containing gas mixtures must be used more carefully. The SEM chamber is mainly pumped by the turbo molecular pump, so a higher pumping speed for gases with smaller molecular weight is expected,[61] i.e., higher pumping speed for N\({}_{2}\) (28 Da), followed by Ar (40 Da), and finally CO\({}_{2}\) (44 Da). In practice, Ar is probably less efficiently pumped than CO\({}_{2}\) because the IGP of the electron column might contribute to the total pumping speed as well. As indicated by the microscope manufacturer in the microscope's manual, "the argon use should be minimized to a short time, because the IGPs are not optimized for pumping of it at all", meaning that N\({}_{2}\) and CO\({}_{2}\) are likely to be more efficiently pumped by the IGP.
2310.09998
SeUNet-Trans: A Simple yet Effective UNet-Transformer Model for Medical Image Segmentation
Automated medical image segmentation is becoming increasingly crucial to modern clinical practice, driven by the growing demand for precise diagnosis, the push towards personalized treatment plans, and the advancements in machine learning algorithms, especially the incorporation of deep learning methods. While convolutional neural networks (CNN) have been prevalent among these methods, the remarkable potential of Transformer-based models for computer vision tasks is gaining more acknowledgment. To harness the advantages of both CNN-based and Transformer-based models, we propose a simple yet effective UNet-Transformer (seUNet-Trans) model for medical image segmentation. In our approach, the UNet model is designed as a feature extractor to generate multiple feature maps from the input images, then the maps are propagated into a bridge layer, which is introduced to sequentially connect the UNet and the Transformer. In this stage, we approach the pixel-level embedding technique without position embedding vectors, aiming to make the model more efficient. Moreover, we apply spatial-reduction attention in the Transformer to reduce the computational/memory overhead. By leveraging the UNet architecture and the self-attention mechanism, our model not only retains the preservation of both local and global context information but also is capable of capturing long-range dependencies between input elements. The proposed model is extensively experimented on seven medical image segmentation datasets including polyp segmentation to demonstrate its efficacy. Comparison with several state-of-the-art segmentation models on these datasets shows the superior performance of our proposed seUNet-Trans network.
Tan-Hanh Pham, Xianqi Li, Kim-Doang Nguyen
2023-10-16T01:13:38Z
http://arxiv.org/abs/2310.09998v3
# seUNet-Trans: A Simple yet Effective UNet-Transformer Model for Medical Image Segmentation ###### Abstract Automated medical image segmentation is becoming increasingly crucial to modern clinical practice, driven by the growing demand for precise diagnosis, the push towards personalized treatment plans, and the advancements in machine learning algorithms, especially the incorporation of deep learning methods. While convolutional neural networks (CNN) have been prevalent among these methods, the remarkable potential of Transformer-based models for computer vision tasks is gaining more acknowledgment. To harness the advantages of both CNN-based and Transformer-based models, we propose a simple yet effective UNet-Transformer (seUNet-Trans) model for medical image segmentation. In our approach, the UNet model is designed as a feature extractor to generate multiple feature maps from the input images, then the maps are propagated into a bridge layer, which is introduced to sequentially connect the UNet and the Transformer. In this stage, we approach the pixel-level embedding technique without position embedding vectors, aiming to make the model more efficient. Moreover, we apply spatial-reduction attention in the Transformer to reduce the computational/memory overhead. By leveraging the UNet architecture and the self-attention mechanism, our model not only retains the preservation of both local and global context information but also is capable of capturing long-range dependencies between input elements. The proposed model is extensively experimented on seven medical image segmentation datasets including polyp segmentation to demonstrate its efficacy. Comparison with several state-of-the-art segmentation models on these datasets shows the superior performance of our proposed seUNet-Trans network. Keywords: Polyps, Colonoscopy, Medical image analysis, Deep learning, Vision transformers. ## I Introduction Medical image segmentation involves identifying and extracting meaningful information from complex medical images, playing a crucial role in many clinical applications including computer-aided diagnosis, image-guided surgery, and treatment planning [1, 2]. To date, manual segmentation by trained experts such as radiologists or pathologists remains the gold standard for delineating anatomical structures and pathological abnormalities. However, this process is costly, labor-intensive, and often requires significant experience. Deep learning-based models, on the other hand, have demonstrated outstanding performance in the automatic segmentation of objects of interest. This is attributed to their capability to discern and comprehend complex patterns and features within medical images. As a result, there is a significant demand for deep learning-driven automated medical image segmentation in clinical practice. As a prominent subset of various image segmentation models, convolutional neural networks (CNN) have proven to be highly effective and greatly promising in numerous medical image segmentation tasks [3, 4], especially UNet [5], a type of fully convolutional network [6], consisting of a symmetric encoder and decoder architecture with skip connections to pass features from the encoder path to the decoder path. However, due to the lack of ability to capture the long-range dependencies and global context information in images, these architectures typically produce inferior performance, particularly for target information that exhibits significant differences among patients in texture, shape, and size.
To address these shortcomings, current research suggests implementing self-attention mechanisms grounded in CNN features [7, 8]. It is worth noting that the Transformer [9], initially conceived for sequence-to-sequence tasks in natural language processing (NLP) and now emerging as an alternative architecture that entirely abandons convolutional operators and relies exclusively on attention mechanisms, has attracted significant interest within the computer vision (CV) community. In contrast to previous CNN-driven methods, Transformers not only excel at capturing global context information but also showcase enhanced adaptability for downstream tasks when pre-trained on a large scale. For example, the first fully self-attention-based vision transformer (ViT) for image recognition was introduced in [10] and achieved competitive outcomes on ImageNet [11] using 2D image patches with positional embedding as an input sequence, provided it was pre-trained on an extensive external dataset. The detection transformer (DETR) [12] employs a transformer-based approach as a fully end-to-end object detector, delving into the connections between objects and the overall image context for object detection. The Segmentation Transformer (SETR) [13] replaces the traditional encoders with transformers in the standard encoder-decoder networks, effectively attaining state-of-the-art (SOTA) outcomes in the task of natural image segmentation. While the Transformer is good at capturing global context, it struggles to grasp fine-grained details, especially for medical images. To overcome this limitation, efforts have been made by researchers to integrate CNN- and Transformer-based models into each other. In particular, TransUNet [14] and TransFuse [15] are representative examples that combine the Transformer and UNet for medical image segmentation. As a continuous effort to harness the strengths of CNN- and Transformer-based models, we introduce a novel UNet-Transformer model, named seUNet-Trans, tailored for medical image segmentation. Within this framework, the UNet serves as a feature extractor, deriving multiple feature maps from the input images. These maps are then fed into a bridge layer, strategically placed to bridge the UNet and the Transformer components in a sequential manner. Notably, our approach employs a pixel-level embedding technique without position embedding vectors to enhance the model's efficiency. Furthermore, the Transformer head plays a central role in modeling the relationships and dependencies among input sequences, culminating in the generation of a prediction map for the input images. By leveraging the UNet architecture and the Transformer mechanism, our model not only retains the preservation of both local and global context information but also is capable of capturing long-range relationships between input elements. The rest of this paper is organized as follows. Section II provides an overview of related work in the field of automated medical image segmentation. Section III presents the architecture of the proposed seUNet-Trans model. Section IV describes the datasets, evaluation metrics, and training procedure, and Section V presents numerical experiments and comparisons with other state-of-the-art segmentation models. ## II Related work In this section, we begin by providing an overview of the commonly used CNN-based methods for medical image segmentation. We then explore recent advancements in the application of transformers within the realm of computer vision, particularly in segmentation tasks.
Finally, we highlight the standard techniques that merge both CNN and Transformer architectures. ### _CNN-based Medical Image Segmentation_ Over the last decade, the field of medical image segmentation has witnessed remarkable achievements using CNNs, especially the FCN, UNet, and their variants. For instance, UNet++ [16] introduces a set of nested and densely connected skip connections to minimize the discrepancy between the encoding and decoding process. Attention U-Net [17] proposes an innovative attention gate method, which empowers the model to prioritize targets with varying sizes and exclude non-pertinent feature responses. Res-UNet [18] incorporates a weighted attention mechanism and a skip connection scheme [19] to enhance the performance of retinal vessel segmentation. R2U-Net merges the advantages of residual networks with UNet to elevate its feature representation capabilities. PraNet [20], a.k.a. the parallel reverse attention network, employs the parallel partial decoder (PPD) and reverse attention (RA) modules for polyp segmentation. KiU-Net [21] designs a unique architecture that leverages both under-complete and over-complete features to improve the segmentation performance of small anatomical structures. DoubleU-Net [22] establishes a robust foundation for medical image segmentation by chaining two U-Nets and implementing atrous spatial pyramid pooling (ASPP). FANet [23], during training, consolidates the mask from the previous epoch with the feature map of the current epoch. Given that these methods are anchored in CNNs, they inherently miss out on capturing long-range dependencies and understanding global contextual ties. ### _Transformer-based Medical Image Segmentation_ Transformers [9] were first developed for machine translation and have now achieved top-tier performance in various NLP tasks. Inspired by their successes, many efforts have been made to adapt Transformers for computer vision tasks. In particular, ViT [10] is the pioneering endeavor demonstrating that a solely transformer-based architecture can attain superior performance in image recognition, given pre-training on a substantial dataset. Utilizing ViT as an encoder, Segmenter [24] provides a segmentation framework by proposing a mask transformer decoder to generate class embeddings. With a combination of a transformer-based hierarchical encoder and a lightweight multilayer perceptron (MLP), SegFormer [25] offers a simple yet potent segmentation architecture. By integrating an additional control function into the self-attention module, MedT [26] proposed a gated axial-attention that extends the existing transformer-based architecture. The Swin Transformer [27] recently attracted great attention due to its exceptional performance on a number of benchmarks for tasks such as image classification, object detection, and semantic segmentation. In contrast to many previous transformer-based models, the Swin Transformer proposes a hierarchical architecture whose representation is computed with shifted windows. This strategy enhances efficiency by restricting self-attention computation to non-overlapping local windows while also allowing for cross-window connection. The hierarchical structure combined with the shifted window technique as a backbone can benefit other network architectures.
By incorporating Swin Transformer into the encoder and decoder of the U-shaped architecture, DS-TransUNet [28] proposes a novel deep medical image segmentation framework that can effectively capture the non-local dependencies and multiscale contexts for improving the semantic segmentation quality of varying medical images. Extensive numerical experiments across seven typical medical image segmentation tasks show the effectiveness of this framework. ### _CNN-Transformer-based Medical Image Segmentation_ Although transformer-based methods can model the global context at all stages, they process inputs as 1D sequences, which may result in low-resolution features, thereby lacking precise localization information. Simply resorting to direct upsampling to achieve full resolution doesn't effectively recover this information, which therefore leads to an imprecise segmentation result. To address this issue, significant research efforts have been made to integrate CNN with the self-attention mechanism by characterizing global relationships of all pixels through the feature maps. TransUNet [14] is the first such framework combining Transformer with UNet and achieving SOTA performance on medical image segmentation tasks. TransFuse [15] proposes a shallow CNN-based encoder and transformer-based segmentation network in parallel to enhance the efficiency for modeling global contexts. Inspired by these works, we conduct further investigations. Specifically, the UNet model is designed to extract and output multiple feature maps from the input images. Then these feature maps are fed into an introduced bridge layer, which plays the role of sequentially connecting UNet and Transformer, enhancing the practical performance of various medical image segmentation tasks significantly. ## III Methodology In this section, we introduce our proposed model in detail for medical image segmentation. Our model comprises a UNet backbone coupled with a Transformer head. The U-shaped backbone, consisting of an encoder and a decoder, processes input images to produce multiple feature maps, which are fed into specially designed bridge layers. Subsequently, the Transformer head processes the output from these bridge layers to yield the final prediction. The architecture of the seUNet-Trans model is shown in Fig. 1. ### _Encoder_ The encoder's role is to extract features from the input images within the network. This is achieved through a series of convolutional layers, referred to as UNet blocks, succeeded by max-pooling layers. As the input images progress through these UNet blocks, their spatial dimensions are reduced, while the depth (or number of channels) of the feature maps increases. Based on [5], we constructed the encoder section with four UNet blocks, with the specific design of a UNet block illustrated in Fig. 2. The UNet block includes two convolutional layers (Conv) [29], each followed by a batch normalization function [30] and a rectified linear unit (ReLU) activation function [31]. The structure of the UNet block can be formulated as: \[\begin{split}\hat{F}_{i}&=\text{ReLU}\left(\text{BatchNorm}\left(\text{Conv}_{(C_{in},C_{h})}(\hat{F}_{i-1})\right)\right),\\ F_{i}&=\text{ReLU}\left(\text{BatchNorm}\left(\text{Conv}_{(C_{h},C_{o})}(\hat{F}_{i})\right)\right),\forall i\geq 1.\end{split} \tag{1}\] where \(\hat{F}_{i}\) and \(F_{i}\) are the intermediate and final feature maps of each UNet block, respectively, and \(C_{in}\), \(C_{h}\), \(C_{o}\) denote the numbers of input, hidden, and output channels, respectively.
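As an illustration, the UNet block of Eq. (1) can be written as a compact PyTorch module. The sketch below is a minimal rendering of the block, not the authors' released code; the 3\(\times\)3 kernels, padding, and channel numbers are assumptions made only for demonstration.

```python
# Minimal PyTorch sketch of the UNet block in Eq. (1): two Conv-BatchNorm-ReLU
# stages. Kernel size, padding, and channel counts are illustrative assumptions.
import torch
import torch.nn as nn

class UNetBlock(nn.Module):
    def __init__(self, c_in, c_hidden, c_out):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(c_in, c_hidden, kernel_size=3, padding=1),
            nn.BatchNorm2d(c_hidden),
            nn.ReLU(inplace=True),
            nn.Conv2d(c_hidden, c_out, kernel_size=3, padding=1),
            nn.BatchNorm2d(c_out),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):          # x: (B, c_in, H, W) -> (B, c_out, H, W)
        return self.block(x)

# Encoder usage: each UNet block is followed by 2x2 max pooling.
x = torch.randn(1, 3, 256, 256)
feat = UNetBlock(3, 64, 64)(x)
pooled = nn.MaxPool2d(2)(feat)     # halves the spatial dimensions
```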
### _Decoder_ The decoder component focuses on upsampling the encoded feature maps back to the input image size and mirrors the architecture of the encoder. Instead of max-pooling layers, it uses an up-convolution (or transpose convolution) to increase the spatial dimensions. Skip connections, a critical component of UNet, are also employed to help the decoder retrieve spatial information lost during encoding. At each level in the decoder, the output from the corresponding encoder level (before pooling) is concatenated with the upsampled feature maps. After the skip connection, the concatenated features are passed through convolutional layers to refine the upscaled features. Distinct from the conventional decoder's final layer, which features a single channel for binary segmentation or several channels for multi-class segmentation with each channel representing the probability of a pixel belonging to a certain class, our enhanced layer outputs multiple feature maps, which will be fed into the next specially designed layers. ### _Bridge Layers_ At the end of the decoder, the input images undergo a transforming process, yielding high-level features with identical dimensions but a different number of channels. To refine and expand these features, a convolution layer with a kernel size of 1 and a stride of 1 is employed. These layers act as the bridge between the UNet and the Transformer head and are therefore denoted as "bridge layers". ### _Transformer head_ The Transformer head begins by merging the features from the bridge layers. Subsequently, these merged features are flattened into sequences and fed into the multi-head attention (MHA) mechanism. Then the output of the MHA is passed through the multi-layer perceptron (MLP), mainly used for mapping the input features to output features. Eventually, the output from the MLP is linearly upsampled and processed by convolutional layers in the CBR block before outputting the final prediction. The structure of the Transformer head is shown in Fig. 4. #### III-D1 Feature embedding The bridge layers, with size (\(H\), \(W\), \(C_{b}\)) corresponding to the height, width, and number of bridge channels, are merged by using a convolutional layer whose kernel size \(E\), stride \(S\), and padding \(P\) are 3, 4, and 1, respectively. After passing through the convolutional layer, the output resolution of the bridge layers is computed as: \[\begin{split} H_{out}&=\frac{(H-E+2P)}{S}+1,\\ W_{out}&=\frac{(W-E+2P)}{S}+1.\end{split} \tag{2}\] In the context of image segmentation, our objective is to establish the relationship between pixels in the image. This can be accomplished through various methods, such as CNN-based techniques, attention mechanisms, and graph neural networks [29, 10, 32]. In this particular study, we utilize the attention mechanism due to its effectiveness in capturing long-range features. We treat each pixel and its variations across different spatial dimensions (represented by various features in different channels) as a single input vector denoted as \(a\). In other words, the merged features are flattened into sequences, and the dimensions of the sequences are \(A\in\mathbb{R}^{N\times C_{b}}\), where \(N=H_{out}\times W_{out}\). Different from the Vision Transformer [10], in this study, we opted not to use position embedding vectors during the input image flattening process. This choice is grounded in our approach of merging the input image and embedding the resultant features at the pixel level.
The process of merging and embedding the bridge features into sequences can be formulated as: \[F_{f}=\text{Flatten}\left(\text{Conv}_{(C_{b},C_{b})}\left(F_{b}\right)\right). \tag{3}\] Here, \(F_{b}\) represents the bridge layers, and \(F_{f}\) is the embedding features. #### III-D2 Transformer block The Transformer block consists of multi-head attention, a multi-layer perceptron, LayerNorm, and residual connections, and it can be formulated as \[\begin{split}\hat{F}_{i}&=\text{MHA}\left(\text{LN}\left(F_{i-1}\right)\right)+F_{i-1},\\ F_{i}&=\text{MLP}(\text{LN}(\hat{F}_{i}))+\hat{F}_{i}.\end{split} \tag{4}\] Fig. 1: The architecture of seUNet-Trans models. Fig. 3: Decoder block. Fig. 2: UNet block. Again, \(\hat{F}_{i}\) and \(F_{i}\) are the intermediate and output layers of the \(i^{th}\) Transformer block. For the first Transformer block, or \(i=1\), the input is the embedding features (\(F_{f}\)). In the MHA, the dependencies between sequences are computed by using cross-attention. In this step, the computational complexity is \(N^{2}\) with \(N\) as the number of input sequences. To reduce the computation, we used the sequence reduction technique implemented in [33] and [25], making it adaptable for high-resolution input images. Therefore, the complexity becomes \(N^{2}/R\), where \(R\) is the reduction rate. The input sequences are divided into multiple heads \(h\) in the MHA, in which the dimension of each head is \(d_{h}\), \(d_{h}=d_{N}/h\). In this study, we employ the length of the embedding vector \(d_{N}=64\), and the number of heads is \(h=4\). The attention in each head is calculated as \[\text{Attention}(Q,K,V)=\text{softmax}(\frac{QK^{T}}{\sqrt{d_{h}}})V, \tag{5}\] in which \(Q\in\mathbb{R}^{N\times d_{h}}\), \(K\in\mathbb{R}^{N/R\times d_{h}}\), and \(V\in\mathbb{R}^{N/R\times d_{h}}\). Once the attention of each head is calculated, we combine all of them together to obtain the final attention matrix, \[\begin{split}&\text{MultiHead}(Q,K,V)=\text{Concat}(\text{head}_{1},...,\text{head}_{h})W^{O},\\ &\text{and head}_{e}=\text{Attention}(QW^{Q}_{e},KW^{K}_{e},VW^{V}_{e}).\end{split} \tag{6}\] Where \(W^{Q}_{e}\in\mathbb{R}^{d_{N}\times d_{h}}\), \(W^{K}_{e}\in\mathbb{R}^{d_{N}/R\times d_{h}}\), and \(W^{V}_{e}\in\mathbb{R}^{d_{N}/R\times d_{h}}\) are parameter matrices over a head, and \(W^{O}\in\mathbb{R}^{N\times d_{N}}\) is the total parameter matrix. The output from the MHA is added to its input through a residual connection. This connection facilitates the network's ability to learn residual information, representing the discrepancy between the expected output and the current estimate. Consequently, the network can adeptly capture and distribute gradient information during training, even in profoundly deep networks. Such a mechanism aids in efficiently training deeper neural architectures while addressing the vanishing gradient challenge. Beyond the MHA, a Transformer block also encompasses a fully connected feed-forward network or MLP, which consists of two linear transformations with a GeLU activation [34] between them. The aggregated features are first normalized before feeding into the MLP. Similar to the output from the MHA, here we used another residual connection to add the MLP's output to its input. Equation 4 describes the MHA and MLP procedure, in which the input features are mapped to the output features following the standard Transformer [9]. The process of the Transformer block can be repeated \(D\) times, and in this study, we choose \(D=3\).
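To make the embedding and attention steps concrete, the following PyTorch sketch combines the pixel-level embedding convolution (kernel 3, stride 4, padding 1) with one spatial-reduction multi-head attention operation using \(d_{N}=64\) and \(h=4\) as stated above. It is an illustrative re-implementation rather than the authors' code, and the strided-convolution form of the sequence reduction is an assumption borrowed from the techniques in [25, 33].

```python
# Illustrative PyTorch sketch of the bridge-feature embedding (Eqs. (2)-(3)) and
# one spatial-reduction multi-head attention step (Eqs. (5)-(6)). Dimensions follow
# the text (d_N = 64, h = 4); the strided-conv sequence reduction is an assumption.
import torch
import torch.nn as nn

class SRAttention(nn.Module):
    def __init__(self, dim=64, heads=4, reduction=4):
        super().__init__()
        self.heads, self.d_h = heads, dim // heads
        self.q = nn.Linear(dim, dim)
        self.kv = nn.Linear(dim, 2 * dim)
        self.proj = nn.Linear(dim, dim)
        self.reduction = reduction
        if reduction > 1:  # reduce the spatial size of keys/values (sequence reduction)
            self.sr = nn.Conv2d(dim, dim, kernel_size=reduction, stride=reduction)
            self.norm = nn.LayerNorm(dim)

    def forward(self, x, H, W):                      # x: (B, N, dim), N = H*W
        B, N, C = x.shape
        q = self.q(x).reshape(B, N, self.heads, self.d_h).transpose(1, 2)
        if self.reduction > 1:
            x_ = x.transpose(1, 2).reshape(B, C, H, W)
            x_ = self.sr(x_).reshape(B, C, -1).transpose(1, 2)
            x_ = self.norm(x_)
        else:
            x_ = x
        k, v = self.kv(x_).reshape(B, -1, 2, self.heads, self.d_h).permute(2, 0, 3, 1, 4)
        attn = (q @ k.transpose(-2, -1)) / self.d_h ** 0.5        # scaled dot-product, Eq. (5)
        out = (attn.softmax(dim=-1) @ v).transpose(1, 2).reshape(B, N, C)
        return self.proj(out)                                     # head concatenation + W^O, Eq. (6)

# Pixel-level embedding of the bridge features: conv with kernel 3, stride 4,
# padding 1 (output size per Eq. (2)), then flatten to a sequence of length N.
C_b = 64
embed = nn.Conv2d(C_b, C_b, kernel_size=3, stride=4, padding=1)
bridge = torch.randn(1, C_b, 256, 256)
f = embed(bridge)                                    # (1, 64, 64, 64)
seq = f.flatten(2).transpose(1, 2)                   # (1, 4096, 64), no position embedding
out = SRAttention(dim=C_b, heads=4, reduction=4)(seq, 64, 64)
```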
#### III-D3 Feed-Forward Network The Feed-Forward Network (FFN) takes in the embedded sequences from the Transformer block to extract features and generate a prediction map. Given that the FFN operates on sequences as inputs, it becomes necessary to reshape these inputs to conform to the desired input shape (\(H_{out},W_{out},C_{b}\)). Furthermore, as computed in Section III-D1, the input shape undergoes a merging operation, resulting in a fourfold reduction in size. Consequently, the reshaped features must be upsampled by a factor of four to match the original input shape (\(H,W,C_{b}\)). This upsampling process employs a bilinear interpolation function to increase the resolution of the feature maps. Mathematically, this step can be represented as follows: \[F_{rs}=\text{Upscale}\left(\text{Reshape}\left(F_{D}\right)\right). \tag{7}\] Here, \(F_{rs}\) represents the upsampled feature maps after reshaping, and \(F_{D}\) is the features from Transformer block \(D\), the final Transformer block. After getting the upsampled features, they are fed into the CBR block for further processing, ultimately yielding the final prediction map. The CBR block, named for its convolutional layers, batch normalization, and ReLU activation, plays a vital role in feature refinement and spatial enhancement, enabling the network to capture intricate patterns and relationships within the data. The CBR consists of three convolutional layers, in which the first two layers with a kernel size \(E\) of \(3\times 3\) are followed by batch normalization and ReLU activation, while the third layer with a kernel size \(E\) of \(1\times 1\) takes in features from previous layers and directly outputs the final prediction map \(M\). Mathematically, this can be represented as follows: \[\hat{F} =\text{ReLU}\left(\text{BatchNorm}\left(\text{Conv}_{(C_{b},C_{h1})}(F_{rs})\right)\right), \tag{8}\] \[\hat{F} =\text{ReLU}\left(\text{BatchNorm}\left(\text{Conv}_{(C_{h1},C_{h2})}(\hat{F})\right)\right),\] \[M =\text{Conv}_{(C_{h2},1)}(\hat{F}).\] Again, \(\hat{F}\) is the intermediate output of the CBR block, and \(C_{h1}\) and \(C_{h2}\) are the channel numbers of the hidden \(1^{st}\) and \(2^{nd}\) convolutional layers, respectively. In this study, we build the seUNet-Trans models for medical image segmentation. Hence, the final prediction \(M\) is the binary image (one class). Fig. 4: Attention head in the seUNet-Trans model. ## IV Experiment and Evaluation In this section, we compare our proposed model with the state-of-the-art (SOTA) models in medical image segmentation using publicly available datasets. We first describe the datasets and outline the employed metrics to evaluate the model's efficacy. Further, specifics regarding the training and optimization processes are detailed at the end of this section. ### _Dataset_ The seUNet-Trans models are trained on the Polyp Segmentation (Kvasir-SEG, CVC-ClinicDB, CVC-ColonDB, EndoScene), ISIC 2018, GlaS, and 2018 Data Science Bowl datasets, which are widely recognized and frequently used for evaluating various medical segmentation models such as DS-TransUNet, PraNet, and ColonSegNet [28, 20, 35]. For data preprocessing, we first standardized the dataset by resizing the images to a uniform scale. Subsequent to this, we divided the preprocessed dataset into separate training and test datasets. Table I provides a comprehensive overview of the image divisions and the resized resolutions for various training scenarios.
Notably, when dealing with the mixed Polyp segmentation case, we combined different datasets for training and testing tasks. In particular, the training set comprises 900 Kvasir-SEG images and 550 CVC-ClinicDB images, while the test set includes 100 Kvasir-SEG images, 62 CVC-ClinicDB images, 380 CVC-ColonDB images, and 60 EndoScene images, following [28]. In addition to the mixed Polyp segmentation, we have conducted training for our proposed seUNet-Trans models using the Kvasir-SEG or CVC-ClinicDB datasets separately. The Kvasir-SEG dataset is partitioned into 880 images for training and 120 for testing, and the CVC-ClinicDB dataset contains 550 images for training with a test set of 62 images. Visual representations from the Kvasir-SEG dataset are displayed in Figure 5, where the input is presented as an RGB image, and the corresponding output is a binary segmentation mask. For the GlaS, ISIC 2018, and 2018 Data Science Bowl datasets, the training dataset contains 85, 2075, and 536 images, respectively, while the test dataset comprises 80, 519, and 134 images for each. ### _Evaluation metrics_ To evaluate the performance of the seUNet-Trans models, we employ standard segmentation metrics including mean IoU (mIoU), mean Dice Coefficient (mDC), mean Precision (mPre.), and mean Recall (mRec.). These metrics are calculated by comparing the model's predictions against the ground truths across the entire dataset of T images, and are given as follows: \[\text{mIoU} =\frac{1}{T}\sum_{t=1}^{T}\frac{TP_{t}}{TP_{t}+FP_{t}+FN_{t}}, \tag{9}\] \[\text{mDC} =\frac{1}{T}\sum_{t=1}^{T}\frac{2TP_{t}}{2TP_{t}+FP_{t}+FN_{t}},\] \[\text{mPre.} =\frac{1}{T}\sum_{t=1}^{T}\frac{TP_{t}}{TP_{t}+FP_{t}},\] \[\text{mRec.} =\frac{1}{T}\sum_{t=1}^{T}\frac{TP_{t}}{TP_{t}+FN_{t}}.\] where \(TP,TN,FP,FN\) are the True Positive, True Negative, False Positive, and False Negative counts, respectively. ### _Model training_ Our seUNet-Trans models have been developed utilizing the PyTorch 1.13.1 deep learning framework. For the training process, we employ the 'AI-Panther' high-performance computing infrastructure, furnished with A100 SXM4 GPUs, hosted at the Florida Institute of Technology. As mentioned in section III-D3, the final prediction is the binary image. This prompts us to use the binary cross-entropy (BCE) loss as the objective function during the training. The BCE loss measures the difference between predicted and ground truth images. Each pixel in the prediction, \(M_{x}\), with values ranging from 0 to 1, is compared to its corresponding pixel in the ground truth, \(Y_{x}\). Consequently, the average loss function for a pair of prediction and ground truth images is formulated as follows: \[\text{Avg. BCE}(\theta)= -\frac{1}{X}\sum_{x=1}^{X}\Big{[}Y_{x}\log\big{(}M_{x}(\theta)\big{)} \tag{10}\] \[+(1-Y_{x})\log\big{(}1-M_{x}(\theta)\big{)}\Big{]}.\] Again, \(M\) is the prediction, \(Y\) is the ground truth, and \(X\) is the total number of pixels in the prediction or ground truth. We employ the Adam optimizer [36] with a learning rate of 0.0001, weight decay of 0.0001, and a batch size of 8 for estimating model parameters \(\theta\). Checkpoints are saved every 10 epochs and are subsequently loaded for evaluating the model on the test dataset upon the training phase's completion. Fig. 5: Visual illustration of input images and their corresponding ground truths (segmentation maps) from the Kvasir-SEG dataset.
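For clarity, the evaluation metrics of Eq. (9) and the average BCE loss of Eq. (10) can be computed per image as in the short PyTorch sketch below; the 0.5 threshold and the small \(\epsilon\) are illustrative choices not specified in the text, and dataset-level scores are obtained by averaging the per-image metrics over the \(T\) test images.

```python
# Minimal sketch of the per-image metrics in Eq. (9) and the average BCE loss in
# Eq. (10) for binary masks; the 0.5 threshold and epsilon are illustrative choices.
import torch

def binary_metrics(pred, target, eps=1e-7):
    """Dice, IoU, precision, and recall for one predicted mask with values in [0, 1]."""
    p = (pred > 0.5).float()
    tp = (p * target).sum()
    fp = (p * (1 - target)).sum()
    fn = ((1 - p) * target).sum()
    dice = 2 * tp / (2 * tp + fp + fn + eps)
    iou = tp / (tp + fp + fn + eps)
    precision = tp / (tp + fp + eps)
    recall = tp / (tp + fn + eps)
    return dice, iou, precision, recall

def average_bce(pred, target, eps=1e-7):
    """Average binary cross-entropy over all X pixels of one prediction (Eq. (10))."""
    pred = pred.clamp(eps, 1 - eps)
    return -(target * pred.log() + (1 - target) * (1 - pred).log()).mean()

# Dataset-level mDC, mIoU, mPre., and mRec. are the means over the T test images.
pred = torch.rand(1, 1, 256, 256)                      # placeholder network output in [0, 1]
mask = (torch.rand(1, 1, 256, 256) > 0.5).float()      # placeholder ground-truth mask
print(binary_metrics(pred, mask), average_bce(pred, mask))
```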
## V Experimental Results In this section, we detail the performance of our model across all utilized datasets, providing a thorough comparative analysis with state-of-the-art (SOTA) models. We present our findings in tabular form and include SOTA results referenced from [28] to facilitate a clear and comprehensive comparison. For a visual illustration, Figures 6 to 9 display predicted results by seUNet-Trans models for a selection of representative images, along with those produced by SOTA models. To quantitatively evaluate our model's performance, Table II to Table VI present the calculated metric scores obtained across the various datasets. As mentioned in Section III-D2, we implemented a sequence reduction technique to mitigate the computational complexity in the attention head. Specifically, we adopted reduction ratios \(R\) with values of 4, 2, and 1, which correspond to strides \(S\) of 2, 4, and 8, respectively. It's worth noting that a smaller stride retains more information but increases the number of input sequences, which can lead to longer training times. With these combinations of reduction ratios and strides, we developed three distinct models: seUNet-Trans-L, seUNet-Trans-M, and seUNet-Trans-S. Therefore, in this section, we compare all our models' performance with other models. ### _Results on Kvasir-SEG_ Figure 6(a) presents results predicted by the seUNet-Trans-M on the Kvasir-SEG dataset. We then perform a comparative analysis of these predictions against those produced by other models. The predictions of our models are comparable to those of other models, closely aligning with the objects in the ground truth. Furthermore, the calculated metric values are summarized in Table II, revealing that the seUNet-Trans models achieve impressive values of 0.919 for mDC, 0.850 for mIoU, 0.912 for mRec., and 0.926 for mPre. Notably, seUNet-Trans outperforms other models in terms of mDC and mPre., demonstrating superior performance. However, it's worth mentioning that the mIoU and mRec. scores of our models are relatively smaller than those of PraNet, HarDNet-MSEG, and DS-TransUNet. In addition, seUNet-Trans-M demonstrates consistent performance among our models. ### _Results on CVC-ClinicDB_ Figure 6(b) shows the predicted results by seUNet-Trans-M and other models. As observed, the predictions by seUNet-Trans surpass not only those of the standard UNet model but also outperform SOTA models. A detailed comparison of the results is tabulated in Table III, where the seUNet-Trans-M achieves remarkable performance, including an mDC of 0.945, mIoU of 0.895, mPre. of 0.951, and mRec. of 0.950. In contrast, the standard UNet model yields lower metric values with mDC, mIoU, mPre., and mRec. values of 0.872, 0.804, 0.868, and 0.917, respectively. This comparison underscores the superior performance of the seUNet-Trans on the CVC-ClinicDB dataset when compared to the baseline UNet model and other SOTA models. ### _Results on GlaS_ On the GlaS dataset, as shown in Figure 7, our model not only outperforms most of the other models but also demonstrates comparability with DS-TransUNet. In terms of visualization, our model's predictions exhibit significantly higher accuracy, as indicated by the red rectangles, and demonstrate fewer outliers, highlighted by the yellow rectangles. As mentioned in section IV-A, although the number of training samples in this dataset is very limited, our models perform well compared to other models. Table IV further reinforces seUNet-Trans's proficiency, revealing that its metric values surpass those of other models.
Specifically, the seUNet-Trans-M achieves mDC and mIoU scores of 0.899 and 0.823, respectively, indicating its proficiency in gland segmentation on the GlaaS dataset. In addition, all three variants of our models consistently yield superior results when compared to other models. ### _Results on ISIC 2018_ Figure 8 presents the predictions by our model on the ISIC 2018 dataset, and the corresponding metric values are detailed in Table V. In comparison to SOTA models, our models demonstrate strong performance on representative images. In terms of the metric values, the seUNet-Trans-M still achieves commendable scores on ISIC 2018, with mDC, mIoU, mRec., and mPre. standing at 0.922, 0.854, 0.903, and 0.941, respectively. These metrics demonstrate the model's strong performance, compared to the SOTA models. \begin{table} \begin{tabular}{l|c c c c} \hline \hline Methodology & mDC & mIoU & mRec. & mPre. \\ \hline \hline \begin{tabular}{l} U-Net [5] \\ Attention U-Net [46] \\ R2U-Net [47] \\ BCDU-Net [48] \\ FANet [23] \\ DoubleU-Net [22] \\ DS-TransUNet-L [28] \\ \end{tabular} & \begin{tabular}{l} 0.674 & 0.549 & 0.708 & - \\ 0.665 & 0.566 & 0.717 & - \\ 0.679 & 0.581 & 0.792 & - \\ Attention R2U-Net [47] & 0.691 & 0.592 & 0.726 & - \\ BCDU-Net [48] & 0.851 & - & 0.785 & - \\ Faster [23] & 0.8731 & 0.802 & 0.865 & 0.924 \\ DoubleU-Net [22] & 0.896 & 0.821 & 0.878 & **0.946** \\ DS-TransUNet-L [28] & 0.913 & 0.852 & **0.922** & 0.927 \\ \hline \begin{tabular}{l} seUNet-Trans-L (ours) \\ seUNet-Trans-M (ours) \\ seUNet-Trans-S (ours) \\ \end{tabular} & \begin{tabular}{l} 0.918 & 0.849 & 0.900 & 0.938 \\ **0.922** & **0.854** & 0.903 & 0.941 \\ **0.921** & **0.854** & 0.906 & 0.937 \\ \hline \hline \end{tabular} \end{table} TABLE V: Quantitative results of evaluation metrics for seUNet-Trans in comparison to SOTA models on the ISIC2018. \begin{table} \begin{tabular}{l|c c c c} \hline \hline Methodology & mDC & mIoU & mRec. & mPre. 
[A further comparison table (Methodology; mDC, mIoU, mRec., mPre.) could not be recovered from the source and is omitted here.]
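For readers who want to recompute the scores reported in these tables, the following is a minimal sketch of the four evaluation metrics (Dice coefficient, IoU, recall, precision) for a single image; the 0.5 binarization threshold, the smoothing constant, and the per-image averaging into mDC/mIoU/mRec./mPre. are our assumptions rather than details taken from the paper.

```python
import numpy as np

def segmentation_metrics(prob_map, gt_mask, threshold=0.5, eps=1e-7):
    """Dice, IoU, recall, and precision for one image (binary segmentation).

    prob_map : float array in [0, 1], model output.
    gt_mask  : binary array of the same shape, ground truth.
    The threshold and eps smoothing are illustrative choices.
    """
    pred = (prob_map >= threshold).astype(np.float64)
    gt = gt_mask.astype(np.float64)

    tp = np.sum(pred * gt)            # true positives
    fp = np.sum(pred * (1.0 - gt))    # false positives
    fn = np.sum((1.0 - pred) * gt)    # false negatives

    dice = (2.0 * tp + eps) / (2.0 * tp + fp + fn + eps)
    iou = (tp + eps) / (tp + fp + fn + eps)
    recall = (tp + eps) / (tp + fn + eps)
    precision = (tp + eps) / (tp + fp + eps)
    return dice, iou, recall, precision

# Dataset-level scores (mDC, mIoU, mRec., mPre.) would then be the means over all test images.
```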
### _Results on 2018 Data Science Bowl_

The results of our seUNet-Trans-M on the 2018 Data Science Bowl dataset are visualized in Figure 9. Similar to Fig. 7, we have compared the predicted masks generated by our models with those of others. In this context, the red rectangles draw attention to precision, while the yellow rectangles highlight outliers. As illustrated, our model's predictions do not include outliers and are more accurate compared to others. Table VI summarizes quantitative metrics among models. Once again, seUNet-Trans-M demonstrates its stability across the three variants and its superiority over SOTA models. Specifically, the mDC and mIoU scores of seUNet-Trans-M are 0.928 and 0.867, higher than those of the other models. In comparison, DS-TransUNet exhibits slightly lower values of 0.922 and 0.861, respectively. This underscores the seUNet-Trans's proficiency on the 2018 Data Science Bowl dataset, with its predictions being notably free of outliers.

### _Results on mixed Polyp segmentation_

When training and testing with distinct datasets, it becomes evident that seUNet-Trans-M consistently yields reliable results. Therefore, for this experiment, we have employed this model to train and evaluate the mixed dataset.

\begin{table} \begin{tabular}{l|c c c c} \hline \hline Methodology & mDC & mIoU & mRec. & mPre. \\ \hline \hline U-Net [5] & 0.757 & 0.910 & - & - \\ UNet++ [16] & 0.897 & 0.926 & - & - \\ Attention UNet [46] & 0.908 & 0.910 & - & 0.916 \\ DoubleU-Net [22] & 0.913 & 0.841 & 0.641 & **0.950** \\ FANet [23] & 0.918 & 0.857 & 0.922 & 0.919 \\ DS-TransUNet-L [28] & 0.922 & 0.861 & **0.938** & 0.912 \\ \hline seUNet-Trans-L (ours) & 0.926 & 0.862 & 0.894 & 0.960 \\ seUNet-Trans-M (ours) & **0.928** & **0.867** & 0.911 & 0.947 \\ seUNet-Trans-S (ours) & 0.914 & 0.842 & **0.884** & **0.950** \\ \hline \hline \end{tabular} \end{table} TABLE VI: Quantitative results of evaluation metrics for seUNet-Trans in comparison to SOTA models on the 2018 Data Science Bowl.

Fig. 8: Visualization of predictions of seUNet-Trans-M and the SOTA on the ISIC2018 dataset. These images are partially taken from [28] for comparison purposes.

Fig. 7: Visualization of predictions of seUNet-Trans-M and the SOTA on the GlaS dataset. In these images, accurate regions are highlighted with red rectangles, while outliers among the models are indicated with yellow rectangles. These images are partially taken from [28] for comparison purposes.

As described in section IV-A, seUNet-Trans-M was trained on a combined dataset comprising four distinct datasets for the mixed Polyp segmentation case. Figure 10 illustrates our model's performance on the test set, in which its mDC and mIoU surpass those of SOTA models such as U-Net, PraNet, and DS-TransUNet. Specifically, seUNet-Trans-M attains impressive mDC and mIoU scores of 0.942 and 0.913, respectively, on the Kvasir dataset, and 0.945 and 0.915 on the ClinicDB dataset, as highlighted in Table VII. Remarkably, even on datasets it wasn't explicitly trained on, including ColonDB and EndoScene, the seUNet-Trans demonstrates exceptional predictive accuracy.
On ColonDB, it attains mDC and mIoU of 0.905 and 0.864, respectively, while on EndoScene, these metrics stand at 0.903 and 0.861, showcasing the model's robustness and adaptability.

### _Results on 'mislabeling' data_

We conducted a comprehensive comparison of our models across the representative images discussed in Sections V-A to V-F. In this section, we present more intuitive, qualitative results that highlight the capabilities of our seUNet-Trans models. Figure 11 shows the predictions of our seUNet-Trans on different datasets. Even when dealing with relatively small datasets such as GlaS or the 2018 Data Science Bowl, the model illustrates impressive performance in comparison to the ground truth. Notably, in the case of the 2018 Data Science Bowl, as depicted in Figure 11(c), our model shows the ability to recognize mislabeling, even when the ground truth does not precisely align with the input image. Similarly, as illustrated in Figure 11(b), our model also demonstrates its capability on the prediction of the ISIC2018 dataset. These predictions, generated by the seUNet-Trans-M, closely adhere to the object boundaries within the input images, rather than rigidly following the ground truth. These results indicate the model's proficiency in providing precise predictions across various datasets, thereby promoting its potential as a strong tool for medical image segmentation.

## VI Conclusion and Discussion

This paper introduces an innovative approach, named seUNet-Trans, that synergizes the robust feature extraction capabilities of CNNs with the sophisticated contextual understanding of Transformer-based models to advance medical image segmentation. Our seUNet-Trans employs a hybrid design, integrating a fully convolutional network, UNet, with a Transformer-based model. At the heart of this integration is a specially designed bridge layer that sequentially channels rich feature maps from UNet into the Transformer. This design enables the framework to leverage the spatial hierarchies recognized by UNet and the global dependencies discerned by the Transformer, providing a more precise and context-aware segmentation performance. In our approach, we streamline the architecture of the proposed model by adopting a pixel-level embedding technique that forgoes the traditional use of position embeddings. Furthermore, we explore the trade-off between computational complexity and model accuracy by employing a computational reduction technique, resulting in the creation of three distinct models (L, M, and S). Such a design can enhance the model's efficiency, as it reduces the complexity of the input representation while maintaining the inherent spatial relationships of the pixels. This simplification is predicated on the understanding that, in the context of medical image segmentation, the relative positioning of pixel data is often implicit within the pixel intensity and texture patterns, making separate positional encoding redundant. As a result, our model remains attuned to the crucial spatial cues necessary for accurate segmentation without the computational overhead typically introduced by position embeddings and the original Transformer.

Fig. 9: Visualization of predictions of seUNet-Trans-M and the SOTA on the 2018 Data Science Bowl dataset. In these images, accurate regions are highlighted with red rectangles, while outliers among the models are indicated with yellow rectangles.
These images are partially taken from [28] for comparison purposes.

\begin{table} \begin{tabular}{l c c|c c|c c|c c|c c} \hline \multirow{2}{*}{Methodology} & \multicolumn{2}{c}{Kvasir} & \multicolumn{2}{c}{ClinicDB} & \multicolumn{2}{c}{ColonDB} & \multicolumn{2}{c|}{EndoScene} & \multicolumn{2}{c}{Average} \\ \cline{2-11} & mDC & mIoU & mDC & mIoU & mDC & mIoU & mDC & mIoU & mDC & mIoU \\ \hline \hline U-Net [5] & 0.818 & 0.746 & 0.823 & 0.755 & 0.512 & 0.444 & 0.398 & 0.335 & 0.652 & 0.581 \\ U-Net++ [16] & 0.821 & 0.743 & 0.794 & 0.729 & 0.483 & 0.410 & 0.401 & 0.344 & 0.641 & 0.570 \\ PraNet [20] & 0.898 & 0.840 & 0.899 & 0.849 & 0.709 & 0.640 & 0.871 & 0.797 & 0.800 & 0.739 \\ HarDNet-MSEG [42] & 0.912 & 0.857 & 0.932 & 0.882 & 0.731 & 0.660 & 0.887 & 0.821 & 0.828 & 0.767 \\ TransPose-L [15] & 0.918 & 0.868 & 0.934 & 0.886 & 0.744 & 0.676 & 0.904 & 0.838 & 0.847 & 0.786 \\ DS-TransUNet-L [28] & 0.935 & 0.889 & 0.936 & 0.887 & 0.798 & 0.722 & **0.911** & 0.846 & 0.868 & 0.806 \\ \hline seUNet-Trans-M (ours) & **0.942** & **0.913** & **0.945** & **0.915** & **0.905** & **0.864** & 0.903 & **0.861** & **0.934** & **0.899** \\ \hline \end{tabular} \end{table} TABLE VII: Quantitative results of evaluation metrics for seUNet-Trans in comparison to SOTA models across four different datasets.

Fig. 10: Visualization of predictions produced by seUNet-Trans-M in the context of the mixed Polyp segmentation experiment. Panels (a) and (b) pertain to CVC-ClinicDB, panels (c) and (d) to Kvasir-SEG, panels (e) and (f) to CVC-ColonDB, and panels (g) and (h) to EndoScene.

## Appendix A Additional Results

Fig. 11: Qualitative comparison of different predictions on different datasets by visualization. For each set of images enclosed by a red rectangle, the input, prediction, and ground truth are arranged from left to right.

We have rigorously evaluated the performance of seUNet-Trans through comprehensive experiments spanning seven diverse datasets. The numerical outcomes from these experiments clearly demonstrate that our proposed model not only meets but also surpasses the benchmarks set by other state-of-the-art models in a majority of the tests.
This is particularly notable in mixed Polyp segmentation tasks, where the seUNet-Trans model exhibits superior proficiency, underscoring its robustness and effectiveness in handling complex image segmentation challenges. The promising results presented in this study pave the way for the application of our proposed model across a wider range of tasks. Future endeavors will focus on the development of specialized, lightweight versions of seUNet-Trans for specific application needs. Moreover, we will investigate the integration of advanced techniques such as the Swin Transformer, which holds the potential to elevate the efficacy of our model even further. Such explorations are expected to yield significant contributions to the field of medical image analysis and beyond.
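To make the bridge idea summarized above more tangible, here is a small sketch of pixel-level tokenization followed by stride-based key/value reduction in the attention block, written in TensorFlow/Keras; the framework choice, layer sizes, and the use of a strided convolution as the reduction operator are our assumptions and are not taken from the paper.

```python
import tensorflow as tf

def feature_map_to_tokens(feature_map):
    """Flatten a UNet feature map (B, H, W, C) into a token sequence (B, H*W, C).
    No positional embedding is added, mirroring the pixel-level embedding idea.
    Static H, W, C are assumed."""
    h, w, c = feature_map.shape[1], feature_map.shape[2], feature_map.shape[3]
    return tf.reshape(feature_map, (-1, h * w, c))

def reduced_self_attention(tokens, num_heads=4, key_dim=64, stride=2):
    """Self-attention where the key/value sequence is shortened by a strided 1-D
    convolution; the stride plays the role of the reduction factor. All sizes here
    are illustrative, not the paper's actual hyperparameters."""
    kv = tf.keras.layers.Conv1D(filters=tokens.shape[-1], kernel_size=stride,
                                strides=stride)(tokens)           # (B, N/stride, C)
    attn = tf.keras.layers.MultiHeadAttention(num_heads=num_heads, key_dim=key_dim)
    return attn(query=tokens, value=kv, key=kv)                    # (B, N, C)
```

Stacking such a block on the UNet bottleneck features and reshaping the output back to (H, W, C) would give the overall UNet-to-Transformer bridge structure described above.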
2306.14949
Optical Identification of the Shortest-Period Spider Pulsar System M71E
M71E is a spider pulsar (i.e., a millisecond pulsar with a tight binary companion) with the shortest known orbital period of P=53.3 min discovered by Pan et al. (2023). Their favored evolutionary model suggests that it bridges between two types of spider pulsars, namely, it descended from a "redback" and will become a "black widow". Using Hubble Space Telescope (HST) archival imaging data, we report the first optical identification of its companion COM-M71E. The HST and pulsar timing coordinates are in excellent agreement (within ~10 mas). If M71E is associated with the globular cluster M71, our measured brightness of COM-M71E (m_F606W ~ 25.3) is broadly consistent with the expectation from Pan et al. (2023)'s preferred binary evolutionary model of a stripped dwarf companion, while it is also compatible with an ultra-low-mass degenerate companion. Future multi-wavelength photometric and spectroscopic observations can characterize the companion and test the evolutionary scenarios.
Zhuokai Liu, Subo Dong
2023-06-26T18:00:02Z
http://arxiv.org/abs/2306.14949v2
# Optical Identification of the Shortest-Period Spider Pulsar System M71E ###### Abstract M71E is a spider pulsar (i.e., a millisecond pulsar with a tight binary companion) with the shortest known orbital period of \(P=53.3\) min discovered by Pan et al. (2023). Their favored evolutionary model suggests that it bridges between two types of spider pulsars, namely, it descended from a "redback" and will become a "black widow". Using _Hubble Space Telescope_ (HST) archival imaging data, we report the first optical identification of its companion COM-M71E. The HST and pulsar timing coordinates are in excellent agreement (within \(\sim 10\) mas). If M71E is associated with the globular cluster M71, our measured brightness of COM-M71E (\(m_{\rm F606W}\approx 25.3\)) is broadly consistent with the expectation from Pan et al. (2023)'s preferred binary evolutionary model of a stripped dwarf companion, while it is also compatible with an ultra-low-mass degenerate companion. Future multi-wavelength photometric and spectroscopic observations can characterize the companion and test the evolutionary scenarios. Millisecond pulsars (1062) -- Binary pulsars (153) -- Optical identification (1167) -- Globular clusters (656) 0000-0002-4181-2888]Zhuokai Liu 0000-0002-4882-0885]Subo Dong ## 1 Introduction Millisecond pulsars (MSPs), rapidly-spinning pulsars (PSRs) found in either the Galactic field or globular clusters (GCs), are thought to be "recycled" old neutron stars, which have spun up via accreting mass from binary companions (Alpar et al., 1982; Bhattacharya and van den Heuvel, 1991). In the classic picture, the accretion process occurs as low-mass X-ray binaries (LMXBs), after which MSPs are left with companions evolving into He white dwarfs (WDs). Most binary MSPs have WD companions following the mass-period relation expected from binary evolution models leading to He WDs (Tauris and Savonije, 1999). However, the spider pulsars, with short orbital periods (\(P\lesssim 1\) d) and primarily found in eclipsing systems, are exceptions to such a relation (see, e.g., Roberts, 2013): The "black widows" (BWs) have very low-mass companions (\(M_{\rm c}\ll 0.1\,M_{\odot}\)), and in contrast, the companions of "redbacks" (RBs) are comparatively more massive (\(M_{\rm c}\sim 0.1-0.4\,M_{\odot}\)). The companion's evolution is affected by irradiation/ablation by the pulsar wind - BWs may eventually become isolated MSPs after the evaporating companions are obliterated, and Benvenuto et al. (2014) suggest that BWs descend from ablated RBs. The formation mechanisms of spider pulsars are debated (e.g., King et al., 2003, 2005; Chen et al., 2013; Benvenuto et al., 2014; Jia and Li, 2015; Ginzburg and Quataert, 2020). By analyzing the pulsar timing data from the Five-hundred-meter Aperture Spherical radio Telescope (FAST), Pan et al. (2023) (hereafter Pan23) find that M71E (a.k.a., PSR J1953+1844, discovered by Han et al., 2021 in the field of globular cluster M71) is an extraordinary non-eclipsing spider pulsar, which has a record-breakingly short orbital period of \(P=53.3\) min and a very small mass function of \(2.3\times 10^{-7}M_{\odot}\). Pan23's analysis favors a stripped dwarf companion with \(M_{c}=0.047-0.097\,M_{\odot}\), estimated by imposing mass-radius relations for brown dwarfs (BDs)/low-mass dwarfs (Burrows et al., 1993) and restricting its radius within the Roche lobe radius (Eggleton, 1983). Such a moderate mass and its very short period suggest that it evolved from a RB and will evolve into a BW. 
Their favored binary evolution model based on Chen et al. (2013) expects that the companion has an effective temperature \(T_{\rm eff}\sim 4500\) K and luminosity \(L\sim 3.25\times 10^{-3}\,L_{\odot}\), which is much fainter than their upper limit on the optical brightness (\(>22\) mag) from non-detections on SDSS images. In this paper, we detect the optical companion (hereafter COM-M71E) of M71E on an archival HST image. Our detection adds to a handful of optically identified GC spiders with very low-mass companions (e.g., Pallanca et al., 2014; Cadelano et al., 2015). ## 2 Optical Observations & Data Analysis We query the Mikulski Archive for Space Telescopes for archival HST images covering the location of M71E, which is \(\sim 2.5^{\prime}\) away from the center of M71. We find a single image (see Figure 1 for a section centered on the location of M71E derived from pulsar timing) taken with the F606W filter and an exposure time of 339 s by the Advanced Camera for Surveys (ACS) on UT November 22, 2021 (mid exposure time at MJD = 59540.11878) associated with program 16871 (PI: J. Anderson; Anderson et al., 2021), which was proposed to make an astrometric correction for the charge transfer efficiency (CTE) effects. Visual inspections near the M71E position reveal a faint point source that is \(\sim 2^{\prime\prime}\) away from a bright star. Footnote 1: All the _HST_ data used in this paper can be found in MAST: 10.17909/xb70-ee36 We employ the software hst1pass (Anderson, 2022) based on the effective point-spread function (ePSF) method (Anderson and King, 2000) to perform astrometric and photometric analysis using the pixel-by-pixel CTE-corrected data product (i.e., the image with "_-flc_" suffix). We set the parameters \(\rm{FMIN}=100\) and \(\rm{HMIN}=3\) to allow detecting faint sources and finding stars near bright neighbors, respectively. The faint source close to M71E that is visible in the by-eye inspection is detected by hst1pass. To investigate possible issues of extracting a faint source in the proximity of a bright star, we inject 10 artificial stars with the same instrumental magnitude as the source of interest in the vicinity of a bright star sharing a similar background level and gradient, and the injections are at random sub-pixel positions. We recover the astrometric position within \(\sim 5\) mas and the magnitude within \(\sim 0.1\) mag, suggesting that the source extraction is reliable. Footnote 2: https://www.stsci.edu/~jayander/HST1PASS/ We subsequently compare the resulting star catalog with the HST ACS F606W results of M71 taken on UT May 12, 2006 in Sarajedini et al. (2007). We perform zero-point magnitude calibration using the common stars, and we determine that COM-M71E has \(m_{\rm F606W}=25.34\pm 0.31\) in the VEGAmag system. We also estimate the relative astrometric uncertainties as a function of magnitude from epoch-to-epoch comparisons. Then we perform astrometric calibration using common stars with the Gaia Data Release 3 (DR3) catalog (Gaia Collaboration et al., 2016, 2023). We apply 2nd-order 2D-polynomial transformations of the distortion-corrected "(r,d)" coordinates from hst1pass to the Gaia DR3 frame and translate the Gaia coordinates to the HST epoch by considering the proper motions.
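As an illustration of this last step (and not a description of the actual calibration code), the snippet below propagates a Gaia DR3 position from the catalog reference epoch (J2016.0) to the epoch of the HST image using the catalog proper motions; the function name, the neglect of parallax, and the flat treatment of the sky are our simplifications.

```python
import numpy as np

def propagate_to_epoch(ra_deg, dec_deg, pmra_masyr, pmdec_masyr,
                       epoch_ref=2016.0, epoch_obs=2021.893):
    """Shift a catalog position by proper motion over (epoch_obs - epoch_ref) years.

    pmra_masyr is assumed to be mu_alpha* = mu_alpha cos(delta), the Gaia convention.
    epoch_obs ~ 2021.893 corresponds to the UT 2021 November 22 HST image.
    Parallax and radial motion are neglected in this simple sketch.
    """
    dt = epoch_obs - epoch_ref                                        # years
    dec_rad = np.deg2rad(dec_deg)
    ra_new = ra_deg + (pmra_masyr * dt / 3.6e6) / np.cos(dec_rad)     # mas -> deg
    dec_new = dec_deg + pmdec_masyr * dt / 3.6e6
    return ra_new, dec_new
```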
Our derived J2000.0 position of COM-M71E is (RA, Dec) = (\(19^{\rm h}53^{\rm m}37.947^{\rm s}\pm 0.001^{\rm s},18^{\circ}44^{\prime}54.32^{\prime\prime}\pm 0.01^{\prime\prime}\)), and the uncertainties are estimated by combining the relative error derived from HST epoch-to-epoch comparisons and the Gaia calibration error. The comparison of the HST position with Pan23's pulsar timing position at (RA, Dec) = (\(19^{\rm h}53^{\rm m}37.946^{\rm s}\pm 0.0001^{\rm s},18^{\circ}44^{\prime}54.310^{\prime\prime}\pm 0.002^{\prime\prime}\)) shows an excellent agreement (within \(\sim 10\) mas) at the \(<1\) \(\sigma\) level. Therefore, the HST detection of COM-M71E is a secure optical identification. We assume that M71E is associated with M71, which has a distance of \(D=4.0\) kpc (Baumgardt and Vasiliev, 2021). We adopt an extinction correction of \(A_{\rm F606W}=0.6\) estimated using the M71 reddening value from Dotter et al. (2010) and the extinction coefficient from Sirianni et al. (2005). Then we obtain COM-M71E's absolute magnitude \(M_{\rm F606W}=11.7\pm 0.3\). ## 3 Discussion & Summary With the detection only in a single band (F606W), it is not possible to independently derive COM-M71E's physical properties from the optical data alone. Nevertheless, we first make a comparison with the expected properties (\(T_{\rm eff}\sim 4500\) K and \(L\sim 3.25\times 10^{-3}\) \(L_{\odot}\)) from Pan23's preferred binary-evolution model. We use the SYNPHOT function of MAAT (Ofek, 2014) to estimate the bolometric correction in F606W for a blackbody at \(T_{\rm eff}=4500\) K, and find that the expected absolute magnitude is \(M\sim 11.1\). Given the potential theoretical uncertainties, it is broadly consistent with \(M_{\rm F606W}=11.7\pm 0.3\) from observation, supporting Pan23's interpretation that M71E is at an intermediate evolutionary stage between a RB and a BW. Under the assumption of this model, we can estimate the physical parameters of COM-M71E using our optical detection. By combining the theoretical \(\log(T_{\rm eff})-\log(L)\) track of the companion (blue solid line in Figure 2) and the F606W photometric constraints (black lines in Figure 2), we obtain a blackbody radius \(R_{\rm c,BB}\sim 0.08\) \(R_{\odot}\). Following Pan23, we subsequently estimate its mass \(M_{\rm c}\sim 0.09\) \(M_{\odot}\) based on the mass-radius relations of Burrows et al. (1993), and thus \(R_{\rm c,BB}\) is smaller than the Roche lobe radius \(\sim 0.1\) \(R_{\odot}\). We note some caveats regarding this interpretation. First of all, in Pan23's simulation, the BD donor's surface is He-rich, whereas Burrows et al. (1993)'s BD models assume solar abundance. Second, in this scenario, the companion was initially H-rich. While its orbital period (\(P\approx 53\) min) is larger than the theoretically allowed minimum period of \(P_{\rm min}\approx 37\) min for a generic H-rich BD (Rappaport et al., 2021), it is smaller than the minimum value \(P_{\rm min}\) either predicted by binary evolution (\(P_{\rm min,theory}\sim 70\) min; Kolb & Baraffe, 1999) or empirically seen in cataclysmic variables (\(P_{\rm min,observed}\sim 80\) min; Knigge, 2006; Gansicke et al., 2009) for mass transfer from such a donor. Lastly, reaching a mass of \(M_{\rm c}\sim 0.09\,M_{\odot}\) requires an orbital inclination of \(\lesssim 5\) deg, which has a small geometric probability of \(<1\%\) for randomly distributed orbital orientations.
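As a quick arithmetic check of the numbers quoted above (and not part of the original analysis), the adopted distance and extinction indeed turn the measured apparent magnitude into the stated absolute magnitude, and an inclination of \(\lesssim 5\) deg indeed has a \(<1\%\) geometric probability:

```python
import math

m_F606W = 25.34      # measured apparent VEGAmag from the HST image
D_pc = 4000.0        # adopted M71 distance of 4.0 kpc
A_F606W = 0.6        # adopted extinction correction in F606W

distance_modulus = 5.0 * math.log10(D_pc / 10.0)          # ~13.0 mag
M_F606W = m_F606W - distance_modulus - A_F606W
print(f"M_F606W = {M_F606W:.1f}")                          # ~11.7, as quoted

# Geometric probability that a randomly oriented orbit has inclination < i_max:
i_max_deg = 5.0
p_face_on = 1.0 - math.cos(math.radians(i_max_deg))        # ~0.004, i.e. < 1%
```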
Pan23 disfavor WD companions because a plausible WD with the minimum mass of \(\sim 0.16\,M_{\odot}\) would need a finely-tuned face-on orbit with a very small geometric probability (\(<0.3\%\)), and they also argue such a system is challenging to form from theory. Such a minimum mass is based on the observed low-mass WD population by Brown et al. (2016). A WD with \(\approx 0.16\,M_{\odot}\) needs to have a temperature \(\approx 10^{4}\) K (typical for WDs in Brown et al., 2016) to match the observed F606W flux. A minimum WD mass of \(\sim 0.16\,M_{\odot}\) is also consistent with the classic evolution pathway by Tauris & Savonije (1999), but it corresponds to a minimum period \(\sim 1\) d, much larger than the observed \(P\approx 53\) min. Alternatively, it may be possible to reach such a short period from a partially evolved donor star at the onset of mass transfer, and the orbital period shrinks during the mass transfer, eventually leading to mass loss from the WD core (see, e.g., Nelson et al., 1986; Deloye & Bildsten, 2003). Such a scenario may explain the extremely low-mass He/C-rich degenerate companions found in some MSP systems (e.g., Bailes et al., 2011; Romani et al., 2012). We briefly discuss the possibility that M71E is an ultra-low-mass He-rich WD. We examine a fiducial He WD with a mass of \(M_{\rm WD}=0.02\,M_{\odot}\), corresponding to an _a priori_ likely inclination of \(i\approx 30^{\circ}\); with a radius of \(R_{\rm WD}\approx 0.04\,R_{\odot}\) at \(T_{\rm WD}\approx 6000\) K according to Deloye & Bildsten (2003), it can match the observed F606W flux. Multi-band photometric measurements would permit a temperature estimate and could thereby test the WD scenario. Furthermore, spectroscopic observation can constrain its chemical composition (H/He/C) and provide further clues to its formation pathway. Spider pulsar companions can be heated by PSR irradiation and can also be significantly tidally distorted, inducing flux changes over the orbital period. Known optical counterparts of spider pulsars can show significant photometric variations modulated by the orbital period, usually with maxima at inferior conjunctions (orbital phase \(\phi=0.75\)) and minima at superior conjunctions (\(\phi=0.25\)). For example, the optical light curves of COM-M71A (a BW companion in

Figure 1: A \(5^{\prime\prime}\times 5^{\prime\prime}\) section of the HST ACS F606W image around COM-M71E. The red circle is centered at the pulsar timing position of M71E, which agrees with the HST F606W position within \(\sim 10\) mas, and the circle's radius is 10 times the HST astrometric error.
2301.09001
The Pipeline for the Continuous Development of Artificial Intelligence Models -- Current State of Research and Practice
Companies struggle to continuously develop and deploy AI models to complex production systems due to AI characteristics while assuring quality. To ease the development process, continuous pipelines for AI have become an active research area where consolidated and in-depth analysis regarding the terminology, triggers, tasks, and challenges is required. This paper includes a Multivocal Literature Review where we consolidated 151 relevant formal and informal sources. In addition, nine semi-structured interviews with participants from academia and industry verified and extended the obtained information. Based on these sources, this paper provides and compares terminologies for DevOps and CI/CD for AI, MLOps, (end-to-end) lifecycle management, and CD4ML. Furthermore, the paper provides an aggregated list of potential triggers for reiterating the pipeline, such as alert systems or schedules. In addition, this work uses a taxonomy creation strategy to present a consolidated pipeline comprising tasks regarding the continuous development of AI. This pipeline consists of four stages: Data Handling, Model Learning, Software Development and System Operations. Moreover, we map challenges regarding pipeline implementation, adaptation, and usage for the continuous development of AI to these four stages.
Monika Steidl, Michael Felderer, Rudolf Ramler
2023-01-21T20:04:07Z
http://arxiv.org/abs/2301.09001v1
# The Pipeline for the Continuous Development of Artificial Intelligence Models - Current State of Research and Practice ###### Abstract Companies struggle to continuously develop and deploy Artificial Intelligence (AI) models to complex production systems due to AI characteristics while assuring quality. To ease the development process, continuous pipelines for AI have become an active research area where consolidated and in-depth analysis regarding the terminology, triggers, tasks, and challenges is required. This paper includes a Multivocal Literature Review (MLR) where we consolidated 151 relevant formal and informal sources. In addition, nine semi-structured interviews with participants from academia and industry verified and extended the obtained information. Based on these sources, this paper provides and compares terminologies for Development and Operations (DevOps) and Continuous Integration (CI)/Continuous Delivery (CD) for AI, Machine Learning Operations (MLOps), (end-to-end) lifecycle management, and Continuous Delivery for Machine Learning (CD4ML). Furthermore, the paper provides an aggregated list of potential triggers for reiterating the pipeline, such as alert systems or schedules. In addition, this work uses a taxonomy creation strategy to present a consolidated pipeline comprising tasks regarding the continuous development of AI. This pipeline consists of four stages: _Data Handling_, _Model Learning_, _Software Development_ and _System Operations_. Moreover, we map
2303.07655
Simultaneous Action Recognition and Human Whole-Body Motion and Dynamics Prediction from Wearable Sensors
This paper presents a novel approach to solve simultaneously the problems of human activity recognition and whole-body motion and dynamics prediction for real-time applications. Starting from the dynamics of human motion and motor system theory, the notion of mixture of experts from deep learning has been extended to address this problem. In the proposed approach, experts are modelled as a sequence-to-sequence recurrent neural networks (RNN) architecture. Experiments show the results of 66-DoF real-world human motion prediction and action recognition during different tasks like walking and rotating. The code associated with this paper is available at: \url{github.com/ami-iit/paper_darvish_2022_humanoids_action-kindyn-predicition}
Kourosh Darvish, Serena Ivaldi, Daniele Pucci
2023-03-14T06:52:41Z
http://arxiv.org/abs/2303.07655v1
Simultaneous Action Recognition and Human Whole-Body Motion and Dynamics Prediction from Wearable Sensors ###### Abstract This paper presents a novel approach to solve simultaneously the problems of human activity recognition and whole-body motion and dynamics prediction for real-time applications. Starting from the dynamics of human motion and motor system theory, the notion of mixture of experts from deep learning has been extended to address this problem. In the proposed approach, experts are modelled as a sequence-to-sequence recurrent neural networks (RNN) architecture. Experiments show the results of 66-DoF real-world human motion prediction and action recognition during different tasks like walking and rotating. The code associated with this paper is available at: github.com/ami-iit/paper_darvish_2022_humanoids_action-kindyn-predicition ## I Introduction This paper addresses the problem of simultaneous human whole-body motion prediction and action recognition from wearable sensors. Given an unfinished set of observed human motion, the prediction should fundamentally respond to two questions for a predefined time horizon in the future: what the human subject will do next in the short-term at the symbolic level, hence a classification problem; how the human subject will do that, i.e., motion prediction as a regression problem. Prediction of human motion and actions enables many opportunities in various domains of robotics and biomechanics. When humans collaborate to perform joint actions, they predict each others' actions to coordinate their own decisions and motion [1, 2]. Similarly, for a successful joint human-robot collaboration, both robots and humans should predict each other actions, allowing them to plan and adapt in advance. An anticipatory approach lowers the idle time and leads to a more natural and fluent collaboration [3, 4]. Moreover, prediction results, coupled with predictive control approaches, can boost human safety in collaborative workplaces by avoiding the collision of robots with human coworkers. In another example of a heavy object lifting scenario in a warehouse, employing the prediction results of the workers' joint torques or workers' future fatigue enable the robots to initiate collaboration and task sharing, i.e., enhancing ergonomics in workplaces [5, 6]. In this case, the estimation of dynamic information such as interaction forces with the environment is needed. Another application of human motion prediction and action recognition includes unilateral robot teleoperation in a remote environment to overcome the communication time delay and limited bandwidth [7]. In a similar direction, human motion prediction allows for the generation of robot motion references [8]. In other domains for exoskeletons and prostheses control, integration of human action and motion prediction with the predictive control approaches can enhance the performance and natural motion profile, therefore resulting in a better user experience and comfort [9]. Last but not least, in autonomous cars, the pedestrian motion prediction can reduce the number of accidents and increase the safety [10, 11]. As mentioned before, human action and motion prediction can be beneficial to various applications and domains. According to functional requirements of the target application, prediction time horizon, the desired accuracy, sensory information, and level of details for prediction may vary. 
To estimate the human and environment interaction forces, some works proposed combining the human dynamics physical constraints with the neural networks using videos [12, 13]. However, those works only estimate the current interaction forces, and estimation results precision does not satisfy many robotic applications requirements, such as exoskeleton control. In this paper, we take a first step to bridge the gap between model-based and learning-based approaches to identify the mapping from human dynamical states to future human actions and whole-body motion and dynamics information. The proposed mapping allows for designing a deep neural network (DNN) architecture to solve the two prediction problems simultaneously. To do so, we have extended the _mixture of experts_ (MoE) approach such that expert outputs predict human motion and interaction forces, and the gating network classifies human actions. Each expert is enforced to learn a specific human motion generation policy associated with human action, and the gate outputs predict the human future actions. This extension is different from the classic MoE where the user does not have control over the gate outputs. Furthermore, we allow to predict the future ground reactions forces and torques. The proposed approach permits solving the problems in real-time for a given time horizon in the future. The code associated with this paper is available at: github.com/ami-iit/paper_darvish_2022_humanoids_action-kindyn-predicition The paper is organized as follows. Sec. II provides the state of the art and Sec. III presents the paper background. Sec. IV defines the problem and presents the proposed extension of MoE. Experiments and results are discussed in Sec. V. Conclusions follow in Sec. VI. ## II Related Work The problems of human action recognition and action prediction are often solved similarly as a classification problem. Different supervised learning approaches have been proposed, including Bayesian networks [14], neural networks (NNs) [15], Gaussian mixture models and regression [16], and hidden Markov models [17]. Similarly, to predict human motion, different approaches based on neural networks including generative adversarial networks [18], graph convolutional networks [19], dropout auto-encoder LSTM (long short-term memory) [20], adversarial encoder-decoder recurrent networks [21], convolutional NNs [22], recurrent neural networks (RNN) [23, 24], and recurrent encoder-decoder architecture [25] are proposed. Another common approach in the literature to address the motion prediction problem is based on inverse reinforcement learning (IRL) methods, for example, in [26] for a reaching task, or in [27] for a shared workspace. Recently, [28] proposed an approach for short-term and long-term motion prediction using RNN and a gradient-based optimization with hand-crafted cost functions to encode environment constraints. Differently from us, in that work, the target position was given to the human subject and algorithm, i.e., human intention (or high-level action) was known. In another work [29], dynamic movement primitives parameterize human motion, and an extended Kalman filter predicts the place and the time of the handover task. In the literature, many works address solely one among the two problems. Among those who addressed the two problems together, Fig. 1 indicates different design choices and architectures. An approach is to solve the two problems separately (i.e., in parallel), as shown in Fig. 1 on top. 
However, this design choice neglects the reciprocal correlation of the two problems. This approach introduces the risk that the predicted action and motion do not coincide, i.e., the action \(a_{i}\) is recognized while the predicted motion is related to action \(a_{j}\). To overcome those problems, one may devise an architecture that first predicts the human action and provides the result as input (along with other inputs) to the motion prediction problem, as shown in Fig. 1 in the middle. For example, in [30] probabilistic dynamic movement primitives learn human hand-reaching tasks by first inferring the human intention and then predicting the human motion. Yet, in this approach, action prediction results influence the motion prediction, but not the reverse. Extending that, Fig. 1 at the bottom shows an architecture where a single network recognizes human action, and a pool of networks predicts human motion, picked up by a selector [31], similarly to the MoE idea. This approach may partially untangle the generalization problem over different actions; nevertheless, it may introduce a discontinuity problem during the transient phases when the human switches from one activity to another. To overcome this, one may consider a weighted summation of motion predictions over the different action probabilities. According to this formulation, the problems of action and motion prediction are not yet mutually interconnected. To remedy this problem, in [32] an encoder-decoder NN predicts human motion and a part of the same network followed by a fully connected layer classifies human actions. Alternatively, a generative adversarial network can predict human motion, whereas a part of the pre-trained discriminator can classify the human pose [33]. ## III Background To address the human action and motion prediction problems, this section presents the underlying principles of human motion generation and action from a dynamical-system and human motor-system perspective. This study will support the formulation of the two problems with a holistic view, which in turn gives an idea of how to solve them mutually. ### _Human Modeling_ Consider a human modeled as a Markov process and expressed via a multi-body mechanical system with \(n\) joints, each with one degree of freedom, connecting \(n+1\) links. The human configuration is denoted by \(\mathbf{q}=(^{\mathcal{I}}\mathbf{p}_{\mathcal{B}},\,^{\mathcal{I}}\mathbf{R}_{\mathcal{B}},\,\mathbf{s})\in\mathbb{R}^{3}\times SO(3)\times\mathbb{R}^{n}\), where \(\mathbf{s}\) is the vector of joint angles, and \({}^{\mathcal{I}}\mathbf{p}_{\mathcal{B}}\) and \({}^{\mathcal{I}}\mathbf{R}_{\mathcal{B}}\) are the floating-base position and orientation relative to the inertial frame. The velocity vector of the model is indicated by \(\mathbf{\nu}=(^{\mathcal{I}}\dot{\mathbf{p}}_{\mathcal{B}},\,^{\mathcal{I}}\mathbf{\omega}_{\mathcal{B}},\,\dot{\mathbf{s}})\in\mathbb{R}^{n+6}\), where its terms are the base linear and rotational (angular) velocity relative to the inertial frame, and the joint velocity vector. The velocity of a frame \(\mathcal{A}\) attached to a human link, indicated by \({}^{\mathcal{I}}\mathbf{v}_{\mathcal{A}}=(^{\mathcal{I}}\dot{\mathbf{p}}_{\mathcal{A}},\,^{\mathcal{I}}\mathbf{\omega}_{\mathcal{A}})\in\mathbb{R}^{3}\times\mathbb{R}^{3}\), is computed by its _Jacobian_ \(\mathbf{\mathcal{J}}_{A}(\mathbf{q})\in\mathbb{R}^{6\times(n+6)}\) as \({}^{\mathcal{I}}\mathbf{v}_{\mathcal{A}}=\mathbf{\mathcal{J}}_{A}(\mathbf{q})\mathbf{\nu}\).
The \(n+6\) equations of motion of the human with \(n_{c}\) applied contact wrenches (forces and torques) are [34]: \[\mathbf{M}(\mathbf{q})\dot{\mathbf{\nu}}+\mathbf{C}(\mathbf{q},\mathbf{\nu})\mathbf{\nu}+\mathbf{g}(\mathbf{q})=\mathbf{B}\mathbf{\tau}+\sum_{k=1}^{n_{c}}\mathbf{\mathcal{J}}_{k}^{T}(\mathbf{q})\mathbf{f}_{k}^{c}, \tag{1}\] with \(\mathbf{M}(\mathbf{q})\) being the symmetric positive definite inertia matrix, \(\mathbf{C}(\mathbf{q},\mathbf{\nu})\) the Coriolis and centrifugal terms, \(\mathbf{g}(\mathbf{q})\) the vector of gravitational terms, \(\mathbf{B}\) a selector matrix, \(\mathbf{\tau}\in\mathbb{R}^{n}\) the vector of joint torques, and \(\mathbf{f}_{k}^{c}\in\mathbb{R}^{6}\) and \(\mathbf{\mathcal{J}}_{k}\) the vector of the \(k\)'th contact wrenches and its associated _Jacobian_ acting on the human. Fig. 1: Schematic of possible architectures for human action and motion prediction based on supervised learning. **Remark 1**.: _One can show that, by applying a state transformation, (1) is mapped into a set of \(n+6\) equations where the first 6 equations (centroidal dynamics) depend only on the external wrenches acting on the human, thus being independent from the human internal joint torques [35]. Furthermore, the last \(n\) equations (free-floating system) of (1) can be expressed only with respect to joint positions and velocities using the rigid contact assumption between the human feet and the ground._ Given (1) and Rem. 1, the human joint dynamics writes: \[\dot{\mathbf{x}}=\mathcal{F}(\mathbf{x},\mathbf{\tau},\mathbf{f}^{c}(t)), \tag{2}\] where \(\mathbf{x}=(\mathbf{s},\dot{\mathbf{s}})\in\mathbb{R}^{2n}\) denotes the states of the human dynamical system, and \(\mathcal{F}\) is a nonlinear function derived from (1) that maps the human states, joint torques, and external forces/torques \(\mathbf{f}^{c}\in\mathbb{R}^{6n_{c}}\) to the rate of change of the states. ### _Human Motion Generation_ According to the literature on biomechanics, the motor system, and human dynamics, we can write down the way a human generates new joint torques as a function of the current \(\mathbf{s}(t)\), \(\dot{\mathbf{s}}(t)\), \(\ddot{\mathbf{s}}(t)\), \(\dddot{\mathbf{s}}(t)\) (joint jerks), \(\mathbf{f}^{c}(t)\in\mathbb{R}^{6n_{c}}\) (\(n_{c}\) external forces), \(\mathbf{\tau}(t)\), \(\dot{\mathbf{\tau}}(t)\), \(\ddot{\mathbf{\tau}}(t)\) (the first and second derivatives of the joint torques resulting from muscle contractions), \(\int\mathbf{\tau}^{\mathsf{T}}(t)\mathbf{\tau}(t)\,dt\) (joint efforts), \(\int\dot{\mathbf{s}}^{\mathsf{T}}(t)\mathbf{\tau}(t)\,dt\) (kinetic energy of the joints), and \(\mathbf{r}(t)\in\mathbb{R}^{n_{r}}\) (other \(n_{r}\) terms associated with the generation of joint torques) [26]. Some of the important terms that we can identify associated with \(\mathbf{r}(t)\) are the human objective or the immediate task, social interaction constraints [36], the task-space constraints such as obstacles, time constraints, and spatial constraints. In many works in robotics where human motion is predicted, \(\mathbf{r}(t)\) is considered to be known implicitly. It is injected into the problem when a human should act in a structured environment or perform a given task sequence. However, in an unstructured environment or when human subjects are not provided with a description of the tasks to execute, some \(\mathbf{r}(t)\) can be considered as a hidden state in a Markov process and is required to be estimated given input data [37, 17, 38].
Others can be retrieved from the sensory data, such as obstacles in the workspace. **Remark 2**.: _Biomechanical studies tend to show that humans generate motion to minimize a cost function. This cost function combines mechanical energy expenditure (related to joint torques and velocities) and the motion smoothness (related to minimum jerk) while executing a reaching task [26, 39]._ Following Rem. 2, the human policy for joint torque generation can be approximated as an optimal control problem with an unknown cost function \(\mathcal{J}\) and subject to (2): \[\begin{split}\mathbf{\tau}^{*}(t)=\operatorname*{arg\,min}_{\tau(t)} \mathcal{J}(\mathbf{x},\mathbf{\tau},\mathbf{\tilde{s}},\ddot{\mathbf{s}},\mathbf{f}^{c}(t),\dot{ \mathbf{\tau}},\ddot{\mathbf{\tau}},\\ \quad\quad\quad\quad\quad\quad\quad\cdots,\int\mathbf{\tau}^{\mathsf{ T}}\mathbf{\tau}dt,\int\dot{\mathbf{s}}^{\mathsf{T}}\mathbf{\tau}dt,\mathbf{r}(t))\\ s.t.\;\;\;\dot{\mathbf{x}}=\mathcal{F}(\mathbf{x},\mathbf{\tau},\mathbf{f}^{c}(t ))\;,\;\mathcal{C}(.)\leq 0,\end{split} \tag{3}\] where \(\mathcal{C}(.)\) is the vector of all inequality constraints. ## IV Methods ### _Problem Statement_ Following the description of human motion generation and dynamics, here we formalize the problems of human action and motion prediction. In this regard, first human dynamics and optimal control problem are discretized. By discretizing (2) and considering the optimal joint torques obtained from (3), we can write it as: \[\mathbf{x}_{k+1}^{*}=\mathcal{F}(\mathbf{x}_{k},\mathbf{f}_{k}^{c},\mathbf{\tau}_{k}^{*}) \Delta t+\mathbf{x}_{k}, \tag{4}\] where \(\Delta t\) is the discretization time step. Moreover, by discretizing (3) and taking advantage of the recursive relationship between the current and previous joint torques, one can compute the optimal joint torques generated at each step by: \[\begin{split}\mathbf{\tau}_{k}^{*}=\mathcal{G}^{*}(\mathbf{x}_{k},\mathbf{x} _{k-1},\mathbf{x}_{k-2},\ldots,\mathbf{x}_{k-N},\\ \mathbf{f}_{k}^{c},\ldots,\mathbf{f}_{k-N}^{c},\mathbf{r}_{k},\ldots,\mathbf{r}_{k-N }).\end{split} \tag{5}\] In this formula, \(\mathcal{G}^{*}\) is an unknown and optimal mapping with regard to (3), \(N\) is the number of time steps to look behind in time. Finally, replacing \(\mathbf{\tau}_{k}^{*}\) in (4) with (5), we can derive the following nonlinear optimal formulation: \[\begin{split}\mathbf{x}_{k+1}^{*}=\mathcal{H}^{*}(\mathbf{x}_{k},\mathbf{x} _{k-1},\mathbf{x}_{k-2},\ldots,\mathbf{x}_{k-N},\\ \mathbf{f}_{k}^{c},\ldots,\mathbf{f}_{k-N}^{c},\mathbf{r}_{k},\ldots,\mathbf{r}_{k -N}),\end{split} \tag{6}\] where \(\mathcal{H}^{*}\) is an unknown optimal nonlinear function, mapping the input terms to the next state vector \(\mathbf{x}_{k+1}^{*}\). In this formula, the input terms \(\mathbf{x}_{k-i}\), \(\mathbf{f}_{k-i}^{c}\), and \(\mathbf{r}_{k-i}\) are the states of the system, the vector of external wrenches acting on the human body, and the vector of hidden states at \(i\)-steps in the past. By recursively applying (6), we can predict the future states of the human dynamical system for the time horizon \(T\), i.e., \(\mathbf{x}_{k+1}^{*},\mathbf{x}_{k+2}^{*},\ldots,\mathbf{x}_{k+T}^{*}\). 
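For concreteness, a minimal sketch of this recursive rollout is shown below, with the learned one-step mapping of Eq. (6) represented by a placeholder Python callable; the assumption that the same callable also returns the future wrenches and hidden terms is made only to close the loop, and is exactly the difficulty discussed next.

```python
from collections import deque

def rollout(h_star, x_init_hist, f_init_hist, r_init_hist, horizon):
    """Roll a learned one-step predictor forward for `horizon` steps.

    h_star      : placeholder callable implementing the mapping of Eq. (6)
    x_init_hist : last N+1 observed states            [x_{k-N}, ..., x_k]
    f_init_hist : last N+1 contact wrenches           [f_{k-N}, ..., f_k]
    r_init_hist : last N+1 hidden terms               [r_{k-N}, ..., r_k]
    Here h_star is assumed to return (x_next, f_next, r_next) so that the
    sliding histories can be updated at every step.
    """
    xs, fs, rs = deque(x_init_hist), deque(f_init_hist), deque(r_init_hist)
    predictions = []
    for _ in range(horizon):
        x_next, f_next, r_next = h_star(list(xs), list(fs), list(rs))
        predictions.append(x_next)
        for buf, val in ((xs, x_next), (fs, f_next), (rs, r_next)):
            buf.popleft()      # drop the oldest entry
            buf.append(val)    # append the newly predicted one
    return predictions
```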
However, to estimate the future states of the human system in a recursive fashion, there are the following problems that needs to be addressed: \(i)\) the mapping \(\mathcal{H}^{*}\) in (6) is unknown; \(ii)\) external forces/torques acting on the human in the future \(\mathbf{f}_{k+i}^{c}\) in (6) are not known; \(iii)\) the hidden states \(\mathbf{r}_{k\pm i}\) in (6) are not known, neither in the past nor the future. ### _Guided Mixture of Experts_ To address the challenges derived at the end of Sec. IV-A for human motion prediction, we propose a learning-based approach, i.e., the mapping \(\mathcal{H}^{*}\) in (6) is learned from human demonstrations. As discussed in the literature, approaches based on a single neural network have been proposed to learn the mapping \(\mathcal{H}^{*}\). However, \(\mathcal{H}^{*}\) can be very complex, and yet no approach has resolved this problem effectively. Starting from (6), here first, we reformulate the action and motion prediction problems in a new form. Afterward, we adopt the Mixture of Experts (MoE) approach to solve the two problems simultaneously [40, 41]. In order to predict the external wrenches acting on the human in the future \(\mathbf{\hat{f}}_{k+i}\), one can come out with two approaches. First, given the predicted states of the human \(\hat{\mathbf{x}}_{k+i}\), we can model the human and the world and perform simulations to predict the external forces acting on the human [42]. However, this solution can be time-consuming and it may be cumbersome to model the human and the world for different scenarios. Another approach is to learn a model of the world for relevant tasks from the human offline demonstrations and try to predict the interaction forces/torques acting on the human [43]. For this work, we have decided to go for the learning approach. In regard to \(\mathbf{r}_{k\pm i}\), when the human subject is not asked to do a given task, the problem becomes even more complex and depends on many variables. For example, for daily-life activities, to estimate what a human will do and how will do them, we should know the hidden internal objective (state) of the human in his mind. Using other sensory modalities like cameras, we may infer the human action, e.g., reaching an object, and human motion and trajectory, e.g., depending on the object's location and obstacles. However, this is out of the scope of this work, and we are only considering the human dynamical states and interaction forces measured by proprioceptive sensors. Moreover, depending on the type of \(\mathbf{r}_{k\pm i}\), we can consider \(\mathbf{r}_{k\pm i}\) as the solution of a classification or a regression problem. In this work, as a simplifying assumption, we only consider human symbolic actions as the hidden state, and will estimate it as a classification problem. In the offline phase, human actions are annotated by experts, while in the online phase, given the input data human next action is estimated, i.e., \(\mathcal{P}(\mathbf{a}_{k+1}|\mathbf{x}_{k},\dots,\mathbf{x}_{k-N},\mathbf{f}^{c}_{k},\dots, \mathbf{f}^{c}_{k-N})\). Noticeably, in (6), \(\mathbf{r}_{k},\dots,\mathbf{r}_{k-N}\) are compacted and approximated as \(\tilde{\mathbf{a}}_{k+1}\). 
Hence, equation (6) can be revised as follows: \[\tilde{\mathbf{a}}_{k+1} =\mathcal{D}^{*}_{1}(\mathbf{x}_{k},\mathbf{x}_{k-1},\mathbf{x}_{k-2},\dots, \mathbf{x}_{k-N}, \tag{7a}\] \[\mathbf{f}^{c}_{k},\dots,\mathbf{f}^{c}_{k-N}),\] \[\tilde{\mathbf{x}}_{k+1},\tilde{\mathbf{f}}^{c}_{k+1} =\mathcal{D}^{*}_{2}(\mathbf{x}_{k},\mathbf{x}_{k-1},\mathbf{x}_{k-2},\dots, \mathbf{x}_{k-N},\] (7b) \[\mathbf{f}^{c}_{k},\dots,\mathbf{f}^{c}_{k-N},\tilde{\mathbf{a}}_{k+1}),\] where \(\mathcal{D}^{*}_{1}\) and \(\mathcal{D}^{*}_{2}\) are two optimal mappings to learn. As presented, the original complex problem of motion prediction introduced in (6) is transformed into action recognition in (7a) and motion prediction in (7b) problems. Given these mappings, the problem of motion prediction depends on the problem of action recognition at each inference step. As described in (IV-A) and shown in (6), the problems of action and motion prediction can be solved in a recursive fashion, i.e., repeat the process for the future time horizon \(T\). Consequently, those two mappings are inherently interconnected, and motion prediction results affect the future action recognition results, and at each time step action recognition influence the motion prediction. Instead of recursive fashion, inference can be performed directly for all the future time horizon \(T\) to predict human actions and motion. This way, we expect to enhance the computational time and be suitable for real-time applications, as it computes the outputs for all the time horizon in the future in a tensor form. Nevertheless, as explained before, this approach should meet the interconnection between human action and motion prediction. In order to solve the problem of human action recognition (i.e., learn \(\mathcal{D}^{*}_{1}\) in (7a)) and motion prediction (i.e., learn \(\mathcal{D}^{*}_{2}\) in (7b)) jointly, we have elaborated on the idea of MoE as shown in Fig. 2. This proposed architecture is different from both the classical MoE proposed in [41] and the architecture shown in Fig. 1 in the bottom. In MoE architectures, the gate outputs are not directly controlled, i.e., there is no control on the gate outputs. On the other side in Fig. 1 bottom, the architecture is composed of two sets of NNs, first, the human action recognition is learned, and then the output of the action recognition network and the input data are used to learn human motion prediction (fed to the motion predictors). Hence, it does not consider the inherent and mutual interconnection action and motion prediction as explained in the previous paragraph. So, as shown in Fig. 2, the two explained shortcomings are addressed with the proposed architecture, _guided mixture of experts_ (GMoE). We consider the outputs of both the gating and expert networks as the two sets of outputs of a single and large MoE network. The gate output predicts the human action as a classification problem, while the expert outputs predict human motion as a regression problem. The gate behavior is guided or controlled via enforcing it to predict human actions and as a result, each expert is trained to learn the motion associated with an action. As shown in Fig. 2, in the training phase, the gate weights are learned such that they minimize both the error of human action and motion prediction, while the expert weights are learned such that they minimize only the human motion prediction error. This approach intrinsically allows for smooth transient phases, resolving one of the challenges mentioned in II. 
This will be discussed in the experimental results and discussions. Given the description, the action prediction gate output is \(\mathcal{P}(a_{i}|X)\) where \(X\) is the input vector and \(a_{i}\) is the \(i\)-th action. \(i\)-th expert output associated with action \(a_{i}\) can be written as \(\mathcal{P}(y_{i}|X,a_{i})\). Therefore, the probability distribution of the Fig. 2: Proposed Guided Mixture of Experts (GMoE) for human action and motion prediction. The gate network predicts human action and experts predict human motion. motion prediction can be written as the marginal probability over the gate outputs as: \[\mathcal{P}(y|X)=\sum_{i=1}^{N}\mathcal{P}(y_{i}|X,a_{i})\mathcal{P}(a_{i}|X), \tag{8}\] where \(y\) is the motion prediction output vector. The total loss function \(L\) for GMoE can be written as a linear combination of the two output losses \(L_{1}\) (associated with action prediction loss function) and \(L_{2}\) (associated with motion prediction loss function) with the gains \(b_{1}\) and \(b_{2}\) that are set by the user. Here, \(L_{1}\) is set as a categorical cross-entropy loss and \(L_{2}\) is the mean squared error. In other problems, the user may choose different loss functions. In our case, we define the total loss as \(L=b_{1}L_{1}+b_{2}\)\(L_{2}\), namely: \[L =-\frac{b_{1}}{2M}\sum_{t=1}^{T}\sum_{j=1}^{M}\sum_{i=1}^{N}a_{i} ^{j,t}log(\tilde{a}_{i}^{j,t}) \tag{9}\] \[+\frac{b_{2}}{2M}\sum_{t=1}^{T}\sum_{j=1}^{M}\|\tilde{\mathbf{y}}^{j, t}-\mathbf{y}^{j,t}\|_{2},s.t.\;\;\tilde{\mathbf{y}}^{j,t}=\sum_{i=1}^{N}\tilde{a}_{i}^{j,t }\tilde{\mathbf{y}}_{i}^{j,t},\] where scalar value \(a_{i}^{j,t}\) and vector \(\mathbf{y}_{i}^{j,t}\) are human action and motion (e.g., joint values, joint velocities, reaction forces) ground truth related to the \(i\)-th action and \(j\)-th data at the time instance \(t\) in the future, and \(\bar{\cdot}\) indicates estimated values that are stochastically found. \(M\) is the total number of data, and \(N\) is the number of experts or modeled actions. When designing the network, \(b_{1}\) and \(b_{2}\) are positive numbers chosen manually as hyperparameters such that both classification (action recognition) and regression (motion prediction) problems converge while training. For this purpose, a suggested approach is first to tune \(b_{1}\) such that the classification problem converges, and later accordingly, set the parameter \(b_{2}\). In this way, we are ensuring that each expert is learning the motion associated with an action. Moreover, \(l_{1}\) and \(l_{2}\) regularization terms can be used to penalize the weight values and avoid overfitting, however they are not reported in the loss function in (9). Looking at (9), during back-propagation while training, we can observe that the gate weights rely on both \(L_{1}\) and \(L_{2}\) losses, while the expert weights only depend on \(L_{2}\). This shows an important feature of the proposed approach: not only does the human action affect how the human moves, but also the way the human motion affects the recognized action. Moreover, when the human subject is performing an action \(a_{i}\) (assuming the optimization problem is converged), \(a_{k}\) goes close to zero \(\forall k\neq i\), hence the \(i\)-th expert is enforced to learn the human motion associated with \(i\)-th action. 
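A compact sketch of the combined loss of Eq. (9) for batched tensors is given below; the (batch, T, N) / (batch, T, N, D) tensor layout, the epsilon guarding the logarithm, and the use of means instead of the explicit \(1/2M\) factors are our choices, while \(b_{1}=1.0\) and \(b_{2}=0.2\) follow the values used in the experiments.

```python
import tensorflow as tf

def gmoe_loss(a_true, a_pred, y_true, expert_preds, b1=1.0, b2=0.2, eps=1e-8):
    """Combined action/motion loss in the spirit of Eq. (9).

    a_true       : (batch, T, N)     one-hot future actions
    a_pred       : (batch, T, N)     gate probabilities
    y_true       : (batch, T, D)     future motion/dynamics targets
    expert_preds : (batch, T, N, D)  per-expert motion predictions
    """
    # Gate-weighted mixture of the expert outputs (the constraint in Eq. (9)).
    y_pred = tf.reduce_sum(expert_preds * a_pred[..., tf.newaxis], axis=2)

    # Categorical cross-entropy on the gate (action prediction).
    action_loss = -tf.reduce_mean(
        tf.reduce_sum(a_true * tf.math.log(a_pred + eps), axis=-1))

    # Squared error on the mixed motion prediction.
    motion_loss = tf.reduce_mean(
        tf.reduce_sum(tf.square(y_pred - y_true), axis=-1))

    return b1 * action_loss + b2 * motion_loss
```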
Finally, in the transient phase when the subject alters from an action to another, the two associated experts try together to reduce the error on the motion prediction output, proportional to the gate outputs. ## V Experiments, Results & Discussions ### _Experimental setup_ The hardware experimental setup and software pipeline are shown in Fig. 3. In this setup, human data are collected using Xsens wearable motion capture system [1] which streams the inertial measurement unit (IMU) sensors data connected to each body link of the subject. The ground reaction forces and torques are measured using iFeel shoes equipped with force/torque sensors [2]. The data are streamed through a wearable device [3] using the YARP middleware [44]. Humans are modeled using a 66 DoFs URDF model, and an inverse kinematic implementation computes the joint values and velocities [45] [4]. Data are resampled at 25 Hz. In the offline phase, for NN training the human whole-body data are visualized and annotated, and are logged to train GMoE NN architecture later. Instead, in the online phase, the inverse kinematics outputs and shoes data are streamed to the NN inference block in order to estimate future human motion for a given time horizon. In the online phase, both the ground truth data and predicted ones are visualized. The programs run on a 64 bit i7 2.8 GHz workstation, equipped with 32 GB RAM, Ubuntu 20.04 LTS, and Intel(R) Iris(R) Xe Graphics. During experiments, human subject was asked to walk naturally inside a room space, and in total less than \(8\)\(mins\) of data have been collected and carefully annotated. The human subject was doing the following actions: _Walking_, _Rotating_, _Stopping_, and other irrelevant actions labeled as _None_. In this case, 70% of data is considered as the training data, 20% validation data, and the last 10% as the test dataset. GMoE architecture is implemented in TensorFlow2 using the functional API. Four similar experts (with one LSTM layer) associated with the number of human actions and one gate network (with two Dense layers) have been considered. For comparison purposes, an architecture with four LSTM layers for action recognition and motion prediction is implemented as well, similar to Fig.1 on the top. While training, the Adam optimizer with a decayed learning rate is used. Moreover, to overcome overshooting problems, dropout and batch normalization layers are used in the implemented architecture. Finally, the inputs to the network are joint values and velocities, and ground reaction forces/torques with \(N=5\) past data in (7). Since LSTMs are inherently recursive, we predict the human motion directly (no autoregressive implementation) for the future time horizon of \(1\)\(sec\), i.e., \(T=25\) steps. Fig. 3: Offline and online setup pipelines. ### _Results_ The mean and standard deviation results of training and validation sets over 10 trials are shown in Fig. 4 for both LSTM and GMoE architectures. In these experiments, the parameters of (9) are set to \(b_{1}=1.0\) and \(b_{2}=0.2\), and the patience number is set to \(5\) while training. Fig. 3(a) on the top shows the total losses related to \(L\) in (9), including \(l_{1}\) and \(l_{2}\) regularization terms as well; in the middle, it shows the action prediction loss related to \(L_{1}\) in (9), and at the bottom, it shows the loss associated with the motion prediction \(L_{2}\) in (9). Fig. 
3(b) on the top shows the accuracy of action prediction, and at the bottom, it shows the mean absolute error (_mae_) of motion prediction. Table I demonstrates the results of the two architectures on the test set. As shown, even if LSTM architecture has a deeper network with \(5.35\)\(millions\) trainable parameters with respect to GMoE with \(2.21\)\(millions\) number of trainable parameters, the performance of GMoE surpasses the LSTM architecture. Fig 5 shows the results of the human action and motion prediction at different moments. Online inference takes \(30ms\) on average at each time step running on the specified machine. On the top, it shows the snapshots of the human motion in light gray color and the results of the prediction for \(0.2sec\) in the future in the light red color. Notice that, currently, the future base pose is not estimated, hence the two avatar bases coincide. In the second row of the figure, black, blue, red, and green colors indicate _none_, _rotating_, _standing_, and _walking_ actions. The results of \(T=1sec\) of the prediction time horizon are shown with small circles, and probabilities of the next estimated actions are drawn with solid lines. Finally, figures in the third row and at the bottom demonstrate the results of the prediction of the human right knee joint angle in degrees and the left foot ground reaction force in \(z\) direction of the body frame. In these rows, small circles show the prediction results for the future time horizon at each step, and the solid lines show the current measured values. In Fig 5, at \(t=388.7sec\) (on the left) while the subject is walking, GMoE predicts human will walk for the next \(1sec\) with high probability (close to \(1.0\)). Hence, human motion prediction predicts the motion associated with the _walking_ action for the human for the next \(1sec\). In the second figure on the left at \(t=390.9sec\), as soon as first data arrives that showing the trace of human starting the _rotating_ action, the inference outputs reflect it on the action prediction results, i.e., smoothly the probability of _rotating_ action increases (blue color) compared to _walking_ action (green color) probability which decreases in the future. When the human starts to rotate at \(t=391.6sec\), the probability of the human _rotating_ action at \(t+1.0sec\) is higher than the one at \(t\), and reversely for the _walking_. Later, at \(t=393sec\) human is predicted to rotate for the next \(T\) time horizon. Finally, at \(t=394sec\) (the fourth column on the right side), the prediction results show a trend from _rotating_ to _walking_ action for the future time horizon. For \(t\in[392,393]\), first knee joint angle and feet wrenches is predicted with a _walking_ pattern, while later this has been transformed to a _rotating_ pattern as the human starts to rotate. This is why in the figure, the predicted joint angle trajectory alter from _walking_ trajectory to _rotating_ trajectory smoothly. As denoted by the figure, one of the reasons that the inference results are very sensitive is due to the fact that only the last 5 time steps (i.e., \(0.2sec\)) are used to predict the next \(1sec\). Finally, the results of the last row of the figure validate that the proposed architecture predicts accurately the M-shape pattern of human walking stride, which is of paramount importance for biomedical applications. 
### _Discussions_ In Sec. III, the problem definition is formulated with inspiration from human dynamics and human motor system theory, and the encoded motion and interaction forces, as shown in Fig. 5, accurately track the ground truth. However, the proposed solution in its current form does not explicitly take human dynamics into account, i.e., there is no task constraining the dynamics, so the feasibility of the predicted motion cannot be guaranteed. Hence, in future development, we are considering a physics-informed NN to predict the human motion [46, 13, 12]. Fig. 4: Training and validation set results of GMoE and LSTM architectures related to loss functions and metrics for action and motion prediction. \begin{table} \begin{tabular}{c c c c} \hline \hline _Architecture_ & _total loss_ & _accuracy_ & _mae_ \\ \hline _GMoE_ & \(2.15\pm 0.32\) & \(0.78\pm 0.02\) & \(0.48\pm 0.02\) \\ _LSTM_ & \(2.74\pm 0.42\) & \(0.72\pm 0.05\) & \(0.52\pm 0.02\) \\ \hline \hline \end{tabular} \end{table} TABLE I: Test set mean and standard deviation results of GMoE and LSTM architectures. Regarding the cost function proposed in (9), although the \(L_{2}\) term encourages associative learning of the experts and discourages their localization, as stated in [41], the first term in (9), related to \(L_{1}\), encourages the localization of the experts. To further encourage competitiveness among the experts, one can use other loss functions as \(L_{2}\) in (9), for example \(\sum_{i=1}^{N}\tilde{a}_{i}^{j,t}\|\tilde{\mathbf{y}}_{i}^{j,t}-\mathbf{y}^{j,t}\|_{2}\) [41]. In this case, we expect the action prediction results not to change considerably, while the motion prediction results would be affected, especially at transient phases when the human action alters. ## VI Conclusions In this paper, we proposed a novel approach for simultaneous whole-body human action and motion prediction over a short future time horizon. It can effectively predict the human interaction wrenches with the ground. The mixture of experts (MoE) notion has been adopted to solve the two problems together, and the results show the effectiveness of the proposed solution for real-time applications. In the future, we aim at generalizing the proposed approach over several subjects and at encoding intraclass human action and motion variations using a hierarchical version of MoE. Finally, we will consider human dynamics explicitly in the NN architecture, to ensure the feasibility of the generated motion and the satisfaction of human constraints.
2305.12868
NAS-FM: Neural Architecture Search for Tunable and Interpretable Sound Synthesis based on Frequency Modulation
Developing digital sound synthesizers is crucial to the music industry as it provides a low-cost way to produce high-quality sounds with rich timbres. Existing traditional synthesizers often require substantial expertise to determine the overall framework of a synthesizer and the parameters of submodules. Since expert knowledge is hard to acquire, it hinders the flexibility to quickly design and tune digital synthesizers for diverse sounds. In this paper, we propose ``NAS-FM'', which adopts neural architecture search (NAS) to build a differentiable frequency modulation (FM) synthesizer. Tunable synthesizers with interpretable controls can be developed automatically from sounds without any prior expert knowledge and manual operating costs. In detail, we train a supernet with a specifically designed search space, including predicting the envelopes of carriers and modulators with different frequency ratios. An evolutionary search algorithm with adaptive oscillator size is then developed to find the optimal relationship between oscillators and the frequency ratio of FM. Extensive experiments on recordings of different instrument sounds show that our algorithm can build a synthesizer fully automatically, achieving better results than handcrafted synthesizers. Audio samples are available at https://nas-fm.github.io/.
Zhen Ye, Wei Xue, Xu Tan, Qifeng Liu, Yike Guo
2023-05-22T09:46:10Z
http://arxiv.org/abs/2305.12868v1
NAS-FM: Neural Architecture Search for Tunable and Interpretable Sound Synthesis based on Frequency Modulation ###### Abstract Developing digital sound synthesizers is crucial to the music industry as it provides a low-cost way to produce high-quality sounds with rich timbres. Existing traditional synthesizers often require substantial expertise to determine the overall framework of a synthesizer and the parameters of sub-modules. Since expert knowledge is hard to acquire, it hinders the flexibility to quickly design and tune digital synthesizers for diverse sounds. In this paper, we propose "NAS-FM", which adopts neural architecture search (NAS) to build a differentiable frequency modulation (FM) synthesizer. Tunable synthesizers with interpretable controls can be developed automatically from sounds without any prior expert knowledge and manual operating costs. In detail, we train a supernet with a specifically designed search space, including predicting the envelopes of carriers and modulators with different frequency ratios. An evolutionary search algorithm with adaptive oscillator size is then developed to find the optimal relationship between oscillators and the frequency ratio of FM. Extensive experiments on recordings of different instrument sounds show that our algorithm can build a synthesizer fully automatically, achieving better results than handcrafted synthesizers. Audio samples are available at [https://nas-fm.github.io/](https://nas-fm.github.io/). ## 1 Introduction Creating and rendering music has become increasingly convenient with the help of digital sound synthesizers which simulate the timbre of real instruments that are potentially expensive and rare. Lots of sound synthesizers are designed whose commercial values are widely recognized, while the complexity of the synthesizers also increases dramatically. Instead of carefully designing delicate structures of highly diversified instruments, our target is to design a general framework which can flexibly construct digital synthesizers simply from recordings and allow further tuning of the timbres in a controllable and interpretable way. Early approaches to sound synthesis are parametric, designing a set of components (e.g., oscillators and modulators) based on digital signal processing (DSP), and ultimately creating complex timbres with specific structures combining these components. Typical works include subtractive synthesis [13], additive synthesis [20], frequency modulation (FM) [15] and wavetable synthesis [16], among which FM-based methods are widely used in sound synthesis due to their flexibility in tuning timbre with only a few parameters. While satisfying sounds can be produced, designing the parametric structure of the synthesizer as well as parameter values requires considerable expertise. This is due to the non-linear interactions between synthesizer parameters, as well as the wide range of possible values for each parameter. Achieving a desired sound often involves a process of iterative adjustments and fine-tuning, which can be time-consuming and require a deep understanding of digital signal processing techniques. Although some efforts have been made to determine the parameters of FM synthesizer based on estimation theory [21], genetic algorithm [10], LSTM-based [23], VAE-based [14] and dilated CNN based [13] methods, these methods are limited by static spectra or an audio clip conditioned on an entire ADSR envelope under a specific pitch value and a fixed duration length. 
Thus, these methods cannot be applied to the audio with continuously varying pitch and loudness on dynamic spectra. Neural synthesis methods have been developed recently to \begin{table} \begin{tabular}{c|c c c} \hline \hline Synthesizer & Tunable & Interpretable & \begin{tabular}{c} Automatic \\ Design \\ \end{tabular} \\ \hline Digital & & & \\ Synthesizer & & & \\ \hline Neural Network & & & \\ Synthesizer & & & \\ \hline Our NAS-FM & & & \\ \hline \hline \end{tabular} \end{table} Table 1: Comparison between the proposed NAS-FM with other strategies for digital audio synthesis. learn a deep neural network to produce audio in the data-driven scheme and achieved impressive results in terms of audio fidelity. However, for fully-neural generative models such as WaveGlow [20], and HiFi-GAN [17], sufficient data is usually required to train a complex model with non-interpretable parameters, which greatly limits the controllability of the synthesizer as required by the music industry, specifically when the generated non-perfect sound needs tuning. Although the harmonic-plus-noise model-based differential DSP (DDSP) is integrated into the network to improve controllability [1], it only allows the transfer of timbres between different instruments rather than explicit adjustment. FM-based neural synthesizer DDX7 [14] is then developed. However, this method can only learn the time-varying parameters of the submodules under the assumption that the overall structure has been manually defined. The large reliance on expert knowledge and extensive time cost greatly limit the feasibility of building digital synthesizers for general instruments. Moreover, the handcrafted framework may make the resulting synthesizer suboptimal in modelling the real instrument. This paper proposes NAS-FM, a neural architecture search (NAS) based FM synthesizer. It can be seen as an effort to build interpretable neural generative models. A key of the proposed method is NAS. With a carefully designed search procedure, different structures and oscillator sizes are included in a universal search space of the supernet, and an evolutionary search algorithm is developed to find the optimal structure between oscillators and the frequency ratio of FM. The advantages of the proposed NAS-FM are described below: * The audio synthesizer can be built based on recordings without any expert knowledge, largely simplifying the pipeline and reducing the cost of audio synthesizer construction. It is also possible to quickly build new variants of the target sound with flexible adjustments; * It returns a tunable and interpretable conventional FM-based interface with a few parameters, making it easily embedded into existing audio workstations; * Extensive experiments on recordings of different instruments demonstrate that synthesizers fully automatically built by the proposed NAS-FM can achieve better results to carefully handcrafted synthesizers. ## 2 Related Work ### Sound Synthesis Sound synthesis contains digital signal processing (DSP) methods and neural network methods. DSP methods have been integrated into the digital audio workstation used by musicians. More specifically, DSP methods start from several simple waves such as sine, sawtooth, square and triangle generated by an oscillator. The additive synthesizer [15] generates new sounds by adding various simple waves. The Subtractive synthesizer [16] filters a simple wave from white noises. FM synthesizers [15] rely on simple waves to modulate frequency to create complex timbre. 
Wavetable synthesis manipulate a collection of short samples to design new sounds. These traditional methods need users to determine the configuration manually for a given sound. Neural network synthesis models adopt the deep neural network to learn the mapping function between audio and given input, for instance, pitch and loudness. The early exploration begins with auto-regressive models such as WaveRNN [13] and SampleRNN [14]. The following works [1][17] are based on various generative models to further improve the quality of synthesis sound. However, the above methods may lead to glitch problems because of the lack of phase continuity. Therefore, DDSP [1] rely on the harmonic-plus-noise model to keep the phase continuous and also make the sound can be directly controlled by pitch and loudness. While these methods can be optimized automatically with the help of gradient descent to obtain the model, there are few control factors to help users manipulate the synthesized result directly. Therefore, our approach aims to introduce the FM synthesizer with controllable factors to help the user interact with the synthesized audio. ### FM Parameter Estimation FM parameter estimation also called FM matching is adopted to determine the configuration of an FM synthesizer. The early approach considers this problem as a searching problem that uses the genetic algorithm(GA) [18] or its variants to find a best-fit configuration in a specific searching space. Homer employs GA solving a sound matching problem for different FM algorithms such as Formant FM [18], Double FM [18], Nested FM and Feedback FM [19]. These methods can achieve very close re-synthesizing results with a static target spectrum when selecting an appropriate FM algorithm as prior. The following methods leverage an open-source FM synthesizer Dexed synthesizer [1] to construct a large number of pair data between presets1collected on the Internet and synthesized audio clip generated by Dexed. By reversing this process, these works employ LSTM [16], VAE [10] and dilated CNN [1] to estimate the preset. Since the synthesized sound is generated by pressing a note with specific velocity, duration and pitch using Dexed synthesizer, the result of the model for a realistic sound audio clip without any prior is unpredictable. DDX7 [14] predict the envelopes of oscillators using the widely-used TCN [1] decoder with algorithm and frequency ratios as prior. Therefore, our method is designed to construct an FM synthesizer without prior knowledge. Footnote 1: Preset means a full FM configuration for specific sound designed by users ### Neural Architecture Search Neural Architecture Search (NAS) can automatically find the best neural architecture for a specific task. Many works focus on computer vision [13]lor natural language processing [21] tasks. More recently, one-shot NAS [Guo _et al._2020] [Bender _et al._2018] train a shared supernet once which includes all candidate architectures. Then, the supernet is used as an estimator to evaluate every possible architecture in the search space. This method has been widely used on various applications to determine a good structure, for example, image classification [Guo _et al._2020], object detection [Liang _et al._2021], 3d scene understanding [Tang _et al._2020], BERT compression [Xu _et al._2022]. These methods introduce neural architecture search to each specific task which gets a better model architecture than manual design. 
Although NAS has been widely used in lots of areas, the applications are mainly focused on neural networks. Actually, we are the first to bring NAS to fully automatically build the sound synthesizer. ## 3 Review of FM Synthesizer ### Basics of FM FM was originally proposed in [Chowning1973] for sound synthesis and different timbres are produced by controlling a set of parameters. With two sound sources, the modulator oscillator \(\sin(2\pi f_{m}t)\) and the carrier oscillator \(\sin(2\pi f_{c}t)\), FM basically generates a time-domain signal \(y(t)\): \[y(t)=a(t)\sin(2\pi f_{c}t+I\sin(2\pi f_{m}t)), \tag{1}\] where \(f_{c}\) and \(f_{m}\) are the carrier frequency and modulation frequency respectively, \(I\) is the modulation index, and \(a(t)\) is the amplitude envelope of the carrier. \(y(t)\) can be further decomposed by using Bessel functions of the first kind as \[y(t)=a(t)\sum_{n=-\infty}^{n=+\infty}J_{n}(I)\sin(2\pi(f_{c}+nf_{m})\cdot t) \tag{2}\] which shows that the sidebands of \(y(t)\) distribute evenly around \(f_{c}\) with spacing as \(f_{m}\), and the spectra is harmonic when the frequency ratio \(r\in\mathbb{Q}\) where \(r=f_{c}/f_{m}\). \(J_{n}(I)\) is a Bessel function of the modulation index \(I\), and dynamic spectra can be generated if the \(I\) becomes a time-variant function \(I(t)\). ### FM algorithms The (1) explains the basic module of the FM synthesizer to generate sounds. More diversified and expressive FM synthesizers can be further developed by designing complicated topologies, which are called "FM algorithms", on connecting the carrier and modulator oscillators. Typical FM algorithms are shown in Fig. 1. Fig. 1(a) denotes the single FM expressed by (1). The Nested FM [Justice1979], formant FM [Horner _et al._1993], and double FM [Schottstaedt1977] are illustrated in Fig. 1(b)-(d), whose outputs are calculated by \[y(t)= a(t)\sin(2\pi f_{c}t+I_{m1}(t)\sin[2\pi f_{m1}t\] \[+I_{m2}(t)\sin(2\pi f_{m2}t)]), \tag{3}\] \[y(t)= a_{1}(t)\sin[2\pi f_{c1}t+I_{m}(t)\sin(2\pi f_{m}t)]\] \[+a_{2}(t)\sin[2\pi f_{c2}t+I_{m}(t)\sin(2\pi f_{m}t)], \tag{4}\] and \[y(t)= a(t)\sin[2\pi f_{c}t+I_{m1}(t)\sin(2\pi f_{m1}t)\] \[+I_{m2}(t)\sin(2\pi f_{m2}t)], \tag{5}\] respectively. These FM algorithms produce different timbres, and as shown in Fig. 22, by selecting different FM algorithms. Footnote 2: We do not consider the feedback FM in our work as the same reason in [Caspe _et al._2022] To briefly summarize, a set of parameters controls the FM output, which can be time-variant or fixed. In our work, time-variant parameters include the oscillators' envelopes, i.e., \(a(t)\) and \(I(t)\), which are determined by the time-variant input, such as \(f_{0}\) and loudness. Fixed parameters are the chosen FM algorithm and the frequency ratio of each oscillator, which need to be determined by the expected timbre. ## 4 Proposed NAS-FM synthesizer In this section, we aim to fully automatize the designing of an FM synthesizer in a data-driven manner, thus eliminating the reliance on expertise and labour to tune the timbres. A NAS-FM synthesizer is proposed, with an overall framework shown in Fig. 3 on the next page. For input audio, the pitch and loudness are extracted, and these two features Figure 1: Different FM algorithms: (a) Single FM; (b) Nested FM; (c) Formant FM; (d) Double FM. The green box refers to the carrier and the blue box refers to the modulator Figure 2: Digital synthesizers in the commercial product YAMAHA DX7. The lower part is the user interface of a synthesizer. 
The upper part shows the FM algorithms for audio synthesis, with green and blue boxes denoting carriers and modulators, respectively. are fed into an oscillator envelope prediction network to estimate the envelopes of the oscillators. We develop architecture search methods to determine the optimal FM algorithm, and given the FM algorithm and oscillator envelopes, sounds with a specific timbre can be produced. Depending on whether the sound is from the real environment, a learnable reverb module [11] can be optionally used to simulate the room effects. The framework is optimized in an auto-encoder setting, i.e., seeking to recover the original signal in the final output. Besides the FM algorithm search, the framework is similar to the framework in DDX7 [12], which estimates pitch and loudness by CREPE [13] and A-weighting loudness [14], designs the oscillator envelope prediction network as a temporal convolutional network (TCN) [1]. However, DDX7 requires a prior FM configuration designed by experts. By adopting NAS, the proposed framework designs the FM synthesizer in the fully-automated data-driven pipeline and also returns a tunable and interpretable interface used by musicians. In the following, details of the proposed NAS-FM will be introduced, which include a) converting the FM algorithm to a graph, b) designing the search space, c) training the supernet, and d) selecting the FM algorithm. ### Directed Acyclic Graph of FM Algorithm To facilitate discussion in this section, we convert an FM synthesizer, with examples shown in Fig. 1, to a directed acyclic graph (DAG) [23] with an ordered sequence of \(N\) nodes, where \(N\) represents the number of oscillators. Each node \(x^{(i)}\) refers to the oscillator's output. Each directed edge \((i,j)\) indicates whether node \(x^{(j)}\) is the modulating node of \(x^{(i)}\). The topology of the graph is associated with the FM algorithm. Thus, an intermediate node \(x^{(i)}\) can be expressed as \[x^{(i)}(t)=a_{i}(t)\sin(2\pi f_{i}t+\sum_{j\in\mathbb{M}}x^{(j)}(t)), \tag{6}\] where \(\mathbb{M}\) is a set of oscillators modulating node \(x^{(i)}\). When \(M\) is empty, \(x^{(i)}\) outputs standard sine wave. The output of FM is calculated through the sum of carrier nodes \[y(t)=\sum_{i\in\mathbb{C}}x^{(i)}(t), \tag{7}\] where \(\mathbb{C}\) is the set of carriers. ### Search Space Design There are a huge number of possible configurations for the FM synthesizer. By converting the FM algorithm to the DAG, the principles of NAS can be applied to design the FM algorithm. In NAS for neural networks, all possible architectures of the network, which span the "search space", can be represented by a general DAG, with each candidate architecture as a sub-graph. Similarly, we design the search space for FM algorithm here. We visualize the operation in (6) in Fig. 4, where two issues should be solved to define an oscillator: a) what is the frequency ratio? The frequency ratio is defined originally in (2), and is the ratio between \(\hat{f_{i}}\) in (6) and fundamental frequency \(F_{0}\) here; b) which modulators will be connected to the current oscillator? As depicted in Fig. 4, we define the frequency ratio set \(\mathbb{F}\mathbb{R}=\{1,2,3,..,K\}\) which consists of \(K\) integers, and assumes that there are \(N\) candidates in the modulator set \(\mathbb{M}\). Actually, the interval in the frequency ratio set can be reduced to 0.1 or even smaller, to 0.01 instead of 1, for a more refined exploration. 
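As a concrete reading of (6)-(7), the snippet below renders audio from an FM configuration given as a DAG: each node carries a frequency ratio (relative to \(F_{0}\)) and an amplitude/index envelope, lists its modulators, and the outputs of the carrier nodes are summed. Envelopes are plain arrays here, whereas in NAS-FM they are predicted by the network; all names and the example configuration are illustrative.

```python
import numpy as np

SR = 16000  # sample rate used in the experiments

def render_fm_dag(nodes, carriers, f0, dur=1.0, sr=SR):
    """nodes: {name: dict(ratio=float, env=array, mods=[names])}; carriers: list of node names.
    Implements x_i(t) = a_i(t) * sin(2*pi*ratio_i*f0*t + sum of modulating node outputs), Eq. (6)."""
    t = np.arange(int(dur * sr)) / sr
    cache = {}

    def node_out(name):
        if name not in cache:
            n = nodes[name]
            phase_mod = sum(node_out(m) for m in n["mods"]) if n["mods"] else 0.0
            env = np.resize(n["env"], t.shape)  # stretch the (coarse) envelope to signal length
            cache[name] = env * np.sin(2 * np.pi * n["ratio"] * f0 * t + phase_mod)
        return cache[name]

    return sum(node_out(c) for c in carriers)  # Eq. (7): sum over the carrier nodes

# A single-FM "algorithm": one carrier (ratio 1) modulated by one modulator (ratio 2).
cfg = {
    "c0": dict(ratio=1.0, env=np.array([0.8]), mods=["m0"]),
    "m0": dict(ratio=2.0, env=np.array([1.5]), mods=[]),
}
audio = render_fm_dag(cfg, carriers=["c0"], f0=220.0)
```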
Traversing the original search space is impossible considering the exponential number of possibilities for oscillator connections. Now let us analyze the design of the search space. Previous works adopted evolutionary search on frequency ratio set under a fixed FM algorithm such as double FM [15] and nested FM [15]. Therefore, the challenge is how to design an appropriate space including various FM algorithms, instead of enumerating all possible FM algorithms. In [1] and [21], the weight sharing is proposed to reduce the search space, which forces sub-graph candidates to have the same weights in the graph nodes commonly shared by different candidates. This also makes it possible to directly train a supernet including all possible candidates. Here we propose a novel envelope-sharing strategy for FM synthesizers to make all FM configuration candidates trained in the supernet. Specifically, we construct a search space as shown in Fig. 3. The search space can be divided into a carrier layer and several modulator layers. We set the oscillator at the same layer with the same frequency ratios sharing the same envelope. Due to the envelope-sharing strategy, There are following rules in our search space. (1) A oscillator in a certain layer can only be modulated by the upper layer; (2) The sum of the output of selected oscillators at the carrier layer forms the final signal;(3) A oscillator is discarded when there is no Figure 3: The overall architecture of NAS-FM. The learnable Rever module is optional depending on whether the real room effect should be simulated for real-world audios. connection with other oscillators or final output. In addition, according to our experience, we find two modulator layers are enough. The number of candidate oscillators in a layer depends on the oscillator number in the expected FM synthesizer. ### Supernet Training with Proxy Oscillator Although the proposed envelope-sharing strategy makes all possible FM configurations can be trained using a supernet that includes all candidates, we still encounter the challenge of a large search space. The huge search space makes it hard to evaluate each FM configuration in the supernet during training. To further improve the training efficiency, we propose to use the "proxy oscillator" as the proxy for all oscillators in each layer to determine the envelopes of the oscillators. Specifically, during supernet training, a fixed Nested FM in (3) with a carrier and two modulators are chosen, which has three layers in accordance with the settings in the Sec. 4.2. For each oscillator, uniform sampling [1] adopted to determine the frequency ratio. With these two configurations fixed, the envelopes of the oscillators are learned, and after training, the learned envelopes are shared within the same layer for a complicated search space. Utilizing the technique, we can expand the width of the search space flexibly. ### FM Algorithm Selection After the supernet training and search space design, we use the evolutionary algorithm to conduct FM algorithm selection. Specifically, we put all \(N\) candidate oscillators \(ol\) in order. a certain FM configuration is encoded as an individual can be formulated as \[\{f_{ol_{1}},f_{ol_{2}},...,f_{ol_{N}},l_{ol_{1}}l_{ol_{2}},...,l_{ol_{N}}\} \tag{8}\] where \(ol_{i}\) is the \(i_{th}\) oscillator, corresponding \(f_{ol_{i}}\) and \(l_{ol_{i}}\) indicate the selected frequency ratio and connection relationship. The connection relationship is the relation between the lower layer. 
If the oscillator belongs to the carrier, the lower layer is the output signal. If the connection relationship of this oscillator is none means it is discarded. Firstly, a random population is initialized within the initial space. Secondly, we evaluate the fitness score of the generated individuals and select the top individuals. The fitness function will be introduced in the experiment section. Thirdly, crossover and mutation are used to generate new individuals to update the population, until meeting the stopping criterion. ## 5 Experiments To evaluate the proposed NAS-FM approach which aims to automatically learn the FM synthesizer that is applicable to the music industry, we aim to answer the following questions: a) Given sound recordings, can the FM synthesizers learned by the proposed method be comparable to the manually designed synthesizers? b) We fuse different FM algorithms into one universal search space, is this strategy better than separately searching the best configuration for each FM algorithm? c) Can we controllably tune the timbre of the produced sounds or create new instrument timbres by modifying the parameters of the learned FM synthesizer? ### Dataset We conduct experiments on the benchmark URMP dataset [14]. Sound recordings of three real instruments, which are violin, flute, and trumpet, are chosen and the "optimal" manually designed FM algorithms of these instruments are given as in [11]. The three instruments are also the most typical instrument of the strings, woodwinds, and brass. Each instrument recording is divided into 4-second segments, and silent segments are discarded. The loudness and pitch are estimated by the A-weighting loudness [12] and CREPE [15] methods. Each audio clip is resampled to the 16kHz sampling rate and analyzed with a frame size of 2048 and a hop size of 64, yielding 1000 frames. We split the dataset into train, validation, and test sets with proportions of 0.75, 0.125, and 0.125, respectively. ### Experimental Setup For training, a supernet contains \(c*m*m\) paths where \(c\) is the number of frequency ratios of the carrier, and \(m\) is the number of frequency ratios in modulators. We set \(c\) as 15 and \(m\) as 5. The model adopts the stack TCN architecture [1] as a sequence-to-sequence model to predict oscillator envelope through pitch and loudness. We first stack 4 TCN architecture to extract a hidden temporal feature. Then, we construct \((c+m+m)\) TCN architectures with different weights to predict the envelope of the corresponding oscillator from the hidden feature. In fact, other sequence-to-sequence models can also be employed to model the temporal relationship in our pipeline. In addition, uniform sampling of oscillators on each layer with different ratios is adopted. We use Adam optimizer with an initial learning rate of 3e-4. For Figure 4: The oscillator in NAS-FM. The output of an oscillator conditioning on \(F0\) depends on the choice of frequency ratio, envelope and modulator. The choices are determined by NAS. regularize, we set the maximum value of the oscillator for the carrier and modulator as 1 and 2, respectively. The exponential decay strategy with a decreasing factor of 0.98 every 10k steps is adopted for the learning rate.The whole training steps are 500k, and the batch size is 16. For searching, we use the evolutionary algorithm with a population size of \(P\), crossover size of \(P/2\), and mutation size of \(P/2\) with mutation probability and max iterations of \(T\) depending on the search space size. 
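A compact sketch of the individual encoding in (8) and of the evolutionary loop is given below. The `fitness` callable is a placeholder for extracting the corresponding sub-synthesizer from the supernet and scoring it with the FAD-based fitness described next; the ratio set, connection choices, and population settings are illustrative placeholders.

```python
import random

N_OSC = 6                      # candidate oscillators, laid out layer by layer
RATIOS = list(range(1, 16))    # candidate frequency ratios
CONNECTIONS = [None, 0, 1, 2]  # target in the lower layer (None = oscillator discarded)

def random_individual():
    # Individual per Eq. (8): N frequency ratios followed by N connection choices.
    return ([random.choice(RATIOS) for _ in range(N_OSC)] +
            [random.choice(CONNECTIONS) for _ in range(N_OSC)])

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def mutate(ind, p=0.1):
    out = list(ind)
    for k in range(len(out)):
        if random.random() < p:
            out[k] = random.choice(RATIOS if k < N_OSC else CONNECTIONS)
    return out

def evolve(fitness, pop_size=1000, iters=50):
    # fitness(individual) -> float, e.g. FAD of the extracted sub-synthesizer (lower is better);
    # in practice the scores should be cached rather than recomputed at every sort.
    pop = [random_individual() for _ in range(pop_size)]
    for _ in range(iters):
        top = sorted(pop, key=fitness)[: pop_size // 2]
        children = [mutate(crossover(*random.sample(top, 2))) for _ in range(pop_size // 2)]
        pop = top + children
    return min(pop, key=fitness)
```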
In addition, since we use the fixed number of the candidate oscillator, the discarded candidate oscillator with different frequency ratios will cause the same fitness score. Therefore, we force the generated candidates with a unique fitness score to be legal. After searching, we directly extract the weight of the sub-model with the best fitness score from the supernet as our final model. Actually, fine-tuning the model or training from scratch with the searched FM configuration may further improve the performance. However, our aim is not to focus on the resynthesis performance but to get a controllable synthesizer close to the target sound so that musicians can further utilize it to tweak the sound or do other more interesting things such as sound morphing and sound interpolation. ### Evaluation Metric We use Frechet Audio Distance (FAD) [10] as the evaluation metric to measure the distance between real and generated sound. This method extracts the embedding of the audio using a pre-trained VGG-like model. The distance is calculated as follows: \[FAD=\left\|\mu_{r}-\mu_{g}\right\|^{2}+tr(\Sigma_{r}+\Sigma_{g}-2\sqrt{\Sigma_{ r}\Sigma_{g}}) \tag{9}\] where \(r\) and \(g\) are the real audio and generated audio respectively, \(\mu_{r}\) and \(\mu_{g}\) are the mean vector of embedding and \(\Sigma_{r}\) and \(\Sigma_{g}\) are the covariances of embedding. A smaller Frechet Audio Distance indicates a higher level of similarity between the distributions of real and generated data. In our experiment, we calculate the Frechet Audio Distance between the real data and synthesize validation data as the fitness score during searching. And calculating the FAD between the synthesized test data as the final evaluation result. ### Comparison with Manually-Designed FM Synthesizers In this part, we are interested to know if our method could achieve comparable results to manually designed FM synthesizers. The baseline is DDX7 [13] which the author retrieves on the web manually to find the patch with the most similar sound to the target sound. The patches they find have six oscillators, and we adopt the same oscillator number with them. Since the authors did not open source their evaluation code, we train the model following their method ten times with a random seed and took the best one as their result. In our method, we use a \(3*3\) candidate oscillator search space forcing three oscillators discarded to ensure six oscillators. During the search, we follow the above procedure and set the population size as 1000 and max iterations as 50. Results are shown in Table 2. The Test Data line means the Frechet Audio Distance between the entire real data from the real test data. We can see that our NAS-FM outperforms the hand-designed DDX7 baseline across all musical instrument recordings. The results show that our NAS-FM can search for more comparative FM configurations than manually designed FM synthesizers. ### Ablation Study of Search Space The previous methods of FM parameter estimation [13][14][14] were usually constrained to a specific FM algorithm. However, our approach puts the different FM algorithms in one search space. To demonstrate the significance of the method, we first want to answer whether different timbres have their own FM algorithms that are more suitable for them when operator size is fixed. If the answer is yes, can our method find the algorithm that best fits this timbres? 
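For reference, the FAD fitness in (9) can be computed from two sets of audio embeddings as sketched below; extraction of the VGG-like embeddings is omitted and the function name is an assumption.

```python
import numpy as np
from scipy import linalg

def frechet_audio_distance(emb_real, emb_gen):
    """Eq. (9): FAD between Gaussians fitted to real and generated audio embeddings.
    emb_real, emb_gen: (num_clips, embedding_dim) arrays from a pre-trained VGG-like model."""
    mu_r, mu_g = emb_real.mean(axis=0), emb_gen.mean(axis=0)
    sigma_r = np.cov(emb_real, rowvar=False)
    sigma_g = np.cov(emb_gen, rowvar=False)
    covmean, _ = linalg.sqrtm(sigma_r @ sigma_g, disp=False)
    if np.iscomplexobj(covmean):   # numerical noise can leave tiny imaginary parts
        covmean = covmean.real
    return float(np.sum((mu_r - mu_g) ** 2)
                 + np.trace(sigma_r + sigma_g - 2.0 * covmean))
```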
We conduct our experiments conditioned on three oscillators and enumerate all possible FM algorithms: * Nested FM: a carrier with two nested modulators as shown in Fig. 1(b); * Formant FM: two carriers sharing a modulator as shown in Fig. 1(c); * Double FM: a carrier with two modulators in the same row as shown in Fig. 1(d); * Single FM+: a single oscillator adds a single FM. Single FM is shown in Fig. 1(a); * NAS-FM (Ours): Constructing a \(3*2\) candidate oscillator search space forcing three oscillators discarded to search both FM algorithms and frequency ratios. During the search process, the search space of a fixed FM algorithm equals only contains the frequency ratios of each oscillator. In addition, we set the population size as 30 and max iterations as 20 for each method for a fair comparison. The results are shown in Table 3. \begin{table} \begin{tabular}{l c c c} \hline \hline \multicolumn{4}{c}{Frechet Audio Distance (\(\downarrow\))} \\ \hline Model & Flute & Violin & Trumpet \\ \hline Test Data & 1.180 & 0.308 & 0.554 \\ DDX7 & 7.841 & 3.497 & 4.442 \\ NAS-FM & **7.077** & **3.255** & **3.384** \\ \hline \hline \end{tabular} \end{table} Table 2: FAD of resynthesis results using 6 oscillators \begin{table} \begin{tabular}{l c c c} \hline \hline \multicolumn{4}{c}{Frechet Audio Distance (\(\downarrow\))} \\ \hline Model & Flute & Violin & Trumpet \\ \hline Nested FM & 12.75 & **6.02** & **8.24** \\ Formant FM & 14.48 & 7.56 & 8.68 \\ Double FM & **11.16** & 7.27 & 9.91 \\ Single+ FM & 11.20 & 12.07 & 9.28 \\ NAS-FM & **11.16** & **6.02** & **8.24** \\ \hline \hline \end{tabular} \end{table} Table 3: Ablated of search space using 3 oscillators To answer the first question, we compare the result across all FM algorithms. The Nested FM achieves the best performance on violin and trumpet. However, this algorithm yields the second-worst performance on the Flute. The double FM performs the best on the Flute but worst on the trumpet. Therefore, we find that no fixed FM algorithm achieves the best results for every musical instrument recording. However, our method can always find the best FM configuration for different instruments under the same searching setting. This proves the merit of our well-designed search space. ### Tuning of Learned FM synthesizers In this section, we check the tuning ability of NAS-FM. Two important tasks in sound synthesis are considered: sound morphing and timbre interpolation. #### Sound Morphing Due to the timbre of an instrument being controlled by a set of frequency ratio parameters, we want to show the ability of our NAS-FM tweaks the timbre from a trained synthesizer by adjusting a few parameters. Begin with a synthesizer designed for the trumpet, Fig. 5 (a) is the synthesized recording of the trumpet using our NAS-FM. More specifically, the searched FM algorithm is the same as the 7-th algorithm shown in Fig. 2 without the feedback module. It consists of two parts. The left part is a single FM in which the carrier frequency ratio is three and the modulator is one. The right part includes four oscillators which consist of a double FM with one more nested modulator. The frequency ratios among a carrier, double modulators and a nested modulator are 7,1,2 and 1, respectively. In Fig. 5(b), we modify the second formant position from the 7th harmonic to the 10th harmonic by modifying the frequency ratio of the carrier in the right part from seven to ten. 
In addition, we decrease the second and fourth harmonic by tuning the frequency ratio of the modulator in the left part from one to two, as shown in Fig. 5(c). We can find that due to the adjustment of one parameter in NAS-FM, the entire distribution and details of frequency can be easily changed. #### Timbre Interpolation Another meaningful application is timbre interpolation. Since our approach determines the FM configuration by searching for the Frechet Audio Distance to the target audio clip. We find that the target audio clips can produce new timbre. Therefore, we can change our fitness score to a variant form. For instance, we set the FAD from synthesized audio to violin as \(d_{v}\) and the FAD from synthesized audio to trumpet as \(d_{t}\). Next, we define the novel fitness function as \(d_{v}+d_{a}+|d_{v}-d_{a}|\) aiming to find a synthesizer to generate sound regarded as an intermediate timbre between violin and trumpet. Surprisingly, we use the supernet trained from violin recordings to conduct the evolutionary algorithm and get an interesting result. As shown in Fig 6, we obtain a new instrument recording similar to both violin and trumpet. ## 6 Conclusion We present NAS-FM, a tunable and interpretable sound synthesizer based on frequency modulation. Given a target sound, we prove that our method can automatically design an FM synthesizer instead of spending a huge time designing it manually. Meanwhile, our auto-designed synthesizer can achieve comparable results to the handcrafted one. Furthermore, Our NAS-FM leverages the widely-used FM synthesizer as the main component in our framework. This makes our method can be directly understood by musicians that can be used to create new sounds without extra knowledge of the neural networks. ## Acknowledgments The research was supported by the Theme-based Research Scheme (T45-205/21-N) and Early Career Scheme (ECS-HKUST22201322), Research Grants Council of Hong Kong. Figure 5: Examples of sound morphing. The top figure is the synthesized sound of the trumpet. In the middle and bottom figures, we morph the sound by tuning a certain parameter in our NAS-FM. Figure 6: An example of timbre interpolation. The top and bottom figures show the linear spectrogram of the synthesized sound of the trumpet and violin, respectively. The middle figure shows the interpolation result between the violin and the trumpet.
2304.03494
Exploiting Alternating DVS Shot Noise Event Pair Statistics to Reduce Background Activity
Dynamic Vision Sensors (DVS) record "events" corresponding to pixel-level brightness changes, resulting in data-efficient representation of a dynamic visual scene. As DVS expand into increasingly diverse applications, non-ideal behaviors in their output under extreme sensing conditions are important to consider. Under low illumination (below ~10 lux) their output begins to be dominated by shot noise events (SNEs) which increase the data output and obscure true signal. SNE rates can be controlled to some degree by tuning circuit parameters to reduce sensitivity or temporal response bandwidth at the cost of signal loss. Alternatively, an improved understanding of SNE statistics can be leveraged to develop novel techniques for minimizing uninformative sensor output. We first explain a fundamental observation about sequential pairing of opposite polarity SNEs based on pixel circuit logic and validate our theory using DVS recordings and simulations. Finally, we derive a practical result from this new understanding and demonstrate two novel biasing techniques to reduce SNEs by 50% and 80% respectively while still retaining sensitivity and/or temporal resolution.
Brian McReynolds, Rui Graca, Tobi Delbruck
2023-04-07T06:29:42Z
http://arxiv.org/abs/2304.03494v2
# Exploiting Alternating DVS Shot Noise Event Pair Statistics to Reduce Background Activity Rates ###### Abstract Dynamic Vision Sensors (DVS) record "events" corresponding to pixel-level brightness changes, resulting in data-efficient representation of a dynamic visual scene. As DVS expand into increasingly diverse applications, non-ideal behaviors in their output under extreme sensing conditions are important to consider. Under low illumination (below \(\approx\)10 lux) their output begins to be dominated by shot noise events (SNEs) which increase the data output and obscure true signal. SNE rates can be controlled to some degree by tuning circuit parameters to reduce sensitivity or temporal response bandwidth at the cost of signal loss. Alternatively, an improved understanding of SNE statistics can be leveraged to develop novel techniques for minimizing uninformative sensor output. We first explain a fundamental observation about sequential pairing of opposite polarity SNEs based on pixel circuit logic and validate our theory using DVS recordings and simulations. Finally, we derive a practical result from this new understanding and demonstrate two novel biasing techniques shown to reduce SNEs by \(\mathbf{50\%}\) and \(\mathbf{80\%}\) respectively while still retaining sensitivity and/or temporal resolution. dynamic vision sensor, event camera, DVS, noise statistics ## I Introduction Dynamic Vision Sensors (DVS), or event cameras, efficiently encode dynamic visual information into a sparse stream of ON (increasing brightness) and OFF (decreasing) events with high temporal resolution. This sensing paradigm has several benefits including wide dynamic range, high temporal resolution, and low power consumption. DVS have already proven useful for many applications related to machine vision [1]. Despite these benefits, physical noise sources cause erroneous events even when there are no brightness changes in the scene, and elevated noise rates when illumination is low have thus far hindered widespread adoption in applications requiring high performance in dim lighting. Under low illumination, Shot Noise Event (SNE)s dominate DVS noise [2, 3, 4], and denoising DVS output has been the focus of numerous efforts [5, 6, 7]. Although many custom denoising strategies have been developed, none explicitly consider noise event-pair statistics. Many aspects of DVS noise remain difficult to predict, but recent work has made significant progress toward understanding of the processes and trade-offs that influence SNEs [8, 9, 10]. We expand on these efforts and explain a simple yet previously unreported behavior inherent to the self-timed reset necessary for DVS pixel operation. In Sec. II we describe the basic functionality of the DVS pixel with an emphasis on the circuit behavior that influences noise statistics. Sec. III describes the observation that SNEs tend to occur in opposite polarity (ON/OFF) pairs, and explains this behavior based on pixel reset logic. Sec. IV then demonstrates a practical result of this observation by demonstrating two sensor bias techniques that reduce SNE rates by directly manipulating noise statistics. ## II DVS Pixel Operation The first practical DVS pixel was introduced in [11], and modern event camera pixels are based on the same fundamental stages described in Fig. 1. 
These core components are a logarithmic transimpedance photoreceptor which generates an output voltage, \(V_{pr}\), proportional to log photocurrent, a change amplifier that amplifies signal changes around a fixed reference point, two independent comparators for generating ON and OFF output events when the signal changes by a tunable threshold value, and a circuit to reset the change amplifier after each event to allow the pixel to respond to changes around a new reference level. In most cases, this new reference is approximately the signal level that generated the previous event. Readout circuits in the focal plane periphery record and timestamp the resulting sequence of ON and OFF events to encode pixel-level brightness changes. Pixel behavior is refined by adjusting programmable biases (highlighted in red and depicted as current sources in Fig. 1), allowing the user to tune performance for a variety of sensing tasks. \(I_{pr}\) and \(I_{sf}\) adjust the temporal response of the photoreceptor, which is also limited by the background photocurrent. The effects of these two biases on SNE rates are extremely complex, and thoroughly described in [12]. The next set of biases define the independent ON and OFF thresholds, \(\theta_{ON}\) and \(\theta_{OFF}\), which are proportional to \(\log(\frac{I_{pa}}{I_{d}})\) and \(\log(\frac{I_{d}}{I_{sf}})\) respectively. After each event, \(M_{r}\) shorts the input and output of the change amplifier to prevent subsequent events during a refractory period or "dead-time", and opens again as the reset node rises. \(I_{refr}\) controls the rate at which the reset node charges, and can be tuned to increase or decrease the maximum firing rate for individual pixels. Composite effects of these biases are further detailed in [10]. ## III Shot Noise Event Pairs To better understand the root causes of DVS SNEs, examining the scatter plot of ON and OFF noise events shown in Fig. 2 reveal ON and OFF events are nearly balanced in each pixel. At first glance, this result is counter-intuitive given the well-known mismatch in independent ON and OFF threshold levels [4, 11]. Specifically, noise rates are known to increase dramatically with sensitivity [8]. Because \(\theta_{ON}\) and \(\theta_{OFF}\) are independent, it is extremely unlikely that a pixel with a low \(\theta_{ON}\) will also have an extremely low \(\theta_{OFF}\). In Fig. 2, the 99th percentile is depicted by the outer dashed red arc, and ON and OFF SNE rates of each type are still roughly balanced for pixels outside this region, indicating a dependency that is not explained by prior reasoning. To further explore and illustrate this phenomenon, we calculated the ISI between consecutive event-pairs in each pixel and specifically examined the polarities of the pairs. Examining the ISI distribution in Fig. 2B reveals that over 90% of sequential noise event pairs are of opposite polarity **and** these pairs typically occur at shorter time intervals (\(\approx 1/10\)). Both of these observations about SNE pairs are in contrast with previous assumptions, which predict noise events should be independent of pixel history. Fig. 3 explains how this behavior is a direct result of the pixel's self-timed reset. Events are generated when the signal deviates from a memorized reference level by more than an ON (\(\theta_{ON}\)) or OFF (\(\theta_{OFF}\)) threshold. Considering a filtered white gaussian noise pattern, each event resets the pixel's reference to a level offset from the mean noise value. 
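This reset-induced pairing can be reproduced with a few lines of simulation: white Gaussian noise is low-pass filtered, compared against mismatched ON and OFF thresholds around a reference that is reset to the signal level after each event, and the polarities of consecutive events are tallied. All constants below are arbitrary illustrative values (the refractory period is ignored here), not fitted sensor parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
fs, f3db, dur = 100e3, 1e3, 2.0        # sample rate, photoreceptor corner frequency, seconds
theta_on, theta_off = 0.25, 0.30       # deliberately mismatched ON/OFF thresholds

# White Gaussian noise low-pass filtered by a single-pole IIR (photoreceptor + source follower).
alpha = 1.0 - np.exp(-2 * np.pi * f3db / fs)
white = rng.normal(0.0, 1.0, int(fs * dur))
v = np.empty_like(white)
acc = 0.0
for i, w in enumerate(white):
    acc += alpha * (w - acc)
    v[i] = acc

# Event generation: threshold crossing w.r.t. a reference that is reset after each event.
ref, polarities = v[0], []
for x in v:
    if x > ref + theta_on:
        polarities.append(+1); ref = x
    elif x < ref - theta_off:
        polarities.append(-1); ref = x

pairs = np.array(polarities[:-1]) * np.array(polarities[1:])
print("fraction of opposite-polarity consecutive pairs:", np.mean(pairs < 0))
```

With these settings the printed fraction should come out well above one half, i.e., opposite-polarity pairs dominate, in line with the recorded statistics in Fig. 2; the exact fraction depends on the chosen thresholds and bandwidth.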
Since gaussian noise tends to return to its mean value, this new reference increases the probability of an event of opposite polarity happening within relatively short time. This hypothesis is upheld in 4, which demonstrates how improving the v2e DVS simulator [2]1 by injecting white noise prior to the event generation block accurately models observed noise statistics. Footnote 1: v2e on github - see -photoreceptor_noise option ## IV Bias Adjustments for SNE Rate Reduction When operating in dim conditions, noise rates are typically managed by reducing sensitivity or photoreceptor bandwidth, but true signal is suppressed as changes too small or fast for the selected biases are missed completely. If an application Fig. 1: Typical DVS pixel circuit schematic. The active logarithmic photoreceptor front-end (**A-B**) drives a cap-feedback change amplifier (**C**) with output \(V_{diff}\). When \(V_{diff}\) deviates by either an ON or OFF threshold, comparators (**D**) report an event, and after a finite refractory period (**E**) the change amplifier is reset. **F:** After each reset, the pixel again responds to signal changes around a new reference level. This reset logic explains the key observation of this paper. Fig. 2: Recorded DAVIS346 SNEs under 10 mlux illumination with high bandwidth biases. **A:** Per pixel ON and OFF SNE rates are nearly balanced, even for pixels with an abnormally high noise rates. **B:** Inter-Spike Interval (ISI) histograms reveal that over 90% of pixel SNE pairs are opposite polarity and occur at shorter time intervals than like polarity pairs. requires detecting fast moving or dim objects/features, moderately elevated noise rates can be accepted and aggressive denoising applied after reading events off-chip at the cost of increased latency, power, computation, and data bandwidth. Alternatively, reasoning from Fig. 3 reveals two novel biasing strategies to reduce background noise rates while still allowing pixels to be biased for high sensitivity and bandwidth. The first strategy is to increase the refractory period. Fig. 5 demonstrates that this method decouples the reset level from the signal level that generated the previous noise event and reduces overall noise rates. Fig. 5A shows more than 50% reduction in noise rates and Fig. 5B demonstrates decoupling of ON/OFF pairs with a longer refractory period. Simulations suggest that in order for this decoupling to occur, the refractory period must be \(\geq\frac{1}{2\pi f_{3dB}}\), where \(f_{3dB}\) is the low-pass corner frequency of the photoreceptor/source follower combination. The second technique is deliberately applying a large imbalance in ON and OFF thresholds to force the reference level to settle near the extreme of the noise distribution corresponding to the more sensitive threshold. This large imbalance reduces the probability that a subsequent noise event will occur to reset the reference, thus breaking the event-pair cycle. In practice, Fig. 6 shows that this method works well when ON is much more sensitive than OFF, and Fig. 6B demonstrates up to an 80% reduction in noise event rates, even with an expected increase in sensitivity to ON changes. ## V Conclusion SNE rate is an important consideration for expanding the utility of DVS into diverse applications in challenging lighting conditions. In this paper, we identified a key observation about how SNEs tend to occur in pairs of opposite polarity, and explained this phenomenon based on pixel architecture and logic. 
Leaning on this explanation, we propose and demonstrate two novel bias techniques for reducing SNE rates. Limiting noise rates in dim lighting conditions improves DVS Signal to Noise Ratio (SNR), and the techniques we describe facilitate direct manipulation of noise statistics. Further exploration of the benefits of these techniques should be explored in task specific scenarios. After achieving desired SNR performance, a deeper understanding of the resulting noise statistics can Fig. 4: Comparison of old and new v2e [2] noise models. **A:** The previous model did not accurately capture noise statistics. **B:** Adding Gaussian white noise to a DC signal and allowing the event generation model to generate noise events produces realistic DVS noise statistics. Fig. 3: Explanation of noise event pairing. Each time window (labeled 1-10) terminates with an event when the noisy signal crosses either the **ON** or **OFF** threshold. Shortly after each event (dependent on refractory period), the reference level resets near the signal level that generated the previous event, increasing the probability that an opposite polarity noise event occurs. In the example, 8 of 10 event pairs are opposite polarity and occur on shorter time scales than like-polarity pairs. aid in more efficient and effective denoising strategies, and inform improvements to already effective machine learning-based denoisers such as [6].
2303.08566
Sensitivity-Aware Visual Parameter-Efficient Fine-Tuning
Visual Parameter-Efficient Fine-Tuning (PEFT) has become a powerful alternative for full fine-tuning so as to adapt pre-trained vision models to downstream tasks, which only tunes a small number of parameters while freezing the vast majority ones to ease storage burden and optimization difficulty. However, existing PEFT methods introduce trainable parameters to the same positions across different tasks depending solely on human heuristics and neglect the domain gaps. To this end, we study where to introduce and how to allocate trainable parameters by proposing a novel Sensitivity-aware visual Parameter-efficient fine-Tuning (SPT) scheme, which adaptively allocates trainable parameters to task-specific important positions given a desired tunable parameter budget. Specifically, our SPT first quickly identifies the sensitive parameters that require tuning for a given task in a data-dependent way. Next, our SPT further boosts the representational capability for the weight matrices whose number of sensitive parameters exceeds a pre-defined threshold by utilizing existing structured tuning methods, e.g., LoRA [23] or Adapter [22], to replace directly tuning the selected sensitive parameters (unstructured tuning) under the budget. Extensive experiments on a wide range of downstream recognition tasks show that our SPT is complementary to the existing PEFT methods and largely boosts their performance, e.g., SPT improves Adapter with supervised pre-trained ViT-B/16 backbone by 4.2% and 1.4% mean Top-1 accuracy, reaching SOTA performance on FGVC and VTAB-1k benchmarks, respectively. Source code is at https://github.com/ziplab/SPT
Haoyu He, Jianfei Cai, Jing Zhang, Dacheng Tao, Bohan Zhuang
2023-03-15T12:34:24Z
http://arxiv.org/abs/2303.08566v2
# Sensitivity-Aware Visual Parameter-Efficient Tuning ###### Abstract Visual Parameter-Efficient Tuning (VPET) has become a powerful alternative for full fine-tuning so as to adapt pre-trained vision models to downstream tasks, which only tunes a small number of parameters while freezing the vast majority ones to ease storage burden and optimization difficulty. However, existing VPET methods introduce trainable parameters to the same positions across different tasks depending solely on human heuristics and neglect the domain gaps. To this end, we study where to introduce and how to allocate trainable parameters by proposing a novel **S**ensitivity-aware visual **P**arameter-efficient **T**uning (SPT) scheme, which adaptively allocates trainable parameters to task-specific important positions given a desired tunable parameter budget. Specifically, our SPT first quickly identifies the sensitive parameters that require tuning for a given task in a data-dependent way. Next, our SPT further boosts the representational capability for the weight matrices whose number of sensitive parameters exceeds a pre-defined threshold by utilizing any of the existing structured tuning methods, e.g., LoRA [27] or Adapter [26], to replace directly tuning the selected sensitive parameters (unstructured tuning) under the budget. Extensive experiments on a wide range of downstream recognition tasks show that our SPT is complementary to the existing VPET methods and largely boosts their performance, e.g., SPT improves Adapter with supervised pre-trained ViT-B/16 backbone by 4.2% and 1.4% mean Top-1 accuracy, reaching SOTA performance on FGVC and VTAB-1k benchmarks, respectively. Source code is at [https://github.com/ziplab/SPT](https://github.com/ziplab/SPT). ## 1 Introduction The pre-training and fine-tuning paradigm has underpinned the most recent breakthroughs in vision, yielding stunning empirical performance on a series of tasks such as segmentation [12, 49] and detection [24, 10]. Transformer [58] has been widely adopted as the standard architecture for pre-trained vision models, with representatives including CLIP [47], MAE [23], BEiT [2], _etc_. To effectively adapt the pre-trained representations to the downstream tasks, the de-facto choice is full fine-tuning, which initializes the model with the pre-trained weights and tunes all the parameters. However, vanilla full fine-tuning needs to store a separate instance of parameters for each task and each deployment scenario. It can be extremely storage-intensive as the storage cost grows linearly with the number of possible cases, considering there are vast varieties of downstream tasks and dynamic deployment environ Figure 1: (a) Existing VPET methods, such as Adapter [26] introduce trainable parameters to the same positions for all downstream tasks. However, these methods design task-agnostic positions to employ trainable parameters relying on heuristics and neglect consideration of the distinct domain gaps and characteristics for the downstream tasks. (b) Our Sensitivity-aware visual Parameter-efficient Tuning (SPT) introduces trainable parameters to the task-specific important positions and allocates them with both unstructured and structured tuning granularities, simultaneously. For structured tuning, SPT can exploit any existing structured tuning methods, such as LoRA [27] or Adapter [26]. Red lines and blocks represent trainable parameters and modules, while blue lines represent frozen parameters. 
ments, especially when deploying the large vision models [16, 38, 63] to mobile systems. For example, even storing a single large pre-trained ViT-H [23] model on a local disk requires at least 2.3GB, while the Top-10 U.S. apps collectively required only 2.2GB in May 2021.1 Footnote 1: [https://sensortower.com/blog/ios-app-size-growth-2021](https://sensortower.com/blog/ios-app-size-growth-2021) Notably, an emerging solution is to replace vanilla finetuning with Visual Parameter-Efficient Tuning (VPET) [28, 13, 70, 29], which only tunes a small number of trainable parameters while freezing the vast majority of parameters that are shared by multiple tasks. As VPET approaches exhibit less than 1% of trainable parameters, the storage burden is largely alleviated. Another attractive property of VPET is that tuning fewer parameters eases the optimization difficulty and mitigates the overfitting issue when adapting large pre-trained models on the target dataset, thereby achieving comparable or even better performance than vanilla finetuning [28]. Although promising, the existing VPET approaches introduce trainable parameters to the same positions for all downstream tasks, relying on human heuristics and neglecting the task-specific domain gaps and characteristics, which limits their performance. For instance, in a task-agnostic manner, Prompt Tuning-deep [28] and Adapter [26] respectively add trainable parameters to multi-head self-attention (MSA) and feed-forward network (FFN) layers for all distinct tasks as depicted in Figure 1 (a). To address this fundamental challenge, we explore _where to introduce_ and _how to allocate_ trainable parameters under a desired parameter budget by presenting a novel **S**ensitivity-aware visual **P**arameter-efficient **T**uning (SPT) scheme that identifies the _task-specific important positions_ to adaptively allocate trainable parameters. Since the pre-trained weights at distinct positions have varying contributions for different downstream tasks [65, 32, 42], we first propose a new criterion to quickly identify the task-specific sensitive parameters that require tuning in a data-dependent way. Inspired by model pruning metrics [51, 40, 4, 5], we propose to measure the parameter sensitivity with the loss reduction when being tuned, which can be approximated by a first-order Taylor expansion derived within a single forward and backward pass ahead of fine-tuning in one shot. Our sensitivity criterion is simple and effective, and can quickly identify the task-specific important positions to introduce trainable parameters for any backbone. For instance, calculating the sensitivity for the ViT-B/16 backbone takes only 5.5 seconds with a single GPU on any of the VTAB-1k datasets. With our criterion, we empirically observe that the proportions of the sensitive parameters for each block indeed vary markedly across different tasks in Section 4.4. To allocate the trainable parameters under a desired trainable parameter budget, an intuitive solution is to directly tune the most sensitive weight connections, which we name unstructured tuning. Despite its simplicity and flexibility, unstructured tuning adjusts only a few parameters, which limits its representational capability and makes it challenging to bridge the domain gap.
To this end, we propose to further incorporate structured tuning to replace unstructured tuning at the sensitive weight matrices whose numbers of sensitive parameters exceed a pre-defined threshold to improve the representational capability under a similar parameter budget. Structured tuning can be implemented by any parameter-efficient structured tuning methods [27, 13, 29, 28] that directly adjust the hidden representations, _e.g_., inserting an adapter module sequentially after the sensitive weight matrices. Therefore, our SPT adaptively combines both unstructured and structured tuning granularity and allocates trainable parameters with high flexibility and representational capability for each distinct downstream task. This paper has the following key contributions. 1) We make the pioneering exploration to identify the task-specific important positions under the VPET setting, which is fast, effective, versatile to be applied to various backbones with different pre-training strategies, and orthogonal to the existing VPET methods. 2) Based on the sensitivity criterion, we propose a trainable parameter allocation strategy that adaptively combines both unstructured and structured tuning under a desired parameter budget to achieve high flexibility, large capacity, and favorable trade-off between parameter efficiency and accuracy. 3) Extensive experiments on a total of 24 downstream recognition tasks with both plain and hierarchical vision Transformer backbones under supervised and self-supervised pre-trainings show that our SPT is complementary to the existing VPET methods and boosts their performance by large margins. For instance, SPT improves Adapter [26] by 4.2% mean Top-1 accuracy, outperforming the SOTA VPET methods on the FGVC benchmark. ## 2 Related Work **Parameter-efficient tuning.** Full fine-tuning is the most predominant approach when adapting a large-scale pre-trained model to downstream tasks, where the model is initialized from the pre-trained weights with all parameters trainable. Yet, when a model becomes larger, parameter-efficient tuning (PET) [33, 34] is highly desirable, which tunes only a tiny portion of parameters to alleviate the storage burden. The general PET approaches can be categorized into addition-based PET methods and reparameterization-based PET methods. _Addition-based PET_ attaches additional trainable parameters to the backbone and only tunes these parameters. Apart from Prompt tuning [28] and Adapter [26], recent addition-based methods study connecting or combining existing VPET methods. For instance, He _et al_. [22] connect Prompt tuning and Adapter and provide a unified view that all VPET approaches share the same design to ad just the hidden representations. Zhang [70] search for the optimal configurations to combine multiple VPET approaches following once-for-all scheme [7, 61]. Since the additional parameters require extra computations compared to full fine-tuning, a few recent works [53, 55] design specific architectures to avoid storing the intermediate activations, thereby alleviating the fine-tuning memory cost. However, it is noteworthy that enhancing training efficiency is not the primary objective of our work. _Reparameterization-based PET_ aims to avoid extra computational costs by tuning parameters that are inherently in or can be reparameterized into the backbone during inference. 
Prior works select the parameters that are inherently in the backbone, including the bias terms [66], the last several layers [65, 6], and weight connections [18, 71]. To reparameterize new parameters into the backbone [21, 35], the representative work LoRA [27] optimizes two low-rank matrices which can be further merged into the weight matrices. In contrast to the aforementioned works, we argue the importance of tuning parameters at task-specific important positions and quickly identify them with our proposed parameter sensitivity criterion before tuning, which is complementary to and provides valuable guidance for the existing VPET methods. Moreover, our SPT can also be inference-efficient when implementing structured tuning with any reparameterization-based structured tuning method. Recently, SSF [35] was proposed to introduce trainable scaling and shifting parameters that can be absorbed into the previous linear layers. However, it cannot scale to higher trainable parameter budgets and requires a complex and time-consuming hyper-parameter search for learning rate, weight decay, and drop-path rate on each individual dataset, and thus is not directly comparable to our method. **Task-specific transfer learning.** The effectiveness of transferring pre-trained models to downstream tasks strongly depends on the relationship between the source and target tasks [50, 60, 32, 45]. This has motivated the community to explore the optimal pre-training data [15, 64], model [54, 43], and weights [20, 62] for the target task. To seek suitable _task-specific pre-training data_, Cui _et al_. [15] select the source domain data from the top-k most similar classes measured by Earth Mover's Distance; Yoon _et al_. [64] weight each class in the source domain with reinforcement learning; and Puigcerver _et al_. [46] first train a diverse set of experts and then select the most relevant expert for each target task. Another line of work selects a suitable _pre-trained model for the target task_ ahead of fine-tuning by measuring the transferability of pre-trained models to the target domain with interclass covariance between the source data and target classes [3] or conditional cross-entropy [54] between the source and target labels. Considering that the transferability of the feature representations at distinct layers of the same pre-trained model differs [65, 42], recent works [19, 52] endeavour to _transfer task-specific weights_ by freezing some pre-trained weights and fine-tuning the rest. For example, the task-specific fine-tuned weights are selected by learning a policy network with Gumbel-Softmax [20], optimizing a sparse mask with \(L_{0}\) norm [18], and learning binary gates for each parameter [71]. Our SPT also adaptively selects task-specific parameters. In contrast to the previous work, we 1) derive task-specific important positions prior to fine-tuning with only a single forward and backward pass, which is computationally efficient; and 2) mask the gradients for insensitive parameters in unstructured tuning with fixed binary masks, thereby having a more affordable fine-tuning memory footprint than optimizing learnable binary masks as in [18, 71]. Moreover, ours is pioneering work that adaptively allocates task-specific trainable parameters with both fine-grained unstructured and coarse-grained structured tuning granularities to achieve both high flexibility and representational capability. ## 3 Method Our Sensitivity-aware visual Parameter-efficient Tuning consists of two stages.
In the first stage, SPT measures the task-specific sensitivity for the pre-trained parameters (Section 3.1). Based on the parameter sensitivity and a given parameter budget, SPT then adaptively allocates trainable parameters to task-specific important positions (Section 3.2). ### Task-specific Parameter Sensitivity Recent researches have observed that pre-trained backbone parameters exhibit varying feature patterns [48, 41] and criticality [69, 11] at distinct positions. Moreover, when transferred to downstream tasks, their efficacy varies depending on how much pre-trained features are reused and how well they adapt to the specific domain gap [65, 32, 42]. Motivated by these observations, we argue that not all parameters contribute equally to the performance across different tasks in VPET and propose a new criterion to measure the sensitivity of the parameters in the pre-trained backbone for a given task. Specifically, given the training dataset \(\mathcal{D}_{t}\) for the \(t\)-th task and the pre-trained model weights \(\mathbf{w}=\{w_{1},w_{2},\ldots,w_{N}\}\in\mathbb{R}^{N}\) where \(N\) is the total number of parameters, the objective for the task is to minimize the empirical risk: \(\min_{\mathbf{w}}E(\mathcal{D}_{t},\mathbf{w})\). We denote the parameter sensitivity set as \(\mathcal{S}=\{s_{1},\ldots,s_{N}\}\) and the sensitivity \(s_{n}\) for parameter \(w_{n}\) is measured by the empirical risk difference when tuning it: \[s_{n}=E(\mathcal{D}_{t},\mathbf{w})-E(\mathcal{D}_{t},\mathbf{w}\mid w_{n}=w_{n}^{*}), \tag{1}\] where \(w_{n}^{*}=\underset{w_{n}}{\operatorname{argmin}}(E(\mathcal{D}_{t},\mathbf{w}))\). We can reparameterize the tuned parameters as \(w_{n}^{*}=w_{n}+\Delta_{w_{n}}\), where \(\Delta_{w_{n}}\) denotes the update for \(w_{n}\) after tuning. Here we individually measure the sensitivity of each parameter, which is reasonable given that most of the parameters are frozen during fine-tuning in VPET. However, it is still computationally intensive to compute Eq. (1) for two reasons. Firstly, getting the empirical risk for \(N\) parameters requires forwarding the entire network \(N\) times, which is time-consuming. Secondly, it is challenging to derive \(\Delta_{w_{n}}\), as we have to tune each individual \(w_{n}\) until convergence. To overcome the first barrier, we simplify the empirical loss by approximating \(s_{n}\) in the vicinity of \(\mathbf{w}\) by its first-order Taylor expansion: \[s_{n}^{(1)}=-g_{n}\Delta_{w_{n}}, \tag{2}\] where the gradients \(\mathbf{g}=\partial E/\partial\mathbf{w}\), and \(g_{n}\) is the gradient of the \(n\)-th element of \(\mathbf{g}\). To address the second barrier, following [37, 9], we take the one-step unrolled weight as the surrogate for \(w_{n}^{*}\) and approximate \(\Delta_{w_{n}}\) in Eq. (2) with a single step of gradient descent. We can accordingly get \(s_{n}^{(1)}\approx g_{n}^{2}\epsilon\), where \(\epsilon\) is the learning rate. Since \(\epsilon\) is same for all parameters, we can eliminate it when comparing the sensitivity with the other parameters and finally get \[s_{n}^{(1)}\approx g_{n}^{2}. \tag{3}\] Therefore, the sensitivity of a parameter can be efficiently measured by its potential to reduce the loss on the target domain. Note that although our criterion draws inspiration from pruning work [40], it is distinct from it. 
[40] measures the parameter importance by the squared change in loss when removing them, _i.e_., \(\left(E(\mathcal{D}_{t},\mathbf{w})-E(\mathcal{D}_{t},\mathbf{w}\mid w_{n}=0)\right)^ {2}\) and finally derives the parameter importance by \(\left(g_{n}w_{n}\right)^{2}\), which is different from our formulations in Eqs. (1) and (3). In practice, we accumulate \(\mathcal{S}\) from a total number of \(C\) training samples ahead of fine-tuning to generate accurate sensitivity as shown in Algorithm 1, where \(C\) is a pre-defined hyper-parameter. In Section 4.3, we show that employing only 400 training samples is sufficient for getting reasonable parameter sensitivity, which requires only 5.5 seconds with a single GPU for any VTAB-1k dataset with ViT-B/16 backbone [16]. ### Adaptive Trainable Parameters Allocation Our next step is to allocate trainable parameters based on the obtained parameter sensitivity set \(\mathcal{S}\) and a desired parameter budget \(\tau\). A straightforward solution is to directly tune the top-\(\tau\) most sensitive unstructured connections (parameters), which we name unstructured tuning. Specifically, we rank the parameters by their sensitivity scores in \(\mathcal{S}\) and directly select the top-\(\tau\) weight connections to form the sensitive weight connection set \(\mathcal{T}\). Then, for any weight matrix \(\mathbf{W}\in\mathbb{R}^{d_{\mathrm{in}}\times d_{\mathrm{out}}}\), we can get a binary mask \(\mathbf{M}\in\mathbb{R}^{d_{\mathrm{in}}\times d_{\mathrm{out}}}\) computed by \[\mathbf{M}^{j}=\left\{\begin{array}{ll}1&\mathbf{W}^{j}\in\mathcal{T}\\ 0&\mathbf{W}^{j}\notin\mathcal{T}\end{array}\right., \tag{4}\] where \(\mathbf{W}^{j}\) and \(\mathbf{M}^{j}\) are the \(j\)-th element in \(\mathbf{W}\) and \(\mathbf{M}\), respectively. Accordingly, we can train the sensitive parameters by gradient descent and the updated weight matrix can be formulated as \(\mathbf{W}^{\prime}\leftarrow\mathbf{W}-\epsilon\mathbf{g_{W}}\odot\mathbf{M}\), where \(\mathbf{g_{W}}\) is the gradient for \(\mathbf{W}\). In this way, only the sensitive parameters are updated while the remaining parameters are frozen. However, considering VPET approaches generally limit the proportion of trainable parameters to less than 1%, tuning only a small number of unstructured weight connections might not have enough representational capability to handle the downstream datasets with large domain gaps from the source pre-training data. Therefore, to improve the representational capability, we propose to replace unstructured tuning with structured tuning at the sensitive weight matrices that have a high number of sensitive parameters. To preserve the parameter budget, we can implement structured tuning with any existing efficient structured tuning VPET method [27, 13, 26, 29] that learns to directly adjust the hidden representation. We depict an overview of our trainable parameter allocation strategy in Figure 2. For example, we can employ the low-rank reparameterization trick LoRA [27] to the sensitive weight matrices and their updates can be formulated as \(\mathbf{W}_{\mathrm{s}}^{\prime}\leftarrow\mathbf{W}+\mathbf{W}_{\mathrm{down}}\mathbf{W}_{ \mathrm{up}}\), where \(\mathbf{W}_{\mathrm{down}}\in\mathbb{R}^{d_{\mathrm{in}}\times r}\) and \(\mathbf{W}_{\mathrm{up}}\in\mathbb{R}^{r\times d_{\mathrm{out}}}\) are two learnable low-rank matrices to approximate the update of \(\mathbf{W}\). The rank \(r\) is a hyper-parameter and \(r\ll\min(d_{\mathrm{in}},d_{\mathrm{out}})\). 
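As a concrete illustration of Eqs. (3) and (4), the sketch below (a simplified approximation, not the authors' released implementation; the model, loss function, and data loader are placeholders, and per-batch gradients are accumulated instead of strictly per-sample ones) scores parameter sensitivity by accumulated squared gradients, selects the top-\(\tau\) connections globally, and applies the masked gradient update used in unstructured tuning.

```python
import torch

def parameter_sensitivity(model, loss_fn, loader, num_samples=400, device="cuda"):
    """Approximate s_n ~ g_n^2 (Eq. 3), accumulated over a few training samples."""
    model.to(device).train()
    sens = {name: torch.zeros_like(p) for name, p in model.named_parameters()}
    seen = 0
    for x, y in loader:
        model.zero_grad()
        loss_fn(model(x.to(device)), y.to(device)).backward()
        for name, p in model.named_parameters():
            if p.grad is not None:
                sens[name] += p.grad.detach() ** 2
        seen += x.size(0)
        if seen >= num_samples:
            break
    return sens

def build_masks(sens, budget):
    """Binary masks keeping the `budget` most sensitive connections overall (Eq. 4)."""
    flat = torch.cat([s.flatten() for s in sens.values()])
    threshold = torch.topk(flat, budget).values.min()
    return {name: (s >= threshold).float() for name, s in sens.items()}

def masked_step(model, masks, lr=1e-3):
    """Unstructured tuning: gradient descent restricted to the sensitive connections."""
    with torch.no_grad():
        for name, p in model.named_parameters():
            if p.grad is not None and name in masks:
                p -= lr * p.grad * masks[name]
```

A weight matrix whose mask contains many ones (at least \(\sigma_{\mathrm{opt}}\) of them) would instead be handed to a structured method such as LoRA or an adapter, which is formalized next.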
The updated weight matrix \(\mathbf{W}^{\prime}\) with SPT can be formulated as: \[\mathbf{W}^{\prime}=\left\{\begin{array}{ll}\mathbf{W}_{\mathrm{s}}^{\prime}&\text{if }\sum_{j=1}^{d_{\mathrm{in}}\times d_{\mathrm{out}}}\mathbf{M}^{j}\geq\sigma_{\mathrm{opt}},\\ \mathbf{W}-\epsilon\mathbf{g_{W}}\odot\mathbf{M}&\text{otherwise}.\end{array}\right. \tag{5}\] In this way, we perform structured tuning on \(\mathbf{W}\) when its number of sensitive parameters exceeds \(\sigma_{\mathrm{opt}}\), whose value depends on the pre-defined type of structured tuning method. For example, since implementing structured tuning with LoRA introduces \((d_{\mathrm{in}}+d_{\mathrm{out}})\times r\) trainable parameters for each sensitive weight matrix (from \(\mathbf{W}_{\mathrm{down}}\) and \(\mathbf{W}_{\mathrm{up}}\)), we set \(\sigma_{\mathrm{opt}}=(d_{\mathrm{in}}+d_{\mathrm{out}})\times r\) to ensure that the number of trainable parameters introduced by structured tuning is always equal to or lower than the number of sensitive parameters it replaces. In this way, our SPT adaptively incorporates both structured and unstructured tuning granularities to enable higher flexibility and stronger representational power, simultaneously. In Section 4.3, we show that structured tuning is important for the downstream tasks with larger domain gaps and that both unstructured and structured tuning contribute clearly to the superior performance of our SPT. ## 4 Experiments ### Experimental Setup **Datasets and metrics.** We evaluate our SPT on a total of \(24\) downstream tasks in two groups following [28]. 1) FGVC is a benchmark for fine-grained visual classification, including the CUB-200-2011 [59], NABirds [57], Oxford Flowers [44], Stanford Cars [17], and Stanford Dogs [30] datasets. Each FGVC dataset contains between 55 and 200 classes and a few thousand images for train, validation, and test. We follow the validation splits in [28] if the validation set is unavailable. 2) VTAB-1k [67] is a large-scale transfer learning benchmark consisting of a collection of 19 visual classification tasks. VTAB-1k can further be divided into three groups, including Natural tasks with natural images, Specialized tasks with images captured by specialized equipment, e.g., medical images, and Structured tasks with images mostly generated from synthetic environments. Each of the VTAB-1k datasets has only 800 training and 200 validation samples, while the test set sizes vary. We use top-1 accuracy (%) averaged within each group as our main metric following [28]. **Pre-trained backbones.** We conduct experiments on the plain vision Transformer backbone ViT-B/16 [16] that is pre-trained on ImageNet [31] with different pre-training strategies following [28], including supervised pre-training and self-supervised pre-training with MAE [23] and MoCo v3 [14]. We also conduct experiments on the representative hierarchical vision Transformer backbone Swin-B [38] under supervised pre-training. **Contenders.** We categorize the baseline methods into addition-based and reparameterization-based VPET methods as introduced in Section 2. Unless specified, all baseline methods keep the backbone frozen. Addition-based methods require extra computations during inference, including Mlp-\(k\), Prompt-shallow [28], Prompt-deep [28], Adapter-\(k\)[26], AdaptFormer [13], and NOAH [70]. Reparameterization-based methods have no additional computational overhead during inference, including Linear, Partial-\(k\), Bias [66], and LoRA-\(k\)[27]. Here \(k\) denotes the bottleneck dimension in Adapter-\(k\) and LoRA-\(k\).
We also compare with full fine-tuning which is denoted by Full. We introduce the details of these methods in appendix. We also introduce two variants of our SPT: addition-based SPT-Adapter and reparameterization-based SPT-LoRA. SPT-Adapter directly adjusts the hidden representations that are computed by sensitive weight matrices following [26], while SPT-LoRA approximates updating the sensitive weight matrices following [27]. For the two variants, we follow the exact weight initializations that are described in [27] and follow [70] to set the bottleneck dimension as 8. **Implementation details.** Following [70], we use the AdamW optimizer [39] with cosine learning rate decay and set the batch size, learning rate, and weight decay as 64, \(1\times 10^{-3}\), and \(1\times 10^{-4}\), respectively. We also follow [70] for the standard data augmentation pipeline. We set the number of training samples \(C\) used to calculate our parameter sensitivities in Algorithm 1 to be 400 for VTAB-1k and 800 for FGVC benchmarks. ### Main Results We evaluate the effectiveness of our method by comparing it with the baseline methods under vision Transformer backbones with various pre-training strategies. First, _our proposed_ SPT-Adapter _and_ SPT-LoRA _achieve the best performance under different trainable parameter budgets_ with supervised pre-trained ViT-B/16 backbone, as shown in Table 1 and Figure 3 (a). For instance, SPT-Adapter outperforms the SOTA method NOAH by a clear margin of 0.9% mean top-1 accuracy over the 19 VTAB-1k datasets with fewer trainable parameters. We speculate that our SPT variants allocate trainable parameters at task-specific positions compared to the heuristically selected positions in the baseline methods, which contributes to our superior performance. We also Figure 2: Overview of our trainable parameter allocation strategy. With the parameter sensitivity set \(\mathcal{S}\), we first get the top-\(\tau\) sensitive parameters. Instead of directly tuning these sensitive parameters, we also boost the representational capability by replacing unstructured tuning with structured tuning at sensitive weight matrices that have a large number of sensitive parameters, which can be implemented by any existing structured tuning method, _e.g_., LoRA [27] and Adapter [26]. Red lines and blocks represent trainable parameters and modules, while blue lines represent frozen parameters. observe that our SPT-Adapter and SPT-LoRA achieve large performance gains over Adapter and LoRA variants, respectively. For example, SPT-Adapter and SPT-LoRA with 0.41% trainable parameters respectively improve Adapter-8 and LoRA-8 significantly by 4.0% and 3.3% mean accuracy on the FGVC benchmark. This suggests that identifying task-specific important positions and combining both unstructured and structured tuning granularities with SPT are complementary to the existing VPET methods and boost their performance. 
Second, SPT _variants outperform baseline methods and full fine-tuning by significant margins with the self \begin{table} \begin{tabular}{l|c|c c|c c c c c c} \hline \hline **ViT-B/16** & \multicolumn{2}{c|}{**Total**} & \multicolumn{4}{c|}{**VTAB-1k MAE**} & \multicolumn{4}{c}{**VTAB-1k**} & \multicolumn{4}{c}{**MoCo v3**} \\ **(85.8M)** & **Params** & **Tuned / Total** & **Natural** & **Spetalized** & **Structured** & **Mean Acc.** & **Tuned / Total** & **Natural** & **Spetalized** & **Structured** & **Mean Acc.** \\ \hline \hline Full & 38.02\(\times\) & 100\% & 59.3 & 79.7 & 53.8 & 64.3 & 100\% & 72.0 & 84.7 & 42.0 & 69.6 \\ \hline \multicolumn{8}{c}{**Addition-based methods**} \\ \hline Adapter-8 & 1.08\(\times\) & 0.23\% & 57.2 & 78.4 & 54.7 & 63.4 & 0.23\% & 27.6 & 70.9 & 48.4 & 49.0 \\ Adapter-32 & 1.28\(\times\) & 0.95\% & 55.3 & 78.8 & 53.3 & 62.5 & 0.99\% & 74.2 & 82.7 & 47.7 & 68.2 \\ Prompt-shallow & 1.02\(\times\) & 0.12\% & 40.0 & 69.7 & 27.5 & 45.7 & 0.12\% & 67.3 & 82.3 & 37.6 & 62.4 \\ Prompt-deep & 1.05\(\times\) & 0.23\% & 36.0 & 60.6 & 26.6 & 41.1 & 0.07\% & 70.3 & 83.0 & 42.4 & 65.2 \\ SPT-Adapter (Ours) & 1.07\(\times\) & 0.26\% & 64.8 & 82.4 & 60.4 & 69.2 & 0.08\% & 76.1 & 84.9 & 60.1 & 73.7 \\ SPT-Adapter (Ours) & 1.13\(\times\) & 0.41\% & **65.6** & **82.7** & **60.7** & **69.7** & 0.30\% & **76.6** & **85.0** & **61.7** & **74.4** \\ \hline \hline \multicolumn{8}{c}{**Reparameterization-based methods**} \\ \hline Linear & 1.02\(\times\) & 0.12\% & 79.3 & 0.04\% & 68.9 & 77.2 & 26.8 & 57.6 \\ Partial-1 & 3.00\(\times\) & 8.38\% & 82.6 & 8.30\% & 69.4 & 78.5 & 34.2 & 60.7 \\ Bias & 1.05\(\times\) & 0.13\% & 88.4 & 0.13\% & 73.3 & 78.3 & 44.1 & 65.2 \\ LoRA-8 & 1.07\(\times\) & 0.55\% & 86.0 & 0.23\% & 79.5 & 84.6 & 60.5 & 74.9 \\ Lora-16 & 1.18\(\times\) & 0.90\% & 84.8 & 0.69\% & 79.8 & 84.9 & 60.2 & 75.0 \\ SPT-LoRA (Ours) & 1.08\(\times\) & 0.41\% & 89.3 & 0.31\% & 81.5 & 85.6 & 60.7 & 75.9 \\ SPT-LoRA (Ours) & 1.15\(\times\) & 0.60\% & **90.1** & 0.63\% & **81.9** & **85.9** & **61.3** & **76.4** \\ \hline \hline \end{tabular} \end{table} Table 1: Comparisons on FGVC and VTAB-1k [67] benchmarks using supervised pre-trained ViT-B/16 backbone pre-trained on ImageNet-21k. “Total params” denotes the ratio of the total number of parameters needed for all downstream tasks relative to the one for the pre-trained backbone, and “Tuned/Total” denotes the fraction of trainable parameters. Top-1 accuracy (%) is reported. The best result is in **bold**, and the second-best result is underlined. 
\begin{table} \begin{tabular}{l|c|c c c c|c c c c c} \hline \hline **ViT-B/16** & \multicolumn{2}{c|}{**Total**} & \multicolumn{4}{c|}{**VTAB-1k**} & \multicolumn{4}{c}{**VTAB-1k**} & \multicolumn{4}{c}{**MoCo v3**} \\ **(85.8M)** & **Params** & **Tuned / Total** & **Natural** & **Spetalized** & **Structured** & **Mean Acc.** & **Tuned / Total** & **Natural** & **Spetalized** & **Structured** & **Mean Acc.** \\ \hline \hline Full & 38.02\(\times\) & 100\% & 59.3 & 79.7 & 53.8 & 64.3 & 100\% & 72.0 & 84.7 & 42.0 & 69.6 \\ \hline \multicolumn{8}{c}{**Addition-based methods**} \\ \hline Adapter-8 & 1.08\(\times\) & 0.23\% & 57.2 & 78.4 & 54.7 & 63.4 & 0.23\% & 27.6 & 70.9 & 48.4 & 49.0 \\ Adapter-32 & 1.28\(\times\) & 0.95\% & 55.3 & 78.8 & 53.3 & 62.5 & 0.99\% & 74.2 & 82.7 & 47.7 & 68.2 \\ Prompt-shallow & 1.02\(\times\) & 0.12\% & 40.0 & 69.7 & 27.5 & 45.7 & 0.12\% & 67.3 & 82.3 & 37.6 & 62.4 \\ Prompt-deep & 1.05\(\times\) & 0.23\% & 36.0 & 60.6 & 26.6 & 41.1 & 0.07\% & 70.3 & 83.0 & 42.4 & 65.2 \\ SPT-Adapter (Ours) & 1.07\(\times\) & 0.26\% & 64.8 & 82.4 & 60.4 & 69.2 & 0.08\% & 76.1 & 84.9 & 60.1 & 73.7 \\ SPT-Adapter (Ours) & 1.13\(\times\) & 0.41\% & **65.6** & **82.7** & **60.7** & **69.7** & 0.30\% & **76.6** & **85.0** & **61.7** & **74.4** \\ \hline \multicolumn{8}{c}{**Reparameterization-based methods**} \\ \hline Linear & 1.02\(\times\) & 0.04\% & 18.9 & 52.7 & 23.7 & 32.1 & 0.04\% & 67.5 & 81.1 & 30.3 & 59.6 \\ Partial-1 & 4.16\(\times\) & 8.30\% & 58.4 & 78.3 & 47.6 & 61.5 & 8.30\% & 72.3 & 84.6 & 47.9 & 68.3 \\ Bias & 1.06\(\times\) & 0.13\% & 54.6 & 75.7 & 47.7 & 59.3 & 0.13\% & 72.9 & 81.1 & 53.4 & 69.2 \\ LoRA-8 & 1.08\(\times\) & 0.23\% & 57.5 & 77.7 & 57.7 & 64.3 & 0.23\% & 21.2 & 66.7 & 45.1 & 44.3 \\ LoRA-16 & 1.28\(\times\) & 0.69\% & 57.3 & 77.1 & 59.9 & 64.8 & 0.69\% & 16.0 & 64.0 & 48.7 & 42.9 \\ SPT-LoRA (Ours) & 1.11\(\times\) & 0.29\% & 63.8 & 81.6 & 60.0 & 68.5 & 0.30\% & **76.5** & 85.4 & 63.0 & 75.0 \\ SPT-LoRA (Ours) & 1.23\(\times\) & 0.69\% & **65.4** & **82.4** & **61.5** & **69.8** & 0.50\% & **76.5** & **86.0** & **63.6** & **75.3** \\ \hline \hline \end{tabular} \end{table} Table 2: Comparisons on VTAB-1k [67] benchmark using self-supervised ViT-B/16 backbone pre-trained by MAE [23] and MoCo v3 [14]. “Total params” denotes the ratio of the total number of parameters needed for all downstream tasks relative to the one for the pre-trained backbone, and “Tuned/Total” denotes the fraction of trainable parameters. Top-1 accuracy (%) is reported. The best result is in **bold**, and the second-best result is underlined. supervised pre-trained ViT-B/16 backbones._ As shown in Table 2, existing VPET approaches exhibit inferior results than full fine-tuning with the self-supervised pre-trained backbones MAE and MoCo v3. It is worth noting that previous VPET methods yield inconsistent results with the backbones of different pre-training strategies. In contrast, SPT variants consistently outperform full fine-tuning. In particular, SPT-Adapter achieves remarkable 5.8% and 5.5% mean top-1 accuracy gains over the best-performing baseline method on VTAB-1k benchmark with only 0.26% and 0.08% trainable parameters for MAE and MoCo v3 pre-trained backbones, respectively. Moreover, our observation in appendix suggests that self-supervised pre-trained ViT backbones have more diverse sensitivity distributions and a higher variance in sensitivity across different tasks than the supervised pre-trained one. 
This leads to the conjecture that baseline methods that assign trainable parameters to the same positions for all tasks may fail to mitigate the distinct domain gaps in individual downstream datasets, whereas our SPT allocates trainable parameters to task-specific positions accurately. From Table 3, we observe that our SPT-LoRA and SPT-Adapter also achieve SOTA performance with Swin-B backbone on all dataset groups, which further demonstrates the versatility and effectiveness of our SPT. ### Ablation Study **Effect of the sensitivity criterion.** We investigate the effectiveness of our sensitivity criterion on VTAB-1k by employing structured tuning methods from [28, 26, 27] to the task-specific sensitive weight matrices. Note that we do not conduct unstructured tuning to ensure fair comparisons. The results are presented in Figure 3 (b). Our criterion brings consistent 1.1%, 1.6%, and 0.8% performance gains for Prompt-deep, Adapter-32, and LoRA-16, respectively, which demonstrates the effectiveness and versatility of our sensitivity criterion to identify accurate task-specific important positions. **Effect of structured and unstructured tuning.** We investigate the effectiveness of unstructured and structured tuning individually on VTAB-1k. The results are presented in Table 4. We start by applying Adapter-8 to the sensitive weight matrices identified by our sensitivity criterion (SPT-Adapter w/o unstructured). We observe that our sensitivity criterion boosts the performance of all the dataset groups by clear margins, which again demonstrates the importance of our sensitivity criterion. Next, we observe that allocating the trainable parameters to the unstructured sensitive weight connections also brings accuracy improvement to the Natural and Specialized datasets from Adapter-8. However, we find that structured tuning is especially important for achieving good performance on Structured datasets. To further investigate this phenomenon, we observe that Structured datasets have larger domain gaps from the pre-training source domain [31] compared to Natural and Specialized datasets as visualized in Figure 3 (c). We hence conjecture that structured tuning has a higher representational capability than unstructured tuning which facilitates mitigating the large domain gaps during fine-tuning (see the appendix for visual examples). Finally, we observe that incorporating both structured and unstructured tuning at task-specific important positions achieves the highest performance on all dataset groups. **Effect of number of training samples \(C\) to get parameter sensitivity.** We investigate the effect of the number of training images \(C\) for calculating our parameter sensitivity (Algorithm 1). We randomly sample training samples and report the mean results over three runs in Table 5. We find that our SPT is robust to the number of training samples \(C\) and randomly sampling 400 out of a total of 800 training samples is sufficient to obtain accurate task-specific important positions, _e.g._, calculating the sensitivity for ViT-B/16 backbone takes only 5.5 seconds with a single GPU on any \begin{table} \begin{tabular}{l|c c c c} \hline \hline \(C\) & 240 & 400 & 560 & 800 \\ \hline \hline **Mean Acc.** & 76.3 & **76.4** & **76.4** & **76.4** \\ \hline \hline \end{tabular} \end{table} Table 5: Effect of the number of training samples used to get the sensitivity for SPT-LoRA with supervised pre-trained ViT-B/16 backbone on VTAB-1k [67]. Top-1 accuracy (%) is reported. 
\begin{table} \begin{tabular}{l|c c c c c} \hline \hline \multirow{2}{*}{**Method**} & \begin{tabular}{c} **Tuned** \\ **Total** \\ \end{tabular} & **Natural** & **Specialized** & **Structured** & \begin{tabular}{c} **Mean / Acc.** \\ \end{tabular} \\ \hline \hline Full & 100\% & 79.1 & 86.2 & 59.7 & 75.0 \\ \hline \multicolumn{5}{c}{**Addition-based methods**} \\ \hline MLP-3 & 1.60\% & 73.6 & 75.2 & 35.7 & 61.5 \\ Prompt-shallow & 0.04\% & 79.9 & 82.5 & 37.8 & 66.7 \\ Prompt-deep & 0.23\% & 76.8 & 84.5 & 53.4 & 71.6 \\ Adaptek-8 & 1.18\% & 81.7 & **87.3** & 61.2 & 76.7 \\ SPT-Adapter (ours) & 0.33\% & **83.0** & **87.3** & **62.1** & **77.5** \\ \hline \multicolumn{5}{c}{**Reparameterization-based methods**} \\ \hline Linear & 0.04\% & 73.5 & 80.8 & 33.5 & 62.6 \\ Partial-1 & 2.15\% & 73.1 & 81.7 & 35.0 & 63.3 \\ LoRA-8 & 1.18\% & 81.7 & 87.2 & 60.1 & 76.3 \\ SPT-LoRA (ours) & 0.49\% & **83.1** & **87.4** & **60.4** & **77.2** \\ \hline \hline \end{tabular} \end{table} Table 3: Comparisons on VTAB-1k [67] benchmark with supervised pre-trained Swin-B [38]. “Tuned/Total” denotes the fraction of trainable parameters. Top-1 accuracy (%) is reported. \begin{table} \begin{tabular}{l|c c c c} \hline \hline \(C\) & 240 & 400 & 560 & 800 \\ \hline **Mean Acc.** & 76.3 & **76.4** & **76.4** & **76.4** \\ \hline \hline \end{tabular} \end{table} Table 4: Ablation study on structured and unstructured tuning only with supervised pre-trained ViT-B/16 backbone. Top-1 accuracy (%) is reported. We set different parameter constraints to align the fractions of trainable parameters for these cases. of the VTAB-1k datasets and this computation is required only once. ### Observations on Sensitivity Patterns Our sensitivity criterion identifies task-specific important positions, which can reveal the contributions of the pre-trained weights to different downstream tasks during transfer learning. We visualize the proportions of the sensitive parameters for the supervised pre-trained ViT-B/16 backbone under 0.4M trainable parameter budget in Figure 4. First, we investigate the most sensitive blocks, whose numbers of sensitive parameters are summed and normalized over the 12 ViT-B/16 blocks. We observe that the patterns of the sensitive parameter proportions vary markedly across different tasks, which echoes the observations made in [20]. This suggests that we should not introduce trainable parameters to the same positions for each individual task but allocate trainable parameters at task-specific ones as we proposed. Next, we investigate the most insensitive weight matrices within a block. A ViT block consists of a query \(\mathbf{W}_{q}\), a key \(\mathbf{W}_{k}\), a value \(\mathbf{W}_{v}\), and an output \(\mathbf{W}_{o}\) weight matrices in the multi-head self-attention layer and two weight matrices \(\mathbf{W}_{fc1}\) and \(\mathbf{W}_{fc2}\) in the feed-forward network as elaborated in [58, 16]. We observe that the query \(\mathbf{W}_{q}\) and key \(\mathbf{W}_{k}\) weight matrices have the lowest proportions of sensitive parameters for all three sample tasks. Since \(\mathbf{W}_{q}\) and \(\mathbf{W}_{k}\) are responsible for learning the attention scores which indicate the pairwise similarity among the patches, we speculate that although domain changes, the patch relationships learned during pre-training can be efficiently reused when transferred to downstream classification tasks. 
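The per-block and per-matrix proportions discussed in this subsection can be obtained from the sensitivity masks with a small amount of bookkeeping. The sketch below is illustrative rather than the authors' analysis code: it assumes timm-style ViT parameter names (`blocks.<i>.attn.qkv`, `blocks.<i>.mlp.fc1`, ...), and a fused `qkv` matrix would still have to be sliced to separate \(\mathbf{W}_{q}\), \(\mathbf{W}_{k}\), and \(\mathbf{W}_{v}\) as in Figure 4.

```python
import re
from collections import defaultdict

def sensitivity_proportions(masks):
    """Count selected (sensitive) parameters per ViT block and per weight-matrix
    type, then normalize each tally into a proportion."""
    per_block, per_matrix = defaultdict(float), defaultdict(float)
    for name, mask in masks.items():
        count = float(mask.sum())
        block = re.search(r"blocks\.(\d+)\.", name)   # assumed naming scheme
        if block:
            per_block[int(block.group(1))] += count
        for tag in ("attn.qkv", "attn.proj", "mlp.fc1", "mlp.fc2"):
            if tag in name:
                per_matrix[tag] += count
    block_total = sum(per_block.values()) or 1.0
    matrix_total = sum(per_matrix.values()) or 1.0
    return ({b: c / block_total for b, c in per_block.items()},
            {m: c / matrix_total for m, c in per_matrix.items()})
```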
## 5 Conclusion In this paper, we have explored identifying and allocating trainable parameters to task-specific important positions for visual parameter-efficient tuning. Specifically, we have proposed a novel criterion to quickly measure the sensitivity of the pre-trained parameters for each specific task before fine-tuning. Based on the parameter sensitivity, we have proposed a trainable parameter allocation strategy that adaptively combines both unstructured and structured tuning under a desired trainable parameter budget, enabling high representational capability and flexibility. Finally, we have conducted extensive experiments on a total of 24 downstream recognition tasks with both plain and hierarchical vision Transformer backbones under different pre-training strategies to demonstrate the versatility and effectiveness of our proposed SPT. Notably, we have shown that our approach is complementary to the existing VPET methods and improves their performance significantly. In the future, we will explore adapting large vision models to more downstream tasks with SPT, _e.g_., dense prediction and vision-and-language tasks, and improve the training efficiency of SPT for on-device training [8, 36]. Figure 3: (a) Accuracy vs. parameter efficiency with supervised pre-trained ViT-B/16 backbone on VTAB-1k [67]. SPT variants perform favorably against the other VPET approaches and are more scalable. (b) Applying other VPET structured tuning methods [28, 26, 27] to the task-specific sensitive weight matrices (denoted by TSM) identified by our criterion with supervised pre-trained ViT-B/16 backbone on VTAB-1k. Our criterion brings consistent performance gains. (c) Domain vs. performance gaps for different dataset groups in VTAB-1k [67]. The blue bars show the domain gaps between the source domain (ImageNet [31]) and target domains, which are measured by Maximum Mean Discrepancy (MMD) distance [56]. The red line represents the performance gaps between SPT-Adapter w/o unstructured and w/o structured, using supervised pre-trained ViT-B/16 backbone. The dataset groups are Natural, Specialized, and Structured. Structured tuning is important for achieving good performance on Structured datasets with larger domain gaps. Figure 4: Parameter sensitivity patterns under a 0.4M trainable parameter budget for supervised pre-trained ViT-B/16 backbone on three sample tasks from VTAB-1k [68]. Left: proportions of the sensitive parameters for each block vary across different tasks. Right: the most insensitive matrices are the query \(\mathbf{W}_{q}\) and key \(\mathbf{W}_{k}\) weight matrices.
2308.03999
Understanding CNN Hidden Neuron Activations Using Structured Background Knowledge and Deductive Reasoning
A major challenge in Explainable AI is in correctly interpreting activations of hidden neurons: accurate interpretations would provide insights into the question of what a deep learning system has internally detected as relevant on the input, demystifying the otherwise black-box character of deep learning systems. The state of the art indicates that hidden node activations can, in some cases, be interpretable in a way that makes sense to humans, but systematic automated methods that would be able to hypothesize and verify interpretations of hidden neuron activations are underexplored. In this paper, we provide such a method and demonstrate that it provides meaningful interpretations. Our approach is based on using large-scale background knowledge approximately 2 million classes curated from the Wikipedia concept hierarchy together with a symbolic reasoning approach called Concept Induction based on description logics, originally developed for applications in the Semantic Web field. Our results show that we can automatically attach meaningful labels from the background knowledge to individual neurons in the dense layer of a Convolutional Neural Network through a hypothesis and verification process.
Abhilekha Dalal, Md Kamruzzaman Sarker, Adrita Barua, Eugene Vasserman, Pascal Hitzler
2023-08-08T02:28:50Z
http://arxiv.org/abs/2308.03999v2
Understanding CNN Hidden Neuron Activations Using Structured Background Knowledge and Deductive Reasoning ###### Abstract A major challenge in Explainable AI is in correctly interpreting activations of hidden neurons: accurate interpretations would provide insights into the question of what a deep learning system has internally _detected_ as relevant on the input, demystifying the otherwise black-box character of deep learning systems. The state of the art indicates that hidden node activations can, in some cases, be interpretable in a way that makes sense to humans, but systematic automated methods that would be able to hypothesize and verify interpretations of hidden neuron activations are underexplored. In this paper, we provide such a method and demonstrate that it provides meaningful interpretations. Our approach is based on using large-scale background knowledge - approximately \(2\) million classes curated from the Wikipedia concept hierarchy - together with a symbolic reasoning approach called _Concept Induction_ based on description logics, originally developed for applications in the Semantic Web field. Our results show that we can automatically attach meaningful labels from the background knowledge to individual neurons in the dense layer of a Convolutional Neural Network through a hypothesis and verification process. 1 Kansas State University 2 Bowie State University [email protected], [email protected], [email protected], [email protected], [email protected] ## 1 Introduction Deep learning has led to significant advances in artificial intelligence applications including image classification [14], speech recognition [17], translation [13], drug design [2], medical diagnosis [1], climate sciences [15], and many more. Despite these successes, the black-box nature of deep learning systems remains problematic for some application areas, especially those involving automated decisions and safety-critical systems. For example, Apple co-founder Steve Wozniak accused Apple of gender discrimination, claiming that the new Apple Card gave him a credit limit that was ten times higher than that of his wife even though the couple shares all property [10]. In an image search, only 11% of the top image results for "CEOs" were images of women despite the fact that women make up 27% of US CEOs [20]. Other application areas of particular concern include safety-critical systems such as self-driving cars [1], drug discovery and treatment recommendations [14, 15], and others, as deep learning systems are prone to adversarial attacks, e.g., by altering classification results by introducing adversarial examples [1] or simply controlling the order in which training images are presented [16]. Some of these attacks are difficult or impossible to detect after the fact [18, 19]. Standard assessments of deep learning performance consist of statistical evaluation, but do not seem sufficient to address these shortcomings as they cannot provide reasons or explanations for particular system behaviors [10]. Consequently, it remains very important to develop strong explanation methods for deep learning systems. While there has been significant progress on this front (see Section 2), the current state of the art is mostly restricted to explanation analyses based on a relatively small number of predefined explanation categories. 
This is problematic from a principled perspective, as this relies on the assumption that explanation categories pre-selected by humans would be viable explanation categories for deep learning systems - an as-yet unfounded conjecture. Other state-of-the-art explanation systems rely on modified deep learning architectures, usually leading to a decrease in system performance compared to unmodified systems [21]. Ideally, we would want strong explanation capabilities while maintaining the underlying learning architecture. In this paper, we address the aforementioned shortcomings by using _Concept Induction_, i.e., formal logical deductive reasoning [1]. We show that our approach can indeed provide meaningful explanations for hidden neuron activation in a Convolutional Neural Network (CNN) architecture for image scene classification (on the ADE20K dataset [18]), using a class hierarchy consisting of about \(2\cdot 10^{6}\) classes, derived from Wikipedia, as the pool of categories [20]. The benefits of our approach to explainable deep learning are: (a) it can be used on unmodified and pre-trained deep learning architectures, (b) it assigns semantic categories (i.e., class labels expressed in formal logic) to hidden neurons such that images related to these labels activate the corresponding neuron with high probability, and (c) it can construct these labels from a very large pool of categories. The rest of this paper is organized as follows. Section 2 discusses related work on explainable deep learning. Section 3 presents our approach. Section 4 provides evaluation results and Section 5 discussions thereof. Section 6 concludes and discusses follow-up research directions. A technical appendix provides more complete details of our experiments and results. Source code, input data, raw result files, and parameter settings for replication are available online.1 Footnote 1: [https://github.com/abhilekha-dalal/xai-using-wikidataAndEcii/](https://github.com/abhilekha-dalal/xai-using-wikidataAndEcii/) ## 2 Related Work Explaining (interpreting, understanding, justifying) automated AI decisions has been explored since the early 1970s. The recent advances in deep learning [14], its wide usage in nearly every field, and its opaque nature make explainable AI more important than ever, and there are multiple ongoing efforts to demystify deep learning [15, 16, 17]. Existing explainable methods can be categorized based on input data (feature) understanding, e.g., feature summarizing [21, 20], or based on the model's internal unit representation, e.g., node summarizing [16, 17]. Those methods can be further categorized as model-specific [23] or model-agnostic [24]. Another kind of approach relies on human interpretation of explanatory data returned, such as counterfactual questions [16]. We focus on the understanding of internal units of neural network-based deep learning models. Given a deep learning model and its prediction, we ask the questions "What does the deep learning model's internal unit represent? Are those units activated by human-understandable concepts?" Prior work has shown that internal units may indeed represent human-understandable concepts [16, 17], but these approaches require semantic segmentation [22] (which is time-consuming) or explicit concept annotations [15] (which are expensive to acquire).
To get around these limitations, we take a different approach by using a hypothesis generation and validation approach based on Concept Induction analysis for hypothesis generation (details in Section 3). The use of large-scale description logic background knowledge means that we draw explanations from a very large pool of explanation categories. There has been some work using knowledge graphs to produce explanations from deep learning models [16, 15], and also on using Concept Induction to provide explanations [23, 24], but they focused on analysis of input-output behavior, i.e., on generating an explanation for the overall system. We focus instead on the different task of understanding internal (hidden) node activations. To the best of our knowledge, our use of Concept Induction with large-scale background knowledge as pool for explanation generation (to understand the internal node activations) is novel. Furthermore our method (training, Concept Induction analysis, and verification) is fully automateable without the need for human intervention. ## 3 Approach In this section we detail our technical approach. Section 3.1 covers the scene recognition scenario that we use to present our approach; Section 3.2 describes the technical components used for explanation generation; Section 3.3 presents our results and how we obtain label hypotheses for hidden node activations (with examples); and Section 3.4 details the label hypothesis validation process and results. Experimental evaluation can be found in Section 4. More details regarding our experimental parameters are in Appendix A. ### Preparations: Scenario and CNN Training We use a scene classification from images scenario to demonstrate our approach, drawing from the ADE20K dataset [16] which contains more than 27,000 images over 365 scenes, extensively annotated with pixel-level objects and object part labels. _The annotations are not used for CNN training_, but rather only for generating label hypotheses that we will describe in Section 3.3. We train a classifier for the following scene categories: "bathroom," "bedroom," "building facade," "conference room," "dining room," "highway," "kitchen," "living room," "skyscraper," and "street." We weigh our selection toward scene categories which have the highest number of images and we deliberately include some scene categories that should have overlapping annotated objects - we believe this makes the hidden node activation analysis more interesting. We did not conduct any experiments on any other scene selections yet, i.e., _we did not change our scene selections based on any preliminary analyses_. We trained a number of CNN architectures in order to use the one with highest accuracy, namely Vgg16 [14], InceptionV3 [20] and different versions of Resnet - Resnet50, Resnet50V2, Resnet101, Resnet152V2 [13, 12]. Each neural network was fine-tuned with a dataset of 6,187 images (training and validation set) of size 224x224 for 30 epochs with early stopping2 to avoid overfitting. We used Adam as our optimization algorithm, with a categorical cross-entropy loss function and a learning rate of 0.001. 
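A minimal sketch of this fine-tuning setup (a hedged reconstruction, not the released training script; the directory layout, batch size, early-stopping patience, and the placement of the 64-unit dense head are assumptions):

```python
import tensorflow as tf

NUM_CLASSES = 10  # the ten ADE20K scene categories listed above

# Hypothetical directory layout with one sub-folder per scene category.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "ade20k_scenes/train", label_mode="categorical",
    image_size=(224, 224), batch_size=32)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "ade20k_scenes/val", label_mode="categorical",
    image_size=(224, 224), batch_size=32)

base = tf.keras.applications.ResNet50V2(
    include_top=False, weights="imagenet",
    input_shape=(224, 224, 3), pooling="avg")

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1.0),  # ResNet50V2 expects inputs in [-1, 1]
    base,
    tf.keras.layers.Dense(64, activation="relu"),          # dense layer analyzed in Section 3.3 (assumed placement)
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
              loss="categorical_crossentropy",
              metrics=["accuracy"])

early_stop = tf.keras.callbacks.EarlyStopping(
    monitor="val_loss", patience=3, restore_best_weights=True)

model.fit(train_ds, validation_data=val_ds, epochs=30, callbacks=[early_stop])
```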
\begin{table} \begin{tabular}{l c c} \hline Architectures & Training acc & Validation acc \\ \hline Vgg16 & 80.05\% & 46.22\% \\ InceptionV3 & 89.02\% & 51.43\% \\ Resnet50 & 35.01\% & 26.56\% \\ **Resnet50V2** & **87.60\%** & **86.46\%** \\ Resnet101 & 53.97\% & 53.57\% \\ Resnet152V2 & 94.53\% & 51.04\% \\ \hline \end{tabular} \end{table} Table 1: Performance (accuracy) of different architectures on the ADE20K dataset. The system we used, based on performance, is bolded. We select Resnet50V2 because it achieves the highest accuracy (see Table 1). Note that for our investigations, which focus on explainability of hidden neuron activations, achieving a very high accuracy for the scene classification task is not essential, but a reasonably high accuracy is necessary when considering models which would be useful in practice. ### Preparations: Concept Induction and Background Knowledge For label hypotheses generation, we make use of _Concept Induction_Lehmann and Hitzler (2010) which is based on deductive reasoning over description logics, i.e., over logics relevant to ontologies, knowledge graphs, and generally the Semantic Web field Hitzler et al. (2010); Hitzler (2021). Concept Induction has indeed already been shown, in other scenarios, to be capable of producing labels that are meaningful for humans inspecting the data Widmer et al. (2022). A Concept Induction system accepts three inputs: (1) a set of positive examples \(P\), (2) a set of negative examples \(N\), and (3) a knowledge base (or ontology) \(K\), all expressed as description logic theories, and all examples \(x\in P\cup N\) occur as instances (constants) in \(K\). It returns description logic class expressions \(E\) such that \(K\models E(p)\) for all \(p\in P\) and \(K\not\models E(q)\) for all \(q\in N\). If no such class expressions exist, then it returns approximations for \(E\) together with a number of accuracy measures. For scalability reasons, we use the heuristic Concept Induction system ECII Sarker and Hitzler (2019) together with a background knowledge base that consists only of a hierarchy of approximately 2 million classes, curated from the Wikipedia concept hierarchy and presented in Sarker et al. (2020). We use _coverage_ as accuracy measure, defined as \[\text{coverage}(E)=\frac{|Z_{1}|+|Z_{2}|}{|P\cup N|},\] where \(Z_{1}=\{p\in P\mid K\models E(p)\}\) and \(Z_{2}=\{n\in N\mid K\not\models E(n)\}\). \(P\) is the set of all positive instances, \(N\) is the set of all negative instances, and \(K\) is the knowledge base provided to ECII as part of the input. For the Concept Induction analysis, positive and negative example sets will contain images from ADE20K, i.e., we need to include the images in the background knowledge by linking them to the class hierarchy. For this, we use the object annotations available for the ADE20K images, but only part of the annotations for the sake of simplicity. More precisely, we only use the information that certain objects (such as windows) occur in certain images, and we do not make use of any of the richer annotations such as those related to segmentation. All objects from all images are then mapped to classes in the class hierarchy using the Levenshtein string similarity metric Levenshtein (1975) with edit distance 0. For example, the ADE20K image ADE_train_0001556.jpg has "door" listed as one of the objects shown, which is mapped to the "door" concept of the Wikipedia concept hierarchy. 
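For reference, the coverage measure defined above reduces to a few lines of code. In the sketch below, `entails` is a placeholder for the description-logic entailment check performed by the reasoner (an assumed interface, not the actual ECII API):

```python
def coverage(E, P, N, entails):
    """coverage(E) = (|Z1| + |Z2|) / |P ∪ N|, where Z1 are positive examples
    entailed to belong to class expression E and Z2 are negative examples not
    entailed to belong to E."""
    z1 = sum(1 for p in P if entails(E, p))
    z2 = sum(1 for n in N if not entails(E, n))
    return (z1 + z2) / len(set(P) | set(N))
```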
Note that the scene information is not used for the mapping, i.e., the images themselves are not assigned to specific (scene) classes in the class hierarchy - they are connected to the hierarchy only through the objects that are shown (and annotated) in each image. ### Generating Label Hypotheses The general idea for generating label hypotheses using Concept Induction is as follows: given a hidden neuron, \(P\) is a set of inputs (i.e., in this case, images) to the deep learning system that activate the neuron, and \(N\) is a set of inputs that do not activate the neuron (where \(P\) and \(N\) are the sets of positive and negative examples, respectively). As mentioned above, inputs are annotated with classes from the background knowledge for Concept Induction, but these annotations and the background knowledge are not part of the input to the deep learning system. ECII generates a label hypothesis for the given neuron on inputs \(P\), \(N\), and the background knowledge. We first feed 1,370 ADE20K images to our trained Resnet50V2 and retrieve the activations of the dense layer. We chose to look at the dense layer because previous studies indicate Olah et al. (2017) that earlier layers of a CNN respond to low level features such as lines, stripes, textures, colors, while layers near the final layer respond to higher-level features such as face, box, road, etc. The higher-level features align better with the nature of our background knowledge. The dense layer consists of 64 neurons. We chose to analyze each of the neurons separately. We are aware that activation patterns involving more than one neuron may also be informative in the sense that information may be distributed among several neurons, but the analysis of such activation patterns will be part of follow-up work. For each neuron, we calculate the maximum activation value across all images. We then take the positive example set \(P\) to consist of all images that activate the neuron with at least 80% of the maximum activation value, and the negative example set \(N\) to consist of all images that activate the neuron with at most 20% of the maximum activation value (or do not activate it at all). The highest scoring response of running ECII on these sets, together with the background knowledge described in Section 3.2, is shown in Table 2 for each neuron, together with the coverage of the ECII response. For each neuron, we call its corresponding label the _target label_, e.g., neuron \(0\) has target label "building." Note that some target labels consist of two concepts, e.g., "footboard, chain" for neuron 49 - this occurs if the corresponding ECII response carries two class expressions joined by a logical conjunction, i.e., in this example "footboard \(\sqcap\) chain" (as description logic expression) or \(\text{footboard}(x)\wedge\text{chain}(x)\) expressed in first-order predicate logic. Let us take neuron 1 as a concrete example. After training, neuron 1 has a maximum activation value of 10.90, 80% of which is 8.72, and 20% of which is 2.18. The positive example set \(P\) thus consist of all images activating the neuron with at least 8.72, and the negative example set \(N\) consists of all images activating the neuron with at most 2.18. Example images are shown in Figure 1 middle top (positive) and bottom (negative). The top ranked ECII response on this input was "cross_walk," with a coverage score of 0.994. 
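The 80%/20% split of images into positive and negative example sets can be expressed compactly; the sketch below assumes the dense-layer activations have already been collected into a NumPy array (shapes and names are illustrative).

```python
import numpy as np

def example_sets(activations, neuron, hi=0.8, lo=0.2):
    """Build the positive/negative example sets for one neuron.

    activations: array of shape (num_images, num_neurons) with dense-layer
    activations for the 1,370 ADE20K images. Returns indices of images whose
    activation is at least hi * max (positives) or at most lo * max (negatives).
    """
    a = activations[:, neuron]
    peak = a.max()
    positives = np.where(a >= hi * peak)[0]
    negatives = np.where(a <= lo * peak)[0]
    return positives, negatives
```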
(Note that some of the positive images may not actually have a crosswalk, like the top left and bottom right positive images shown in Figure 1 - we discuss this in Section 4.) We consider these target labels to be working hypotheses for activation triggers for the corresponding neuron. As hypotheses, they require further confirmation, i.e., some of these hypotheses may be rejected. ### Confirming Label Hypotheses The process described in Section 3.3 produces label hypotheses for all neurons investigated. The next step is to confirm or reject these hypotheses by testing the labels with new images.3 We use each of the target labels to search Google Images with the labels as keywords (requiring results to be returned for _both_ keywords if the label is a conjunction of classes). We call each such image a _target image_ for the corresponding label or neuron. We use Imageye4 to automatically retrieve the images, collecting up to 200 images that appear first in the Google Images search results, filtering for images in JPEG format (ADE20K images are in JPEG format) and with a minimum size of 224x224 pixels (again corresponding to ADE20K). For each retrieval label, we use 80% of the obtained images, reserving the remaining 20% for the statistical evaluation described in Section 4. The number of images used in the hypothesis confirmation step, for each label, is given in Table 2. These images are fed to the network to check (a) whether the target neuron (with the retrieval label as target label) activates, and (b) whether any other neurons activate. Footnote 3: We would reject labels with low coverage scores, but coverage was \(>0.960\) across all activated neurons (see Tables 2 and 4). Footnote 4: [https://chrome.google.com/webstore/detail/image-downloadader-imageye/agionbommeaifngbhincahgmolfcikhm](https://chrome.google.com/webstore/detail/image-downloadader-imageye/agionbommeaifngbhincahgmolfcikhm) The Target % column of Table 2 shows the percentage of the target images that activate each neuron to at least 80% of its maximum activation. Using neuron 1 as an example, 88.710% of the images retrieved with the label "cross_walk" activate the neuron (\(\geq\) 80%). However, the neuron activates for only 28.923% of the images retrieved using all other labels from Table 2 excluding "cross_walk" (indicated in the Non-Target % column). We define a target label for a neuron to be _confirmed_ if it activates (with at least 80% of its maximum activation value) for at least 80% of its target images regardless of how much or how often it activates for non-target images (see Section 4 for the analysis of non-target activation). We use 80% as the cut-off for both neuron activation and label hypothesis confirmation - these are ad-hoc values that could both be chosen differently (we discuss this in Technical Appendix A.2). This cut-off value ensures strong association and responsiveness to images retrieved under the target label. We discuss the relevance of the Non-Target column in the next section. Returning to neuron 1, we retrieve 233 new images with keyword "cross_walk," 186 of which (80%) are used in this step. Example images are shown in Figure 1 to the right. 165 of these images (i.e., 88.710%) activate neuron 1 with at least 8.72 (which is 80% of its maximum activation value of 10.90). Since \(88.710\geq 80\), we consider the label "cross_walk" confirmed for neuron 1. After this step, we arrive at a list of 20 _confirmed_ labels listed in Table 3. 
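The confirmation rule itself is a simple threshold check; the following sketch mirrors the two 80% cut-offs described above (function and argument names are ours, not the authors').

```python
def is_confirmed(target_activations, max_activation, act_frac=0.8, confirm_frac=0.8):
    """Confirm a label hypothesis for one neuron.

    target_activations: the neuron's activation on each image retrieved with
    the target label. The label is confirmed if at least confirm_frac of the
    images reach act_frac of the neuron's maximum activation value.
    """
    if len(target_activations) == 0:
        return False
    threshold = act_frac * max_activation
    hits = sum(1 for a in target_activations if a >= threshold)
    return hits / len(target_activations) >= confirm_frac
```

For neuron 1, for example, this check amounts to asking whether at least 80% of the 186 "cross_walk" images reach an activation of 8.72.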
## 4 Statistical Evaluation After generating the confirmed labels (as in Section 3), we statistically evaluate the node labeling using the remaining images from those retrieved from Google Images as described in Section 3.4. Results are shown in Table 3, omitting neurons that were not activated by any image, i.e., their maximum activation value was 0. The statistical evaluation shows that Concept Induction analysis with large-scale \begin{table} \begin{tabular}{c l c c c c} Neuron \# & Obtained Label(s) & Images & Coverage & Target \% & Non-Target \% \\ \hline **0** & **building** & **164** & **0.997** & **89.024** & **72.328** \\ **1** & **cross_walk** & **186** & **0.994** & **88.710** & **28.923** \\ **3** & **night_table** & **157** & **0.987** & **90.446** & **56.714** \\ 6 & dislcloth, toaster & 106 & 0.999 & 16.038 & 39.078 \\ 11 & river\_water & 157 & 0.995 & 31.847 & 22.309 \\ \hline **16** & **mountain, bushes** & **108** & **0.995** & **87.037** & **24.969** \\ **18** & **slope** & **139** & **0.983** & **92.086** & **69.919** \\ **22** & **skyscraper** & **156** & **0.992** & **99.359** & **54.893** \\ 26 & skyscraper, river & 112 & 0.995 & 77.679 & 35.489 \\ **30** & **teapot, saucepan** & **108** & **0.998** & **81.481** & **47.984** \\ \hline 40 & sculpture, side\_rail & 119 & 0.995 & 25.210 & 21.224 \\ **41** & **open_fireplace, coffee\_table** & **122** & **0.992** & **88.525** & **16.381** \\ **43** & **central_reservation** & **157** & **0.986** & **95.541** & **84.973** \\ 46 & cassercole & 157 & 0.999 & 45.223 & 36.394 \\ **48** & **road** & **167** & **0.984** & **100.000** & **73.932** \\ \hline **49** & **footboard, chain** & **126** & **0.982** & **88.889** & **66.702** \\ **51** & **road, car** & **84** & **0.999** & **98.810** & **48.571** \\ **54** & **skyscraper** & **156** & **0.987** & **98.718** & **70.432** \\ 58 & plank, cassercole & 80 & 0.998 & 3.750 & 3.925 \\ **63** & **edifice, skyscraper** & **178** & **0.999** & **92.135** & **48.761** \\ \hline \end{tabular} \end{table} Table 2: Selected representative data as discussed throughout the text (the full version is Table 4 in Appendix A). Images: Number of images used per label. Target %: Percentage of target images activating the neuron above 80% of its maximum activation. Non-Target %: The same, but for all other images. **Bold** denotes neurons whose labels are considered confirmed. background knowledge yields meaningful labels that stably explain neuron activation. We consider each neuron-label pair in each row in Table 3, e.g., for neuron 1, the hypothesis is that this neuron activates more strongly for images retrieved using the keyword "cross_walk" than for images retrieved using other keywords. The corresponding null hypothesis is that activation values are _not_ different. Table 3 shows the 20 hypotheses to test, corresponding to the 20 neurons with confirmed labels - recall that a double label such as neuron 16's "mountain, bushes" is treated as one label consisting of the conjunction of the two keywords. There is no reason to assume that activation values would follow a normal distribution, or that the preconditions of the central limit theorem would be satisfied. We therefore base our statistical assessment on the Mann-Whitney U test [10] which is a non-parametric test that does not require a normal distribution. 
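As an illustration, such a per-neuron comparison could be run with SciPy as below; the paper does not name its statistics tooling, so this call is an assumption, and the z-scores reported in Table 3 (which come from the normal approximation of the U statistic) are not reproduced here.

```python
from scipy.stats import mannwhitneyu

def compare_activations(target_acts, non_target_acts):
    """One-sided Mann-Whitney U test: are activations on target images
    stochastically greater than activations on non-target images?
    Returns the U statistic and the p-value."""
    return mannwhitneyu(target_acts, non_target_acts, alternative="greater")
```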
Essentially, by comparing the ranks of the observations in the two groups, the test allows us to determine if there is a statistically significant difference in the activation percentages between the target and non-target labels. The resulting z-scores and p-values are shown in Table 3. Of the 20 null hypotheses, 19 are rejected at \(p<0.05\), but most (all except neurons 0, 18 and 49) are rejected at much lower p-values. Only neuron 0's null hypothesis could not be rejected. The Non-Target % column of Table 2 provides some insight into the results for neurons 0, 18 and 49: target and non-target values for these neurons are closer to each other - the difference is particularly small for neuron 0. Likewise, differences between target and non-target values for mean activation values and median activation values in Table 3 are smaller for these neurons. This hints at ways to improve label hypothesis generation or confirmation, and we will discuss this and other ideas for further improvement below under possible future work. For our running example (neuron 1), we use the remaining 47 target images (the 20% of the 233 retrieved images held back from the label hypothesis confirmation step) for the statistical analysis. 43 of these images (91.49%) activate the neuron at \(\geq\) 8.72 (80% of its maximum activation value of 10.90), with a mean and median activation of 4.17 and 4.13, respectively. Of all other images (non-target images) used in the evaluation (the sum of the numbers in the image column in Table 3 minus 47), only 28.94% activate neuron 1 at \(\geq\) 8.72, for a mean of 0.67 and a median of 0.00. The Mann-Whitney U test yields a z-score of -8.92 and \(p<0.00001\), thus rejecting the null hypothesis that activation values for target and non-target images are _not_ different. In addition, the negative z-score indicates that the activation values for non-target images are indeed lower than for the target images. Figure 2 shows examples of target images that do not activate neuron 1 (left) and non-target images that do activate it (right). The Mann-Whitney U results show that, for most neurons listed in Table 3 (with \(p<0.00001\)), activation values for target images are _overwhelmingly_ higher than for non-target images. The negative z-scores with high absolute values informally indicate the same, as do the mean and median values. Neurons 18 and 49, for which the hypotheses also hold but with \(p<0.05\) and \(p<0.01\), respectively, still exhibit statistically significantly higher activation values for target than for non-target images, but not overwhelmingly so. This can also be informally seen from lower absolute values of the z-scores, and from smaller differences between the means and the medians.

Figure 1: Example of images that were used for generating and confirming the label hypothesis for neuron 1.

## 5 Discussion While the statistical analysis clearly supports the viability of our approach as carried out, we can see from Table 3 that there is still significant activation of neurons by non-target images, leaving room for refinement. Ideally, we would be able to arrive at confirmed neuron labels where the number of non-target activations (Table 3, # Activations (%) non-t - column 5) is very low while the number of target activations (# Activations (%) targ - column 4) remains high. 
For example, neuron 16 is always activated by target images and is only activated by 25% of non-target images, meaning that we can use the neuron activation to predict (with relatively high certainty) whether or not mountains and bushes are in the image. In contrast, for neuron 29 - with a much higher non-target value - we can be much less certain if an image activating the neuron indeed contains lid and soap dispenser. High certainty, i.e., high target and low non-target values, would provide highly accurate explanations of system behavior. At the same time, however, the data collected during the label generation step, in particular that in the Target % and Non-Target % columns of Table 2, can already be used as a proxy for the certainty we can attach to a detection. It is instructive to have another look at our example neuron 1. The images depicted on the left in Figure 2 - target images not activating the neuron - are mostly computer-generated as opposed to photographic images as in the ADE20K dataset. The lower right image does not actually show the ground at the crosswalk, but mostly sky and only indirect evidence for a crosswalk by means of signage, which may be part of the reason why the neuron does not activate. The right-hand images are non-target images that activate the neuron. We may conjecture that other road elements, prevalent in these pictures, may have triggered the neuron. We also note that several of these images show bushes or plants, which is particularly interesting because the ECI response with the third-highest coverage score is "bushes, bush" with a coverage score of 0.993 and 48.052% of images retrieved using this label actually activate the neuron (the second response for this neuron is also "cross_walk"). Our results point to promising future research directions, including studying ensemble activation, analyzing different hidden layers, transfer to other application scenarios, and application to other deep learning architectures. Other possible avenues involve additional strengthening of our results including detailed analyses and exploration - and thus optimization - of parameters that were often chosen ad-hoc for this paper. We discuss some of them below, in sequence of appearance in the paper. Additional results along these lines can be found in Appendix A. The choice of background knowledge - based on objects appearing in the images - was mostly one of convenience, as suitable (large-scale) datasets were available that would serve the purpose of this study. However it is conceivable - if not likely - that neuron activations are caused not (only) by the positioning of (types of) objects, but also by other image features such as prevalence, (relative) positioning of lines or round shapes, contrasts across the image, colors, etc., some of which may be numerical in nature. 
It is of course possible \begin{table} \begin{tabular}{r l|r r|r r r|r r|r r} Neuron \# & Label(s) & Images & \multicolumn{2}{c|}{\# Activations (\%)} & \multicolumn{2}{c|}{Mean} & \multicolumn{2}{c}{Median} & \multicolumn{2}{c}{z-score} & \multicolumn{2}{c}{p-value} \\ \cline{3-10} & & & & \multicolumn{1}{c|}{targ} & non-t & \multicolumn{1}{c|}{targ} & non-t & \multicolumn{1}{c|}{targ} & non-t & \multicolumn{1}{c}{} & \\ \hline 0 & building & 42 & 80.95 & 73.40 & 2.08 & 1.81 & 2.00 & 1.50 & -1.28 & 0.0995 \\ 1 & cross\_walk & 47 & 91.49 & 28.94 & 4.17 & 0.67 & 4.13 & 0.00 & -8.92 & \(<\)0.0001 \\ 3 & night\_table & 40 & 100.00 & 55.71 & 2.52 & 1.05 & 2.50 & 0.35 & -6.84 & \(<\)0.0001 \\ 8 & shower\_stall, cistern & 35 & 100.00 & 54.40 & 5.26 & 1.35 & 5.34 & 0.32 & -8.30 & \(<\)0.0001 \\ 16 & mountain, bushes & 27 & 100.00 & 25.42 & 2.33 & 0.67 & 2.17 & 0.00 & -6.72 & \(<\)0.0001 \\ \hline 18 & slope & 35 & 91.43 & 68.85 & 1.59 & 1.37 & 1.44 & 1.00 & -2.03 & 0.0209 \\ 19 & wardrobe, air\_conditioning & 28 & 89.29 & 65.81 & 2.30 & 1.28 & 2.30 & 0.84 & -4.00 & \(<\)0.0001 \\ 22 & skyscraper & 39 & 97.44 & 56.16 & 3.97 & 1.28 & 4.42 & 0.33 & -7.74 & \(<\)0.0001 \\ 29 & lid, soap\_dispenser & 33 & 100.00 & 80.47 & 4.38 & 2.14 & 4.15 & 1.74 & -5.92 & \(<\)0.00001 \\ 30 & teapot, saucepan & 27 & 85.19 & 49.93 & 2.52 & 1.05 & 2.23 & 0.00 & -4.28 & \(<\)0.0001 \\ \hline 36 & tap, crapper & 23 & 91.30 & 70.78 & 3.24 & 1.75 & 2.82 & 1.29 & -3.59 & \(<\)0.0001 \\ 41 & open\_fireplace, coffee\_table & 31 & 80.65 & 15.11 & 2.03 & 0.14 & 2.12 & 0.00 & -7.15 & \(<\)0.0001 \\ 43 & central\_reservation & 40 & 97.50 & 85.42 & 7.43 & 3.71 & 8.08 & 3.60 & -5.94 & \(<\)0.0001 \\ 48 & road & 42 & 100.00 & 74.46 & 6.15 & 2.68 & 6.65 & 2.30 & -7.78 & \(<\)0.0001 \\ 49 & footboard, chain & 32 & 84.38 & 66.41 & 2.63 & 1.67 & 2.30 & 1.17 & -2.58 & 0.0049 \\ \hline 51 & road, car & 21 & 100.00 & 47.65 & 5.32 & 1.52 & 5.62 & 0.00 & -6.03 & \(<\)0.00001 \\ 54 & skyscraper & 39 & 100.00 & 71.78 & 4.14 & 1.61 & 4.08 & 1.12 & -7.60 & \(<\)0.00001 \\ 56 & flusher, soap\_dish & 53 & 92.45 & 64.29 & 3.47 & 1.48 & 3.08 & 0.86 & -6.47 & \(<\)0.00001 \\ 57 & shower\_stall, screen\_door & 34 & 97.06 & 32.31 & 2.60 & 0.61 & 2.53 & 0.00 & -7.55 & \(<\)0.00001 \\ 63 & edifice, skyscraper & 45 & 88.89 & 48.38 & 2.41 & 0.83 & 2.36 & 0.00 & -6.73 & \(<\)0.00001 \\ \hline \end{tabular} \end{table} Table 3: Evaluation details as discussed in Section 4. Images: Number of images used for evaluation. # Activations: (target(et)): Percentage of target images activating the neuron (i.e., activation at least 80% of this neuron’s activation maximum); (non-t): Same for all other images used in the evaluation. Mean/Median (targ(et)/non-t(arget)): Mean/median activation value for target and non-target images, respectively. to compile corresponding background knowledge for Concept Induction analysis, together with appropriate annotations of images, and we assume that results can be strengthened by making use of a suitably designed knowledge base. It should also be noted that the background knowledge (and mappings) are not tightly curated, but - because of scale - their generation was based on heuristics, and thus contains some imperfections. More tightly quality controlled background knowledge should further improve our results. 
Background knowledge bases that make more sophisticated use of description logic axiomatization (together with the DL-Learner Concept Induction system [10] or new heuristics that would need to be developed for use at large scale) should also strengthen the results. Regarding label hypothesis generation (Section 3.3), our use of 80% and 20% of the maximum activation value for each neuron as cut-offs for selecting the images that go into the Concept Induction analysis can likely be refined; these cut-offs were mostly selected ad hoc. The use of the coverage score to select the top-ranking Concept Induction system response could be replaced by other measures such as the f-measure. We have also, so far, ignored lower ranked responses by the Concept Induction system, although often their coverage scores are very close to that of the top ranked response. Exploring ways to leverage the top-n ranked responses should lead to ways to improve (i.e., increase) the target vs. non-target activation gap. Further refinement may be possible by also taking the values in the Non-Target % column into consideration, or incorporating statistical analysis at this stage as well. ## 6 Conclusion and Future Work We have demonstrated that our approach using Concept Induction and large-scale background knowledge leads to meaningful labeling of hidden neuron activations, as confirmed by our statistical analysis. To the best of our knowledge, this approach is new, and in particular the use of large-scale background knowledge for this purpose - which means that label categories are not restricted to a few pre-selected terms - has not been explored before. A major direction for future work is analyzing activations of neuron ensembles rather than single neurons - intuitively, information would often be distributed over several simultaneously activated neurons. Scale is a major obstacle to this type of investigation, as even with only, say, 64 hidden neurons in a layer, there are already about \(2^{64}\) possible neuron ensembles that could be investigated, i.e., brute-force analysis methods are not feasible in most contexts, and better ways to navigate this search space will have to be found. Possible refinements are to combine the neurons that activate for semantically related labels (e.g., neurons 0, 22, 26, 54, and 63 in Table 4) and/or take the top-n ranked responses from the Concept Induction system into account. Eventually, our line of work aims at comprehensive and conclusive hidden layer analysis for deep learning systems, so that, after analysis, it is possible to "read off" from the activations (some of) the implicit features of the input that the network has detected, thus opening up avenues to really explaining the system's input-output behavior.

Figure 2: Examples of some Google images used: target images (“cross_walk”) that did not activate the neuron; non-target images from labels like “central_reservation,” “road and car,” and “fire_hydrant” that activated the neuron.

Acknowledgments. This research has been supported by the National Science Foundation under Grant No. 2033521.
2310.10211
GEVO-ML: Optimizing Machine Learning Code with Evolutionary Computation
Parallel accelerators, such as GPUs, are key enablers for large-scale Machine Learning (ML) applications. However, ML model developers often lack detailed knowledge of the underlying system architectures, while system programmers usually do not have a high-level understanding of the ML model that runs on the specific system. To mitigate this gap between two relevant aspects of domain knowledge, this paper proposes GEVO-ML, a tool for automatically discovering optimization opportunities and tuning the performance of ML kernels, where the model and training/prediction processes are uniformly represented in a single intermediate language, the Multi-Level Intermediate Representation (MLIR). GEVO-ML uses multi-objective evolutionary search to find edits (mutations) to MLIR code that ultimately runs on GPUs, improving performance on desired criteria while retaining required functionality. We demonstrate GEVO-ML on two different ML workloads for both model training and prediction. GEVO-ML finds significant Pareto improvements for these models, achieving 90.43% performance improvement when model accuracy is relaxed by 2%, from 91.2% to 89.3%. For the training workloads, GEVO-ML finds a 4.88% improvement in model accuracy, from 91% to 96%, without sacrificing training or testing speed. Our analysis of key GEVO-ML mutations reveals diverse code modifications which, while they might be foreign to human developers, achieve effects similar to how human developers improve model design, for example, by changing learning rates or pruning non-essential layer parameters.
Jhe-Yu Liou, Stephanie Forrest, Carole-Jean Wu
2023-10-16T09:24:20Z
http://arxiv.org/abs/2310.10211v1
# GEVO-ML: Optimizing Machine Learning Code with Evolutionary Computation ###### Abstract. Parallel accelerators, such as GPUs, are key enablers for large-scale Machine Learning (ML) applications. However, ML model developers often lack detailed knowledge of the underlying system architectures, while system programmers usually do not have a high-level understanding of the ML model that runs on the specific system. To mitigate this gap between two relevant aspects of domain knowledge, this paper proposes GEVO-ML, a tool for automatically discovering optimization opportunities and tuning the performance of ML kernels, where the model and training/prediction processes are uniformly represented in a single intermediate language, the Multi-Level Intermediate Representation (MLIR). GEVO-ML uses multi-objective evolutionary search to find edits (mutations) to MLIR code that ultimately runs on GPUs, improving performance on desired criteria while retaining required functionality. We demonstrate GEVO-ML on two different ML workloads for both model training and prediction. GEVO-ML finds significant Pareto improvements for these models, achieving 90.43% performance improvement when model accuracy is relaxed by 2%, from 91.2% to 89.3%. For the training workloads, GEVO-ML finds a 4.88% improvement in model accuracy, from 91% to 96%, without sacrificing training or testing speed. Our analysis of key GEVO-ML mutations reveals diverse code modifications which, while they might be foreign to human developers, achieve effects similar to how human developers improve model design, for example, by changing learning rates or pruning non-essential layer parameters. Genetic Improvement, Multi-objective Evolutionary Computation, Deep Neural Networks
GEVO, however, has several limitations in this domain. For instance, it can only optimize certain model layers or operations--those that can be compiled into LLVM-IR. In many ML frameworks most operations are implemented to invoke the device vendor library where the source code and LLVM-IR are not available. This restriction drastically limits which part of a neural network model can be tuned by GEVO. Further, without access to the high-level neural network model architecture, GEVO could discover only local optimizations within a single neural network operation. This paper presents GEVO-ML, an EC approach for optimizing ML workloads, which addresses the aforementioned issues. First proposed in (Zhu et al., 2017), GEVO-ML optimizes ML models expressed in the Multi-Level Intermediate Representation (MLIR), which represents the entire model in a single representation. Moving to the MLIR representation required designing new mutation operators and overcoming significant engineering challenges. Our evaluation shows that GEVO-ML finds optimizations both to the ML model and to the low-level code implementation. To summarize, the key contributions of this paper are: * We present GEVO-ML--an EC-based tool for finding cross-layer optimizations for machine learning workloads that are expressed in the MLIR format. GEVO-ML generalizes across ML models, MLIR dialects, and underlying system architectures. It finds Pareto solutions for different tradeoffs of model accuracy and runtime and identifies optimizations that are tailored to the particular workloads (Section 4). * With a novel mutation operator that resizes tensor variables, GEVO-ML uncovers optimization opportunities, ranging from model architectures (e.g., by removing unneeded network layers) to low-level implementation inefficiencies, by leveraging nuanced interactions across abstraction layers. We follow up the evaluation of GEVO-ML with in-depth code analysis (Sections 6.1 and 6.2). 
* We perform a detailed experimental evaluation to assess the performance of GEVO-ML, using two different ML models: MobileNet on the CIFAR10 dataset (Krizhevsky et al., 2014) and a two-layer fully-connected neural network on the MNIST (Krizhevsky et al., 2014) dataset. For the model prediction task on MobileNet, GEVO-ML finds optimizations that achieve 90.43% performance improvement at a cost of 2% model accuracy. For the model training task on the two-layer fully-connected neural network, GEVO-ML finds optimizations that improve model accuracy by 5% without changing the runtime performance (Section 6). We plan to open source GEVO-ML _URL-elided-for-blind-review_ upon paper acceptance to advance the field with open and reproducible science. ## 2. Related Work Automating the construction and optimization of machine learning models, known as AutoML, is a growing body of research. An array of search and optimization methods in this domain includes evolutionary computation, reinforcement learning, and superoptimization. Using evolutionary computation (EC) to improve ML workloads dates back to 1989, when Montana and Davis proposed using EC to train a neural network (Motton et al., 2017). The most established and commonly-used approach in this domain is NEAT, first proposed by Stanley et al. in 2002 (Stanley et al., 2002), which simultaneously learns the connection topology and weights for each neuron. Since then, many papers have expanded the NEAT approach to operate on larger networks and more complex tasks (Stanley et al., 2002; Stanley et al., 2002; Stanley et al., 2002; Stanley et al., 2003). More recently, convolution neural networks (CNNs) have achieved extraordinary performance in image classification tasks by providing additional convolution layers as filters. These layers are used to identify relevant spatial patterns in images so the number of features can be reduced before being fed into a traditional neural network. Many approaches for identifying performant CNN architectures (topologies), including applying reinforcement learning, have been proposed (Krizhevsky et al., 2014; Ba et al., 2015; Ma et al., 2016; Li et al., 2017; Li et al., 2018; Li et al., 2019; Li et al., 2019), outperforming manually designed architectures on several tasks. Similar to NEAT, Real et al. proposed using EC to design CNNs in a limited search space of convolution layers composed of common arithmetic operations (Shen et al., 2016). This work achieves state-of-the-art performance classifying the ImageNet dataset compared to other network architecture searches, which use random search and reinforcement learning. NSGA-Net (Li et al., 2017) complements the work of Real et al. by seeking to improve model accuracy as well as decrease model complexity, using a well-known multi-objective selection method in EC, NSGA-II (Li et al., 2019). The aforementioned prior works with EC as the search method have one thing in common--they rely on a customized binary encoding, usually corresponding to the connection of selected, common operations used in CNN architectures. The encoding serves as the individual representation for EC to search. At a lower level of the system stack, Liou et al. proposed GEVO (Li et al., 2017), which searches the implementation of common neural network operations, in particular the stochastic gradient descent operation, in the form of LLVM intermediate representation. 
While GEVO targets low-level code, the result implies that the discovered optimization has a high-level intention similar to weight pruning techniques. Our work, GEVO-ML, based on GEVO, extends the individual representation to the entire neural network model, through the multi-level intermediate representation. The most comprehensive work that is comparable to this paper is AutoML-Zero (Shen et al., 2016). AutoML-Zero uses fundamental mathematical operations as building blocks to compose basic ML tasks: training and prediction. Starting almost from scratch, AutoML-Zero utilizes EC search and eventually rediscovers an algorithm similar to stochastic gradient descent for the purpose of training. The results showcase that EC search within a generic framework can discover human knowledge with minimal human intervention and restriction. While the result is insightful and impressive, searching from scratch is extremely resource hungry: around 50,000 CPU days (10,000 processors for 5 days) are required. GEVO-ML, in contrast, searches starting from an existing, established algorithm or neural network model and looks for opportunities for performance improvement. While the fundamental designs of AutoML-Zero and GEVO-ML are alike, GEVO-ML leverages a more standard, widely shared representation from compiler experts instead of customized and selected operations. As a result, GEVO-ML is easier to deploy in a production-ready environment and can perform code optimization search for machine learning models described in any machine learning framework, running on a wide array of hardware back-ends, including GPUs. As introduced above, with search on top of a code representation, our work and relevant prior works also relate to program synthesis, where superoptimization provides an alternative search process. Unlike EC, which relies on pre-defined test cases for verification, superoptimization transforms a given program into a boolean equation and searches for a program rewrite whose semantic equivalence is guaranteed. Thus, test cases are often not required for validation. Jia et al. recently proposed TASO (Jia et al., 2019), which uses superoptimization methods to optimize a computational graph of a deep neural network. TASO enumerates all possible combinations of operator implementations and selects the graph implementation that minimizes runtime. A SAT solver is used to ensure that the original graph's functionality is preserved. Although promising, this approach currently does not scale to graphs comprising more than four operators. ## 3. Background GEVO-ML's design targets the Multi-Level Intermediate Representation (MLIR) for Deep Neural Networks (DNNs). This section discusses relevant representations of MLIR and reviews how MLIR fits into the TensorFlow deep learning framework. ### LLVM Intermediate Representation The core of the LLVM compiler is its intermediate representation, LLVM-IR (Li et al., 2017). LLVM-IR is a strongly-typed, abstract assembly language that is target-independent. The LLVM compiler front-end first compiles high-level languages like C/C++ into this IR and applies code optimization without considering target device dependence. LLVM-IR uses Static Single Assignment (SSA) with infinite register allocation to enable many modern compiler optimizations, including data flow and variable reachability analysis, dead code elimination, and other optimizations we expect from modern compilers. 
Many projects built on top of LLVM use both the flexible extensibility of SSA and the surrounding LLVM compiler infrastructure. Relevant examples include: Nvidia NVVM (Nvidia, 2018), which is an extension to LLVM-IR for their GPU code (CUDA) compilation; Glow (Zhou et al., 2018), which is a compilation framework for PyTorch DNN models; and GEVO, which is built on top of LLVM-IR and Nvidia NVVM. As the number of LLVM-IR extensions for DNNs grows, LLVM developers realized that similar functionality was being developed repeatedly for different extensions with similar domains. MLIR attempts to unify these efforts. ### Multi-Level Intermediate Representation (MLIR) Mature deep learning frameworks, such as TensorFlow, allow programmers to conveniently construct a deep neural network as a sequence of high-level operations such as convolution and fully-connected layers. The model is then translated into the intermediate representation and further optimized by the framework compiler. Eventually, the instructions are mapped to a device, like a GPU, which executes the model. Many compiler optimizations can be performed at each step of this multi-level compilation process. These optimizations are applied to abstraction layers across the different frameworks, compilers, and hardware systems. There are many redundancies in this process. For example, within one framework, the same optimization, such as dead code elimination, can be applied repeatedly across different abstraction layers. Another example is that, when spanning different frameworks, DNN optimizations are essentially linear algebra domain optimizations, and they are repeatedly reinvented in many established frameworks, including Intel's nGraph (Hung et al., 2017) and image processing frameworks such as Halide (Han et al., 2017). To address these redundancies, Lattner et al. proposed MLIR (Li et al., 2017) to enable developers to represent different abstraction levels in a customized operation or instruction set under a unified compiler infrastructure. The centerpiece of MLIR is dialects. An MLIR dialect is a developer-defined and customized operation set (operations in MLIR are similar to instructions in a low-level representation). Despite customization, all dialects follow the same SSA rules and operation field format, which allows code analysis to be unified across different dialects.

Figure 1. An MLIR code snippet from a neural network with two fully-connected layers, written in TensorFlow. The high level code is presented on the left while the translated MLIR code is shown in the HLO dialect on the right. The operand type and the attribute in each MLIR operation are omitted for clarity.

MLIR maintains a list of core or contributed dialects. The list is extensive, ranging from generic/low-level dialects (e.g., LLVM-IR/NVVM) to high-level dialects like linear algebra or tensor operations. Different dialects can be mixed in MLIR. To summarize, MLIR is a compiler ecosystem that encourages sharing and out-of-the-box transformations. ### MLIR in TensorFlow To date, MLIR has been adopted only by TensorFlow, and Google is leading the development of both platforms. As background for GEVO-ML, we next describe briefly how a DNN model is processed and represented in TensorFlow with respect to MLIR. TensorFlow's compiler is called XLA (Song et al., 2019). It compiles a DNN model using Google's IR, called High-level Operation (HLO). 
XLA performs target-independent optimization on HLO before further translating it into CPU code via LLVM-IR, Nvidia GPU code via NVVM, or Google TPU code via a private, unpublished IR. All of the above representations, HLO, LLVM, NVVM, and other formats, can be expressed as an MLIR dialect. Figure 1 shows an example of how MLIR represents a TensorFlow model under the HLO dialect. GEVO-ML is designed to interact with MLIR in the HLO dialect, mainly because this dialect is currently the most stable and has complete and precise documentation. However, GEVO-ML's interface can easily be expanded to other MLIR dialects as they become available. ## 4. GEVO-ML Design In this paper, we propose GEVO-ML--an evolutionary computation tool for automatically searching for DNN model architectures and execution optimizations, focusing on MLIR. GEVO-ML takes an MLIR program, a fitness function to optimize, and user-supplied datasets as inputs. The datasets serve as test cases for GEVO-ML. GEVO-ML seeks to maximize the fitness function by evolving and evaluating mutated program variants. As mentioned earlier, we demonstrate GEVO-ML for the TensorFlow HLO dialect in MLIR, as shown in Figure 2.

Figure 2. GEVO-ML execution flow in the context of a TensorFlow model running under Google's IREE environment.

The initial population is formed by making copies and applying random mutations to the original MLIR program. By default, three mutations are applied to each individual in the initial generation. GEVO-ML uses the Non-dominated Sorting Genetic Algorithm (NSGA-II) (Krishnan et al., 2017), extending the DEAP implementation. Each new generation of individuals is formed by ranking them according to the objectives, recombining individuals, applying mutation, comparing the new variants to a set of elites retained from the previous generation, and finally selecting the next generation. The next few subsections describe how we implemented these search procedures for MLIR optimization. ### Mutation GEVO-ML uses two mutation operators, each of which modifies an MLIR instruction: Copy (copy an MLIR operation from one location in the program to another location); Delete (delete an MLIR operation). SSA enforces the requirement that the value of a variable can only be assigned once without being further modified. This makes the use-def chain explicit. Mutations are highly likely to create invalid programs by violating this restriction and breaking variables in the use-def chain. GEVO-ML repairs the use-def chain by randomly replacing invalid variable uses introduced by the mutation with other valid variables of the same type. In the HLO dialect, most variables are of type tensor, but operations tend to generate uniquely sized tensors, and tensors of different sizes are treated as different types. To repair the mutation, GEVO-ML shrinks or expands the selected tensor variable by dropping values from the tensor's edges or padding the tensor with the value 1. Figure 3 gives an example of a tensor mutation, which shrinks a tensor from size 3x4x4 to 2x2. The mutation operator selects one mutation type randomly (delete or copy) and applies the mutation to generate a new MLIR variant (with repairs as necessary). After each mutation is applied, GEVO-ML immediately evaluates the edit against all test cases. If it fails, the mutation operator selects another mutation until it finds a valid MLIR variant. ### Crossover For crossover, GEVO-ML uses a _patch representation_ in which an individual is represented as a list of edits to the original program. 
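Before continuing with crossover, the tensor-resizing repair described in the Mutation subsection above can be illustrated with a toy NumPy sketch. The real repair operates on HLO-dialect MLIR tensors and emits the corresponding slice/pad operations; the code below only mimics the effect on values, and all names are ours.

```python
import numpy as np

def resize_tensor(t, target_shape):
    """Toy illustration of the repair: shrink a tensor by dropping values from
    its edges, or expand it by padding with the value 1, until it matches the
    shape (type) that a use site expects, e.g., 3x4x4 -> 1x2x2.
    """
    t = np.asarray(t)
    while t.ndim < len(target_shape):   # add leading axes when the rank is too low
        t = t[np.newaxis, ...]
    while t.ndim > len(target_shape):   # drop leading axes when the rank is too high
        t = t[0]
    for axis, (cur, tgt) in enumerate(zip(t.shape, target_shape)):
        if cur > tgt:                   # shrink: keep a leading slice, drop the edge
            t = np.take(t, np.arange(tgt), axis=axis)
        elif cur < tgt:                 # expand: pad the edge with ones
            pad = [(0, 0)] * t.ndim
            pad[axis] = (0, tgt - cur)
            t = np.pad(t, pad, constant_values=1)
    return t
```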
We adopt the patch representation to maximize the chance of crossover producing a valid program. Here a program is a deep neural network. An alternative would be to recombine instructions from two different individuals, i.e., two neural networks. However, it is highly likely that recombining two random program slices will require many repairs to create a valid individual. GEVO-ML uses one-point messy crossover, which combines shuffle (Gevo and McKeown, 2015) and variable-length (Krishnan et al., 2016) crossover operations. GEVO-ML begins with two randomly selected individuals, concatenates the two lists of mutations (edits) in the patch representation; shuffles the sequence; and then randomly selects a location to cut the list back into two. GEVO-ML then reapplies each patch in sequence to the original GPU kernel and generates two new individuals. Although unusual, this strategy produces a wide diversity of recombinations from a minimal number of mutations. Mutations are relatively expensive in GEVO-ML due to the repair process. Each new individual is then evaluated to test if the new combination of edits is valid, and we find that about 80% of the time they are. If not, GEVO-ML repeats the process until it finds a successful recombination. ### Fitness Evaluation Individuals are evaluated according to the fitness objectives, e.g., runtime and model error. Most earlier genetic improvement approaches require that an individual passes all of its input/output test cases exactly, or within a pre-specified error tolerance. GEVO-ML requires only that individuals execute successfully, and minimizes output error as one of the optimization objectives. This approach succeeds because ML applications can usually tolerate errors in the output, often in a vector comprising the likelihood of each prediction category, as long as model accuracy can still be evaluated. The fitness objectives are to minimize MLIR program execution time and, at the same time, to reduce the model error, namely, \(argmin(time,error)\). Based on the specific ML task, fitness is evaluated either by retraining the model on a given dataset and recording the training time and model error (_training workloads_), or simply by passing the dataset into the pre-trained model and recording the inference time and prediction error (_prediction or inference workloads_). At the end of the search, the fittest individual is evaluated against a separate dataset unseen by GEVO-ML, to verify that the recorded time and error are consistent. ### Selection As in NSGA-II, GEVO-ML selects individuals according to multi-objective fitness criteria and reports the Pareto frontier of individuals that best satisfy the two objectives. GEVO-ML retains the top 16 individuals at time \(t\) and copies them unchanged to the population at time \(t+1\). It then chooses the remainder of the population using tournament selection. ## 5. Experimental Setup GEVO-ML is developed in a modular fashion, with the main search framework implemented in Python using DEAP (Dwork et al., 2017), which interacts with a separate C++ program. The C++ program handles the MLIR parsing task and implements the MLIR mutation operations described in Section 4. At the time of this work, the new, modular TensorFlow runtime system (Zheng et al., 2017), which is intended to take MLIR programs as input, is under development. It does not yet fully support end-to-end TensorFlow model execution. 
That is, the current TensorFlow compiler is not modular or flexible enough for third-party programs to intercept the internal MLIR program, modify it, and then reinsert it for execution. However, the Google Intermediate Representation Execution Environment (IREE) (Krishnan et al., 2016), an experimental project, can execute TensorFlow models on edge/mobile devices and uses MLIR. Thus, we build GEVO-ML for IREE, as IREE has a functional MLIR execution runtime that can execute MLIR programs independent of TensorFlow. We evaluate GEVO-ML on two neural network models, which are compatible with the IREE requirements. Despite its ability to execute external MLIR programs, IREE is under development and lacks support for many TensorFlow operations. Note that the MLIR IREE runtime is less performant than the native TensorFlow framework, which we expect to improve in the near future.

With these considerations in mind, we selected MobileNet (Krishnan et al., 2016) to evaluate GEVO-ML's ability to optimize the model prediction task by minimizing the execution time of the forward pass and maximizing model accuracy. MobileNet is a highly efficient convolutional neural network architecture. The weights of the model are retrieved from a pre-trained TensorFlow model. For model training, we chose a simple neural network with two fully-connected layers (denoted as 2fcNet in the rest of the paper). GEVO-ML is able to freely optimize both the model forward pass and the back-propagation pass. Stochastic gradient descent (SGD) is used as the training operation for this model. Currently, SGD on the fully-connected layer is the only functional model training workload supported by the IREE runtime. Training MobileNet requires SGD on the convolution layer, which is not yet supported. However, it is possible for GEVO-ML to modify the SGD into other forms during the search process.

Figure 3. An example of GEVO-ML mutation: Shrink a tensor from size 3x4x4 to 1x2x2. The number of transitions shown in the figure corresponds to the number of MLIR operations required to reshape this tensor.

We allocated a 48 hour wall-clock budget for GEVO-ML to optimize each model on an Nvidia P100 GPU. As mentioned earlier, the fitness function rewards both runtime and model error. CIFAR10 (Krizhevsky et al., 2012) is used for model prediction in MobileNet, where we use only the training data set (50,000 samples) to calculate model accuracy during the GEVO-ML run. The testing dataset (10,000 samples) is used post hoc to evaluate and verify model quality and execution time. Similarly, the MNIST (Krizhevsky et al., 2012) dataset (split into 60,000 training and 10,000 testing samples) is used as the neural network training workload in 2fcNet due to the longer execution time and heavier computation requirement in model training.

## 6. Experimental Results and Analysis

Figure 4 shows the Pareto frontier results on the last generation of the GEVO-ML search for each model. Overall, GEVO-ML improves both the execution time performance and model quality--it improves the execution time by 90.43% (from 39.59 to 20.79 seconds, a 1.90x speedup) for the prediction task (MobileNet with CIFAR10) and improves model accuracy by 4.88%, with the model error dropping from 8.62% to 3.74%, for the training task (2fcNet with MNIST). For both models, we evaluate model accuracy using the training dataset and examine the testing dataset for the model variants GEVO-ML discovered.
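For concreteness, the 2fcNet training workload described above corresponds roughly to the following TensorFlow program before it is lowered to MLIR/HLO and executed through IREE. The hidden-layer width and activation functions are assumptions; the two fully-connected layers, SGD optimizer, MNIST data, batch size of 32, and learning rate of 0.01 follow the settings reported in this section and in Section 6.2.

```python
# Rough sketch of the 2fcNet training workload (not the exact model used here).
# Hidden width 128 and the activations are assumptions; the rest follows the
# setup described in the text: two fully-connected layers, SGD, MNIST,
# batch size 32, learning rate 0.01.
import tensorflow as tf

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation="relu"),    # fully-connected layer 1
    tf.keras.layers.Dense(10, activation="softmax"),  # fully-connected layer 2
])

model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.01),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# GEVO-ML measures training time and training error during the search and only
# checks the held-out test split after the search has finished.
model.fit(x_train, y_train, batch_size=32, epochs=1)
model.evaluate(x_test, y_test)
```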
In MobileNet (Prediction), no testing accuracy improvement was observed, and if we can tolerate a 2% reduction in testing accuracy, then we achieve a 90% execution time performance improvement. In 2fcNet (Training), we obtain a 5% training accuracy improvement, which is preserved when we evaluate on the testing data. In the following subsections, we examine how these performance/accuracy improvements were achieved.

Table 1. Model Parameters

| | MobileNet | 2fcNet |
| --- | --- | --- |
| Layer composition | **17x** Depthwise-Convolution, **35x** Standard-Convolution, **52x** Batch Norm., **1x** Average Pool, **1x** Fully-connected Layer | **2x** Fully-connected Layer |

Figure 4. GEVO-ML result in Runtime/Model Error Pareto Front for (a) MobileNet - Prediction and (b) 2fcNet - Training. The orange diamond shows the original model whereas blue dots are the modifications of the original model generated by GEVO-ML.

### Mutation analysis: Model Prediction in MobileNet

In the MobileNet experiments, we found three key mutations that contributed to improved performance/accuracy trade-offs:

* Replacing the \(\gamma\) value in one Batch Normalization (BN) layer with the \(\gamma\) value in its prior BN layer
* Removing the bias term from the last fully connected layer
* Removing the last convolution layer

It is challenging to interpret exactly how these mutations reduce the model's accuracy, but the significant 90% performance improvement can be explained to some extent. The three key mutations are epistatic and work together synergistically to reduce runtime. When considered individually, none of the mutations has a significant impact on performance. For example, one mutation removes the last convolution layer, which one might expect would have a large impact. However, there are 52 convolution operations in total, and the last layer certainly does not account for half of the runtime overhead. Taken alone, this mutation does not have a large performance effect, but when applied in conjunction with the other two mutations, the 90% execution time improvement is obtained. Although we were unable to explain exactly how these mutations work, we speculate that the mutation combinations influence the multiple transformation passes and introduce low-level optimizations in IREE. This example shows how GEVO-ML can discover implicit, non-obvious optimization opportunities in these highly complex software systems and provide the code variant as an option for programmers to investigate.

### Mutation analysis: Model training in 2fcNet

In contrast with the previous example, a 4.88% accuracy improvement was achieved with a single mutation in the model training workload. At a high level, the primary impact of the mutation is to increase the magnitude of the gradient, leading the model to update its weights more aggressively. Figure 5 shows the GEVO-ML mutation (highlighted in red), inside the process that updates the model weights and bias. In mini-batch SGD, the model weights are updated using the gradient from each example in one 'mini-batch', which is multiplied by the learning rate. The gradient of a mini-batch is first retrieved as the difference between the true label and the model prediction value from the forward pass (Lines 1-6). To average out the gradient in the mini-batch, the accumulation (the reduce function in Lines 11-14) and the divide operation (the multiplication with 1/32 in Line 10, where 32 is the batch size) are integrated into the calculation.
The mean of the gradient is then multiplied by the learning rate and used to change the weight and bias of the model (Lines 15-18). The single GEVO-ML mutation is shown in Line 9. GEVO-ML copies a broadcast operation from another location in the program, connects %label as its input, and inserts the output of the newly copied operation into the next operation (Line 10), replacing the value 0.03125 (seen at the top of the figure in Line 7). However, since the copied broadcast operation has the input type signature (tensor<32>), which is mismatched with the intended input variable %label (tensor<32x10>), GEVO-ML performs the repair process (Section 4.1) with two additional operations that modify the tensor into a compatible shape (pad and slice operations in Lines 7-8). After the tensor containing labels is reshaped, it is filled mostly with the value '1', and only one label vector remains in the center of the tensor. The average value within the reshaped tensor is certainly larger than the value 0.03125 used in the unmodified model, resulting in a larger gradient value in general. Thus, we infer that the accuracy improvement is achieved with more aggressive training updates through larger gradients. Although most end users cannot directly control or scale the gradient, as GEVO-ML did, increasing the learning rate, which is applied in Line 15, can also enlarge the gradient and achieve a similar effect. We verified this assumption by increasing the learning rate from 0.01 to 0.3 and achieved a comparable accuracy improvement.

## 7. Discussion, Challenge, and Future Opportunities

In GEVO-ML, mutations either copy or delete existing HLO operations and then connect variables. However, many HLO operations provide an additional mode in their attribute field, which is statically assigned, e.g., the kernel size of a convolution operation. This is a tempting target as an additional mutation operator. We did not implement this particular operator in GEVO-ML because it would expand the search space intractably. Prior works in genetic improvement of software have sometimes supported mutations of constant numbers indirectly (Han et al., 2017; Wang et al., 2018). A typical approach is to rely on existing numbers in the program, e.g., stored as variables, and allow mutation to manipulate them through arithmetic instructions. This approach is currently not feasible in MLIR because the attribute field can accept only predefined constant values, and no dynamic assignment through in-program variables is allowed. Thus, we leave it as future work for GEVO-ML.

GEVO-ML contributes new mutation operators that change the size of tensor variables, the primary variable type in the HLO dialect. Our motivation for this mutation operator was to enhance GEVO-ML's ability to mutate HLO programs successfully. We acknowledge that changing the tensor type by dropping or padding values could change program semantics dramatically. It is an interesting future avenue to understand the effect of tensor size mutation on the behavior of ML models more extensively. We also expect that, when the HLO dialect is lowered onto stable dialects such as the linear algebra or affine dialects, this issue will be mitigated, because tensor operations will then be decomposed into loops and fundamental arithmetic operations on integer or floating point numbers.

Another limitation on GEVO-ML's implementation comes from the immature MLIR ecosystem.
Only TensorFlow has actively adopted MLIR, although Google has some other internal implementations. Moreover, the TensorFlow modular runtime system is still under development. We expect this situation to change quickly now that MLIR is upstream in the LLVM family, which means that LLVM-IR can be treated as a subset of MLIR. We expect GEVO-ML to reveal more unseen optimization opportunities as MLIR continues to expand its reach into many domains, including more fundamental and general usage and additional back-end device support, with integration beyond deep neural networks. Finally, given the drastically rising cost of machine learning (Beng et al., 2015; Chen et al., 2016; Chen et al., 2017; Chen et al., 2018), we believe automatic optimization tools, such as GEVO-ML, are instrumental in accelerating the performance and efficiency optimization of deep learning.

Figure 5. GEVO-ML optimization of the training workload. The code snippet shows the original MLIR code, which calculates a gradient based on the batch size. The highlighted code (Lines 7-9 and part of Line 10) shows the mutation that led to the 4.88% training accuracy increase.

## 8. Conclusion

GEVO-ML is an automatic EC tool for cross-layer optimization of ML software. We demonstrate GEVO-ML running on a production-level ML framework using MLIR as its program representation. Although the demonstration uses a single MLIR dialect, GEVO-ML identifies optimizations for individual parameters (changing the learning rate) and for model architecture management (removing a convolution layer), and we speculate that it finds indirect low-level implementation optimizations by combining operation manipulations in subtle ways. As ML models continue to grow in size and complexity and ML infrastructures continue to mature, we hope that evolutionary computation, as illustrated with GEVO-ML, will play a key role in continuing to improve the design and execution characteristics of ML.
2301.06332
Discrete-velocity-direction models of BGK-type with minimum entropy: II. Weighted models
In this series of works, we develop a discrete-velocity-direction model (DVDM) with collisions of BGK-type for simulating gas flows, where the molecular motion is confined to some prescribed directions but the speed is still a continuous variable in each orientation. In this article, we introduce a weighted function in each orientation when recovering the macroscopic parameters. Moreover, the internal molecular degrees of freedom are considered. With this weighted DVDM, we develop three submodels by incorporating the discrete velocity method, the Gaussian-extended quadrature method of moments and the Hermite spectral method in each direction. These spatial-time submodels are novel multidimensional versions corresponding to the three approaches. Numerical tests with a series of 1-D and 2-D flow problems show the efficiency of the weighted DVDM.
Yihong Chen, Qian Huang, Wen-An Yong
2023-01-16T09:53:59Z
http://arxiv.org/abs/2301.06332v1
# Discrete-velocity-direction models of BGK-type with minimum entropy: II. Weighted models ###### Abstract In this series of works, we develop a discrete-velocity-direction model (DVDM) with collisions of BGK-type for simulating gas flows, where the molecular motion is confined to some prescribed directions but the speed is still a continuous variable in each orientation. In this article, we introduce a weighted function in each orientation when recovering the macroscopic parameters. Moreover, the internal molecular degrees of freedom are considered. With this weighted DVDM, we develop three submodels by incorporating the discrete velocity method, the Gaussian-extended quadrature method of moments and the Hermite spectral method in each direction. These spatial-time submodels are novel multidimensional versions corresponding to the three approaches. Numerical tests with a series of 1-D and 2-D flow problems show the efficiency of the weighted DVDM. Key words and phrases: BGK equation; minimum entropy principle; discrete-velocity model; extended quadrature method of moments; Hermite spectral method * Footnote *: Corresponding author
Here \(|\mathbf{U}|\) denotes the Euclidean length of the vector \(\mathbf{U}\).
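Concretely, with this notation the local equilibrium \(\mathcal{E}[f]\) of Eq. (2.2) is a Maxwellian; the expression below is written out for convenience in the form implied by the exponential representation of Eq. (2.4) and by the internal part \(\mathcal{E}_{in,m}\) quoted in Section 2.2:

\[\mathcal{E}[f](\mathbf{\xi},\mathbf{\zeta})=\frac{\rho}{(2\pi\theta)^{(D+L)/2}}\exp\left(-\frac{|\mathbf{\xi}-\mathbf{U}|^{2}+|\mathbf{\zeta}|^{2}}{2\theta}\right)=\frac{\rho}{(2\pi\theta)^{D/2}}\exp\left(-\frac{|\mathbf{\xi}-\mathbf{U}|^{2}}{2\theta}\right)\cdot\frac{1}{(2\pi\theta)^{L/2}}\exp\left(-\frac{|\mathbf{\zeta}|^{2}}{2\theta}\right),\]

so that \(\mathcal{E}[f]=\exp\left(\mathbf{\alpha}_{eq}\cdot\mathbf{m}(\mathbf{\xi},\mathbf{\zeta})\right)\) with \(\mathbf{\alpha}_{eq}\) and \(\mathbf{m}\) given in Eq. (2.4) below, and the factorization into a transport part and an internal part anticipates the variable-separating ansatz used for \(\mathcal{E}_{m}\) in Section 2.2.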
The classical fluid quantities including density \(\rho\), velocity \(\mathbf{U}\), energy \(E\), temperature \(\theta\) and pressure \(p\) are defined by \[\rho=\langle f\rangle,\quad\mathbf{U}=\frac{\langle\mathbf{\xi}f\rangle}{\rho}\in \mathbb{R}^{D},\quad E=\frac{|\mathbf{U}|^{2}+(D+L)\theta}{2}=\frac{1}{\rho}\left< \frac{|\mathbf{\xi}|^{2}+|\mathbf{\zeta}|^{2}}{2}f\right>,\quad p=\rho\theta, \tag{2.3}\] where the bracket \(\langle\cdot\rangle\) is defined as the integral \(\langle g(\mathbf{\xi},\mathbf{\zeta})\rangle=\int_{\mathbb{R}^{L}}\int_{\mathbb{R}^ {D}}g(\mathbf{\xi},\mathbf{\zeta})d\mathbf{\xi}d\mathbf{\zeta}\) for any reasonable \(g(\mathbf{\xi},\mathbf{\zeta})\). The equilibrium can be rewritten in a concise form \[\mathcal{E}[f]=\exp\left(\mathbf{\alpha}_{eq}\cdot\mathbf{m}(\mathbf{\xi},\mathbf{\zeta})\right)\] with \[\mathbf{\alpha}_{eq}=\left(\ln\frac{\rho}{(2\pi\theta)^{(D+L)/2}}-\frac{|\mathbf{U}| ^{2}}{2\theta},\ \frac{\mathbf{U}}{\theta},\ -\frac{1}{\theta}\right)^{T}\quad\text{and}\quad\mathbf{m}(\mathbf{\xi},\mathbf{\zeta})= \left(1,\ \mathbf{\xi},\ \frac{|\mathbf{\xi}|^{2}+|\mathbf{\zeta}|^{2}}{2}\right)^{T} \tag{2.4}\] both being \((D+2)\)-dimensional real vectors. This form enlightens us on the model development in later sections. The equilibrium distribution \(\mathcal{E}[f]\) satisfies two important properties. First, \(\mathcal{E}[f]\) reproduces the local macroscopic quantities in the same manner as \(f\): \[\langle\mathbf{m}(\mathbf{\xi},\mathbf{\zeta})\mathcal{E}[f]\rangle=\mathbf{\rho}:=(\rho,\ \rho\mathbf{U},\ \rho E )^{T}\in\mathbb{R}^{D+2}, \tag{2.5}\] and thus the BGK equation respects the conservation laws of mass, momentum, and energy. Then, given any \(\mathbf{\rho}\in\mathbb{R}^{D+2}\) with positive components \(\rho\) and \(E\), \(\mathcal{E}[f]\) is the unique non-negative solution that minimizes the following kinetic entropy \[H[f]=\langle f\ln f-f\rangle \tag{2.6}\] subject to the constraint \(\langle\mathbf{m}(\mathbf{\xi},\mathbf{\zeta})f\rangle=\mathbf{\rho}\). This property has been adapted in both a discrete-velocity model [31] and our previous discrete-velocity-direction model for the BGK equation without internal degrees of freedom [23]. ### Discrete-velocity-direction models A discrete-velocity-direction model (DVDM) based on the BGK equation has been proposed in our previous work [23]. Our aim here is to enhance the model and extend it to the case with internal molecular degrees of freedom. The DVDM assumes that the molecule transport is limited to \(N\) prescribed directions denoted by \(\{\mathbf{l}_{m}\}_{m=1}^{N}\) with each \(\mathbf{l}_{m}\) located on the unit sphere \(\mathbb{S}^{D-1}\), but the velocity magnitude \(\xi\in\mathbb{R}\) in each direction remains continuous. The directions are selected with the following two requirements. 1. \((\mathbf{l}_{1},\ldots,\mathbf{l}_{N})\in\mathbb{R}^{D\times N}\) is of rank \(D\) and therefore \(N\geq D\). 2. Each direction \(\mathbf{l}_{m}\) and its opposite \(-\mathbf{l}_{m}\) belong to \(S_{m}\subset\mathbb{S}^{D-1}\), where the \(S_{m}\)'s constitute a disjoint partition of the unit sphere \(\mathbb{S}^{D-1}=\bigcup_{m=1}^{N}S_{m}\) and each \(S_{m}\) has the same measure. The equal measure means that the directions are 'uniformly distributed'. 
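To see why this equal-measure requirement matters, and to anticipate the weight function \(|\xi|^{D-1}\) used in the weighted moments of Eq. (2.8) below, it is helpful to sketch the change to polar coordinates for \(D=2\):

\[\int_{\mathbb{R}^{2}}g(\mathbf{v})\,d\mathbf{v}=\sum_{m=1}^{N}\int_{S_{m}}\int_{0}^{\infty}g(r\mathbf{\omega})\,r\,dr\,d\mathbf{\omega}\;\approx\;\sum_{m=1}^{N}\frac{|S_{m}|}{2}\left(\int_{0}^{\infty}g(\xi\mathbf{l}_{m})\,\xi\,d\xi+\int_{0}^{\infty}g(-\xi\mathbf{l}_{m})\,\xi\,d\xi\right)=s\sum_{m=1}^{N}\int_{\mathbb{R}}g(\xi\mathbf{l}_{m})\,|\xi|\,d\xi,\]

where the approximation replaces \(g\) on each sector \(S_{m}\) by its values along the two representative directions \(\pm\mathbf{l}_{m}\in S_{m}\). The quadrature weight is the same \(s\) for every direction precisely because the \(S_{m}\) have equal measure, and the radial factor \(r=|\xi|\) is the origin of the weight \(|\xi|^{D-1}\) (for \(D=3\) the factor is \(r^{2}\)).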
For \(D=2\), such a partition on \(\mathbb{S}^{1}\) can be realized by setting \(\mathbf{l}_{m}=(\cos\gamma_{m},\sin\gamma_{m})\) and \(\gamma_{m}=\frac{(m-1)\pi}{N}\) or \(\frac{(2m-1)\pi}{2N}\), which will be adopted for all numerical tests in this paper. For \(D=3\), the algorithm in [28] can help to yield such a partition on \(\mathbb{S}^{2}\). Once the directions are selected, the distribution \(f=f(t,\mathbf{x},\mathbf{\xi},\mathbf{\zeta})\) is replaced by \(N\) distributions \(\left\{f_{m}(t,\mathbf{x},\xi,\mathbf{\zeta})\right\}_{m=1}^{N}\) with \(\xi\in\mathbb{R}\) and \(\mathbf{\zeta}\in\mathbb{R}^{L}\). The transport velocity for \(f_{m}=f_{m}(t,\mathbf{x},\xi,\mathbf{\zeta})\) is \(\xi\mathbf{l}_{m}\), and the governing equation for \(f_{m}\) becomes \[\partial_{t}f_{m}+\xi\mathbf{l}_{m}\cdot\nabla_{\mathbf{x}}f_{m}=\frac{1}{\tau}(\mathcal{E}_{m}-f_{m}),\quad m=1,\ldots,N, \tag{2.7}\] with the local equilibria \(\mathcal{E}_{m}=\mathcal{E}_{m}(t,\mathbf{x},\xi,\mathbf{\zeta})\) yet to be determined. For Eq. (2.7), we use the weight function \(|\xi|^{D-1}\) and define new fluid quantities \[\begin{split}\rho&=s\sum_{m=1}^{N}\int_{\mathbb{R}}\int_{\mathbb{R}^{L}}f_{m}|\xi|^{D-1}d\mathbf{\zeta}d\xi,\quad\rho\mathbf{U}=s\sum_{m=1}^{N}\int_{\mathbb{R}}\int_{\mathbb{R}^{L}}\xi\mathbf{l}_{m}f_{m}|\xi|^{D-1}d\mathbf{\zeta}d\xi,\\ \rho E&=s\sum_{m=1}^{N}\int_{\mathbb{R}}\int_{\mathbb{R}^{L}}\frac{\xi^{2}+|\mathbf{\zeta}|^{2}}{2}f_{m}|\xi|^{D-1}d\mathbf{\zeta}d\xi.\end{split} \tag{2.8}\] Here \(s\) is half of the measure of \(S_{m}\). **Remark 2.1**.: In contrast to our previous model in [23], the local equilibrium for the new model Eq. (2.7) will be evaluated at the fluid quantities just defined, which are computed with the weight function \(|\xi|^{D-1}\). This weight function is inspired by changing variables from Cartesian coordinates to polar or spherical coordinates. Its introduction is independent of the internal degrees of freedom. Our numerical tests show that this weight function is essential for correctly reconstructing the macroscopic quantities. As for the equilibrium states \(\{\mathcal{E}_{m}\}_{m=1}^{N}\) on the RHS of Eq. (2.7), we require that the following conservation property must be satisfied: \[s\sum_{m=1}^{N}\int_{\mathbb{R}}\int_{\mathbb{R}^{L}}\left(1,\xi\mathbf{l}_{m},\frac{\xi^{2}+|\mathbf{\zeta}|^{2}}{2}\right)\mathcal{E}_{m}|\xi|^{D-1}d\mathbf{\zeta}d\xi=(\rho,\rho\mathbf{U},\rho E). \tag{2.9}\] This can be viewed as a discrete-velocity-direction analogue of Eq. (2.5), while \(\rho\), \(\mathbf{U}\), and \(E\) are computed with the weighted integrals Eq. (2.8) based on \(f_{m}\). In this way, we can derive the classical Euler equations by multiplying \(1\), \(\xi\mathbf{l}_{m}\) and \(\frac{\xi^{2}+|\mathbf{\zeta}|^{2}}{2}\) on both sides of Eq. (2.7) and taking the weighted integrals; see details in [23]. Next we assume that the local equilibrium \(\mathcal{E}_{m}=\mathcal{E}_{m}(t,\mathbf{x},\xi,\mathbf{\zeta})\) has the variable-separating form \[\mathcal{E}_{m}(t,\mathbf{x},\xi,\mathbf{\zeta})=\mathcal{E}_{tr,m}(t,\mathbf{x},\xi)\mathcal{E}_{in,m}(t,\mathbf{x},\mathbf{\zeta}),\] which is consistent with the Maxwellian Eq. (2.2) of the BGK equation. The internal part \(\mathcal{E}_{in,m}\) is taken to be the same as that in Eq.
(2.2): \[\mathcal{E}_{in,m}(t,\mathbf{x},\mathbf{\zeta})=\frac{1}{\sqrt{(2\pi\theta)^{L}}}\exp \left(-\frac{|\mathbf{\zeta}|^{2}}{2\theta}\right).\] Notice that the equilibrium temperature \(\theta\) is \[\theta=\frac{2E-|\mathbf{U}|^{2}}{D+L} \tag{2.10}\] due to Eq. (2.3). Substituting such an \(\mathcal{E}_{in,m}\) into Eq. (2.9), we derive constraints for the transport part \(\mathcal{E}_{tr,m}(t,\mathbf{x},\xi)\): \[s\sum_{m=1}^{N}\int_{\mathbb{R}}\mathbf{m}_{m}\mathcal{E}_{tr,m}|\xi|^{D-1}d\xi= \left(\rho,\rho\mathbf{U},\rho\left(E-\frac{L}{2}\theta\right)\right)^{T}=:\mathbf{ \rho}_{tr}, \tag{2.11}\] where \(\mathbf{m}_{m}(\xi)=\left(1,\xi\mathbf{l}_{m},\xi^{2}/2\right)^{T}\in\mathbb{R}^{D+2}\). To determine the transport part, we refer to the minimum entropy property of the Maxwellian in Eq. (2.6) and require that \(\mathcal{E}_{tr,m}\) minimizes a discrete analogue of the entropy \[H\left[\{g_{m}\}_{m=1}^{N}\right]:=s\sum_{m=1}^{N}\int_{\mathbb{R}}\int_{ \mathbb{R}^{L}}(g_{m}\ln g_{m}-g_{m})|\xi|^{D-1}d\mathbf{\zeta}d\xi \tag{2.12}\] among all possible 1-D distributions \(\{g_{m}(\xi)\geq 0\}_{m=1}^{N}\) satisfying \[s\sum_{m=1}^{N}\int_{\mathbb{R}}\mathbf{m}_{m}g_{m}|\xi|^{D-1}d\xi=\mathbf{\rho}_{tr}.\] For the transport part, we have the following theorem which can be proved with the same argument as that of Theorem 2.1 in our previous work [23]. **Theorem 2.2**.: _Given \(\mathbf{\rho}_{tr}\in\mathbb{R}^{D+2}\) satisfying \(0<|\mathbf{\rho}_{tr}|<\infty\), if there exists \(\{g_{m}(\xi)\geq 0\}_{m=1}^{N}\) such that_ \[s\sum_{m=1}^{N}\int_{\mathbb{R}}\mathbf{m}_{m}g_{m}|\xi|^{D-1}d\xi=\mathbf{\rho}_{tr},\] _then the discrete kinetic entropy Eq. (2.12) has a unique minimizer \(\{\mathcal{E}_{tr,m}\}_{m=1}^{N}\). Moreover, the minimizer has the exponential form_ \[\mathcal{E}_{tr,m}=\exp(\mathbf{\alpha}\cdot\mathbf{m}_{m})\] _and \(\mathbf{\alpha}=(\alpha_{0},\hat{\mathbf{\alpha}},\alpha_{D+1})\in\mathbb{R}^{D+1} \times\mathbb{R}^{-}\) is the unique minimizer of the following convex function_ \[J(\mathbf{\alpha}):=s\sum_{m=1}^{N}\int_{\mathbb{R}}\exp\left(\mathbf{\alpha}\cdot\mathbf{ m}_{m}\right)|\xi|^{D-1}d\xi-\mathbf{\rho}_{tr}\cdot\mathbf{\alpha}. \tag{2.13}\] Thanks to this result, the computation of \(\mathcal{E}_{tr,m}\) only requires solving \(\mathbf{\alpha}\in\mathbb{R}^{D+2}\) by minimizing \(J(\mathbf{\alpha})\). This is particularly beneficial when a large number of directions are used, i.e., \(N\gg D+2\). We will present the algorithm for this optimization problem in later section, which is shown to be highly efficient. In some cases, it is convenient to rewrite \(\mathcal{E}_{tr,m}\) in the form of standard Gaussian distribution \[\mathcal{E}_{tr,m}=\exp(\mathbf{\alpha}\cdot\mathbf{m}_{m})=\frac{\rho_{m}}{\sqrt{2\pi \sigma^{2}}}\exp\left(-\frac{(\xi-u_{m})^{2}}{2\sigma^{2}}\right), \tag{2.14}\] and the parameters \(\rho_{m},\ u_{m}\), and \(\sigma^{2}\) are related to \(\mathbf{\alpha}=(\alpha_{0},\hat{\mathbf{\alpha}},\alpha_{D+1})\) as follows: \[\sigma^{2}=-\frac{1}{\alpha_{D+1}},\quad u_{m}=(\hat{\mathbf{\alpha}}\cdot\mathbf{l}_ {m})\,\sigma^{2},\quad\rho_{m}=\sqrt{2\pi\sigma^{2}}\exp\left(\alpha_{0}+\frac {u_{m}^{2}}{2\sigma^{2}}\right). \tag{2.15}\] **Remark 2.3**.: Due to the weighted integral in Eq. 
(2.8), \(\mathbf{\rho}_{tr}\) cannot be expressed, in general, by \(\rho_{m}\), \(u_{m}\) and \(\sigma^{2}\) with the simple algebraic relations \[\rho\neq s\sum_{m=1}^{N}\rho_{m},\quad\rho\mathbf{U}\neq s\sum_{m=1}^{N}\rho_{m}u _{m}\mathbf{l}_{m},\quad\rho(|\mathbf{U}|^{2}+D\theta)\neq s\sum_{m=1}^{N}\rho_{m}(u_{ m}^{2}+\sigma^{2}).\] ## 3. Spatial-time models Our BGK-DVDM model Eq. (2.7) contains continuous variables \(\xi\in\mathbb{R}\) and \(\mathbf{\zeta}\in\mathbb{R}^{L}\). In this section, we treat these variables to derive spatial-time models with only \(t\) and \(\mathbf{x}\) as continuous variables. As to \(\mathbf{\zeta}\), we define \[g_{m}(t,\mathbf{x},\xi)=\int_{\mathbb{R}^{L}}f_{m}(t,\mathbf{x},\xi,\mathbf{\zeta})d\mathbf{ \zeta},\quad h_{m}(t,\mathbf{x},\xi)=\int_{\mathbb{R}^{L}}|\mathbf{\zeta}|^{2}f_{m}(t, \mathbf{x},\xi,\mathbf{\zeta})d\mathbf{\zeta}\] and derive from Eq. (2.7): \[\begin{cases}\partial_{t}g_{m}+\xi\mathbf{l}_{m}\cdot\nabla_{\mathbf{x}}g_{m}=\frac{1 }{\tau}\left(\exp(\mathbf{\alpha}\cdot\mathbf{m}_{m})-g_{m}\right),\\ \partial_{t}h_{m}+\xi\mathbf{l}_{m}\cdot\nabla_{\mathbf{x}}h_{m}=\frac{1}{\tau}\left( L\theta\exp(\mathbf{\alpha}\cdot\mathbf{m}_{m})-h_{m}\right)\end{cases} \tag{3.1}\] for \(m=1,...,N\). Next we treat \(\xi\) with the following three methods: the discrete-velocity method (DVM) [31], the extended quadrature method of moment (EQMOM) [5], and the Hermite spectral method (HSM) [20]. ### Discrete-velocity model To derive this kind of model, we choose a positive integer \(M\), a positive real number \(\Delta\xi\), and a real number \(\xi_{0}\), which can vary for different directions. Set \(\xi_{mk}=k\Delta\xi+\xi_{0}\) for \(k=1,...,M\). Based on Eq. (3.1), the discrete-velocity model is \[\begin{cases}\partial_{t}g_{mk}+\xi_{mk}\mathbf{l}_{m}\cdot\nabla_{\mathbf{x}}g_{mk}= \frac{1}{\tau}(g_{mk}^{eq}-g_{mk}),\\ \partial_{t}h_{mk}+\xi_{mk}\mathbf{l}_{m}\cdot\nabla_{\mathbf{x}}h_{mk}=\frac{1}{\tau} (h_{mk}^{eq}-h_{mk}),\end{cases} \tag{3.2}\] for \(k=1,...,M\) and \(m=1,...,N\), where \(g_{mk}^{eq}\) and \(h_{mk}^{eq}\) need to be determined. To this end, we first compute \[\begin{split}\rho&=s\sum_{m=1}^{N}\sum_{k=1}^{M}g_{ mk}|\xi_{mk}|^{D-1}\Delta\xi,\quad\rho\mathbf{U}=s\sum_{m=1}^{N}\sum_{k=1}^{M}\xi_{ mk}\mathbf{l}_{m}g_{mk}|\xi_{mk}|^{D-1}\Delta\xi,\\ \rho E&=s\sum_{m=1}^{N}\sum_{k=1}^{M}\frac{\xi_{ mk}^{2}g_{mk}+h_{mk}}{2}|\xi_{mk}|^{D-1}\Delta\xi,\end{split} \tag{3.3}\] corresponding to the last model. With these fluid quantities, \(\mathcal{E}_{tr,m}\) in Eq. (2.14) can be derived by finding the minimizer of the convex function \(J(\mathbf{\alpha})\) in Eq. (2.13) (see detailed algorithms in Section 5.1). Having \(\mathcal{E}_{tr,m}\), we determine the discretized equilibriums \(g_{mk}^{eq}\) as the minimizer of the discrete entropy \[H_{m}\left[\{u_{mk}\}_{k}\right]=\sum_{k=1}^{M}(u_{mk}\ln u_{mk}-u_{mk})|\xi_{ mk}|^{D-1}\Delta\xi\] among all \(\{u_{mk}\geq 0\}_{k=1}^{M}\) satisfying the conservation constraint in the \(m\)-th direction: \[\sum_{k=1}^{M}\mathbf{m}_{mk}u_{mk}|\xi_{mk}|^{D-1}\Delta\xi=\int_{ \mathbb{R}}\mathbf{m}_{m}|\xi|^{D-1}\mathcal{E}_{tr,m}d\xi\in\mathbb{R}^{D+2}=:\bm {\rho}_{tr,m}, \tag{3.4}\] where we have \(\mathbf{m}_{mk}=\left(1,\xi_{mk}\mathbf{l}_{m},\xi_{mk}^{2}/2\right)^{T}\). 
With the argument in [31], we can prove that this discretized equilibrium has the form \[g_{mk}^{eq}=\exp(\mathbf{\alpha}_{m}\cdot\mathbf{m}_{mk}), \tag{3.5}\] where \(\mathbf{\alpha}_{m}\in\mathbb{R}^{D+2}\) is the unique minimizer of the convex function \[J_{m}(\mathbf{\alpha})=\sum_{k=1}^{M}\exp(\mathbf{\alpha}\cdot\mathbf{m}_{mk})|\xi_{mk}|^ {D-1}\Delta\xi-\mathbf{\alpha}\cdot\mathbf{\rho}_{tr,m}.\] Then we set \(h_{mk}^{eq}=L\theta g_{mk}^{eq}\) with \(\theta=\frac{2E-|\mathbf{U}|^{2}}{D+L}\) as defined in Eq. (2.10). With \(g_{mk}^{eq}\) and \(h_{mk}^{eq}\) determined above, our discrete-velocity spatial-time model reads as \[\begin{cases}\partial_{t}g_{mk}+\xi_{mk}\mathbf{l}_{m}\cdot\nabla_{ \mathbf{x}}g_{mk}=\frac{1}{\tau}(\exp(\mathbf{\alpha}_{m}\cdot\mathbf{m}_{mk})-g_{mk}),\\ \partial_{t}h_{mk}+\xi_{mk}\mathbf{l}_{m}\cdot\nabla_{\mathbf{x}}h_{mk}=\frac{1}{\tau} (L\theta\exp(\mathbf{\alpha}_{m}\cdot\mathbf{m}_{mk})-h_{mk})\end{cases}\] for \(m=1,...,N\), and \(k=1,...,M\). We close this subsection with two remarks on this model. **Remark 3.1** (Computation of \(\mathbf{\rho}_{tr,m}\)).: Since \(\mathcal{E}_{tr,m}\) is of the Gaussian form Eq. (2.14), it is not difficult to verify that \(\mathbf{\rho}_{tr,m}=\rho_{tr,m}(1,u_{tr,m}\mathbf{l}_{m},E_{tr,m})^{T}\) in Eq. (3.4) can be computed with the following formulae \[\begin{split}\rho_{tr,m}&=\frac{\rho_{m}u_{m}}{2}P \left(\frac{u_{m}}{\sqrt{2\sigma^{2}}}\right)+\rho_{m}\sqrt{\frac{2\sigma^{2}} {\pi}}\exp\left(-\frac{u_{m}^{2}}{2\sigma^{2}}\right),\\ \rho_{tr,m}u_{tr,m}&=\frac{\rho_{m}(u_{m}^{2}+\sigma^{2 })}{2}P\left(\frac{u_{m}}{\sqrt{2\sigma^{2}}}\right)+\rho_{m}u_{m}\sqrt{\frac{2 \sigma^{2}}{\pi}}\exp\left(-\frac{u_{m}^{2}}{2\sigma^{2}}\right),\\ 2\rho_{tr,m}E_{tr,m}&=\frac{\rho_{m}u_{m}(u_{m}^{2}+ 3\sigma^{2})}{2}P\left(\frac{u_{m}}{\sqrt{2\sigma^{2}}}\right)+\rho_{m}(u_{m} ^{2}+2\sigma^{2})\sqrt{\frac{2\sigma^{2}}{\pi}}\exp\left(-\frac{u_{m}^{2}}{2 \sigma^{2}}\right),\end{split}\] where \(P(x):=\operatorname{erfc}(-x)-\operatorname{erfc}(x)\) and \(\operatorname{erfc}(x)=\frac{2}{\sqrt{\pi}}\int_{x}^{\infty}e^{-\eta^{2}}d\eta\) is the complementary error function. \(\rho_{m}\), \(u_{m}\), and \(\sigma^{2}\) are defined in Eq. (2.15). **Remark 3.2**.: Besides the above procedure in deriving a discrete-velocity model, there is another way to close Eq. (3.2). Indeed, the equilibrium \(\{g_{mk}^{eq}\}_{m,k}\) in Eq. (3.2) can be taken as the minimizer of the 'total' discrete entropy \[H\left[\{u_{mk}\}_{m,k}\right]=s\sum_{m=1}^{N}\sum_{k=1}^{M}(u_{mk}\ln u_{mk}- u_{mk})|\xi_{mk}|^{D-1}\Delta\xi\] among all \(\{u_{mk}\geq 0\}_{m,k}\) satisfying the conservation constraint \[s\sum_{m=1}^{N}\sum_{k=1}^{M}\left(1,\xi_{mk}\mathbf{l}_{m},\frac{|\xi_{mk}^{2}|}{ 2}\right)u_{mk}|\xi_{mk}|^{D-1}\Delta\xi=\left(\rho,\rho\mathbf{U},\rho\left(E- \frac{L}{2}\theta\right)\right).\] In this way, \(g_{mk}^{eq}=\exp(\mathbf{\alpha}\cdot\mathbf{m}_{mk})\) and \(\mathbf{\alpha}\in\mathbb{R}^{D+1}\times\mathbb{R}^{-}\) minimizes the convex function \[J(\mathbf{\alpha})=s\sum_{m=1}^{N}\sum_{k=1}^{M}\exp(\mathbf{\alpha}\cdot\mathbf{m}_{mk})| \xi_{mk}|^{D-1}\Delta\xi-\mathbf{\alpha}\cdot\mathbf{\rho}_{tr}.\] Here \(\mathbf{\rho}_{tr}\) is defined as in Eq. (2.11). This treatment is a variant of that for the DVM in [31], except that the discrete velocity nodes are chosen radially with a weight function \(|\xi_{mk}|^{D-1}\). By contrast, the conventional DVM prefers discrete velocity nodes in a cubic lattice in \(\mathbb{R}^{D}\). 
Compared with the DVM approach that solves one (larger-scale) optimization problem, our DVD-DVM requires additional computation of \(\mathcal{E}_{tr,m}\), \(\mathbf{\rho}_{tr,m}\) and \(N\) optimization problems. But computing \(\mathcal{E}_{tr,m}\) and \(\mathbf{\rho}_{tr,m}\) are numerically efficient, and minimizing each \(J_{m}(\mathbf{\alpha})\) has a smaller scale than \(J(\mathbf{\alpha})\). Therefore, the computational cost is acceptable. ### Gaussian-EQMOM In this subsection, we apply a method of moment to the BGK-DVDM Eq. (3.1). The \(k\)-th velocity moment of \(g_{m}(t,\mathbf{x},\xi)\) and \(h_{m}(t,\mathbf{x},\xi)\) are defined as \[M_{m,k}^{[g]}(t,\mathbf{x})=\int_{\mathbb{R}}\xi^{k}g_{m}(t,\mathbf{x},\xi)d\xi,\quad M _{m,k}^{[h]}(t,\mathbf{x})=\int_{\mathbb{R}}\xi^{k}h_{m}(t,\mathbf{x},\xi)d\xi\] for \(k=0,1,2,...\). To derive the evolution equations for \(M_{m,k}^{[g]}\) and \(M_{m,k}^{[h]}\), we integrate the BGK-DVDM Eq. (3.1) to get \[\begin{split}&\partial_{t}M_{m,k}^{[g]}+\mathbf{l}_{m}\cdot\nabla_{ \mathbf{x}}M_{m,k+1}^{[g]}=\frac{1}{\tau}\left(\rho_{m}\Delta_{k}(u_{m},\sigma^{2} )-M_{m,k}^{[g]}\right),\\ &\partial_{t}M_{m,k}^{[h]}+\mathbf{l}_{m}\cdot\nabla_{\mathbf{x}}M_{m,k+1 }^{[h]}=\frac{1}{\tau}\left(L\rho_{m}\theta\Delta_{k}(u_{m},\sigma^{2})-M_{m,k }^{[h]}\right)\end{split} \tag{3.6}\] for \(k=0,1,2,...\). Here \(\Delta_{k}(u,\sigma^{2})\) denotes the \(k\)-th moment of the normalized Gaussian function centered at \(u\) with a variance \(\sigma^{2}\). Eq. (3.6) contains infinitely many equations. To get a system with finite equations, we resort to the Gaussian-EQMOM method. In this method, it is assumed that the 1-D distribution \(g_{m}(\xi)\) (and \(h_{m}(\xi)\)) is a sum of \(M\) Gaussian functions [30]: \[\phi_{m}(\xi)=\sum_{\alpha=1}^{M}\frac{w_{m,\alpha}^{[\phi]}}{\sqrt{2\pi} \vartheta_{m}^{[\phi]}}\exp\left(-\frac{(\xi-v_{m,\alpha}^{[\phi]})^{2}}{2 \vartheta_{m}^{[\phi]}}\right),\quad\text{for $\phi=g$ or $h$.} \tag{3.7}\] The variance \(\vartheta_{m}^{[\phi]}>0\) is independent on the index \(\alpha\). With this ansatz, the moments can be expressed as \[M_{m,k}^{[\phi]}=\sum_{\alpha=1}^{M}w_{m,\alpha}^{[\phi]}\Delta_{k}\left(v_{m,\alpha}^{[\phi]},\vartheta_{m}^{[\phi]}\right)\quad\text{for $k=0,1,\ldots$.} \tag{3.8}\] The ansatz above has \(2M+1\) parameters \(\left(w_{m,\alpha}^{[\phi]},v_{m,\alpha}^{[\phi]},\vartheta_{m}^{[\phi]}\right)\) for \(g_{m}\) or \(h_{m}\). To fix these parameters, we reserve the equations in Eq. (3.6) with \(k=0,...,2M\) and then solve the first \(2M+1\) equations in Eq. (3.8) to express the parameters in terms of the reserved lower moments \(M_{m,k}^{[g]}\) and \(M_{m,k}^{[h]}\) with \(k=0,...,2M\). An algorithm to solve this set of nonlinear algebraic equations can be found in the literature [5, 30], which is uniquely solvable in most practical situations [22]. In this way, the higher moments \(M_{m,2M+1}^{[g]}\) and \(M_{m,2M+1}^{[h]}\) in the governing equation of \(M_{m,2M}^{[g]}\) and \(M_{m,2M}^{[h]}\) can also be expressed in terms of the lower moments \[M_{m,2M+1}^{[\phi]}=\sum_{\alpha=1}^{M}w_{m,\alpha}^{[\phi]}\Delta_{2M+1}\left( v_{m,\alpha}^{[\phi]},\vartheta_{m}^{[\phi]}\right).\] Consequently, the equations in Eq. (3.6) with \(k=0,...,2M\) are closed. With the ansatz Eq. 
(3.7), the macroscopic quantities are naturally computed as \[\rho =s\sum_{m,\alpha}\int_{\mathbb{R}}|\xi|^{D-1}\mathcal{N}\left( \xi;W_{m,\alpha}^{[g]}\right)d\xi, \tag{3.9}\] \[\rho\boldsymbol{U} =s\sum_{m,\alpha}\boldsymbol{l}_{m}\int_{\mathbb{R}}\xi|\xi|^{D- 1}\mathcal{N}\left(\xi;W_{m,\alpha}^{[g]}\right)d\xi,\] \[\rho E =\frac{s}{2}\sum_{m,\alpha}\int_{\mathbb{R}}|\xi|^{D-1}\left[ \xi^{2}\mathcal{N}\left(\xi;W_{m,\alpha}^{[g]}\right)+\mathcal{N}\left(\xi;W_ {m,\alpha}^{[h]}\right)\right]d\xi,\] where \[\mathcal{N}\left(\xi;W_{m,\alpha}^{[\phi]}\right)=\frac{w_{m,\alpha}^{[\phi]}} {\sqrt{2\pi\vartheta_{m}^{[\phi]}}}\exp\left(-\frac{(\xi-v_{m,\alpha}^{[\phi] })^{2}}{2\vartheta_{m}^{[\phi]}}\right)\quad\text{ and }\quad W_{m,\alpha}^{[\phi]}= \left(w_{m,\alpha}^{[\phi]},v_{m,\alpha}^{[\phi]},\vartheta_{m}^{[\phi]} \right)\in\mathbb{R}^{3}.\] Notice that due to the weight function \(|\xi|^{D-1}\) in Eq. (3.9), we generally have \[\rho\neq s\sum_{m}M_{m,0}^{[g]},\quad\rho\boldsymbol{U}\neq s\sum_{m} \boldsymbol{l}_{m}M_{m,1}^{[g]},\quad\rho E\neq s\sum_{m}\frac{1}{2}\left(M_{ m,2}^{[g]}+M_{m,0}^{[h]}\right).\] Eqs. (3.6,3.8-3.9) make up a spatial-time model by incorporating Gaussian-EQMOM into the BGK-DVDM Eq. (3.1). This model, denoted as DVD-EQMOM, is a convenient multidimensional version of quadrature-based method of moments, which seems better understood than those in [5, 30]. Moreover, the moment system is hyperbolic, indicating a well-posed extension of the EQMOM. The proof is similar to that in our previous work [23] for the BGK equation without internal degrees of freedom. It mainly relies on Ref. [22], where the hyperbolicity of the 1-D EQMOM was thoroughly analyzed. ### Hermite spectral method In this subsection we treat the continuous variable \(\xi\) with the Hermite spectral method (HSM) proposed in [20]. In this method, it is assumed that the distribution \(\phi=\phi(t,\boldsymbol{x},\xi)\) is a truncation \[\phi(t,\boldsymbol{x},\xi)=\sum_{k=0}^{M-1}\phi_{k}(t,\boldsymbol{x})\mathcal{ H}_{k}^{[\bar{u},\bar{\theta}]}(\xi) \tag{3.10}\] of a series with the basis function \[\mathcal{H}_{n}^{[\bar{u},\bar{\theta}]}(\xi)=\bar{\theta}^{-n/2}H_{n}\left( \frac{\xi-\bar{u}}{\sqrt{\bar{\theta}}}\right)\frac{1}{\sqrt{2\pi\bar{\theta} }}e^{-\frac{(\xi-\bar{u})^{2}}{2\bar{\theta}}}.\] Here \(M\) is a given integer, \[H_{n}(x)=(-1)^{n}e^{\frac{x^{2}}{2}}\left(\frac{d^{n}}{dx^{n}}e^{-\frac{x^{2} }{2}}\right)\] is the \(n\)th-order Hermite polynomial, and \(\bar{u}\), \(\bar{\theta}\) are two constant parameters. In this paper, we always set \(\bar{u}=0\) and determine \(\bar{\theta}\) by the initial flow condition. Due to the orthogonality of the Hermite polynomials: \[\int_{\mathbb{R}}\mathcal{H}_{n}^{[\bar{u},\bar{\theta}]}(\xi)\frac{\bar{ \theta}^{m/2}}{m!}H_{m}\left(\frac{\xi-\bar{u}}{\sqrt{\bar{\theta}}}\right)d \xi=\delta_{nm},\] the coefficient \(\phi_{k}(t,\mathbf{x})\) in Eq. (3.10) can be uniquely determined as \[\phi_{k}(t,\mathbf{x})=\int_{\mathbb{R}}\phi(t,\mathbf{x},\xi)\frac{\bar{\theta}^{k/2}}{k!}H_{k}\left(\frac{\xi-\bar{u}}{\sqrt{\theta}}\right)d\xi.\] Thus the \(M\)-truncation of \(\phi\) is fully determined. To incorporate the HSM with the BGK-DVDM Eq. 
(3.1), we set \[\phi_{m,k}^{[g]}(t,\mathbf{x}) =\int_{\mathbb{R}}g_{m}(t,\mathbf{x},\xi)|\xi|^{D-1}\frac{\bar{\theta} ^{k/2}}{k!}H_{k}\left(\frac{\xi-\bar{u}}{\sqrt{\theta}}\right)d\xi,\] \[\phi_{m,k}^{[h]}(t,\mathbf{x}) =\int_{\mathbb{R}}h_{m}(t,\mathbf{x},\xi)|\xi|^{D-1}\frac{\bar{\theta }^{k/2}}{k!}H_{k}\left(\frac{\xi-\bar{u}}{\sqrt{\theta}}\right)d\xi.\] Then we multiply the both sides of Eq. (3.1) with \(|\xi|^{D-1}\frac{\bar{\theta}^{k/2}}{k!}H_{k}\left(\frac{\xi-\bar{u}}{\sqrt{ \theta}}\right)\) for \(k=0,...,M-1\) and integrate over \(\xi\in\mathbb{R}\) to obtain \[\partial_{t}\mathbf{\Phi}_{m}+\mathcal{A}\mathbf{l}_{m}\cdot\nabla_{\mathbf{x}}\mathbf{\Phi}_ {m}=\frac{1}{\tau}(\mathbf{\Phi}_{m}^{eq}-\mathbf{\Phi}_{m}). \tag{3.11}\] Here \(\mathbf{\Phi}_{m}=(\phi_{m,0},...,\phi_{m,M-1})^{T}\in\mathbb{R}^{M}\) with \(\phi_{m,k}=\phi_{m,k}^{[g]}\) or \(\phi_{m,k}^{[h]}\), the constant matrix \(\mathcal{A}\in\mathbb{R}^{M\times M}\) is tridiagonal [20]: \[\mathcal{A}=\begin{pmatrix}\bar{u}&1&&&\\ \bar{\theta}&\bar{u}&2&&\\ &\bar{\theta}&\ddots&\ddots&\\ &&\ddots&\ddots&M-1\\ &&&\bar{\theta}&\bar{u}\end{pmatrix}, \tag{3.12}\] and the equilibrium \(\mathbf{\Phi}_{m}^{eq}=(\phi_{m,0}^{eq},...,\phi_{m,M-1}^{eq})^{T}\) has components \[\phi_{m,k}^{eq}=\int_{\mathbb{R}}\frac{\bar{\theta}^{k/2}}{k!}H_{k}\left( \frac{\xi-\bar{u}}{\sqrt{\theta}}\right)\phi_{m}^{eq}d\xi\quad\text{for}\ \ \phi_{m}^{eq}=\mathcal{E}_{tr,m}|\xi|^{D-1}\ \text{ or }\ L \theta\mathcal{E}_{tr,m}|\xi|^{D-1}. \tag{3.13}\] The corresponding macroscopic quantities are computed as \[\rho =s\sum_{m=1}^{N}\phi_{m,0}^{[g]},\quad\rho\mathbf{U}=s\sum_{m=1}^{N} \mathbf{l}_{m}\left(\phi_{m,1}^{[g]}+\bar{u}\phi_{m,0}^{[g]}\right),\] \[\rho E =s\sum_{m=1}^{N}\frac{1}{2}\left(2\phi_{m,2}^{[g]}+2\bar{u}\phi_{ m,1}^{[g]}+(\bar{\theta}+\bar{u}^{2})\phi_{m,0}^{[g]}+\phi_{m,0}^{[h]}\right).\] The equations in Eq. (3.11) constitute our third kind of models, denoted as DVD-HSM. We end this subsection with details on computing \(\phi_{m,k}^{eq}\) in Eq. (3.13) when \(\bar{u}=0\). Clearly, we only need to consider \(\phi_{m}^{eq}=\mathcal{E}_{tr,m}|\xi|^{D-1}\) with \(\mathcal{E}_{tr,m}\) given in Eq. (2.14). Moreover, only the 2-D case is presented because for \(D=3\) the weight function has a simpler expression \(|\xi|^{D-1}=\xi^{2}\) and therefore the 3-D case is easier to handle. To simplify the notation, we set \[a_{k} :=\phi_{m,k}^{eq}=\int_{\mathbb{R}}\frac{\bar{\theta}^{k/2}}{k!}H _{k}\left(\frac{\xi}{\sqrt{\theta}}\right)\mathcal{E}_{tr,m}(\xi)|\xi|d\xi,\] \[b_{k} :=\int_{\mathbb{R}}\frac{\bar{\theta}^{k/2}}{k!}H_{k}\left(\frac{ \xi}{\sqrt{\theta}}\right)\mathcal{E}_{tr,m}(\xi)\text{sgn}(\xi)d\xi,\] for \(k=0,1,...\). It is not difficult to see from the recursive formula of Hermite polynomials [20] \[H_{0}(x)=1,\quad H_{1}(x)=x,\quad H_{n+1}(x)=xH_{n}(x)-nH_{n-1}(x)\] and the relation \(|\xi|=\xi\text{sgn}(\xi)\) that \[a_{0}=b_{1}\quad\text{and}\quad a_{k}=(k+1)b_{k+1}+\bar{\theta}b_{k-1}\quad \text{for}\quad k\geq 1.\] Thus, it suffices to compute \(b_{k}\). 
Write \[b_{k}=\int_{0}^{+\infty}\frac{\bar{\theta}^{k/2}}{k!}H_{k}\left(\frac{\xi}{\sqrt{ \bar{\theta}}}\right)\mathcal{E}_{tr,m}(\xi)d\xi-\int_{-\infty}^{0}\frac{\bar{ \theta}^{k/2}}{k!}H_{k}\left(\frac{\xi}{\sqrt{\bar{\theta}}}\right)\mathcal{E} _{tr,m}(\xi)d\xi=:b_{k}^{+}-b_{k}^{-}.\] With the recursive formula, we can obtain \[b_{k+1}^{+}= \int_{0}^{+\infty}\frac{\bar{\theta}^{(k+1)/2}}{(k+1)!}\left[ \frac{\xi}{\sqrt{\bar{\theta}}}H_{k}\left(\frac{\xi}{\sqrt{\bar{\theta}}} \right)-kH_{k-1}\left(\frac{\xi}{\sqrt{\bar{\theta}}}\right)\right]\mathcal{E} _{tr,m}(\xi)d\xi\] \[= \int_{0}^{+\infty}\frac{\bar{\theta}^{k/2}}{(k+1)!}\xi H_{k} \left(\frac{\xi}{\sqrt{\bar{\theta}}}\right)\mathcal{E}_{tr,m}(\xi)d\xi-\frac {\bar{\theta}}{k+1}b_{k-1}^{+}\] \[= \int_{0}^{+\infty}\frac{\bar{\theta}^{k/2}}{(k+1)!}(\xi-u_{m})H_ {k}\left(\frac{\xi}{\sqrt{\bar{\theta}}}\right)\mathcal{E}_{tr,m}(\xi)d\xi+ \frac{u_{m}}{k+1}b_{k}^{+}-\frac{\bar{\theta}}{k+1}b_{k-1}^{+}.\] Note that \(\frac{d}{d\xi}\mathcal{E}_{tr,m}(\xi)=-\frac{\xi-u_{m}}{\sigma^{2}}\mathcal{E }_{tr,m}(\xi)\) and \(H_{n}^{\prime}(x)=nH_{n-1}3(x)\). Using the integration by parts gives \[b_{k+1}^{+}= -\int_{0}^{+\infty}\frac{\sigma^{2}\bar{\theta}^{k/2}}{(k+1)!}H_ {k}\left(\frac{\xi}{\sqrt{\bar{\theta}}}\right)d\mathcal{E}_{tr,m}(\xi)+\frac {u_{m}}{k+1}b_{k}^{+}-\frac{\bar{\theta}}{k+1}b_{k-1}^{+}\] \[= \frac{\sigma^{2}\bar{\theta}^{k/2}}{(k+1)!}H_{k}(0)\mathcal{E}_{ tr,m}(0)+\int_{0}^{+\infty}\frac{\sigma^{2}\bar{\theta}^{(k-1)/2}}{(k+1)!}kH_ {k-1}\left(\frac{\xi}{\sqrt{\bar{\theta}}}\right)\mathcal{E}_{tr,m}(\xi)d\xi +\frac{u_{m}b_{k}^{+}-\bar{\theta}b_{k-1}^{+}}{k+1}\] \[= \frac{\sigma^{2}\bar{\theta}^{k/2}}{(k+1)!}H_{k}(0)\mathcal{E}_{ tr,m}(0)+\frac{u_{m}b_{k}^{+}+(\sigma^{2}-\bar{\theta})b_{k-1}^{+}}{k+1}.\] A similar computation can be done for \(b_{k}^{-}\) and finally we get the following recursive formula \[b_{k+1}=\frac{2\sigma^{2}\bar{\theta}^{k/2}}{(k+1)!}H_{k}(0)\mathcal{E}_{tr,m}( 0)+\frac{u_{m}b_{k}+(\sigma^{2}-\bar{\theta})b_{k-1}}{k+1}.\] Additionally, a direct computation gives \[b_{0}=\frac{1}{2}\left(\operatorname{erfc}\left(\frac{u_{m}}{\sqrt{2\sigma^{2 }}}\right)-\operatorname{erfc}\left(\frac{u_{m}}{\sqrt{2\sigma^{2}}}\right) \right),\quad b_{1}=u_{m}b_{0}+2\sigma^{2}\mathcal{E}_{tr,m}(0).\] ### Brief summary of the models Fig. 1 presents a brief summary and the hierarchy of the several models proposed up to now. The original BGK equation with internal molecular degrees of freedom is reviewed in Section 2.1. The DVDM assumes that the particles move in \(N\) fixed orientations, leading to the model Eq. (2.7) for \(f_{m}\). Then, Section 3 develops three spatial-time models by eliminating the continuous variables \(\xi\) and \(\zeta\), including DVD-DVM in Section 3.1, DVD-EQMOM in Section 3.2, and DVD-HSM in Section 3.3. For these models, the boundary conditions and numerical schemes need to be specified before practical flow simulations. Figure 1. Model hierarchy of the DVDM. The variable and the equilibrium state are shown in each model (block). ## 4. Boundary conditions Let \(\Omega\subset\mathbb{R}^{D}\) be the computational domain and denote by \(\mathbf{n}=\mathbf{n}(\mathbf{x})\) the outward unit normal vector of the boundary \(\partial\Omega\) at \(\mathbf{x}\). Two types of boundary conditions are considered in this paper. The first one is the Neumann condition \(\mathbf{n}\cdot\nabla_{\mathbf{x}}\phi=0\) with \(\phi\) representing any unknown variables in the DVDM submodels (see Fig. 1). 
The second one is the solid wall conditions. For simplicity, let the boundary velocity \(\mathbf{U}_{w}\) at \(\mathbf{x}_{w}\in\partial\Omega\) be perpendicular to \(\mathbf{n}=\mathbf{n}(\mathbf{x}_{w})\). For the original BGK equation, the boundary distribution \(f(t,\mathbf{x}_{w},\mathbf{\xi},\mathbf{\zeta})\) for reflecting particles, i.e. \(\mathbf{\xi}\cdot\mathbf{n}<0\), should be given by the distribution of outgoing particles, i.e. \(\mathbf{\xi}\cdot\mathbf{n}>0\). Two specific boundary conditions are the diffuse-scattering law and the bounce-back rule (also termed specular-reflection law) [34, 14]. The first one assumes that the distribution of reflecting particles is a Maxwellian: \[f(t,\mathbf{x}_{w},\mathbf{\xi},\mathbf{\zeta})=\sqrt{\frac{2\pi}{\theta_{w}}}\frac{j(t, \mathbf{x}_{w})}{\sqrt{(2\pi\theta_{w})^{D+L}}}\exp\left(-\frac{|\mathbf{\xi}-\mathbf{U}_ {w}|^{2}+|\mathbf{\zeta}|^{2}}{2\theta_{w}}\right),\quad\mathbf{\xi}\cdot\mathbf{n}<0, \tag{4.1}\] where \(\theta_{w}\) is the boundary temperature at \(\mathbf{x}_{w}\) and \(j(t,\mathbf{x}_{w})\) is the outward-flowing mass flux defined by \[j(t,\mathbf{x}_{w})=\int_{\mathbb{R}^{L}}\int_{\mathbf{\xi}\cdot\mathbf{n}>0}\mathbf{n}\cdot \mathbf{\xi}f(t,\mathbf{x}_{w},\mathbf{\xi},\mathbf{\zeta})d\mathbf{\xi}d\mathbf{\zeta}.\] This condition ensures no particle penetration through the boundary. The bounce-back rule is widely used in the lattice Boltzmann method [26]. It reads as \[f(t,\mathbf{x}_{w},\mathbf{\xi},\mathbf{\zeta})=f(t,\mathbf{x}_{w},-\mathbf{\xi},\mathbf{\zeta})+2 \rho_{w}(t,\mathbf{x}_{w})\mathcal{E}(\mathbf{\xi},\mathbf{\zeta})\frac{\mathbf{\xi}\cdot\mathbf{ U}_{w}}{\theta_{w}},\quad\mathbf{\xi}\cdot\mathbf{n}<0.\] Here \[\mathcal{E}(\mathbf{\xi},\mathbf{\zeta})=\frac{1}{\sqrt{(2\pi\theta_{w})^{D+L}}}\exp \left(-\frac{|\mathbf{\xi}|^{2}+|\mathbf{\zeta}|^{2}}{2\theta_{w}}\right)\] and \[\rho_{w}(t,\mathbf{x}_{w})=\frac{2\int_{\mathbb{R}^{L}}\int_{\mathbf{\xi}\cdot\mathbf{n}> 0}f(t,\mathbf{x}_{w},\mathbf{\xi},\mathbf{\zeta})d\mathbf{\xi}d\mathbf{\zeta}}{1-\frac{2}{\theta_ {w}}\int_{\mathbb{R}^{L}}\int_{\mathbf{\xi}\cdot\mathbf{n}<\mathbf{\xi}\cdot\mathbf{U}_{w} \mathcal{E}(\mathbf{\xi},\mathbf{\zeta})d\mathbf{\xi}d\mathbf{\zeta}}}.\] This condition ensures that the macroscopic velocity \(\mathbf{U}(t,\mathbf{x}_{w})\) equals \(\mathbf{U}_{w}\). Since \(\mathbf{U}_{w}\) is assumed to be perpendicular to \(\mathbf{n}\), \(\rho_{w}\) is simplified as \[\rho_{w}(t,\mathbf{x}_{w})=2\int_{\mathbb{R}^{L}}\int_{\mathbf{\xi}\cdot\mathbf{n}>0}f(t, \mathbf{x}_{w},\mathbf{\xi},\mathbf{\zeta})d\mathbf{\xi}d\mathbf{\zeta}.\] We now illustrate how these kinetic boundary conditions are adapted to the new DVDM submodels in Section 3. The main idea is to replace the integrals above by proper discrete sums. For the DVD-DVM in Subsection 3.1, the diffuse-scattering law is converted to \[g_{mk}(t,\mathbf{x}_{w}) =\sqrt{\frac{2\pi}{\theta_{w}}}j(t,\mathbf{x}_{w})\mathcal{E}_{tr,mk} [\mathbf{U}_{w},\theta_{w}], \xi_{k}\mathbf{l}_{m}\cdot\mathbf{n}<0,\] \[h_{mk}(t,\mathbf{x}_{w}) =L\theta_{w}g_{mk}(t,\mathbf{x}_{w}), \xi_{k}\mathbf{l}_{m}\cdot\mathbf{n}<0,\] with \[j(t,\mathbf{x}_{w})=s\sum_{m=1}^{N}\sum_{k=1}^{M}\xi_{k}\mathbf{l}_{m}\cdot\mathbf{n}g_{ mk}(t,\mathbf{x}_{w})|\xi_{k}|^{D-1}\Delta\xi\mathbf{1}_{\xi_{k}\mathbf{l}_{m}\cdot\mathbf{n}>0},\] and \(\mathcal{E}_{tr,mk}[\mathbf{U}_{w},\theta_{w}]\) the discrete equilibrium defined in Eq. (3.5) with density \(1\), velocity \(\mathbf{U}_{w}\), and temperature \(\theta_{w}\). 
On the other hand, we assume that \(\{\xi_{k}\}_{k=1}^{M}\) satisfies \(\xi_{k}=-\xi_{M+1-k}\) for \(k=1,...,M\) to apply the bounce-back rule to the DVD-DVM. With this assumption, the discrete-velocity version of bounce-back rule becomes \[g_{mk}(t,\mathbf{x}_{w}) =g_{m,M+1-k}(t,\mathbf{x}_{w})+2\rho_{w}(t,\mathbf{x}_{w})\mathcal{E}_{tr, mk}[\mathbf{0},\theta_{w}]\frac{\xi_{k}\mathbf{l}_{m}\cdot\mathbf{U}_{w}}{\theta_{w}}, \xi_{k}\mathbf{l}_{m}\cdot\mathbf{n}<0,\] \[h_{mk}(t,\mathbf{x}_{w}) =h_{m,M+1-k}(t,\mathbf{x}_{w})+2\rho_{w}(t,\mathbf{x}_{w})L\theta_{w} \mathcal{E}_{tr,mk}[\mathbf{0},\theta_{w}]\frac{\xi_{k}\mathbf{l}_{m}\cdot\mathbf{U}_{w}}{ \theta_{w}}, \xi_{k}\mathbf{l}_{m}\cdot\mathbf{n}<0,\] where \[\rho_{w}(t,\mathbf{x}_{w})=\frac{2s\sum_{m=1}^{N}\sum_{k=1}^{M}g_{mk}(t,\mathbf{x}_{w})| \xi_{k}|^{D-1}\Delta\xi\mathbf{1}_{\xi_{k}\mathbf{l}_{m}\cdot\mathbf{n}>0}}{1-\frac{2}{ \theta_{w}}s\sum_{m=1}^{N}\sum_{k=1}^{M}\xi_{k}\mathbf{l}\cdot\mathbf{U}_{w}\mathcal{E} _{tr,mk}[\mathbf{0},\theta_{w}]|\xi_{k}|^{D-1}\Delta\xi\mathbf{1}_{\xi_{k}\mathbf{l}_{m} \cdot\mathbf{n}<0}}.\] For the DVD-EQMOM in Subsection 3.2, only the diffuse-scattering law is used, which reconstructs the velocity distributions of reflecting particles as \[g_{m}(t,\mathbf{x}_{w},\xi) =\sqrt{\frac{2\pi}{\theta_{w}}}j(t,\mathbf{x}_{w})\mathcal{E}_{tr,m}[ (1,\mathbf{U}_{w},\theta_{w})], \xi\mathbf{l}_{m}\cdot\mathbf{n}<0,\] \[h_{m}(t,\mathbf{x}_{w},\xi) =L\theta_{w}g_{m}(t,\mathbf{x}_{w},\xi), \xi\mathbf{l}_{m}\cdot\mathbf{n}<0,\] with \[j(t,\mathbf{x}_{w})=s\sum_{m=1}^{N}\int_{\xi\mathbf{l}_{m}\cdot\mathbf{n}>0}\xi\mathbf{l}_{m} \cdot\mathbf{n}|\xi|^{D-1}\sum_{\alpha=1}^{M}\mathcal{N}\left(\xi;W_{m,\alpha}^{[ g]}\right)d\xi,\] and \(\mathcal{E}_{tr,m}[\mathbf{U}_{w},\theta_{w}]\) the discrete equilibrium defined in Eq. (2.14) with density \(1\), velocity \(\mathbf{U}_{w}\), and temperature \(\theta_{w}\). Other notations follow the definitions in Section 3.2. The moments on the boundary can then be evaluated as \[M_{m,k}^{[\phi]}(t,\mathbf{x}_{w})=\int_{\{\xi\mathbf{l}_{m}\cdot\mathbf{n}<0\}\bigcup\{ \xi\mathbf{l}_{m}\cdot\mathbf{n}>0\}}\xi^{k}\phi_{m}(t,\mathbf{x}_{w},\xi)d\xi\] for \(\phi_{m}=g_{m}\) or \(h_{m}\). Note that the integrand takes different forms in the two sets. For the DVD-HSM in Subsection 3.3, further boundary conditions are left for future work. ## 5. Algorithms ### Algorithm for the discrete equilibrium Solving the discrete equilibrium \(\mathcal{E}_{tr,m}\) defined in Eq. (2.11) out of a known \(\mathbf{\rho}_{tr}\) is necessary for all DVDM submodels in Section 3. Theorem 2.2 indicates that all we need is an \(\mathbf{\alpha}\in\mathbb{R}^{D+1}\times\mathbb{R}^{-}\) that minimizes the convex function \(J(\mathbf{\alpha})\) in Eq. (2.13). The gradient descent method was used in our previous work [23], while we use the BFGS quasi-Newton method [4] in this work. Fig. 2 presents the performance of the BFGS quasi-Newton method and the gradient descend (GD) method for \(D=2\). Here we set \(\rho=1\), \(\mathbf{U}=(0.4,0.8)^{T}\), and \(\theta=0.8\). The discrete directions are chosen as \(\left\{\mathbf{l}_{m}=\left(\cos\frac{(m-1)\pi}{N},\sin\frac{(m-1)\pi}{N}\right)^{T }\right\}_{m=1}^{N}\). The BFGS method requires much less iteration steps to converge for \(N\leq 5\). Notably, when \(N\geq 7\), the initial value \(\mathbf{\alpha}_{eq}\) is so close to the minimizer \(\mathbf{\alpha}\) that only one step of iteration leads to convergence. 
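For reference, a minimal Python sketch of this minimization is given below; it evaluates \(J(\mathbf{\alpha})\) of Eq. (2.13) for \(D=2\), \(L=0\) with a simple quadrature in \(\xi\) and hands it to an off-the-shelf BFGS routine, starting from \(\mathbf{\alpha}_{eq}\) as above. The quadrature interval, grid resolution, and tolerance are illustrative assumptions, not the settings used for the results in this paper.

```python
# Illustrative sketch (not the authors' implementation): minimize J(alpha) of
# Eq. (2.13) for D = 2, L = 0 with SciPy's BFGS. The xi-grid and tolerance are
# assumptions; the flow state matches the test reported in this subsection.
import numpy as np
from scipy.optimize import minimize

D, N = 2, 8
rho, U, theta = 1.0, np.array([0.4, 0.8]), 0.8
E = (U @ U + D * theta) / 2.0                          # Eq. (2.3) with L = 0
rho_tr = np.concatenate(([rho], rho * U, [rho * E]))   # (rho, rho*U, rho*E)

s = np.pi / N                                  # half of the measure of each S_m
gamma = np.arange(N) * np.pi / N               # gamma_m = (m-1)*pi/N
l = np.stack([np.cos(gamma), np.sin(gamma)], axis=1)   # directions l_m

xi = np.linspace(-10.0, 10.0, 2001)            # quadrature grid (assumption)
dxi = xi[1] - xi[0]
w = np.abs(xi) ** (D - 1)                      # radial weight |xi|^{D-1}

def J(alpha):
    a0, ahat, a_last = alpha[0], alpha[1:3], alpha[3]
    total = 0.0
    for m in range(N):
        # alpha . m_m(xi) = a0 + xi*(ahat . l_m) + a_last*xi^2/2
        total += np.sum(np.exp(a0 + xi * (ahat @ l[m]) + 0.5 * a_last * xi**2) * w) * dxi
    return s * total - rho_tr @ alpha

# start from alpha_eq of Eq. (2.4); Theorem 2.2 guarantees a unique minimizer
alpha_eq = np.concatenate((
    [np.log(rho / (2 * np.pi * theta) ** (D / 2)) - (U @ U) / (2 * theta)],
    U / theta, [-1.0 / theta]))
res = minimize(J, alpha_eq, method="BFGS", tol=1e-12)
print("alpha =", res.x, " BFGS iterations:", res.nit)
```

With the Gaussian form of Eq. (2.14), the \(\xi\)-integrals could also be evaluated in closed form (cf. Remark 3.1), but the generic quadrature keeps the sketch self-contained.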
The computation of the discrete equilibrium in the DVDM is therefore numerically efficient. Figure 2. Performance of the BFGS quasi-Newton method and the gradient descent (GD) method for \(D=2\), \(\rho=1\), \(\mathbf{U}=(0.4,0.8)^{T}\), and \(\theta=0.8\). ### Numerical schemes In this subsection we present some numerical schemes to solve the DVDM submodels proposed above. Recall that both the DVD-DVM and DVD-HSM can be written in a unified form as \[\partial_{t}\mathbf{\Phi}_{m}+\mathcal{A}\mathbf{l}_{m}\cdot\nabla_{\mathbf{x}}\mathbf{\Phi}_{m}=\frac{1}{\tau}(\mathbf{\Phi}_{m}^{eq}-\mathbf{\Phi}_{m})=:\mathbf{\Omega}_{m},\quad m=1,...,N. \tag{5.1}\] For the DVD-DVM, we have \(\mathbf{\Phi}_{m}=(\phi_{m1},...,\phi_{mM})^{T}\in\mathbb{R}^{M}\) with \(\phi_{mk}=g_{mk}\) or \(h_{mk}\), and the matrix \(\mathcal{A}=\text{diag}\{\xi_{1},...,\xi_{M}\}\). For the DVD-HSM, \(\mathbf{\Phi}_{m}=(\phi_{m,0},...,\phi_{m,M-1})^{T}\in\mathbb{R}^{M}\) and \(\mathcal{A}\) are defined in Eqs. (3.11) and (3.12). For a time discretization of Eq. (5.1), the implicit-explicit Runge-Kutta (IMEX-RK) schemes [35] can be applied. Here we only use a second-order scheme denoted by SSP2. It is characterised by a double tableau [35] \[\begin{array}{c|cc}0&0&0\\ 1&1&0\\ \hline&1/2&1/2\end{array}\qquad\begin{array}{c|cc}\gamma&\gamma&0\\ 1-\gamma&1-2\gamma&\gamma\\ \hline&1/2&1/2\end{array},\quad\gamma=1-\frac{1}{\sqrt{2}}.\] Although the source term \(\mathbf{\Omega}_{m}\) is implicitly discretized, its relaxation structure offers a well-known way to solve the equations explicitly (see e.g. [14, 15]). The convection term is treated with the third-order energy stable WENO (ES-WENO) scheme [43]. For the DVD-DVM, the Godunov flux [35] is adopted, while the HLL flux [17, 20] is used for the DVD-HSM. On the other hand, for the DVD-DVM, Eq. (5.1) can also be discretized with upwind schemes of first-order accuracy, which offers an easier way to treat the boundary conditions. An implicit discretization of the collision term can be handled in the same way as in the IMEX-RK scheme. Finally, for the DVD-EQMOM, the \(M_{m,k}^{[g]}\)-equation in Eq. (3.6) can be approximated by the 2-D upwind scheme \[M_{m,k,ij}^{[g],n+1}=M_{m,k,ij}^{[g],n}-\frac{\Delta t}{\Delta x}\mathbf{l}_{m}\cdot\mathbf{e}_{1}\left(\mathcal{G}_{m,k+1,i+\frac{1}{2},j}^{n}-\mathcal{G}_{m,k+1,i-\frac{1}{2},j}^{n}\right)-\frac{\Delta t}{\Delta y}\mathbf{l}_{m}\cdot\mathbf{e}_{2}\left(\mathcal{G}_{m,k+1,i,j+\frac{1}{2}}^{n}-\mathcal{G}_{m,k+1,i,j-\frac{1}{2}}^{n}\right)+\frac{\Delta t}{\tau}\left(M_{\mathcal{E},m,k,ij}^{[g],n}-M_{m,k,ij}^{[g],n+1}\right) \tag{5.2}\] with a partially implicit collision term. The \(M_{m,k}^{[h]}\)-equation is treated similarly. Here the fluxes \[\mathcal{G}_{m,k+1,i+\frac{1}{2},j}^{n}=\begin{cases}\int_{0}^{\infty}\xi^{k+1}g_{m,ij}^{n}d\xi+\int_{-\infty}^{0}\xi^{k+1}g_{m,i+1,j}^{n}d\xi,&\text{if }\mathbf{l}_{m}\cdot\mathbf{e}_{1}>0,\\ \int_{0}^{\infty}\xi^{k+1}g_{m,i+1,j}^{n}d\xi+\int_{-\infty}^{0}\xi^{k+1}g_{m,ij}^{n}d\xi,&\text{if }\mathbf{l}_{m}\cdot\mathbf{e}_{1}<0\end{cases} \tag{5.3}\] are the same as those in [5, 30]. The moments \[M_{\mathcal{E},m,k,ij}^{[g],n}=\rho_{m,ij}^{n}\Delta_{k}\left(u_{m,ij}^{n},(\sigma^{2})_{ij}^{n}\right)\] correspond to the equilibrium state, where \(\Delta_{k}(u,\sigma^{2})\) is defined in Section 3.2. The equilibrium state parameters \(\rho_{m,ij}^{n}\), \(u_{m,ij}^{n}\) and \((\sigma^{2})_{ij}^{n}\) are obtained by solving the local equilibrium Eq. (2.14). ## 6.
Numerical results In this section, we present the results of some numerical tests based on the discretizations of the previous DVDM submodels. The tests only involve planar flows (\(D=2\)). ### 1-D Riemann Problems We start with 1-D Riemann problems. Assume no internal degrees of freedom (\(L=0\)). The Riemann initial data of the fluid quantities read as [10]: \[\rho(0,x)=\begin{cases}3.093,&x<0,\\ 1,&x>0,\end{cases}\quad\mathbf{U}(0,x)=\mathbf{0},\quad\theta(0,x)=1.\] Both the continuum (infinitely fast collision limit \(\tau=0\)) and free-molecular (no collision limit \(\tau=\infty\)) regimes are considered. The theoretical solutions for both cases can be found in [29] and [15]. The 1-D physical domain \([-0.5,0.5]\) is divided into \(200\) uniform cells. The Neumann boundary condition \(\frac{\partial f}{\partial\mathbf{n}}=0\) is applied by extending the values on the boundary cells constantly along the outward-facing unit normal vector \(\mathbf{n}\). We test all three DVDM submodels with this problem. The continuum regime is characterised with \(\tau=10^{-4}\). In all DVDM submodels, we set \(N=8\) and the directions \(\left\{\mathbf{l}_{m}=\left(\cos\frac{(2m-1)\pi}{16},\sin\frac{(2m-1)\pi}{16} \right)^{T}\right\}_{m=1}^{8}\). In the DVD-DVM, the discrete velocity nodes in each direction are selected as \(\xi_{k}=0.4k-5\) for \(k=1,...,24\). In the DVD-HSM, we choose the order \(M=12\) for the truncated series in Eq. (3.10). The SSP2 scheme in Section 5.2 is applied to both the DVD-DVM and DVD-HSM. For the DVD-EQMOM, we set \(M=2\). Fig. 3 shows the spatial distributions of the macroscopic quantities \((\rho,u,E,p)\) at \(t=0.2\). Both the simulated results and theoretical solutions are plotted. The shock wave that goes right, the rarefaction wave that goes left, and the discontinuity between them are all well captured. It is seen that what produced by both the DVD-DVM and DVD-HSM agree well with the analytical solutions except some oscillations near the discontinuities, while the two-node EQMOM is less accurate. However, the DVD-DVM yields the worst result for the heat flux \(\mathbf{q}=\frac{1}{2}\left\langle(\mathbf{\xi}-\mathbf{U})(|\mathbf{\xi}-\mathbf{U}|^{2}+|\mathbf{ \zeta}|^{2})f\right\rangle\), which should be zero since it is easy to verify that \(\left\langle(\mathbf{\xi}-\mathbf{U})(|\mathbf{\xi}-\mathbf{U}|^{2}+|\mathbf{\zeta}|^{2})\mathcal{ E}[f]\right\rangle=0\) for \(\mathcal{E}[f]\) in Eq. (2.2) (the bracket \(\langle\cdot\rangle\) is defined in Eq. (2.3)). More directions and discrete nodes may be needed to reduce such a discrepancy. We emphasize that the weighted integral in Eq. (2.8), with the weight function \(|\xi|^{D-1}\), is a key feature different from our previous model in [23]. This weight function has been carefully treated in all DVDM submodels in Section 3. As a direct comparison, Fig. 4 shows that without this weight function, the predicted heat flux \(q\) deviates significantly from zero, which contradicts the Euler limit solution. Other properties have larger errors as well. Therefore, only with this weight function \(|\xi|^{D-1}\), the resultant DVDM can produce satisfactory results. As for the free-molecular flow regime, we take \(\tau=10^{4}\) to create a near-zero collision term. In all DVDM submodels, we set \(N=18\) and the directions \(\left\{\mathbf{l}_{m}=\left(\cos\frac{(2m-1)\pi}{36},\sin\frac{(2m-1)\pi}{36} \right)^{T}\right\}_{m=1}^{18}\). In the Figure 3. 
1-D Riemann problem with \(\tau=10^{-4}\): profiles of density \(\rho\), velocity \(u\), energy \(E\), pressure \(p\) and heat flux \(q\) at \(t=0.2\). In all models, we set \(N=8\) and the directions \(\left\{\mathbf{l}_{m}=\left(\cos\frac{(2m-1)\pi}{16},\sin\frac{(2m-1)\pi}{16} \right)^{T}\right\}_{m=1}^{8}\). In the DVD-DVM, the discrete velocities in each direction are \(\xi_{k}=0.4k-5\) for \(k=1,...,24\). We set \(M=12\) for the DVD-HSM and \(M=2\) for the DVD-EQMOM. DVD-DVM, the discrete velocity nodes in each direction are selected as \(\xi_{k}=0.4k-7.4\) for \(k=1,...,36\). In the DVD-HSM, we still choose the order \(M=12\). The SSP2 scheme is applied to both the DVD-DVM and DVD-HSM. For the DVD-EQMOM, we set \(M=2\). Fig. 5 presents the resulting profiles of macroscopic quantities at \(t=0.2\). Obviously there is no shock in this case, and the DVD-DVM shows the highest accuracy. The relatively large error of the DVD-EQMOM is partly due to the small number of nodes (\(M=2\)) used in the simulation. Figure 4: 1-D Riemann problem with \(\tau=10^{-4}\): A comparison between the DVD-DVM predictions with and without the weight function \(|\xi|^{D-1}\). All other setups are the same as in the previous case. ### Couette Flow The flow is confined between two infinite parallel walls located at \(x=\pm 0.5H\). The left and right walls move with constant velocities \(\pm v_{w}\mathbf{e}_{y}\) to drive the fluid between them to a steady state. In this way, the flow reduces to a spatially 1-D problem in \(x\). Assume \(D=2\) and \(L=0\) (no internal degrees of freedom). Let \(H=1\), \(v_{w}=0.1\) and the wall temperature \(\theta_{w}=2\). The initial values of the fluid are \((\rho_{0},\mathbf{U}_{0},\theta_{0})=(1,\mathbf{0},2)\). These settings ensure a small Mach number. In the Couette flow, different flow regimes are characterized by the parameter \(\kappa:=(\sqrt{\pi}/2)\text{Kn}\), where the Knudsen number \(\text{Kn}\) is defined as [14] \[\text{Kn}=\frac{\tau}{H}\sqrt{\frac{\pi\theta_{0}}{2}}.\] Thus, the flow regime can be tuned by varying the values of \(\tau\). Both the DVD-DVM and DVD-EQMOM are used with the first-order upwind scheme (see Section 5.2). The 1-D physical domain \([-0.5,0.5]\) is divided into 200 uniform cells. The diffuse-scattering law is applied as the wall boundary condition. The computation stops when the \(L^{2}\)-norm of the difference of \(\mathbf{U}\) between two consecutive time steps is smaller than \(10^{-6}\), which indicates that the flow is in a steady state. We set \(N=15\) and the directions \(\mathbf{l}_{\mathcal{L}}=\left\{\left(\cos\frac{(m-1)\pi}{15},\sin\frac{(m-1)\pi} {15}\right)^{T}\right\}_{m=1}^{15}\) in all computations. For the DVD-DVM, the velocity nodes in each direction are chosen as \(\xi_{k}=0.5k-5.75\) for \(k=1,...,22\). For the DVD-EQMOM, we let \(M=2\). Fig. 6(a) shows the steady-state vertical velocity profiles on the positive domain \(x>0\) for different values of \(\kappa\). The velocity is normalized by the wall velocity \(v_{w}\). The DSMC results in [1] are included for a comparison. Apparently, higher values of \(\kappa\) correspond to more rarefied gases and less momentum transfer from the moving wall to the fluids. Both the DVD-DVM and DVD-EQMOM reproduce the velocity profiles quite close to the reference data for all three values of \(\kappa\). Fig. 
6(b) further presents the shear stress \(\tau_{xy}\) defined by \[\tau_{xy}=\int_{\mathbb{R}^{2}}(\xi_{x}-u)(\xi_{y}-v)f(\mathbf{\xi})d\mathbf{\xi}=\int_{\mathbb{R}^{2}}\xi_{x}\xi_{y}f(\mathbf{\xi})d\mathbf{\xi}-\rho uv\] for \(\text{Kn}\) ranging from \(0.01\) to \(100\). Here we denote \(\mathbf{\xi}=(\xi_{x},\xi_{y})^{T}\) and \(\mathbf{U}=(u,v)^{T}\in\mathbb{R}^{2}\). The shear stress is normalized by the free-molecular stress \(\tau_{\infty}=-\rho u_{w}\sqrt{2\theta/\pi}\). Our DVDM results are generally in good agreement with the DSMC results [1]. It is seen that the two-node DVD-EQMOM has more significant errors at larger \(\text{Kn}\) (rarefied flow) conditions, as compared with the DVD-DVM, which uses more velocity nodes. Figure 6. Couette flow: (a) Steady-state vertical velocity profiles, and (b) Shear stress for different values of \(\text{Kn}\). In all models, \(N=15\) and the directions are \(\{\mathbf{l}_{m}=(\cos\frac{(m-1)\pi}{15},\sin\frac{(m-1)\pi}{15})\}_{m=1}^{15}\). In the DVD-DVM, the discrete velocities in each direction are \(\xi_{k}=0.5k-5.75\) for \(k=1,...,22\). The DSMC data is from [1]. ### 2-D Riemann Problems Two-dimensional Riemann problems have been studied in [27]. Here we consider the following initial data \[(\rho,u,v,p)=\left\{\begin{array}{ll}(\rho_{1},u_{1},v_{1},p_{1})=(0.5313,\ 0,\ 0,\ 0.4),&x>0,\ \ \ y>0,\\ (\rho_{2},u_{2},v_{2},p_{2})=(1,\ 0.7276,\ 0,\ 1),&x\leq 0,\ \ y>0,\\ (\rho_{3},u_{3},v_{3},p_{3})=(0.8,\ 0,\ 0,\ 1),&x\leq 0,\ \ y\leq 0,\\ (\rho_{4},u_{4},v_{4},p_{4})=(1,\ 0,\ 0.7276,\ 1),&x>0,\ \ y\leq 0,\end{array}\right.\] which was also studied in [15]. In contrast to the previous subsections, the internal degrees of freedom are involved here. Thus we set \(L=3\) and the specific heat ratio \(\gamma=(2+D+L)/(D+L)=1.4\). The computational domain is \([-0.5,0.5]^{2}\). The Neumann condition is applied on the boundary. Like in Subsection 6.1, only the continuum and collisionless limits are considered. The continuum regime is characterised again by \(\tau=10^{-4}\). Both the DVD-DVM and DVD-HSM are tested in this case. We set \(N=8\) and the directions \(\left\{\mathbf{l}_{m}=\left(\cos\frac{(2m-1)\pi}{16},\sin\frac{(2m-1)\pi}{16}\right)^{T}\right\}_{m=1}^{8}\). For the DVD-DVM, the discrete velocity nodes in each direction are taken as \(\xi_{k}=k-8.5\) for \(k=1,...,16\). For the DVD-HSM, we set \(M=12\). The SSP2 scheme is applied for both models. The physical domain \([-0.5,0.5]^{2}\) is divided into a \(400\times 400\) uniform mesh. Fig. 7 shows the density contours at \(t=0.25\) simulated by both models. The shock waves and contact discontinuities are clearly manifested, and they agree reasonably well with the solutions of the kinetic equation in [15] and of the Euler equation in [27]. We again remark that the weight \(|\mathbf{\xi}|^{D-1}\) in the DVDM Eq. (2.8) is necessary. As is revealed in Fig. 8, if such a weight is absent, neither the DVD-DVM nor the DVD-HSM correctly predicts the density contour for the Riemann problem in the continuum limit. The collisionless free-molecular regime is characterised with \(\tau=10^{4}\). The analytical results can be found in [15]. All three DVDM submodels are used in the simulation. The non-equilibrium flow generally requires a more elaborate discretization of the velocity space than the continuum case, while the absence of shocks or discontinuities allows larger spatial cells. Thus, the physical domain \([-0.5,0.5]^{2}\) is discretized into an \(80\times 80\) uniform mesh.
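For orientation, a small sketch of how the four-quadrant initial data above could be placed on such a uniform mesh is given below; the function name, the cell-center layout, and the ideal-gas relation \(\theta=p/\rho\) mentioned in the comment are assumptions for illustration only.

```python
import numpy as np

def riemann2d_initial_state(x, y):
    """Four-quadrant initial data (rho, u, v, p) quoted in the 2-D Riemann test."""
    if x > 0 and y > 0:
        return 0.5313, 0.0, 0.0, 0.4
    if x <= 0 and y > 0:
        return 1.0, 0.7276, 0.0, 1.0
    if x <= 0 and y <= 0:
        return 0.8, 0.0, 0.0, 1.0
    return 1.0, 0.0, 0.7276, 1.0          # x > 0, y <= 0

# e.g. fill an 80 x 80 grid of cell centers on [-0.5, 0.5]^2 (the free-molecular setup);
# under the ideal-gas scaling assumed in this sketch, theta = p / rho per cell.
n = 80
xs = np.linspace(-0.5, 0.5, n, endpoint=False) + 0.5 / n
grid = np.array([[riemann2d_initial_state(x, y) for x in xs] for y in xs])  # shape (n, n, 4)
```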
For the DVD-DVM, we set \(N=24\) and the directions \(\mathbf{l}_{m}=\left(\cos\frac{(2m-1)\pi}{48},\sin\frac{(2m-1)\pi}{48}\right)^{T}\). The discrete velocity nodes in each direction are taken as \(\xi_{k}=0.4k-9.8\) for \(k=1,...,48\). Remark that the total number of velocity nodes \(1152\) is much smaller than that used in [15] (over \(40\ 000\)). For the DVD-HSM, we set \(N=30\) and \(M=14\). The SSP2 scheme is used for both the DVD-DVM and DVD-HSM. Figure 7. 2-D Riemann problem with \(\tau=10^{-4}\): density contours at \(t=0.25\) simulated by (a) DVD-DVM and (b) DVD-HSM. Let \(N=8\) and \(\mathbf{l}_{m}=\left(\cos\frac{(2m-1)\pi}{16},\sin\frac{(2m-1)\pi}{16}\right)^{T}\) for \(m=1,...,8\) in both models. For the DVD-DVM, the discrete velocity nodes in each direction are taken as \(\xi_{k}=k-8.5\) for \(k=1,...,16\). For the DVD-HSM, we have \(M=12\). For the DVD-EQMOM, we set \(N=30\) and \(M=2\). The upwind scheme is employed. The contours of density, temperature and velocity magnitude at \(t=0.15\) are presented in Figs. 9-11 by using different models. Also plotted are the analytical solutions (black dashed lines). It is clearly seen that the DVD-DVM yields accurate predictions. In contrast, the DVD-EQMOM and DVD-HSM exhibit greater errors, especially in the temperature profiles. This may be partly attributed to the lower-order approximation in the velocity space (i.e., small values of \(M\)) or the lower-order discretization scheme (i.e., the first-order upwind scheme for the DVD-EQMOM). Future work is needed to address these issues. ### Lid-Driven Cavity Flow Our last case is the two-dimensional lid-driven flow in a square cavity \([0,H]^{2}\). The upper wall moves horizontally with a constant speed \(u_{w}\) to drive the fluid while the other three walls are fixed. There are two types of lid-driven cavity flows. The first type is also termed the microcavity flow, where the Reynolds number \(\mathrm{Re}\) is so small that the flow is mainly characterized by the Knudsen number [34, 14, 24]. The other type with \(\mathrm{Re}\gg 1\) has been widely studied by either solving the Navier-Stokes equation [12] or employing the lattice Boltzmann method [19]. Figure 8. 2-D Riemann problem with \(\tau=10^{-4}\): density contours at \(t=0.25\) simulated by (a) DVD-DVM and (b) DVD-HSM without the weight function \(|\boldsymbol{\xi}|^{D-1}\). All other setups are the same as in the previous case. Figure 9. 2-D Riemann problem with \(\tau=10^{4}\) by the DVD-DVM: contours of (a) density, (b) velocity magnitude and (c) temperature at \(t=0.15\). The analytical solutions are shown as black dashed lines. We set \(N=24\) and the directions are of a similar form as before. Discrete velocities are \(\xi_{k}=0.4k-9.8\) for \(k=1,...,48\). In this case, the internal degrees of freedom are neglected, that is, \(D=2\) and \(L=1\). Our aim is to derive the steady-state flow field and the simulations start from a static flow (\(\mathbf{U}=\mathbf{0}\)) in equilibrium with a constant density \(\rho=1\) at \(t=0\). Let the initial temperatures of both the fluid and the walls be \(\theta_{0}\) and assume that the walls keep this temperature. Then the upper wall starts to move and drive the fluid in the cavity. The computation lasts until the flow becomes steady, i.e. when the \(L^{2}\)-norm of the difference of \(\mathbf{U}\) between two consecutive time steps is smaller than \(10^{-6}\). We first consider the microcavity flows, where the Knudsen number is \(\text{Kn}=\frac{\tau}{H}\sqrt{\frac{\pi\theta_{0}}{2}}\).
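As a small helper, the relaxation time that realizes a prescribed Knudsen number can be obtained by inverting this definition; the sketch below assumes the quoted formula and, for the example values, the microcavity parameters used in the runs described next.

```python
import numpy as np

def tau_from_knudsen(Kn, H, theta0):
    """Invert Kn = (tau / H) * sqrt(pi * theta0 / 2) for the relaxation time tau."""
    return Kn * H / np.sqrt(np.pi * theta0 / 2.0)

# e.g. the relaxation times corresponding to Kn = 0.1, 1 and 8 with H = 1, theta0 = 2.4
for Kn in (0.1, 1.0, 8.0):
    print(Kn, tau_from_knudsen(Kn, H=1.0, theta0=2.4))
```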
In this case, we set \(H=1\), \(\theta_{0}=2.4\) and \(u_{w}=0.32\), resulting in a Mach number of \(0.16\). We thus tune \(\text{Kn}\) by taking different values of \(\tau\). Only the DVD-DVM is used to simulate the flow, for which we set \(N=30\) and the directions \(\mathbf{l}_{m}=\left(\cos\frac{(2m-1)\pi}{60},\sin\frac{(2m-1)\pi}{60}\right)^{T}\) (\(m=1,\ldots,30\)). The discrete velocity nodes are taken as \(\xi_{k}=0.3k-8.85\) for \(k=1,...,60\). The cavity \([0,H]^{2}\) is divided into \(100\times 100\) uniform cells. The upwind scheme incorporated with diffuse-scattering boundary laws is applied here. Fig. 12 depicts the streamlines and the flow vector field for microcavity flows with various \(\text{Kn}\). A bulk vortex is clearly observed and the streamlines are almost axisymmetric about the horizontal center \(x=0.5H\). As \(\text{Kn}\) increases, the height (\(y\)-value) of the vortex center reduces. These features were also presented in previous works [34, 14]. Fig. 13 gives a comparison of the velocity profiles \(\mathbf{U}=(u,v)^{T}\) across the cavity center with the reference data [24]. Both \(u(y)|_{x=0.5H}\) and \(v(x)|_{y=0.5H}\) are plotted together for each \(\text{Kn}\). It is seen that the DVD-DVM results are in good agreement with the reference data. Figure 11: 2-D Riemann problem with \(\tau=10^{4}\) by the DVD-HSM: contours of (a) density, (b) velocity magnitude and (c) temperature at \(t=0.15\). The analytical solutions are shown as black dashed lines. We set \(N=30\) and \(M=14\). Figure 10: 2-D Riemann problem with \(\tau=10^{4}\) by the DVD-EQMOM: contours of (a) density, (b) velocity magnitude and (c) temperature at \(t=0.15\). The analytical solutions are shown as black dashed lines. \(N\) is set to be \(30\). We next consider the flow with high Reynolds numbers \(\mathrm{Re}\), where \[\mathrm{Re}=\frac{u_{w}H}{\theta_{0}\tau}.\] In this case, we set \(\mathrm{Re}=1000\) by taking \(u_{w}=0.2\), \(H=1\), \(\theta_{0}=1\), and \(\tau=2\times 10^{-4}\). This set of parameters characterises a nearly incompressible flow. For the DVD-DVM, we set \(N=6\) and the directions Figure 12. Microcavity flow: velocity streamlines of (a) \(\mathrm{Kn}=0.1\), (b) \(\mathrm{Kn}=1\) and (c) \(\mathrm{Kn}=8\). For discrete directions and velocities, we set \(N=30\) and \(\{\mathbf{l}_{m}=(\cos\frac{(2m-1)\pi}{60},\sin\frac{(2m-1)\pi}{60})\}_{m=1}^{30}\) while \(\xi_{k}=0.3k-8.85\) for \(k=1,...,60\). Figure 13. Microcavity flow: profiles of \(u(y)|_{x=0.5H}\) and \(v(x)|_{y=0.5H}\) for various Knudsen numbers. The red solid lines are our DVD-DVM simulations, and the black dashed lines are the reference data in [24]. For discrete directions and velocities, we set \(N=30\) and \(\{\mathbf{l}_{m}=(\cos\frac{(2m-1)\pi}{60},\sin\frac{(2m-1)\pi}{60})\}_{m=1}^{30}\) while \(\xi_{k}=0.3k-8.85\) for \(k=1,...,60\). \(\left(\cos\frac{(2m-1)\pi}{12},\sin\frac{(2m-1)\pi}{12}\right)^{T}\). The SSP2 scheme is adopted with the bounce-back boundary condition for no-slip walls. The discrete velocity nodes in each direction are taken as \(\xi_{k}=k-8.5\) for \(k=1,...,16\). Fig. 14 shows the steady-state velocity profiles across the cavity center. The benchmark data are from [12]. The physical domain \([0,H]^{2}\) is discretized to uniform cells. It is seen that when the uniform grids get finer (from \(80\times 80\) to \(160\times 160\)), the simulation results become more accurate and well captures the highly nonlinear boundary profiles. ## 7. 
Conclusions In this article, we have proposed a discrete-velocity-direction model (DVDM) based on the BGK equation with the internal molecular degrees of freedom. Assuming that the molecule velocity is restricted to a few prescribed directions but the velocity magnitude is still continuous, a semi-continuous DVDM is obtained, where the local discrete equilibrium in each direction is derived by the minimum entropy principle subject to the conservation laws. A key feature of the new model is the introduction of the weight function \(|\xi|^{D-1}\) in the evaluation of the macroscopic fluid quantities. This DVDM can be combined with various treatments of 1-D velocity distribution functions to develop multidimensional spatial-time approximations of the original BGK equation. Specifically, three spatial-time DVD-submodels are derived by incorporating the discrete-velocity model (DVM), the 1-D Gaussian-EQMOM and a Hermite spectral method (HSM). We remark that the DVD-DVM allows radially-positioned discrete velocity nodes, whereas the DVD-EQMOM and DVD-HSM can be regarded as alternative multidimensional versions of EQMOM and HSM, respectively. The feasibility of three spatial-time models have been verified numerically. For the numerical tests, the DVD-DVM and DVD-HSM are discretized with the second-order implicit-explicit Runge-Kutta scheme, while only the first-order upwind scheme is used for the DVD-EQMOM. Two widely-used limiting gas-solid boundary conditions, including the diffuse-scattering law and the bounce-back rule, are properly specified for the DVD-DVM and DVD-EQMOM. Only the Neumann condition is applied for the DVD-HSM. The numerical results for 1-D and 2-D Riemann problems, especially in both the hydrodynamic and rarefied limits, illustrate the ability of the DVDM submodels to capture flow discontinuities. Furthermore, Figure 14. Lid-driven cavity flow: (a) profiles of \(u(y)|_{x=0.5H}\) and \(v(x)|_{y=0.5H}\) and (b) the streamlines for \(\mathrm{Re}=1000\). Red circles are benchmark data [12]. The lines are the DVD-DVM results. The green dashed lines are from a \(80\times 80\) uniform grid and the blue solid lines are from a \(160\times 160\) uniform grid. We set \(N=6\) and choose \(\{\mathbf{l}_{m}\}_{m=1}^{6}\) as above. The discrete velocities in each direction are \(\xi_{k}=k-8.5\) for \(k=1,...,16\). The streamlines are based on data from the \(160\times 160\) uniform grid. the simulations of the planar Couette flow and lid-driven cavity flow agree reasonably well with the benchmark data in a wide range of flow regimes. The numerical tests suggest that the DVD-DVM should be used for the rarefied flows. On the other hand, our numerical results are just preliminary. Better results are expected by using higher-order numerical schemes for spatial-time models or by enlarging the order \(M\) of the DVD-EQMOM. These and the simulation of 3-D flows are our ongoing projects. ## 8. Acknowledgments This work is supported by the National Key Research and Development Program of China (Grant no. 2021YFA0719200) and the National Natural Science Foundation of China (Grant no. 51906122 and 12071246).
2301.10550
Structural insulators and promotors in networks under generic problem-solving dynamics
The collective coordination of distributed tasks in a complex system can be represented as decision dynamics on a graph. This abstract representation allows studying the performance of local decision heuristics as a function of task complexity and network architecture. Here we identify hard-to-solve and easy-to-solve networks in a social differentiation task within the basic model of small-world graphs. We show that, depending on the details of the decision heuristic as well as the length of the added links, shortcuts can serve as structural promotors, which speed up convergence towards a solution, but also as structural insulators, which make the network more difficult to solve. Our findings have implications for situations where, in distributed decision systems, regional solutions emerge, which are globally incompatible as for example during the emergence of technological standards.
Johannes Falk, Edwin Eichler, Katja Windt, Marc-Thorsten Hütt
2023-01-25T12:33:01Z
http://arxiv.org/abs/2301.10550v3
# Structural Insulators and Promotors in Networks ###### Abstract The collective coordination of distributed tasks in a complex system can be represented as decision dynamics on a graph. This abstract representation allows studying the performance of local decision heuristics as a function of task complexity and network architecture. Here we identify hard-to-solve and easy-to-solve networks in a social differentiation task within the basic model of small-world graphs. We show that, depending on the details of the decision heuristic as well as the length of the added links, shortcuts can serve as _structural promotors_, which speed up convergence towards a solution, but also as _structural insulators_, which make the network more difficult to solve. Our findings have implications for situations where, in distributed decision systems, regional solutions emerge, which are globally incompatible as for example during the emergence of technological standards. Graph Coloring Dynamics, Distributed Decision Strategies, Global Coordination, Self-organized dynamics ## 1 Introduction Self-organized dynamics on graphs are an important concept to analyze distributed decision-making and task coordination. Beyond social sciences [15, 16, 21] also logistics [9] and computer science [5, 12, 17] are interested in how distributed decisions can efficiently lead to global coordination, e.g., to avoid queuing or to minimize interference between wireless networks. In the simplest coordination problems, a node of the graph can select a decision (a 'color') out of a list of allowed decisions based on the observed decision states of its direct neighbors. The local decision heuristics (i.e., the decision selection criteria at each node) represent the goal of the systemic task. Such coordination tasks come in two variants [14]: Either the task is related to some type of consensus across the whole system. In this case, the graph is'solved', when no different colors are linked. Alternatively, these coordination tasks can be related to social differentiation, scheduling, or resource allocation. In this case, the graph is'solved', when no same colors are linked. Here we focus on the second scenario of social differentiation and scheduling. Its abstraction as color dynamics on graphs, related to the graph coloring problem, has been made popular by the seminal work of Kearns et al. [15]. This framework has led to relevant insight into problem-solving dynamics and some'stylized facts' about distributed decision making. Examples include the positive effect of random agents in a distributed decision system [21], the effect of a wave-like organization of attention and strategic waiting on these decision dynamics [11], and the effect of shortcuts in a small-world architecture on the convergence toward a fully solved system. This is visible, both in experiments with human subjects [15] and numerical simulations involving simple heuristics [11]. The decision heuristics introduced in Hadzhiev et al. [11] furthermore provided a better understanding of the interplay of centralized and autonomous, decentralized control in manufacturing planning and control [27, 3]. However, a striking characteristic of graph coloring dynamics has not been analyzed in the past: For a few or even a single shortcut (i.e., a small rewiring probability in the Watts-Strogatz model [26]) one observes a dramatic variability of runtimes. 
Here we show that - besides the random initialization, as well as the general stochastic nature of these dynamics - this high variability is due to the network topology: Depending on the exact positions as well as the heuristic employed, shortcuts in a ring graph can generate easy-to-solve and difficult-to-solve graphs. They can act as _structural insulators_ or _structural promotors_, i.e., they either delay or accelerate regional reorganization efforts towards a trans-regionally compatible solution. The problem we address is of relevance for many real-world applications: In these dynamics, regional solutions emerge rapidly, but they are incompatible on a global scale and the diffusing remaining conflicts, which are the boundaries of incompatible solution regimes, require an excessive amount of local reorganization, until one region switches to a solution compatible with another region. This problem of different locally valid solutions that are globally incompatible can especially be observed in the emergence of compatibility standards [24]: Different technical devices may be locally compatible based on one standard, but incompatible with functionally equivalent standards from other areas, leading to competition between alternatives [22] and ultimately resulting in a global standard. Examples of such battles are BlueRay vs HD DVD or Wi-Fi vs HomeRF [23]. There already exist some models to explain the success or failure of standards. But as economic models, they are focused on the interplay of strategic factors, business models, and business actors [20, 6]. Our investigation rather contributes to understanding the spatial organization of standards and hence the influence of the network topology on the time until a standard settles. ## 2 Methods We investigate heuristics that can solve the graph coloring problem based on local decisions. In this problem from _graph theory_, the goal is to assign colors to the vertices of a graph such that no two adjacent vertices have the same color. The minimum number of colors that are needed to color a network in this way is known as the _chromatic number_\(\chi\) of the graph. In this section, we explain how we generate graphs with a given chromatic number, introduce different local decision heuristics, and present a genetic algorithm that we use to generate networks with specific properties. ### Small-World Networks In this analysis, we mainly focus on small-world networks with few inserted links as a toy model for graphs with high clustering and small shortest path length. The idea of the graph generation follows [26]. However, since the networks are supposed to be solvable with a given number of \(\chi\) colors (the chromatic number), we generate them as follows: 40 (39 for \(\chi=3\)) nodes are arranged as a circular graph, where each node \(i\) is connected to its \(\chi-1\) closest neighbors in both directions. A given number of shortcuts are then added such that each shortcut connects only nodes with a different value of \(mod(i,\chi)\), where \(i\) is the node index, thus preserving the graph's chromatic number \(\chi\). To compare how fast different network topologies can be solved, we look at the number of color changes that have been performed until the network is in a solved state. The color changes then set a time scale where each time step is equal to one color change. 
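A minimal sketch of this construction could look as follows (assuming NetworkX; the function and argument names are illustrative). The residue-class test \(i\bmod\chi\neq j\bmod\chi\) is what keeps the chromatic number at \(\chi\) when shortcuts are added.

```python
import random
import networkx as nx

def chi_preserving_small_world(n=40, chi=2, n_shortcuts=5, seed=None):
    """Ring in which each node is linked to its chi-1 nearest neighbours on both
    sides, plus shortcuts that only join nodes with different values of i mod chi,
    so that the chromatic number chi is preserved (sketch of Sec. 2.1)."""
    rng = random.Random(seed)
    G = nx.circulant_graph(n, list(range(1, chi)))   # ring backbone with chi-1 neighbours per side
    added = 0
    while added < n_shortcuts:                       # rejection sampling of admissible shortcuts
        u, v = rng.sample(range(n), 2)
        if u % chi != v % chi and not G.has_edge(u, v):
            G.add_edge(u, v)
            added += 1
    return G

# e.g. a chi = 2 ring of 40 nodes with 5 admissible shortcuts
G = chi_preserving_small_world(40, chi=2, n_shortcuts=5, seed=1)
```

Measuring runtimes then amounts to running one of the heuristics of Sec. 2.3 on such a graph and counting color changes, as sketched further below.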
### Other graph topologies with \(\chi=2\) To extend our results to more general statements we generate three other types of random networks (only for \(\chi=2\)): * **BA**: For this network, we start with a simple path graph with 4 numbered nodes. We then add nodes and links following preferential attachment as described in [2] where each new node (labeled with a consecutive number) is attached to existing nodes via two links. However, and in contrast to the reference, to ensure that the graph has a chromatic number of 2, for an even (odd) number of already existing nodes, a newly added node can only connect to nodes with an odd (even) label. * **Random**: The procedure to create this graph starts with a graph of \(N\) unconnected nodes, labeled with an integer \(i\). A given number of edges is then sampled randomly from all edges that would connect two nodes with an even and an odd label. This ensures a chromatic number of \(\chi=2\). If the resulting graph is not connected, the procedure is repeated with a different set of randomly selected edges. * **Modular (Mx)**: To generate this graph, we start with two separate graphs \(A\) and \(B\) of type _random_. We then rewire \(x\) randomly selected edges so that each edge connects one node from \(A\) and one from \(B\). Similar to the procedure for small-world networks, the connections are always added in such a way that the chromatic number \(\chi=2\) is preserved. For small \(x\) the graph has high modularity. The larger \(x\) the, the more similar the graph becomes to a random graph. ### Neighborhood assessment strategies Agent-based models to solve graph coloring problems have already been analyzed in various variations. Inspired by the results from [15], Hadzhiev et al. [11] developed a family of local decision heuristics that allow agent-based networks to be solved in reasonably short times. Following the concepts from [11], a graph coloring heuristic consists of two components: One strategy for the temporal organization (indicating which node acts next) and one for the neighborhood assessment (indicating which color the active node selects). To simulate the behavior of independent distributed systems as closely as possible, we always use random sequential updates (R) for the temporal organization, which means that every time step the next node is selected at random from all available nodes. Using other heuristics for temporal organization, e.g. the channeled attention strategy (C) from [11], the results are qualitatively similar (data not shown). For the neighborhood assessment heuristic, we first refer to three strategies from [11], namely \(R\) (random), \(M\) (color minimizing), and \(W\) (strategic waiting). We then present a new (\(N\)) heuristic whose behavior can be continuously tuned by a parameter \(r\) (reasoning): For large values of \(r\) the agents always select their color by reasoned considerations. The smaller \(r\), the more often the color choice happens randomly. In all strategies, the active node first assesses the colors of its connected neighbors. If possible, the node randomly selects one of the colors that does not appear in its neighborhood (conflict-free color). Otherwise, the different strategies proceed as follows: * **R (random color):** The node selects a color at random from all available colors * **M (conflict minimizing color):** The node selects randomly a color from the set of colors that minimizes the number of conflicts. 
If the node has already the unique conflict-minimizing color, a color is selected at random. * **W (strategic waiting):** Equal to the M scheme, however, if the node has already the unique conflict-minimizing color, the present color is retained with probability \(p=0.9\). * **N (reasoning):** With a probability \(r\) the node randomly selects a color that minimizes the conflicts (reasoned acting). In the other case (with a probability \(1-r\)) it randomly selects a color from the list of all available colors. The \(N\) heuristic can hence be understood as a generalization of the three other heuristics. For small \(r\) the \(N\) heuristic is similar to the \(R\) heuristic, for intermediate \(r\) it is similar to the \(M\), and for large \(r\) to the \(W\) heuristic. In order to name the full heuristics, we follow the naming scheme that was also used in [11]: \(XY\) means that we used \(X\) as temporal organization strategy and \(Y\) as neighborhood assessment strategy. ### Genetic Algorithm To assess how strongly the topology of a network (with a fixed number of shortcuts) affects the runtime, we use a genetic algorithm that evolves to easy-to-solve or hard-to-solve networks (with respect to a given heuristic). The algorithm starts with an ensemble of six randomly selected small-world networks with the given number \(S\) of shortcuts and proceeds as follows: * Each network of the ensemble is randomly colored and then solved by the respective strategy. The time until solved (measured in activation steps) is averaged over 500 runs. * The two fastest (slowest) solved networks are kept for the next run, additionally, four networks are generated by mutations (rewiring of one shortcut) and by recombination (take \(n\) shortcuts from one network and \(S-n\) shortcuts from the other network) of these two fastest (slowest) networks. * These six new networks are the new ensemble for the first step. The process is terminated after 1000 evolution steps and the obtained topologies are saved. ## 3 Results We take the observed high variability of the distributed graph coloring problem as an opportunity to examine how the network topology influences the runtime. To focus the analysis we limit ourselves to networks with a chromatic number of \(\chi=2\). In the last part of the results section, we explain why networks with \(\chi>2\) show a significantly more complicated behavior, which results from the interaction of different mechanisms and thus defies a simple mechanistic explanation. We begin our investigation by looking at some results from [11]. The authors analyzed how different graph coloring heuristics perform in small-world networks when the number of shortcuts increases. In Fig. 1 we show the performance of the three heuristics that use random sequential updates (\(R\)) for the temporal organization, and \(R\), \(M\) or \(W\) as neighborhood assessment (see 2.3 for details). With the \(RR\) and \(RM\) heuristic, the more shortcuts the network has, the longer (on average) the nodes need to organize and finally solve the network. In contrast, using the \(RW\) heuristic the solution is reached faster with more added links, as it was also observed in human subject networks [15]. Looking at Fig. 1, it is also noticeable that - for a fixed number of shortcuts - the variance of the time steps required is strikingly high. Since the initial conditions for each run are chosen randomly and the heuristic contains stochastic components, a certain variance is to be expected. 
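To make the combination of random sequential updates with the \(N\) assessment concrete, here is a minimal sketch of one simulation run that returns the number of color changes until the graph is solved; the names and the conflict bookkeeping are illustrative, and a routine of this kind is also what the genetic algorithm of Sec. 2.4 averages over repeated runs (there measured in activation steps).

```python
import random

def solve_time_N_heuristic(G, chi, r, rng=random, max_activations=10**6):
    """RN dynamics sketch: random sequential updates (R) with the N (reasoning)
    neighbourhood assessment. Returns the number of color changes until no two
    adjacent nodes share a color (the time scale used in Fig. 1)."""
    colors = {v: rng.randrange(chi) for v in G}          # random initial coloring
    nodes = list(G)

    def conflicts(v, c):
        return sum(1 for u in G[v] if colors[u] == c)

    changes = 0
    for _ in range(max_activations):
        if all(colors[u] != colors[v] for u, v in G.edges):
            break                                        # solved
        v = rng.choice(nodes)                            # random sequential update
        counts = [conflicts(v, c) for c in range(chi)]
        free = [c for c in range(chi) if counts[c] == 0]
        if free:                                         # a conflict-free color exists
            new = rng.choice(free)
        elif rng.random() < r:                           # reasoned: pick a conflict-minimizing color
            best = min(counts)
            new = rng.choice([c for c in range(chi) if counts[c] == best])
        else:                                            # otherwise: pick any color at random
            new = rng.randrange(chi)
        if new != colors[v]:
            colors[v] = new
            changes += 1
    return changes
```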
An open question, however, is whether the topology, i.e. the location of the shortcuts, has an impact on the solvability. To test and quantify the impact of the topology, we use a genetic algorithm (see Sec. 2.4) that is designed to generate easy- and hard-to-solve small-world graphs with a small number of five added links. A strong difference between the runtimes of the extreme graphs could indicate whether and how the topology affects the runtime. Figure 1: Mean number of time steps (color changes) until the network is solved vs. the number of shortcuts for small-world networks using the \(RR\), \(RM\), and \(RW\) heuristic. The light area denotes the standard deviation (reproduced from [11]). Results of the network evolution for the \(RR\), as well as the \(RW\) heuristic, are presented in Fig. 2. The large difference between the fastest and slowest networks (120 vs. 2531 color changes for the \(RW\) heuristic, 406 vs. 1206 color changes for the \(RR\) heuristic) indicates that - for a fixed number of shortcuts - the runtimes depend strongly on the shortcut positions. Additionally, the resulting topologies seem to have characteristic features (see also the second column of Fig. 2): Long-range links facilitate fast solution finding for the \(RW\) heuristic, but create a difficult-to-solve network for the \(RR\) heuristic. Likewise, the easy-to-solve network for the \(RR\) heuristic is characterized by maximally short links, whereas for the \(RW\) heuristic the short links appear in the difficult graph. In what follows we will introduce a generalized heuristic and extract general features that can explain the interdependence between topology and runtime. Long-range links are often considered to be beneficial for a system-wide organization because they allow transmitting information over a long distance [1]. Our analysis is based on the idea that the respective agent must be able to process the additional information provided by a long link. When agents evaluate the observations from their neighborhood in a reasoned way, the remote information helps them to adapt to the global solution. If, on the other hand, the agents do not act in a reasoned way, the additional source of information creates confusion, which hinders the stabilization of local solutions. To test this proposition, we introduce a new heuristic \(N\). This heuristic can be continuously adjusted between reasoned and random behavior by means of a single parameter \(r\) (details in Sec. 2.3). We create a ring lattice with 40 nodes and add a single shortcut (with the constraint that the chromatic number \(\chi=2\) is conserved, see also Sec. 2.1). For Fig. 3 we set \(r\) to different values and analyze how the runtime depends on the relative length of the added shortcut (averaged over 10,000 runs each). As expected, if the heuristic is very reasoned (large \(r\)) the time until solved decreases for longer shortcuts. In contrast, if the heuristic contains a lot of randomness (small \(r\)), long-range links deteriorate the solvability of the graph. An additional observation is that the reasoned strategies work poorly when the inserted link is very short (an increase of the required time by about 30%). ### Reasoned Agents (large \(r\)) For large \(r\) the results are in line with the slow network obtained for the \(RW\) heuristic in Fig. 2. The slow network is characterized by comparably short links that create two densely connected areas. These clusters foster a fast emergence of local solutions.
Additionally, the short shortcuts stabilize the local solution against fluctuations from the outside. Figure 4a shows an example of such stabilization of a local solution. The larger the parameter \(r\), the more stable the locally solved areas. However, in the likely case that the local solution is not compatible with the currently prevailing global solution domain, the system is in a hard-to-solve state: the reasoned agents cling to their local solution, and the added link acts as a _structural insulator_. Contrarily, evolving towards topologies that are easy to solve for the \(RW\) heuristic, the resulting network is characterized by a few nodes that are connected to various areas of the network and that act as _ordering nodes_. These ordering nodes synchronize the local solutions already during their build-up. An example of the effect of a single long-range shortcut is shown in Figure 4b. Without the shortcut, the node labeled with "A" could either stay red or change its color to blue. In both cases, the result would be a single conflict with one neighbor. However, due to the shortcut - which is by definition inserted such that it does not alter the graph's chromatic number and, hence, a global solution remains possible - a change to blue minimizes the local color conflicts and acts as a local reference for the global solution domain. ### Irrational Agents (small \(r\)) The situation is different for irrational agents, i.e. with small \(r\) (similar to the \(RR\) heuristic). Here, Fig. 1 tells us that shortcuts consistently create graphs that are more difficult to solve than the pure ring graph, and the effect is stronger the longer the added link. Consequently, the results from Fig. 2 show that the fast networks are characterized by short links.
For the \(RR\) heuristic, the difficult-to-solve networks are characterized by long-range links, very similar to the graphs that are easy to solve for the \(RW\) heuristic. For irrational agents (as in the \(RR\) heuristic), the long links that connect a single node to various areas of the graph act like a source of noise: A color-fluctuation of the highly connected node immediately destabilizes the colors of all connected nodes, spread over the full network. ### Complex Topologies Having analyzed the interplay between the length of added links and the reasoning of the acting agents in small-world graphs, it is now natural to ask, whether this behavior can also be observed in more complex networks. As described in Sec. 2.2, we generated modular graphs (2 x 20 nodes, 40 edges each) with different numbers Figure 3: Relative extra time until ring graphs with 40 nodes and a single shortcut are solved vs. the relative length of the added shortcut for different values of \(r\). A relative length of 1 refers to a shortcut of maximal length, hence spanning 20 nodes. The time is measured relative to the time that is needed if the ring graph does not have any shortcut. of rewires, random graphs (40 nodes, 80 edges), and BA graphs (40 nodes). All graphs are generated such that \(\chi=2\). In Fig. 5(left) we show the distribution of the average shortest-path length for the different networks. For the modular graph, the more rewires we do, the shorter gets the path length. In Fig. 5(right) we show the time until solved vs the reasoning parameter \(r\) of the \(N\) heuristic (averaged over 10,000 networks each). For both the random networks and the BA graphs, the more reasoned the agents act, the faster they are. Note, however, that for \(r=1.0\) dead-lock situations are possible that cannot be solved (see e.g. Fig. 2 in [11]). The results confirm the observations from the small-world networks: Random networks as well as BA networks have small modularity and high connectivity. It is therefore unlikely that globally incompatible solutions can stabilize against the rest of the network. The modular network is, however, specifically designed to have two almost separate modules. Fig. 5 shows that in this case heuristics that act too reasoned have a disadvantage: If the two modules converge to different solution domains, it is difficult for the heuristic to overturn one solution. The more edges we rewire, the less modular the network is. Consequently, we observe that reasoned heuristics become more advantageous with the number or rewires. Figure 4: Comparison of the two effects a shortcut can have: (a) A short link stabilizes a solution regime against perturbations from the outside. In the example, there is a color conflict between the two red nodes (indicated by a red link). The right red node has two blue neighbors (one direct and one via the shortcut). If the node acts reasoned its color is stabilized since red minimizes the conflicts. (b) The sketch shows two sections of a large ring graph (indicated by the gray dashed line). The long shortcut organizes two distant sections and orders them. Without the shortcut, the node with the label “A” would have a 50% chance of keeping its color, compared to changing to blue. Due to the shortcut, reasoned-acting nodes will change to blue, since this is the conflict-minimizing color. ### Extension to \(\chi=3\) The natural extension of our investigation is to increase the chromatic number of the graphs. For Fig. 7 we performed a similar analysis as for Fig. 
3, but with a ring graph with 39 nodes and a chromatic number of \(\chi=3\). Depending on the length of the added shortcut the system takes longer or is faster to solve than without a shortcut. The general behavior of the network is on average similar to the one with a chromatic number of two (short shortcuts lead to longer times). However, there are also two drastic differences: (1) The curve shows an alternating behavior that was not present for the \(\chi=2\) graphs. The reason is a complicated interplay between the shortcuts and the different possible solution regimes. For two colors there are only two possible solution domains: \(abab\) or \(baba\). However, for three colors there are \(3!=6\) possible solution domains that are facilitated or suppressed depending on the position of the shortcut. (2) The relative effect of a single shortcut is not as strong as for the \(\chi=2\) graph. The main reason is that a shortcut at each end excludes only one color at a time. If there are only two colors a single disallowed color directly determines the correct color: \(\neg\text{red}\rightarrow\text{blue}\). However, the more colors we have the less effect has the banning of a single color. To control such a setting one would need to generalize the definition of a shortcut. For \(\chi=3\) such a generalized shortcut would hence consist of four conventional shortcuts that all-to-all connect two adjacent nodes with two other adjacent nodes. Figure 5: (left) Distributions of the average shortest-path length for the different random graphs shown in the right figure. The abbreviation \(Mx\) denotes a modular graph with \(x\) rewires. Each distribution contains 10,000 data-points. (right) Mean number of time steps (color changes) until the network is solved vs. the reasoning of the heuristic for different graph topologies (see Sec. 2.2), averaged over 10,000 networks. The standard-deviation of the mean is smaller than the markers. ## 4 Conclusion In small-world networks, shortcuts reduce the average path length and facilitate the transport of local information through the system [18]. One would therefore expect that distributed coordination problems on graphs always benefit from shortcuts, albeit the effect size might depend on the respective length of the shortcut. Here, we discussed the graph coloring problem as a simple form of distributed coordination problem. We analyzed how shortcuts affect the time a local heuristic needs to solve the coloring problem. Depending on how reasoned the agents act, added shortcuts give rise to different mechanisms: They synchronize the solution domains between distant sections of the network, stabilize parts of the network against fluctuations, or they create perturbations. For reasoned heuristics, shortcuts tend to insulate locally solved but globally incompatible solutions against each other, finally leading to an increase in the overall time until a solution is found. We call shortcuts that create such separated domains _structural insulators_. In contrast, long shortcuts foster early synchronization of otherwise distant areas of the network, which is why we call them _structural promotors_. The graph coloring problem can also be analyzed as an example of distributed logical systems: The conflicts encountered in graph coloring dynamics on a ring arise due to two (or more) coloring domains that are structurally equal (they are correctly colored) but locally different (they follow a different color permutation). 
From a mathematical point of view, this inconsistency between local logical systems relates to distributed logic. Our results can hence be interpreted from the perspective of Gotthard Gunther's _theory of polycontexturality_ (often also termed _transclassical logic_) [10]. Figure 6: Sketch of a ring graph with two locally correct but globally incompatible coloring domains. (left) Both sides of the ring have a locally correct coloring. For the red node in the first contexte (C1), the only logical color option is to stay red. Likewise, for the red node in the second contexte the only logical option is to stay red. However, since the solutions are globally incompatible, one contexte needs to change their logic in order to reach solved system. (right) Through an inserted link both contexte get the possibility to observe another contexte (another local logic). The color-choice of the connected neighbour is now affected by the own color. In the context of polycontextural logic, the link can hence be interpreted as a third context (C3), that allows self-reflection. According to this theory, every interacting subject spans a - possibly unique - isolated logic, a _contexture_. All contextures have equal rights and are aligned in a heterarchy. Therefore, no contexture can be said to be right or wrong. In our system, each node can be regarded as a subject (an observer) that spans a contexture. Different logics then show up by the fact that locally correct solutions do not match globally (compare Fig. 6(left)). However, if the network contains a link as depicted in Fig. 6(right), then each connected node can observe the respective node of the other contexture: it can observe the results of the observations of another observer. Gunther's theory states that these observations of other observers allow for self-reflection and a questioning of one's own logic. In our model, by observing persistent color conflicts with a remote node, nodes gain the ability to recognize that their own color choice does not correspond to the globally valid logic. To select the color randomly instead of based on reasoned considerations can then be understood as a switch of the local logic. In this view, it also becomes intuitive, why longer shortcuts serve as _promotors_ and shorter shortcuts serve as _insulators_: For a node with a shortcut, its ability to self-reflect the own logic requires a link to truly independent information, transcending the local solution regime. As a minimal model for the effects of links or information flow within polycontextural systems, the analysis of the graph coloring problem can contribute to heterarchical approaches in biology [4], consensus finding [8], complex and reflexive relations in social systems [25, 13], or transformations in physics [7]. We also believe that our findings have implications for the understanding of the emergence of technological standards (here represented by globally compatible solutions), as well as for the development of more robust scheduling schemes in manufacturing and resource distribution [19].
2303.11037
Delayed closed-loop neurostimulation for the treatment of pathological brain rhythms in mental disorders
Mental disorders (MD) are among the most demanding challenges in worldwide health. According to the World Health Organization, the burden of MDs continues to grow, with significant impact on health and major social and human rights consequences. A large number of MDs exhibit pathological rhythms, which serve as the disorders' characteristic biomarkers. These rhythms are the targets for neurostimulation techniques. Open-loop neurostimulation employs stimulation protocols that are largely independent of the patient's health and brain state at the moment of treatment. Most alternative closed-loop stimulation protocols consider real-time brain activity observations but act as adaptive open-loop protocols, where, e.g., a pre-defined stimulation sets in if observations fulfil pre-defined criteria. The present theoretical work proposes a fully adaptive closed-loop neurostimulation setup that tunes the brain activity's power spectral density (PSD) according to a user-defined PSD. The utilized brain model is non-parametric and estimated from the observations via magnitude fitting in a pre-stimulus setup phase. Moreover, the algorithm takes into account possible conduction delays in the feedback connection between observation and stimulation electrode. All involved features are illustrated on pathological alpha- and gamma-rhythms known from psychosis. To this end, we simulate numerically a linear neural population brain model and a non-linear cortico-thalamic feedback loop model recently derived to explain brain activity in psychosis.
Thomas Wahl, Joséphine Riedinger, Michel Duprez, Axel Hutt
2023-03-20T11:42:39Z
http://arxiv.org/abs/2303.11037v1
Delayed closed-loop neurostimulation for the treatment of pathological brain rhythms in mental disorders ###### Abstract Mental disorders (MD) are among the top most demanding challenges in world-wide health. According to the World Health Organization, the burden of MDs continues to grow with significant impact on health and major social and human rights. A large number of MDs exhibit pathological rhythms, which serve as the disorders characteristic biomarkers. These rhythms are the targets for neurostimulation techniques. Open-loop neurostimulation employs stimulation protocols, which are rather independent of the patients health and brain state in the moment of treatment. Most alternative closed-loop stimulation protocols consider real-time brain activity observations but appear as adaptive open-loop protocols, where e.g. pre-defined stimulation sets in if observations fulfil pre-defined criteria. The present theoretical work proposes a fully-adaptive closed-loop neurostimulation setup, that tunes the brain activities power spectral density (PSD) according to a user-defined PSD. The utilized brain model is non-parametric and estimated from the observations via magnitude fitting in a pre-stimulus setup phase. Moreover, the algorithm takes into account possible conduction delays in the feedback connection between observation and stimulation electrode. All involved features are illustrated on pathological \(\upalpha\)- and \(\upgamma\)-rhythms known from psychosis. To this end, we simulate numerically a linear neural population brain model and a non-linear cortico-thalamic feedback loop model recently derived to explain brain activity in psychosis. **Keywords:** neurostimulation, closed-loop, control, real-time, delay, EEG ## 1 Introduction Electrical neurostimulation is an old human idea, and has been a well-established therapy for mental disorders for few decades. Caius Plinius during Antiquity and Scribonius Largus, who lived in the first century AD, proposed respectively contacts with the Electric ray (Torpedo Fish) for the treatment of post-partum pain and severe headaches. In the 19th century, electrical stimulation was commonly prescribed by neurologists for nervous disease [1]. Today, various electrical stimulation techniques exist to modulate neuronal systems and novel techniques for an optimal clinical treatment of a specific pathology gain more and more attention. They could be used as an additional therapeutic lever or as an alternative to pharmacological medication, thus representing a hope for pharmaco-resistant forms of disease. Brain oscillations result from coordinated electrical neuronal tissues activity within and between structures and networks. Implicated in various neural processes, such as perception, attention and cognition, their disruption yields pathological rhythms, which reflect abnormal activity of the implicated brain network, notably at the cellular and molecular level [2]. These pathological rhythms serve as good biomarkers for neuropathologies. For instance, neurophysiological studies have revealed that a large number of mental disorders exhibit pathological rhythms, which do not occur in healthy patients [3]. Neurostimulation techniques have identified such pathological rhythms as good stimulation targets for the treatment of brain oscillatory disorders. Neurostimulation induces electric currents in neuronal tissue. Depending on the stimulation protocol, i.e. 
the temporal stimulation current shape, its duration and pause and the number of repetitions, neurostimulation can lead to neural plasticity effects or to pacemaker-like brain stimulation, respectively. For example, Deep Brain Stimulation (DBS) is an invasive technique and proposed for patients suffering from severe pharmaco-resistant Parkinson's disease (PD) or obsessive-compulsive disorders. In PD patients aberrant hypersynchronicity and hyperactivity in the \(\upbeta\)-frequency band (12-30 Hz) of the basal ganglia-thalamocortical network can be addressed by the pharmacological medication (e.g. Levodopa) or DBS. The conventional DBS protocols focus on the subthalamic nucleus or globus pallidus stimulation continuously at a temporally constant frequency about 130 Hz. The suppression of the pathological beta oscillations was correlated with improving motor symptoms [4]. Recent techniques [5; 6] propose to apply an adaptive closed-loop stimulation protocol based on observed intracranial brain activity. In addition to this intracranial neurostimulation technique, transcranial electrical stimulation (TES) and transcranial magnetic stimulation (TMS) are non-invasive neuromodulation approaches in which, respectively, a low electrical current and a magnetic field are applied to the cortical tissues. The TES current modalities include direct currents (tDCS), i.e. constant currents, alternating current (tACS), i.e. typically oscillatory currents, and random noise-shape currents (tRNS), which typically includes frequencies above the \(\upbeta\)-frequency band. It was shown that tDCS can improve cognitive performance in healthy subjects [7] and patients [8] and it is applied as a therapeutic means to target brain network dysfunctions, such as Attention-Deficit/Hyperactivity Disorder [9] and major depressive disorder [10]. Although the neurostimulation techniques mentioned above may permit to alleviate mental disorder patients from symptoms, the success rate of these treatments is still limited [11]. This underperformance results from non-optimal choices of the stimulation protocol originating from the lack of understanding of the underlying neural response to stimulations and the non-patient specific stimulation protocol. In other words, typically the stimulation protocol (including size, duration, repetition cycle of the stimulation signal) is open-loop, i.e. pre-defined without taking into account the current brain/health state of the patient [12]. This non-optimal approach is inferior to so-called closed-loop techniques, which adapt to the patients current brain/health state. Such an adaptive, or closed-loop, approach has been introduced for intracranial [13; 14; 15] and transcranial stimulation [16]. Recently proposed closed-loop methods are adaptive in the sense that a pre-defined stimulation signal is applied when observed brain activity fulfills certain criteria, such as passing an amplitude or power threshold. While this adaptive approach improves existing open-loop methods, the pre-defined stimulation signal may still be non-optimally chosen. We propose to estimate a stimulation signal on the basis of observed brain activity. The target stimulation signal is not pre-defined as in the open-loop setting but computed according to a pre-defined target spectral power distribution of the brain activity. To our best knowledge, this focus on a target brain activity spectral distribution has not been proposed before in a closed-loop neurostimulation setup. 
We argue that it is the natural choice for a closed-loop optimization in the presence of pathological rhythms: typically the pathology is identified by an abnormal power in a certain frequency band and the closed-loop control aims to modify this power value in such a way that the final brain activity power spectral distribution resembles the distribution of a healthy subject. This approach implies the hypothesis that modifying the observed pathological brain rhythms of a patient to resemble brain rhythms of a healthy subject renders the patients brain state and improves the patients health situation. This assumption was motivated by the impressive improving impact of DBS in psychiatric disorders [17]. Technically, the proposed method aims to reshape the spectral distribution of observed data, such as electroencephalographic data (EEG). For illustration, we consider pathological brain rhythms observed in psychosis in the \(\upalpha\)- [18] and \(\upgamma\)-band [19]. Our method relies on the extraction and the filtering in real-time of the brain resting state activity signal, using the EEG and an estimated brain response model. The underlying brain model is fully non-parametric and estimated from observed resting state EEG. Moreover, we consider the fact that the closed-loop feedback exhibits a certain conduction delay between measurement and stimulation. This conduction delay results from the transmission delay in the hardware and the numerical computation time of the stimulation signal. Very first estimates of this delay time are in the range of few tens of milliseconds [Private communication, Isope, 2020], i.e. in the range of EEG signal time scales. Consequently, the present feedback delay in real-world systems may affect the methods performance. To our best knowledge, the present study is the first considering delays in closed-loop neurostimulation systems. The remaining article is organized as follows : Section 2 presents the neurostimulation setup and the closed-loop circuit studied in the rest of this paper. Then, we propose a model-based controller design to apply desired modifications to the observed activity signal. Subsequently, we propose a model estimation method to extract the brain input response model needed for the controller design. Later, we address the problem of the closed-loop delay by designing an additional system to approximate the future values of the observations. Finally, we present two brain models, which illustrate and validate the proposed method. Then, Section 3 presents the simulation results of our circuits, including the accuracy of the model estimation step and the delay compensation. Lastly, in section 4, we discuss the results of the method presented in the paper compared to the state of the art, mention limitations and pinpoint some perspectives and possible experimental tests. ## 2 Material and methods ### Neurostimulation setup We build a theoretical plant as a circuit containing a stimulation element and an observation element, both connected to the model brain system under study. In real practice, the stimulation element corresponds to the neurostimulation device, such as a TES system or a TMS coil. In contrast, the observation element may represent electro-/magneto-encephalographic electrodes (in the following called EEG) or electrodes observing Local Field Potential. 
We define the time-dependent functions \(u:\mathbb{R}\rightarrow\mathbb{R}\) and \(y:\mathbb{R}\rightarrow\mathbb{R}\) as the input stimulation current and the output EEG signal, respectively. If no input current is applied, the output is a non-zero stochastic signal \(y_{0}\) corresponding to the measured resting state EEG activity and a non-zero neurostimulation current alters the output signal as a linear response. This alteration is caused by a change in the brain activity in response to the neurostimulation input and a direct measurement of the input current. The latter is undesirable as it is not correlated with brain dynamics but only with neurostimulation and measurement devices. In the following, we assume that observations include brain dynamics correlated output only while direct current measurements are filtered out. A method to remove the direct current measurement from the EEG signal is discussed in Section 4. Then, we define the plant \(\mathcal{P}\) as the system that takes \(u\) as its input and generates an output \(y\) which is equal to \(y_{0}\) when no input is applied. By modeling the dynamics of \(\mathcal{P}\), our goal is a neurostimulation signal \(u\) that causes predetermined changes in the spectral power amplitude of the output signal \(y\). In our case, the goal is to increase the activity in the alpha band (\(8-12\)Hz) and decrease the activity in the gamma band (\(25-55\)Hz). ### Linear time invariant model We assume that the observed output response to a small neurostimulation input \(u\) is linear and time-invariant (LTI). This assumption is supported by multiple results across literature [20, 21, 22]. Thus, there is an underlying LTI system \(\mathcal{G}\) that produces an output \(y_{u}\) for any given input \(u\). For this system, we can define a function \(g:\mathbb{R}\rightarrow\mathbb{R}\), which is the output produced by the plant input response system \(\mathcal{G}\) in response to a unit impulse signal \(\delta(t)\). This function \(g\) is also called the unit impulse response of \(\mathcal{G}\) and we have \[y_{u}(t)=g(t)*u(t):=\int_{-\infty}^{+\infty}g(t^{\prime})u(t-t^{\prime})dt^{ \prime}.\] with time \(t\) and \(*\) denotes the convolution over time. It leads to the total plant output \[y(t)=y_{0}(t)+y_{u}(t)=y_{0}(t)+g(t)*u(t). \tag{1}\] With this choice of model, the contribution of the neurostimulation response to the total output is purely additive, allowing us to focus the analysis on \(\mathcal{G}\), which represents the neurostimulation response part of the plant system. We also see that \(y_{0}\), the resting state activity, contains the stochastic part of the output, while \(y_{u}\) can be predicted for any known input signal \(u\) if we have a model for the system \(\mathcal{G}\). A method to estimate the plant input response model \(\mathcal{G}\) is presented in section 2.4. ### Closed-loop control In this section, we suppose that the function \(g\) is known. The estimation of \(g\) will be the aim of section 2.4. To close the loop, we generate the plant input signal \(u\) as the output of a linear controller \(\mathcal{K}\) in response to the plant output \(y\) \[u(t)=k(t)*y(t),\] where \(k:\mathbb{R}\rightarrow\mathbb{R}\) is the unit impulse response of the controller \(\mathcal{K}\). We can now rewrite Eq. (1) as \[y(t)=y_{0}(t)+g(t)*k(t)*y(t). \tag{2}\] Here, we assume that no delay between observation and stimulation application is present. We will relax this condition in section 2.5. To solve Eq. 
(2), we apply the Laplace transform defined for each time-dependent function \(x:\mathbb{R}\rightarrow\mathbb{R}\) by \[X(s)=\mathcal{L}\{x(t)\}(s):=\int_{0^{-}}^{+\infty}x(t)e^{-st}dt, \tag{3}\] Thus, we define \(Y:\mathbb{C}\rightarrow\mathbb{C}\), \(Y_{0}:\mathbb{C}\rightarrow\mathbb{C}\), \(G:\mathbb{C}\rightarrow\mathbb{C}\) and \(K:\mathbb{C}\rightarrow\mathbb{C}\) as the Laplace transforms of respectively \(y\), \(y_{0}\), \(g\) and \(k\), allowing us to write Eq. (2) as \[Y(s)=Y_{0}(s)+G(s)K(s)Y(s).\] Hence \[Y(s)=\frac{1}{1-G(s)K(s)}Y_{0}(s). \tag{4}\] We now have an equation for the closed-loop output in function of the resting state activity. A block diagram of the closed-loop circuit is shown in Fig. 1. Hence to design the frequency distribution of \(y\) we tune the frequency distribution of the transfer function \(K\) of the controller \(\mathcal{K}\) #### Controller synthesis Our closed-loop setup aims to tune the observation power spectrum, or equivalently, the choice of \(Y(s)\) subjected to the resting state \(Y_{0}(s)\). To this end, we define a linear filter \(\mathcal{H}\) with transfer function \(H:\mathbb{C}\rightarrow\mathbb{C}\) and \[Y(s)=Y_{0}(s)+H(s)Y_{0}(s). \tag{5}\] Specifically, we intend to restore the physiological state of the brain, e.g. of a schizophrenic patient as our motivation, with an observed EEG presenting low alpha activity and high gamma activity. The chosen filter Figure 1: **Closed-loop neurostimulation circuit** \(\mathcal{H}\) is a weighted double bandpass filter with positive weight in the \(\upalpha\)-frequency band to increase \(\upalpha\)-power and negative weights in the \(\upgamma\)-band to decrease the systems \(\upgamma\)-activity. The filter's transfer function is defined as \[H(s)=c_{1}\frac{2\pi B_{1}s}{s^{2}+2\pi B_{1}s+(2\pi f_{1})^{2}}+c_{2}\frac{2\pi B _{2}s}{s^{2}+2\pi B_{2}s+(2\pi f_{2})^{2}}.\] The exact parameters of \(\mathcal{H}\) are shown in table 1. We can synthesize the closed-loop controller \(\mathcal{K}\), by combining equations (4) and (5) and solving for \(K\) as \[\frac{1}{1-G(s)K(s)}Y_{0}(s) = Y_{0}(s)+H(s)Y_{0}(s)\] \[K(s) = \frac{H(s)}{(1+H(s))G(s)}. \tag{6}\] Therefore, if we know the plant input response transfer function \(G\), we can find that desired controller transfer function \(K\) by Eq. (6). Once the transfer function is obtained, we can use it to find a corresponding state-space representation [23] for time domain simulations. ### Model estimation The design of our closed-loop controller requires estimating the plant input response system \(\mathcal{G}\), which in practice includes the brain dynamics, the neurostimulation device and the observation device. Our approach includes the estimation of \(\mathcal{G}\) directly from observed brain activity, such as EEG of the patient. This ensure that the estimated plant model will be as close as possible to the real brain dynamics in the corresponding experimental conditions. To this end, we first need to find a way to measure the plant input response without also measuring the plant resting state activity. This is not trivial since the observed signal is the sum of the resting state activity and the stimulation response. #### Signal extraction Let us consider an open-loop setup with an arbitrary input \(u\) applied to the plant, which generates the output described by Eq. (1). In this equation, we only know \(u\) and \(y\), and want to estimate the impulse response \(g\). The problem is that we cannot observe \(y_{0}\) only during the stimulation. 
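As a concrete illustration of the synthesis step in Eq. (6), the following minimal sketch builds the double band-pass target filter \(\mathcal{H}\) and the controller \(\mathcal{K}\) with the python-control library, taking the band natural frequencies of table 1 as 10 Hz and 40 Hz. The plant model \(G\) used here is only a first-order placeholder so that the snippet is self-contained; in the actual setup \(G\) is the fitted input-response model.

```python
import numpy as np
import control as ct

def bandpass(f0_hz, bandwidth_hz, weight):
    """Weighted second-order band-pass: c * 2*pi*B*s / (s^2 + 2*pi*B*s + (2*pi*f0)^2)."""
    w0, B = 2 * np.pi * f0_hz, 2 * np.pi * bandwidth_hz
    return ct.tf([weight * B, 0.0], [1.0, B, w0**2])

# Target filter H(s): boost the alpha band, attenuate the gamma band (cf. table 1)
H = bandpass(10.0, 4.0, 1.0) + bandpass(40.0, 30.0, -0.5)

# Placeholder plant input-response model G(s); in practice this comes from the
# magnitude fit of the measured stimulation response (Section 2.4).
G = ct.tf([1.0], [0.02, 1.0])

# Controller synthesis, Eq. (6): K(s) = H(s) / ((1 + H(s)) G(s))
one = ct.tf([1.0], [1.0])
K = H / ((one + H) * G)

# State-space form of the controller for time-domain simulation
K_ss = ct.tf2ss(K)
```

The missing ingredient in practice is \(G\) itself, which brings us back to separating the stimulation response from the resting-state activity \(y_{0}\).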
Hence, based on previous data recordings, we need to find a way to predict the dynamics of \(y_{0}\) during the stimulation. First, we provide the following standard definitions that are important in the subsequent discussion. For any time domain signal \(x:\mathbb{R}\rightarrow\mathbb{R}\), we denote the Fourier transform by \[\hat{x}(f)=\mathcal{F}\{x(t)\}(f):=\int_{-\infty}^{\infty}x(t)e^{-2\pi ift}dt. \tag{7}\] We define \(\alpha_{0}:\mathbb{R}\rightarrow\mathbb{R}\) and \(\alpha_{u}:\mathbb{R}\rightarrow\mathbb{R}\) such as \(\alpha_{0}(t)=y_{0}(t)-\bar{y}_{0}\) and \(\alpha_{u}(t)=y_{u}(t)-\bar{y}_{u}\) where \(\bar{y}\), \(\bar{y}_{0}\) and \begin{table} \begin{tabular}{l l l} \hline \hline parameter & description & value \\ \hline \(f_{1}\) & \(\upalpha\)-band natural frequency & 10ms \\ \(B_{1}\) & \(\upalpha\)-band width & 4Hz \\ \(c_{1}\) & \(\upalpha\)-band weight & 1.0 \\ \(f_{2}\) & \(\upgamma\)-band natural frequency & 40ms \\ \(B_{2}\) & \(\upgamma\)-band width & 30Hz \\ \(c_{2}\) & \(\upgamma\)-band weight & -0.5 \\ \hline \hline \end{tabular} \end{table} Table 1: **Parameter set of the filter \(\mathcal{H}\)**. The frequency parameters are chosen based on the alpha frequency range (8-12Hz) and the gamma frequency range (25-55Hz) in an EEG. The weighting parameters \(c_{1}\) and \(c_{2}\), respectively positive and negative, corresponding to the choice to increase the alpha activity and decrease the gamma activity. \(\bar{y}_{u}\) are respectively the ensemble means of \(y\), \(y_{0}\) and \(y_{u}\). We assume that \(y_{0}\) is a wide-sense-stationary (WSS) random process, i.e. its mean and variance do not depend on time. According to the Wiener-Khinchin theorem [24, 25], the autocorrelation function of a wide-sense-stationary random process has a spectral decomposition given by the power spectrum of that process \[S_{yy}(f)=|\hat{\alpha}(f)|^{2},\] where \(\hat{\alpha}:\mathbb{R}\rightarrow\mathbb{C}\) is the Fourier transform of \(\alpha(t)=y(t)-\bar{y}\in\mathbb{R}\) and \(S_{yy}:\mathbb{R}\rightarrow\mathbb{R}^{+}\) is the spectral density of \(y\). Then, we can write Eq. (1) as \[\bar{y}+\alpha(t)=\bar{y}_{0}+\alpha_{0}(t)+\bar{y}_{u}+\alpha_{u}(t),\] where \(\bar{y}=\bar{y}_{0}+\bar{y}_{u}\). The equation then simplifies to \[\alpha(t)=\alpha_{0}(t)+\alpha_{u}(t).\] By application of the Fourier transform, we obtain \[\hat{\alpha}(f)=\hat{\alpha}_{0}(f)+\hat{\alpha}_{u}(f)\] and \[|\hat{\alpha}(f)|^{2}=|\hat{\alpha}_{0}(f)|^{2}+|\hat{\alpha}_{u}(f)|^{2}+2 \text{Re}[\hat{\alpha}_{0}(f)\hat{\alpha}_{u}(f)^{*}].\] In the following, we compute the ensemble average of each term of this equation. Since \(\alpha\) and \(\alpha_{u}\) are two independent processes sampled at different times and \(\langle\hat{\alpha}_{0}\rangle=\langle\hat{\alpha}_{u}\rangle=0\). Hence \[\langle 2\text{Re}(\hat{\alpha}_{0}(f)\hat{\alpha}_{u}(f)^{*})\rangle=2 \text{Re}[\langle\hat{\alpha}_{0}(f)\hat{\alpha}_{u}(f)^{*}\rangle]=0.\] Here and in the following, \(\langle\cdot\rangle\) denotes the ensemble average. We point out that although Eq. (8) does hold when considering the ensemble average of the signals, fluctuations around \(0\) still remain in Eq. (8) for finite ensemble number of finite time signals. Nevertheless, this yields \[\langle|\hat{\alpha}_{u}(f)|^{2}\rangle=\langle|\hat{\alpha}(f)|^{2}\rangle- \langle|\hat{\alpha}_{0}(f)|^{2}\rangle. \tag{8}\] Using Eq. 
(1), we can express \(\hat{\alpha}_{u}\) in terms of the input impulse response \(g\) and the input \(u\) \[\begin{split}\hat{\alpha}_{u}(f)&=\mathcal{F}\{y_{u }(t)-\bar{y}_{u}\}(f)\\ &=\mathcal{F}\{g(t)*[u(t)-\bar{u}]\}(f)\\ &=\hat{g}(f)\mathcal{F}\{u(t)-\bar{u}\}(f)\.\end{split} \tag{9}\] This equation permits to estimate the transfer function \(\hat{g}\), see Section 3. To express the transfer function \(\hat{g}\) in Laplace space, we use the fact that a unit impulse response function is non-zero only for positive time values \(t\). Hence, based on equations (3) and (7), for \(s=2\pi if\), we can write the Laplace transform \(G\) as \[G(2\pi if)=\int_{0^{-}}^{+\infty}g(t)e^{-2\pi ift}dt=\int_{-\infty}^{+\infty}g (t)e^{-2\pi ift}dt=\hat{g}(f).\] We now need a method to generate a LTI system with a transfer function that matches the magnitude data computed with the formula. This is achieved by the magnitude vector fitting algorithm. #### Magnitude vector fitting Our goal is now to find a transfer function \(G\) corresponding the magnitude data \(|\hat{g}(f)|^{2}\). For this purpose, we use a variant of the vector fitting algorithm design to work even with only the magnitude data. This method is called magnitude vector fitting [26]. It allows to fit a passive LTI system to data by fitting the model transfer function. The system is synthesized such that the mean square error between the magnitude data sample and the transfer function evaluated at the same frequency points is minimized. [26] show that the transfer function of the fitted model reproduces both the magnitude and the phase shift of the original transfer function, although the fitting has been performed using sampled magnitude data only. By minimizing the mean square error, the algorithm ensures that the transfer function of the fitted model accurately matches the original model as represented by the reconstructed gain data. Furthermore, to assess the accuracy of the reconstruction, we also compare the fitted model to the transfer function of the linearized brain model used for the simulation. This allows to double-check the validity of the reconstructed magnitude and also to verify if the reconstructed phase fits the phase of the original model as closely as possible cf. Fig 3C,D. ### Delay compensation Realistic feedback loops exhibit conduction delays between the moment of observation and feedback stimulation. Reasons for such delays are finite conduction speeds in cables, electronic switches, interfaces and delays caused by the controller device to compute numerically adapted stimuli. In systems with large time scales, such as controlled mechanical devices on the centimeter or larger scale, such delays may be negligible. Conversely biological systems such as the brain evolve on a millisecond scale and conduction delays may play an important role. Preliminary estimation of input and output devices of desktop computers have revealed an approximate delay of \(\sim 10\)ms. By virtue of such delays, it is important to take them into account in the closed-loop between the moment of observation and stimulation. The different sources of delay can be represented as plant input and output delays. Since the controller \(\mathcal{K}\) is LTI, the input and output delays can be concatenated into one single plant input delay. Hence, in our setup, we model the delay as an input delay \(\tau\) in the system \(\mathcal{G}\), modifying \(y(t)=g(t)*u(t)\) in Eq. (1) to \(y(t)=g(t)*u(t-\tau)\). 
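In discrete time this delayed plant response is simply a shifted convolution; the following is a minimal sketch, assuming uniformly sampled signals and an impulse response \(g\) tabulated on the same grid (names are illustrative).

```python
import numpy as np

def plant_output(y0, u, g, delay_steps, dt):
    """Sampled version of Eq. (1) with an input delay of delay_steps * dt:
    y[n] = y0[n] + dt * sum_k g[k] * u[n - k - delay_steps]."""
    u_delayed = np.concatenate([np.zeros(delay_steps), u])[: len(u)]
    y_u = dt * np.convolve(g, u_delayed)[: len(u)]
    return y0 + y_u
```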
The Smith predictor [27][28] is a known method to compensate such delay times. However, in the present problem, this approach allows controlling a limited frequency band only (see Fig. 7A)). Consequently, it was necessary to invent another method. Since the plant input \(u\) is generated by the controller \(\mathcal{K}\), we modify the controller to compensate the delay. To this end, the new controller \(\mathcal{K}\) is chosen to estimate the future value of \(u\) instead of the present value. A method to apply this controller modification is presented in Section 3.2. ### Brain models Our closed-loop control method works for any LTI brain model. Furthermore, we want to show that it also produces good results on non-linear brain models, for which the neurostimulation input response behaves closely to an LTI system, when the input is sufficiently small. To this end, we present two models used to test our method. The first one is a linear neural population model of cortical activity, and the second one is a non-linear cortico-thalamic neural population model with cortico-thalamic delay. #### 2.6.1 Linear brain model We describe neural population activity with a noise-driven linear model [29]. The model is composed of two pairs of interacting excitatory and inhibitory populations. Here we have \(V_{e,i}^{(1,2)}:\mathbb{R}\rightarrow\mathbb{R}\), representing the mean activity of the associated population, where \(V_{e}^{(1,2)}\) and \(V_{i}^{(1,2)}\) correspond respectively to excitatory and inhibitory populations. Each population is driven by noise \(\xi_{1,2}:\mathbb{R}\rightarrow\mathbb{R}\) and the external input \(u:\mathbb{R}\rightarrow\mathbb{R}\), according to the following differential equations: \[\left\{\begin{array}{ll}\tau_{e,1}\frac{dV_{e}^{(1)}(t)}{dt}&=(-1+N_{11})V_{e }^{(1)}(t)-N_{11}V_{i}^{(1)}(t)+b_{1}u(t)+\xi_{1}(t),\\ \tau_{i,1}\frac{dV_{i}^{(1)}(t)}{dt}&=N_{21}V_{e}^{(1)}(t)+(-1-N_{21})V_{i}^{(1 )}(t)+b_{2}u(t),\\ \tau_{e,2}\frac{dV_{e}^{(2)}(t)}{dt}&=(-1+N_{12})V_{e}^{(2)}(t)-N_{12}V_{i}^{( 2)}(t)+b_{3}u(t)+\xi_{2}(t),\\ \tau_{i,2}\frac{dV_{e}^{(2)}(t)}{dt}&=N_{22}V_{e}^{(2)}(t)+(-1-N_{22})V_{i}^{( 2)}(t)+b_{4}u(t),\end{array}\right. \tag{10}\] where the noise \(\xi_{1,2}\) is uncorrelated Gaussian distributed with zero mean and variance \(\kappa_{1,2}^{2}=10^{-7}\), and the stimulation \(u\) is weighted by the coupling constants \(b_{i}>0\) of the corresponding population. In addition, \(\tau_{(e,i),(1,2)}\) are the synaptic time constants of the populations, and constants \(N_{ij}>0\) are interaction gains of the respective population. Table 2 provides the parameters employed in subsequent simulations. The observed output \[y(t)=V_{e}^{(1)}(t)-V_{i}^{(1)}(t)+V_{e}^{(2)}(t)-V_{i}^{(2)}(t)\] is a sum of the effective field potential \(V_{e}^{(j)}-V_{i}^{(j)}\) of both populations \(j=1,2\), cf. Fig. 7 (top panels). The simulation of the linear brain model in time domain is done using the library control of python. The numerical integration is computed thanks to matrix exponential [30], with a simulation sampling time of 1ms. #### 2.6.2 Cortico-thalamic brain model A different model considers the cortico-thalamic feedback circuit [31]. It describes the cortex layers I-III and the cortico-thalamic loop between cortical layers IV-VI, the thalamic relay cell population and the reticular structure. The cortical layer I-III exhibits mean activity of excitatory cells \(v\) and inhibitory cells \(w\). 
Similarly, layer IV-Vis exhibits the mean activity \(V_{e}\) and \(V_{i}\) and thalamic relay cell populations the mean activity \(V_{th,e}\) and \(V_{th,i}\). Moreover, the reticular structure has the mean activity \(V_{ret}\). The fibers between the cortex and thalamus and the cortex and reticular structure exhibit a finite conduction delay \(\tau\)[31, 32]. The 7-dimensional dynamical system of the brain state \(\mathbf{x}=(v,w,V_{e},V_{i},V_{th,e},V_{th,i},V_{ret})\in\mathbb{R}^{7}\) obeys \[\left\{\begin{array}{ll}\hat{\mathbf{x}}(t)&=\mathbf{F}(\mathbf{x}(t), \mathbf{x}(t-\tau))+\boldsymbol{\xi}(t)+\mathbf{B}u(t),\\ y(t)&=\mathbf{C}\mathbf{x}(t),\end{array}\right. \tag{11}\] where the superscript \(t\) denotes transposition, \(\mathbf{F}\in\mathbb{R}^{7}\) is a nonlinear vector function, \(\mathbf{B}\in\mathbb{R}^{7\times 1}\) is the input coupling matrix and \(\mathbf{C}\in\mathbb{R}^{1\times 7}\) is the observation matrix. We mention that \(\mathbf{B}=(b_{1},b_{2},b_{3},b_{4},0,0,0)^{t},\ b_{i}>0\), i.e. only the cortical layers are stimulated with weights \(b_{i}\). The observation \(y\) captures the activity of the cortical \begin{table} \begin{tabular}{l l l} \hline parameter & description & value \\ \hline \(\tau_{e,1,2}\) & exc. synaptic time constant & 5ms \\ \(\tau_{i,1,2}\) & inhib. synaptic time constant & 20ms \\ \(N_{11}\) & first exc. linear coefficient & 1.15 \\ \(N_{21}\) & first inhib. linear coefficient & 0.63 \\ \(N_{12}\) & second exc. linear coefficient & 2.52 \\ \(N_{22}\) & second inhib. linear coefficient & 6.6 \\ \(N\) & number of neurons & 1000 \\ \(\kappa_{1,2}^{2}\) & noises variances & \(10^{-4}/N\) \\ \(b_{1,2}\) & input coupling constants & 0.18 \\ \(b_{3,4}\) & input coupling constants & 0.14 \\ \hline \end{tabular} \end{table} Table 2: **Parameter set of model (10).** The choice of parameter is partially based on the paper in which it was developed (see [29]). excitatory populations [31, 33] with \(\mathbf{C}=(c_{1},0,c_{3},0,0,0,0),\ c_{i}>0\). For more details, please see the Appendix. The time domain simulations of the cortico-thalamic model is done by numerical integration using the fourth-order Runge-Kutta method implemented by the scipy library in python with a maximum simulation time step of 1 ms. The signal produced by this cortico-thalamic brain model is shown in Fig. 2. ## 3 Results The present work addresses two major problems in closed-loop control: the correct model choice of the systems dynamics and the present conduction delay. The subsequent sections propose solutions for both problems and illustrate them in some detail by applying them to the linear brain activity model from section 2.6.1. The final section demonstrates the closed feedback loop for the cortico-thalamic brain model from Section 2.6.2. ### Model estimation Equations (8) and 9 permit to express the magnitude of \(\hat{g}(f)\) in terms of the spectral densities of observable signals \[\begin{split}|\hat{g}(f)|^{2}|\mathcal{F}\{u(t)-\bar{u}\}(f)|^{2 }&=|\hat{\alpha}(f)|^{2}-|\hat{\alpha}_{0}(f)|^{2}\\ |\hat{g}(f)|^{2}S_{uu}(f)&=S_{yy}(f)-S_{yy_{0}y_{0} }(f)\\ |\hat{g}(f)|^{2}&=\frac{S_{yy}(f)-S_{yy_{0}y_{0}}(f )}{S_{uu}(f)}.\end{split} \tag{12}\] The spectral density functions \(S_{yy_{0}}\) and \(S_{yy}\) may be estimated numerically from output data before and during a stimulation with a known chosen stimulation function \(u\). The estimation may be performed by applying conventional methods, such as the Welch method [34]. 
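A minimal sketch of this estimation step using Welch periodograms is given below. It assumes one resting-state recording and one recording during a known stimulation, both uniformly sampled; in practice the ensemble averages of Eq. (8) call for averaging over repeated epochs, and the function name is illustrative.

```python
import numpy as np
from scipy.signal import welch

def estimate_gain_squared(y_rest, y_stim, u_stim, fs, nperseg=4096):
    """Estimate |g_hat(f)|^2 via Eq. (12).

    y_rest : resting-state output y0 (no stimulation applied)
    y_stim : output y recorded while the known input u_stim was applied
    fs     : sampling frequency in Hz
    """
    f, S_y0y0 = welch(y_rest, fs=fs, nperseg=nperseg)
    _, S_yy = welch(y_stim, fs=fs, nperseg=nperseg)
    _, S_uu = welch(u_stim, fs=fs, nperseg=nperseg)
    gain_sq = (S_yy - S_y0y0) / S_uu            # Eq. (12)
    # Finite-length records leave fluctuations around zero; clip before the magnitude fit.
    return f, np.clip(gain_sq, 0.0, None)
```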
These estimations provide the magnitude of the transfer function \(|\hat{g}|\) by utilizing Eq. (12). In detail, at first, we considered the linear model (10) and injected a white noise current into the plant gaining the system's response signal together with the resting state activity, cf. Fig. 3A. The subsequent estimation of \(S_{yy}(f),\ S_{yy_{0}}(f)\) and \(S_{uu}(f)\) (see Fig. 3B) from the data permitted to compute the brain input response model \(\hat{g}(f)\) by Eq. (12). We observe a very good accordance of the original model response function and its estimation in magnitude (see Fig. 3C) and phase (see Fig. 3D). The remaining error in the estimated model compared to the original model depends on the amplitude of the driving noise \(\xi\), cf. Fig. 4. High driving noise can also cause the magnitude vector fitting algorithm not to converge, leading to a non-minimal mean-square error between the fitted and the original models when evaluated at the frequency sample points used for the algorithm. This problem can be solved by increasing the amplitude of the input current \(u\) that we inject in the plant, which decreases the contribution of the rest state driving noise \(\xi\) to the output signal relative to the input Figure 2: **Resting state activity computed from the cortico-thalamic brain model.** Left: Observation time series in a certain time window. Right: Power spectral density of the observation time series. current. Although the remaining dominant input current is also noisy, its value at any time or frequency is known, meaning that it is canceled out in the ratio \(\frac{S_{\text{xx}}}{S_{\text{xx}}}\) in Eq. (12). This effectively leads to lower noise in the transfer function magnitude data extracted with Eq. (12). The limitation is then set by the maximum amplitude of the current we are allowed to inject into the brain in a given neurostimulation setup. Indeed, the amplitude of the current is limited both for safety reasons that are beyond the scope of this paper and because of the assumption of linearity on which our method is based and which requires small currents. ### Delay compensation Delay compensation is achieved by adding another LTI system at the output of the controller \(\mathcal{K}\) cf. Fig. 5, whose purpose is to reproduce the transfer function of a negative delay. We call this system the predictor \(\phi\). However, perfectly reproducing the transfer function of a negative delay would be impossible since the associated time-domain system would then be a perfect predictor, which is a non-causal, i.e. un-physical, system. Nonetheless, we can build a causal and stable system that behaves almost like a perfect predictor, however only in the frequency ranges of interest. The numerical implementation of the controller necessitates discretization in time. Consequently, it is reasonable to choose the predictor design as a discrete-time system, meaning that for any input signal at \(x_{t}:\mathbb{R}\rightarrow\mathbb{R}\) at an instant \(t\in\mathbb{R}\), it approximately predicts the future signal \(x_{t+\Delta t}\) where \(\Delta t\in\mathbb{R}\) is the sampling time chosen when building the predictor. Since \(x\) is a discrete sequence, its transfer function is obtained using the Z-transform, defined as \[X(z)=\mathcal{Z}\{x_{n\Delta t}\}(z):=\sum_{n=0}^{\infty}x_{n\Delta t}z^{-n},\] with \(z\in\mathbb{C}\) and \(X:\mathbb{C}\rightarrow\mathbb{C}\). 
Then the transfer function \(\Phi:\mathbb{C}\rightarrow\mathbb{C}\) of a negative delay of one step \(\Delta t\) applied Figure 3: **The magnitude vector fitting algorithm successfully reconstructs the transfer function \(G\) from magnitude-only data.****A)** Time series of the resting state activity (blue), the input signal (green) and the stimulation response (red). **B)** Spectral densities of the simulated input signal (green), the resting state activity (blue) and the stimulation response (red). The input signal is a white noise with chosen standard deviation 0.005. **C)** Reconstructed gain \(|\hat{g}|\) of the plant input response. The fitted model (dashed cyan) accurately matches the original model (black). The red curve is the raw data used for fitting, computed from the spectral density data in panel A) using Eq. (12). **D)** Reconstructed phase of the plant input response \(\hat{g}\) to \(x\) would simply be \(\Phi(z)=z\), the Z-transform of a one-step delay. However, this choice would be non-causal, which is not implementable numerically in time. Nevertheless, to obtain a stable and implementable system with a transfer function as close as possible to \(z\), we chose the ansatz \[\Phi(z_{0})=\frac{b_{0}z_{0}+b_{1}}{z_{0}-a}=z_{0}, \tag{13}\] for a fixed value \(z=z_{0}\) and where \(a\in\mathbb{R}\) is the pole of the system and \(b_{0}\in\mathbb{R}\) and \(b_{1}\in\mathbb{R}\) are the polynomial coefficients of the numerator of \(\Phi\). This equation corresponds to the transfer function of a discrete LTI system with exactly one pole and one zero, which is the closest form of a proper rational function to the identity function of \(z\) in the sense that it has only one more pole. We add the additional constraints that \(|a|<1\), since this is the necessary and sufficient condition for the discrete predictor \(\phi\) to be stable. We choose to reformulate this problem by setting \(a\) as a free parameter. This way, we can select any \(a\) between \(-1\) and \(1\), and the remaining parameters are found by solving the linear equation \(b_{0}z_{0}+b_{1}=z_{0}(z_{0}-a)\), where \(z\in\mathbb{C}\) is a chosen complex frequency point at which we want this equation to hold. Since there are two unknowns, we can write a second equation in which we want the derivative of each side of the equation also to Figure 4: **The magnitude vector fitting algorithm’s performances depend on the amplitude ratio of the stimulation current and the driving noise.** Each row correspond to a different signal-to-noise ratios (SNR), computed as the ratio between the mean input coupling strength and the mean noise standard deviation. The transfer function magnitude data (red dots) are then used to synthesize a plant model via the magnitude vector fitting algorithm. The left (right) column corresponds to the transfer function magnitude (transfer function phase). We see that the noise levels in the transfer function magnitudes are higher for stronger brain-driving noise. The fitted model is coded in dashed cyan and deviates more from the original model for higher noise levels. be equal, yielding \(b_{0}=2z_{0}-a\). By replacing \(b_{0}\) in the first equation, we obtain \[z_{0}(2z_{0}-a)+b_{1} =z_{0}(z_{0}-a)\] \[b_{1} =-z_{0}^{2}.\] In the z-domain, the zero frequency corresponds to \(z_{0}=1\). We choose to solve this equation for this point, hence we can replace \(a\), \(b_{0}\) and \(b_{1}\) in Eq. (13) which yields \[\Phi(z)=\frac{(2-a)z-1}{z-a}. 
\tag{14}\] This transfer function can then be converted to an associated state-space representation and used for time domain simulations with a sampling time \(\Delta t\). The output of this system will then be \(y_{t}\approx u_{t+\Delta t}\) for any input signal \(u_{t}\). Simulating delays greater than the system sampling time is simply achieved by concatenating multiple times this predictor system. Here the delay has to be a multiple of the sampling time. This predictor can then be appended to the output of the digital controller \(\mathcal{K}\). To avoid closed-loop instability, we must limit the amplitude of the feedback signal computed from the controller input signal. This amplitude is determined by the three systems \(\mathcal{G}\), \(\mathcal{H}\) and \(\mathcal{K}\). Since \(\mathcal{G}\) is defined by the system under study and \(\mathcal{H}\) is the chosen filter defining the desired modifications in the frequency distribution of the observed signal, \(\phi\) (or equivalently parameter \(a\)) is the only degree of freedom. Figure 6 shows the region of closed-loop stability as a function of the predictor pole \(a\) and the delay. Because the predictor has a gain that is still slightly greater than one in the frequency ranges of interest, we reduce the weights of the filter \(\mathcal{H}\) to compensate for the excess gain at the \(\upalpha\) and \(\upgamma\)-peaks. To do this, we simply divide the weight of each band by the magnitude of the predictor system evaluated at the band's natural frequency. This reduces the errors in the closed-loop transfer function in the \(\upalpha\) and \(\upgamma\)-ranges. Figure 5: **Closed-loop neurostimulation circuit with predictor** Figure 6: **The predictor pole location affects the closed-loop stability.** The magnitude of the pole with the highest magnitude in the closed-loop transfer function parameterizes the stability of the closed-loop. Indeed, if this value is less than 0 dB, then all the poles of the closed-loop transfer function have a magnitude less than 0 dB, meaning that the system is stable. The system is unstable otherwise. Here the full curve, the dashed curve and the dotted curve correspond to predictors for delays of 3 ms, 5 ms and 10 ms, respectively. The higher the delay is, the lower is the size of the region of closed-loop stability for \(a\). Figure 7(B) shows results combining the model estimation by vector fitting and the delay compensation. The proposed closed-loop control yields an increase in \(\mathfrak{a}\)-power and a decrease in \(\gamma\)-power according to the employed target filter \(\mathcal{H}\). The application of a conventional reference signal control and Smith predictor for delay compensation (Fig. 7(A)) does not yield a reduction of higher \(\gamma\)-frequency activity. This can also be seen in Fig. 7(bottom panel), showing that the proposed scheme adapts much better to the target gain function than the reference signal control scheme. Generally, both methods fail to adapt well to very high-frequencies (details not shown). Figure 7: **Model-based closed-loop neurostimulation with delay compensation successfully decreases gamma activity while reference signal-based control with Smith predictor fails.****A)** Simulation data of the reference signal-based control design with Smith predictor. **B)** Simulation data of the model-based control design with delay compensation. 
The upper panels show the time series of the resting state activity signal \(y_{0}\) (blue) and the closed-loop output signal \(y\) (red) and the input current \(u\) (green). The amplitude of the stimulation current is much larger for reference signal-based control than for model-based control. The center panels show spectral densities of the resting state activity signal \(y_{0}\) (blue), the closed-loop output signal \(y\) (red) and the input current \(u\) (green). The activity is increased in the alpha range and decreased in the gamma range for model-based control, however, is increased everywhere for reference signal-based control. The spectral density of the input current is again much larger for reference signal-based control than for model-based control. The lower panels show the spectral density gain from \(y_{0}\) to \(y\) of the closed-loop systems. The dashed red curve is computed from the closed-loop transfer function and the black curve is the target curve computed from the transfer function \(1+H(s)\). We see that the implemented closed-loop applies the correct modifications in alpha and gamma ranges for model-based control but not for reference signal-based control where the error is large for frequencies above the alpha range. The conduction delay is 5ms and the value of the parameter \(a\) in the delay compensation scheme is chosen to \(a=0.55\) #### 3.2.1 Accuracy #### 3.2.2 Stability As discussed earlier, delay compensation can destabilize the closed-loop system depending on the parameters of its components. However, if the correct predictor pole is chosen based on Fig. 6, the closed-loop will remain stable. These values are computed under the assumption that there are no model estimation errors. If we take into account the inaccuracies in the fitted brain model compared to the original brain model, extra gain can add up in the feedback signal, introducing again the risk of destabilizing the closed-loop. This is trickier to solve, as we assume here that in a real experimental setup that, it is very difficult to reduce these remaining errors further by the method proposed. Hence the solution is either to simply reduce the amplitude of the spectral density modification that we want to apply by reducing the amplitude of the transfer function of filter \(\mathcal{H}\), or to reduce the amplitude of the predictor \(\oplus\) reducing its accuracy and possibly increasing delay errors. In any case, the inaccuracies in the estimated brain model create errors in the closed-loop transfer function regardless of the delay. Figure 8: **Delay decreases the accuracy of the closed-loop transfer function** For uncompensated delay (dashed blue curve), the closed-loop transfer function significantly deviates from the target transfer function defined as \(1+H(s)\) (black curve). Delay compensation (dashed red curve) reduces the deviation from the target transfer function in the \(\alpha\)- and \(\gamma\)-frequency range for delays of 3ms and 5ms. However, the error is still large in the \(\gamma\)-range for a delay of 10ms. ### Application to cortico-thalamic circuit model To extend the analysis to a biologically more realistic model, we employed a nonlinear cortico-thalamic brain model (cf. section 2.6.2). Fitting a linear transfer function to the brain model activity as described above, we found a good accordance of fitted and original model as can be seen in Fig. 9A),B). Small deviations in the gain and the phase resulted from the internal delay in the brain model and its non-linearity. 
Indeed, the magnitude vector fitting algorithm does not reproduce this internal delay; instead, it synthesizes a delay-free linear system that still approximates the transfer function of the original model well. Nonetheless, the non-linearity of this model can also decrease the accuracy of the fitting, as we are trying to represent a non-linear input response model by a linear one. However, this effect is only seen when the current is large enough for the non-linear part of the response to be significant. In fact, the model-based control enhances \(\upalpha\)-activity and diminishes \(\upgamma\)-activity in good accordance with the imposed filter \(\mathcal{H}\) (see Fig. 9C)). This can also be seen in the closed-loop transfer function, which corresponds well to the target transfer function (see Fig. 9D)) for small and medium frequencies. The closed-loop transfer function deviates from the target transfer function for large frequencies beyond the \(\upgamma\)-frequency range. This results from the employed conduction delay. To better elucidate the functions of the different elements of the proposed method, we applied a second closed-loop setup, where the neurostimulation input was applied to the first three layers of the cortex modeled by \(u\) and \(v\) and to the reticulum modeled by \(V_{ret}\) (Fig. 10). In this setting, the response in the high-frequency ranges is mainly produced by the cortex, while the response in the low-frequency ranges originates mainly from the reticulum and the thalamic relay structure, with a gap approximately between 10 Hz and 20 Hz. The weak response between 10 Hz and 20 Hz observable in Fig. 10A is compensated by the controller, which produces a high-magnitude stimulation in the closed loop for these frequencies (cf. Fig. 10C). The second consequence is the inaccuracy of the closed-loop output in the low-frequency ranges; this is caused by the rather long cortico-thalamic internal delay. This delay yields a larger phase shift at low frequencies and originates from the fact that we observe signals in the cortex, but stimulate in the reticulum.

Figure 9: **Fitted model-based control using the cortico-thalamic brain model successfully reproduces the target transfer function in the frequency domains of interest.** **A)** Magnitude of the fitted brain model transfer function (dashed cyan) compared to the magnitude of the original cortico-thalamic brain model transfer function (black). **B)** Phase shift of the fitted transfer function (dashed cyan) compared to the phase shift of the original transfer function (black). **C)** Spectral densities of the rest state activity signal (blue), the stimulated brain output (red) and the stimulation signal (green). **D)** Closed-loop transfer function (dashed red), compared to the target transfer function \(1+H(s)\) (black).

## 4 Discussion

The goal of the proposed method was to design a delayed closed-loop control method to apply defined modifications to the spectral distribution of an observed signal, such as EEG or LFP. The presented work explicitly describes all the steps needed to build a delayed closed-loop neurostimulation setup to restore the physiological brain state of a patient [35]. Since the controller is modeled as a linear time-invariant system, its implementation is lightweight, straightforward, and easily applicable in most embedded systems. Applications to a simple neural population model (Fig. 7) and to a biologically plausible cortico-thalamic feedback system (Fig.
9 and 10) demonstrate its elements and their impact on the control performance.

### Main contributions

#### Model estimation

We assume a resting-state activity signal driven by noise when no neurostimulation is applied. Injecting a stimulation creates an additional response that adds to the resting state. Consequently, both the resting state signal and the response signal can be observed separately in experimental practice, and they serve to estimate a linear state-space model as outlined in section 3.1. This approach is successful for both simplified linear models (cf. Figs. 3,4) and neurophysiologically realistic nonlinear models (cf. Fig. 9). This approximation is suitable for nonlinear systems whose dynamics evolve close to a stationary state. Several studies have already presented evidence confirming that the measured brain dynamics behave mostly linearly at macroscopic scales [20], [21]. Moreover, in the case of the brain response to small neurostimulation input, our assumption of the linear brain response is supported by the results of [22]. The authors of this study measured the controllability Gramian of their brain model with nonlinear sigmoid transfer function, similar to the cortico-thalamic brain model [31] used in this paper. If the system exhibits nonlinear dynamics far from any linear approximation, such as bistable dynamics and chaotic evolution, the proposed vector fitting technique may yield too large a model error and thus instability of the closed-loop feedback. The hypothesis of macroscopically linear dynamics has also recently been tested against various nonlinear models [36]. While that work included fitting methods for both linear and nonlinear brain models, our work chose the paradigm of purely frequency-domain model fitting with the magnitude vector fitting algorithm [26] and applied it to the brain input response system, which we could isolate thanks to a simple open-loop neurostimulation setup. While models have already been studied in application to neurostimulation [37], [38], we propose a straightforward black-box modeling approach that is directly usable for adaptive closed-loop neurostimulation and is easily applicable to each individual patient before any closed-loop neurostimulation session.

Figure 10: **Reticulum stimulation yields incorrect closed-loop gain in low-frequency ranges.** **A)** Magnitude of the fitted brain model transfer function (dashed cyan) compared to the magnitude of the original cortico-thalamic brain model transfer function (black). **B)** Phase shift of the fitted transfer function (dashed cyan) compared to the phase shift of the original transfer function (black). **C)** Spectral densities of the rest state activity signal (blue), the stimulated brain output (red) and the stimulation signal (green). **D)** Closed-loop transfer function (dashed red), compared to the target transfer function \(1+H(s)\) (black).

#### Delay compensation

Conduction delays of a few milliseconds in the transmission between observation and stimulation may be negligible in systems evolving on time scales of seconds or longer, but may play an important role in neural systems. Our study demonstrates that such feedback delays may introduce control errors and we show how these errors can be avoided by a novel delay compensation method (section 3.2). Application to the linear model (7) demonstrated its superior performance compared to a conventional delay compensation method. Delay compensating systems have already been described in other work [39], [40].
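To make the mechanics of the proposed compensation concrete, the sketch below illustrates the pattern described in section 3.2: a continuous-time one-step predictor is discretized with the loop sampling time and concatenated once per sampling period of delay. It is a hedged illustration only; the first-order lead section and its time constants are stand-in assumptions and do not reproduce the exact predictor of Eq. (14).

```python
import numpy as np
from scipy import signal

# Illustrative one-step-ahead predictor: a first-order lead section whose
# low-frequency group delay is roughly -(tau_lead - tau_lag), i.e. a lead of
# about one sampling period. Assumed values, not the paper's Eq. (14).
dt = 1e-3                                   # sampling time Delta t = 1 ms
tau_lead, tau_lag = 1.5e-3, 0.5e-3          # assumed time constants (s)
num, den = [tau_lead, 1.0], [tau_lag, 1.0]  # P(s) = (tau_lead s + 1)/(tau_lag s + 1)

# A delay of n_stages * dt is compensated by concatenating n_stages copies.
n_stages = 5                                # e.g. a 5 ms feedback delay
num_tot, den_tot = np.array([1.0]), np.array([1.0])
for _ in range(n_stages):
    num_tot = np.polymul(num_tot, num)
    den_tot = np.polymul(den_tot, den)

# Discretize and convert to a state-space model for time-domain simulation.
numd, dend, _ = signal.cont2discrete((num_tot, den_tot), dt, method="bilinear")
A, B, C, D = signal.tf2ss(np.squeeze(numd), dend)
predictor = signal.StateSpace(A, B, C, D, dt=dt)

# Sanity check on a slowly varying test signal: the output should lead the
# input by roughly n_stages samples (the approximation degrades at high
# frequencies, which is why the delay limits the controllable band).
t = np.arange(0.0, 1.0, dt)
u = np.sin(2.0 * np.pi * 3.0 * t)           # 3 Hz test input
_, y, _ = signal.dlsim(predictor, u, t=t)
y = y.ravel()
mse = np.mean((y[:-n_stages] - u[n_stages:]) ** 2)
print(f"MSE between predictor output and u shifted by {n_stages} samples: {mse:.2e}")
```

The same chained structure underlies the 3 ms, 5 ms and 10 ms cases whose stability regions are compared in Fig. 6.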
In contrast to these earlier delay-compensation designs, ours focuses primarily on correcting a gain error in the closed-loop transfer function, whereas the majority of the current research is based on time-domain criteria and stability enforcement [41], [42]. The method's performance, i.e., how well the total gain function fits the pre-defined transfer function, is good at low frequencies but weakens for frequencies exceeding a limit frequency. Note that frequency domain compensation has also already been achieved, notably via delay equalizers [43]. However, this would restrict the frequency range in which the delay is compensated, and create additional errors in the surrounding frequencies. Other designs include filters with negative group delays; however, their applications are limited to band-limited input signals [44], [45]. The predictor design we presented also relies on negative group delay, enabling delay compensation in a large frequency band, while still being applicable to the brain EEG, which is inherently not band-limited because of the noise. Nonetheless, while our predictor design significantly decreases the delay errors in the closed-loop transfer function, the delay still imposes a limit on the controllable frequency range. The larger the delay, the smaller this limit frequency. Low performance may induce instability in the feedback loop [46] and thus should be avoided. A corresponding stability criterion has been proposed, cf. Fig. 6. Better predictor designs could allow better performance of the closed-loop system for larger delays. The improvement of the accuracy of our closed-loop neurostimulation setup by building more efficient predictor designs is in progress and we refer the reader to future work.

### Limits of our methodology

#### Experimental stimulation parameters and safety

Experimental stimulation protocols have to ensure the subjects' safety [47] and thus avoid stimulus-induced health risks and complications. For instance, tDCS may be administered for a duration of 60 minutes and a maximum current of 4 mA without yielding health risks. However, parameters beyond these limits may yield adverse effects in subjects, such as skin lesions similar to burns and mania or hypomania in patients with depression [48]. The proposed method does not limit the stimulation duration _per se_, but of course the duration can be chosen accordingly without constraining the method. The method adapts the system's brain rhythms to the target rhythms very rapidly, on a time scale of less than a second, and hence permits rather short stimulation durations of little more than a second. Moreover, the proposed method does not specify the absolute stimulation current magnitude applied. The impact of stimulation at certain magnitudes depends heavily on the stimulation type. In tDCS, anodal stimulation with positive currents has a different impact than cathodal stimulation with negative currents. In addition, currents are thought to have to pass a certain threshold to yield a measurable effect. In tACS [49], when stimulating in the \(\alpha\)-frequency range, large and small magnitudes yield excitation and inhibition, respectively, while intermediate magnitudes yield weak effects. When stimulating with a range of frequencies, as in tRNS [50], a 1 mA peak-to-peak amplitude for a 10-minute stimulation duration does not yield adverse effects.
We conclude that it is not straightforward to decide which stimulation magnitude applied in the presented method would be safe for human subjects, since the stimulation signal is neither a constant, nor a single-frequency oscillation, nor random noise. In sum, we argue that a maximum peak-to-peak amplitude of 1 mA for a few tens of minutes may not yield adverse effects, but may still evoke a measurable impact on observations and the brain state. Of course, future experimental studies will provide deeper insights.

#### Model internal delay

The internal delay in the brain is not reproducible by the magnitude vector fitting algorithm, which relies on the time invariance of the signals. Hence, this will cause errors in the transfer function of the fitted model (cf. Fig. 9) that grow with the contribution of the delay to the output, cf. Fig. 10. To limit this effect, we must minimize the delay between the application of the neurostimulation input and the measurement of the response to this input as much as possible by taking into account the delay between the different brain regions.

#### Estimating the closed-loop delay

For delay compensation, in this paper, we assumed that we know the conduction delay in the closed loop. However, although it is a single constant parameter, we would need a method to measure it for a real closed-loop neurostimulation setup. A straightforward way to do this would be to inject a test current into the plant and measure the time lag between the moment at which we inject the input current and the moment at which we measure the output signal (a minimal numerical sketch of this idea is given further below). This estimated delay would then correspond to the total closed-loop delay except for the computation delay of the digital controller \(\mathcal{K}\). This computation delay can be easily measured with the same software used for computation, as it corresponds to the delay needed to perform constant-size matrix multiplications. Moreover, several methods have already been developed to estimate the conduction delays in linear systems [51], [52].

#### Direct input current measurements

One of the main challenges to solve for closed-loop neurostimulation is the elimination of direct transmission artifacts from the measured EEG signal [53]. Indeed, when measuring the plant output signal, a portion of the measured signal might be a direct measurement of the input current without any influence from the brain dynamics. In the ideal case, one intends to minimize the contribution of the stimulation input to the observed signal, since otherwise the measured EEG signal would not fully correspond to the brain activity. Hence, reading the EEG of the patient would be more difficult for the user of our closed-loop setup, and the contribution of the brain dynamics to the closed loop would be smaller. A simple solution to this problem is discussed further below.

### Perspectives

The proposed control allows accurate frequency shaping of the system's activity spectral distribution. However, this approach is limited to linear models of the brain stimulation response. This may be disadvantageous if the system's dynamics exhibit nonlinear behavior (see e.g. [54]), as we want to represent the brain dynamics realistically. Furthermore, in real-world scenarios, we would also have to take into account the noise in the acquisition of the signal by the sensor and in the application of the input signal by the actuator.
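To complement the delay-estimation remark above, the following minimal sketch shows how the loop delay could be estimated by injecting a broadband test current and locating the peak of the cross-correlation between injection and observation. The toy plant (a pure delay with gain), the noise level and the sampling rate are assumptions for the example; a real brain response would also filter the input, which smears the correlation peak and makes the estimate coarser.

```python
import numpy as np

rng = np.random.default_rng(0)

fs = 1000.0                        # sampling rate (Hz), assumed
true_delay = 0.005                 # 5 ms loop delay to be recovered
d_samples = int(round(true_delay * fs))

# Injected broadband test current and a toy "plant": a pure delay with gain
# plus measurement noise standing in for the recorded output signal.
n = 4000
u = rng.standard_normal(n)
y = np.zeros(n)
y[d_samples:] = 0.8 * u[:-d_samples]
y += 0.1 * rng.standard_normal(n)

# Cross-correlation estimate of the lag between injection and observation.
xcorr = np.correlate(y, u, mode="full")
lag_samples = int(np.argmax(xcorr)) - (len(u) - 1)
print(f"estimated loop delay: {lag_samples / fs * 1e3:.1f} ms "
      f"(true value {true_delay * 1e3:.1f} ms)")
```

The computation delay of the digital controller \(\mathcal{K}\) would then be added to this estimate, as discussed above.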
### Filtering out direct input current measurements

Filtering out the direct input current measurements is achievable with our setup by removing the strictly proper system requirement when using the magnitude vector fitting algorithm to measure the brain input response. In other words, while fitting the brain input response system, we want the fitted model to be able to contain a direct transmission term corresponding to the direct current measurement. Hence, if the real plant input response contains a significant direct transmission term, it will be identified by the magnitude vector fitting algorithm when synthesizing the estimated plant input response. The second step is then simply to subtract the feed-through term, multiplied by the input current, from the plant output signal. Thus, the remaining part of the signal would correspond only to the brain dynamics.

### Application to multiple-input multiple-output plants

For now, we only focused on plants with a single input signal and a single output signal. However, in a real setup, the EEG measurement is typically composed of multiple channels corresponding to different electrodes. This can also be true for the neurostimulation device. For example, with electric current stimulation, we can inject multiple signals using multiple electrodes. This can be handled simply by feeding a single input signal to each input channel and summing the outputs into a single output channel. However, when we separate the different channels, we have more control over each individual output channel. With multiple inputs and outputs, the plant is then a Multiple-Input Multiple-Output (MIMO) system. Everything developed in this paper is generalizable to MIMO systems, with one caveat: when solving Eq. (6), a unique solution only exists if the system has at least as many inputs as outputs. The user can always ensure this by using as many neurostimulation input channels as there are EEG output channels. In this generalized setup, we can also define the filter \(\mathcal{H}\) to apply different modifications to each output channel.

### Neurostimulation effects on larger time scales

Our method relies only on the short-term dynamics of the brain, using signal feedback and delay compensation to produce an adaptive stimulation current and obtain the desired EEG frequency distribution. However, more traditional neurostimulation techniques rely on the long-term dynamics of neural plasticity, which are not modeled in the brain models we use in this paper. Long-term brain adaptation to neurostimulation could cause the EEG frequency distribution to diverge from the desired frequency distribution after several minutes of stimulation. This effect could be compensated either by reiterating the model identification step and performing neurostimulation again, or by adjusting the weights of the filter \(\mathcal{H}\) according to the observed changes in real time. Incorporating the effect of neural plasticity in the brain models would allow our method to produce predictable and durable modifications to the EEG frequency distribution, even after the stimulation stops.

## Conflict of Interest Statement

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

## Author Contributions

TW, AH and MD contributed to the development of the methods presented in this study. TW produced the source code used for the simulations. JR and AH wrote the introduction section.
TW and AH wrote the other sections of the manuscript. All the authors read and approved the submitted version. ## Funding This research was funded by Inria in the "Action Exploratoire" project _A/D Drugs_. ## Data Availability Statement The source code used for the simulation results can be freely accessed here: [https://github.com/Thomas-Wahl/neuroclodec](https://github.com/Thomas-Wahl/neuroclodec)
2310.18895
Optimizing Task-Specific Timeliness With Edge-Assisted Scheduling for Status Update
Intelligent real-time applications, such as video surveillance, demand intensive computation to extract status information from raw sensing data. This poses a substantial challenge in orchestrating computation and communication resources to provide fresh status information. In this paper, we consider a scenario where multiple energy-constrained devices are served by an edge server. To extract status information, each device can either do the computation locally or offload it to the edge server. A scheduling policy is needed to determine when and where to compute for each device, taking into account communication and computation capabilities, as well as task-specific timeliness requirements. To that end, we first model the timeliness requirements as general penalty functions of Age of Information (AoI). A convex optimization problem is formulated to provide a lower bound of the minimum AoI penalty given system parameters. Using KKT conditions, we propose a novel scheduling policy which evaluates status update priorities based on communication and computation delays and task-specific timeliness requirements. The proposed policy is applied to an object tracking application and carried out on a large video dataset. Simulation results show that our policy improves tracking accuracy compared with scheduling policies based on video content information.
Jingzhou Sun, Lehan Wang, Zhaojun Nan, Yuxuan Sun, Sheng Zhou, Zhisheng Niu
2023-10-29T04:13:09Z
http://arxiv.org/abs/2310.18895v1
# Optimizing Task-Specific Timeliness With Edge-Assisted Scheduling for Status Update

###### Abstract

Intelligent real-time applications, such as video surveillance, demand intensive computation to extract status information from raw sensing data. This poses a substantial challenge in orchestrating computation and communication resources to provide fresh status information. In this paper, we consider a scenario where multiple energy-constrained devices are served by an edge server. To extract status information, each device can either do the computation locally or offload it to the edge server. A scheduling policy is needed to determine when and where to compute for each device, taking into account communication and computation capabilities, as well as task-specific timeliness requirements. To that end, we first model the timeliness requirements as general penalty functions of Age of Information (AoI). A convex optimization problem is formulated to provide a lower bound of the minimum AoI penalty given system parameters. Using KKT conditions, we propose a novel scheduling policy which evaluates status update priorities based on communication and computation delays and task-specific timeliness requirements. The proposed policy is applied to an object tracking application and carried out on a large video dataset. Simulation results show that our policy improves tracking accuracy compared with scheduling policies based on video content information.

Age of Information, edge computing, task-oriented communication, object tracking

## I Introduction

Fueled by recent advances in wireless communication and computation technologies, cyber-physical network applications have evolved to intelligently connect the physical and cyber worlds, enabled by fully utilizing computation resources scattered over communication networks. These applications, including autonomous driving, remote healthcare, and real-time monitoring, rely on collecting raw sensing data about the time-varying physical environment, extracting valuable status information through computation, and generating control demands based on the status information. However, since the environment changes constantly, control quality degrades until a new status update is made. Hence, the performance of these applications heavily depends on the freshness of status information provided by the network system, which necessitates a shift of focus from solely conveying information bits to providing timely information for certain tasks under service, referred to as task-oriented communications [1]. One key challenge in this shift is how to effectively orchestrate communication and computation resources in the system, taking account of task-specific timeliness requirements. Over the past decade, edge computing has received much attention due to its potential in providing timely information processing service [2]. This trend motivates the design of schemes that can adaptively offload the computation burden to the edge side or execute it locally, according to the capabilities of both communication and computation resources [3, 4, 5]. In this work, we study a system consisting of multiple energy-constrained devices, where each device observes a time-varying process and generates tasks that require computation to extract status information about the underlying process. These computation-intensive tasks can be executed on-device or offloaded to an edge server for assistance. Based on the latest status information, devices can decide control actions.
The control quality then depends on the freshness of the status information. For example, in Augmented Reality (AR) applications, a headset needs to continuously capture the position of certain objects and render virtual elements [6]. If the position information becomes stale, the virtual overlay may not align with the physical objects. In this case, the headset can update the position status by running object detection algorithms, which can be offloaded to an edge server when the network is in good condition, or done locally at the cost of higher latency and energy consumption. Local computing usually takes a longer time than computing on the edge server side [7], and thus the problem of where to compute seems trivial for a single-device system. However, in a multi-device scenario where devices share the wireless channel, some devices might be more suitable than others to offload, probably due to factors such as better channel conditions. Because of limited communication resources, a _scheduling policy_ is needed to determine _when devices should generate computation tasks and where computation should be executed to extract status information_.

### _Related Work_

In recent years, _Age of Information_ (AoI) has attracted great interest as a metric to quantify the freshness of information [8]. In a status update system, AoI measures the elapsed time since the generation of the freshest received information and characterizes the freshness of the status information used for decision-making. Unlike metrics such as delay and throughput, which focus on packet-level performance, AoI provides a system-level view. There has been a growing body of research on designing device scheduling policies based on AoI. Among them, the weighted average AoI is widely adopted as the optimization target. Periodic status sampling is investigated in [9], and stochastic sampling is considered in [10], where Whittle's index has been shown to enjoy a close-to-optimal performance. Energy harvesting systems are studied in [11, 12, 13]. Besides the weighted average AoI, nonlinear functions of AoI have also gained attention. In networked control systems with estimation error as the control performance, it has been found that the error can be expressed as a nonlinear function of AoI [14, 15] for linear time-invariant systems, if the sampling time is independent of the underlying status. In the single-device case with a general monotonic AoI penalty function, the optimal sampling problem is studied in [16, 17]. For the multi-device case, a scheduling policy using Whittle's index is proposed in [18]. A threshold-type policy is derived in [19] based on the steady-state distribution of AoI. A more recent work [20] shows that AoI functions can be applied to time-series prediction problems. However, most of these studies have ignored the role of computation in providing fresh information. As for communication and computation co-design for AoI under the framework of edge computing, a tandem queuing model is widely adopted to describe the interplay between communication and computation [21, 22, 23, 24]. With a Poisson sampling process, the average AoI is derived as a function of the sampling rate, transmission rate, and computation rate. Soft update is proposed in [25] to characterize the process of computation. Optimal sampling policies are derived for exponentially and linearly decaying age cases. In [26], a constrained Markov decision process is adopted to decide when to offload computation-intensive status updates to the edge server.
In [27], a finite-horizon problem is formulated to optimize a linear AoI target. The multi-device scheduling problem is studied in [28], which only considers local computing.

### _Contributions_

For multi-device scheduling in the context of information freshness, an important problem is how to orchestrate communication and computation resources. Most previous papers on this problem only consider the single-device case. The one most closely related to our work is [28], but we extend the choices of computing to include the edge side. Our work aims to address the problem of scheduling energy-constrained devices with nonlinear AoI penalty functions and explore ways to provide up-to-date status information by switching between edge computing and local computing. Our contributions can be summarized as follows:

* We develop a general framework to jointly consider the communication and computation aspects of real-time status update applications. Computation tasks can be done on-device or on the edge server side. Taking transmission and computation time into account, control performance is modeled as general monotonic AoI penalty functions. Given system parameters, a nontrivial lower bound of the time-average AoI penalty is derived. By inspecting the properties of the lower bound, we propose indices that represent the priority of local computing or edge computing at different AoI values.
* A low-complexity scheduling policy is proposed by combining the indices introduced above with the virtual queue technique from Lyapunov optimization [29]. We show that this policy satisfies the energy constraints of each device. For penalty functions of the form \(f(x)=x^{p}\), \(p>0\), we derive the performance gap between the proposed policy and the lower bound when the communication and computation stages each take a single time slot.
* Extensive simulations are carried out to evaluate the performance of the proposed policy for different forms of penalty functions and latency distributions. Simulation results demonstrate that the average AoI penalty under the proposed policy is close to the lower bound. Moreover, we apply the proposed policy to object tracking applications, which can be naturally cast as status update processes. The proposed policy is examined on the large video dataset ILSVRC17-VID [30]. Our results show that the proposed policy improves object tracking accuracy by 27% compared with video content matching-based scheduling. Furthermore, the proposed policy also outperforms content-based scheduling that has access to the ground truth information.

The rest of this paper is organized as follows. In Section II, we present the system model and the problem formulation. In Section III, we formulate a convex optimization problem to compute a nontrivial lower bound of the average AoI penalty. In Section IV, a low-complexity scheduling policy is provided based on the lower bound problem. In Section V, numerical results are presented along with the object tracking application. We conclude the paper in Section VI.

## II System Model

We consider the status update system shown in Fig. 1. This system consists of a set of energy-constrained devices, denoted as \(\mathcal{N}\), with a total number of \(N\). Fig. 1: System model. Each device performs a sensing-control task by collecting sensing data, extracting status information from it, and determining control actions based on the status. As the status information becomes stale, the control quality decreases.
This simplified model is well-suited for many real-time applications and abstracts away irrelevant details. For energy-constrained mobile devices, however, frequent status updates can quickly drain the battery. Therefore, a scheduling policy is required for each device to decide 1) when to generate a computation task for status update and 2) where to execute the computation. Slotted time system is considered. At the beginning of a time slot, each device collects sensing data if scheduled, which is assumed to take negligible time. The scheduled devices then perform computation locally or offload computation tasks to an edge server, which takes several slots to finish. For device \(n\), \(n\in\mathcal{N}\), let \(D_{l,n}\) be the number of slots required to finish local computing. The offloading stage takes \(D_{t,n}\) slots to send the raw sensing data to the edge server, followed by edge computing that lasts \(D_{e,n}\) slots. Result feedback delay is ignored. These three are random variables with finite expectations denoted as \(\overline{D_{l,n}}\), \(\overline{D_{t,n}}\), and \(\overline{D_{e,n}}\), respectively. Furthermore, the latency in each communication or computation stage is assumed to be independent. We use three binary indicators \(u_{l,n}(k)\), \(u_{t,n}(k)\), and \(u_{e,n}(k)\) to indicate the stage device \(n\) is in at time slot \(k\). For example, \(u_{l,n}(k)=1\) if device \(n\) is performing local computing at time slot \(k\). Otherwise, \(u_{l,n}(k)=0\). Similarly, \(u_{t,n}(k)\) and \(u_{e,n}(k)\) are associated with the offloading and edge computing stages, respectively. Consider non-preemptive policies, we require that \(u_{l,n}(k)+u_{t,n}(k)+u_{e,n}(k)\leq 1\). When the summation is zero, device \(n\) is idle. Let \(M\) be the number of orthogonal sub-channels. If device \(n\) is scheduled to offload sensing data to the edge server, it will occupy one idle channel for \(D_{t,n}\) consecutive slots to complete the transmission. On the edge server side, we assume that it is equipped with multi-core hardware and can process multiple computation tasks in parallel [3]. Therefore, each offloaded computation task is served immediately upon arrival, and there is no queuing delay. Let \(d_{n}(k)\) be the latency since the time slot when the sensing data is collected. If \(u_{l,n}(k)+u_{t,n}(k)+u_{e,n}(k)=1\), which means that device \(n\) is performing status update, \(d_{n}(k)=d_{n}(k-1)+1\). Otherwise, \(d_{n}(k)=0\). When computation is finished, a new control action is generated and returned to the device. Generally, the quality of the control action depends on the freshness of the sensing data used to compute it. To capture this freshness, AoI is defined as the time elapsed since the generation time of the sensing data used to compute the current control action. The AoI of device \(n\) at time slot \(k\) is denoted as \(h_{n}(k)\). As shown in Fig. 2, AoI evolves as: \[h_{n}(k)=\left\{\begin{array}{ll}d_{n}(k-1)+1,&\text{ if the computation is}\\ &\text{ finished at slot }k-1,\\ h_{n}(k-1)+1,&\text{ otherwise.}\end{array}\right. \tag{1}\] It is pointed out in [14] that, for LTI system, the control quality can be cast as a function of AoI if the sampling process is independent of the content of the underlying process. Following this finding, we model the relationship between control quality and AoI as a penalty function \(f_{n}(\cdot)\), representing the degradation in performance due to information staleness. It is required that the penalty increases with AoI. 
Furthermore, to avoid ill cases, we also require that the expected penalty with latency \(D_{l,n},D_{t,n},D_{e,n}\) is finite. We focus on energy consumption on the device side. For local computing, device \(n\) takes \(E_{l,n}\) Joule per slot. When offloading sensing data to the edge server, device \(n\) consumes \(E_{t,n}\) Joule per slot. Let \(E_{n}(k)\) be the energy consumption at time slot \(k\), it consists of two components \(E_{n}(k)=E_{l,n}u_{l,n}(k)+E_{t,n}u_{t,n}(k)\). The average energy consumed per time slot by device \(n\) should be no larger than \(\overline{E}_{n}\). Let vector \(\mathbf{h}(k)\triangleq(h_{1}(k),h_{2}(k),\ldots,h_{N}(k))\) represent the AoI of all devices at time slot \(k\). Similarly, \(\mathbf{d}(k),\mathbf{u}_{l}(k),\mathbf{u}_{t}(k),\mathbf{u}_{e}(k)\) are vectors of corresponding variables. The state of the whole status update system is \(\Theta(k)\triangleq(\mathbf{h}(k),\mathbf{d}(k),\mathbf{u}_{l}(k),\mathbf{u}_{t}(k),\mathbf{u}_{ e}(k))\). The history up to time slot \(k\) is denoted as \(\mathcal{H}(k)\triangleq\{\Theta(i)|i\leq k\}\). A scheduling policy \(\pi\) takes in the history \(\mathcal{H}(k)\) and decides the new value of \(\mathbf{u}_{l}(k)\) and \(\mathbf{u}_{t}(k)\). Note that policy \(\pi\) is a centralized policy because it needs \(\mathbf{h}(k)\) and \(\mathbf{d}(k)\) to make decision. To obtain this information, we assume each device will report at the start and end of its computation. Because each device is not always doing computation and this action information is tiny compared to raw sensing data, we ignore this extra cost to implement policy \(\pi\). Our objective is to propose a scheduling policy that minimizes the time-averaged AoI penalty, subject to energy consumption constraints and communication constraints, as expressed in **P1**. \[\begin{split}\textbf{P1}:&\min_{\pi\in\Pi}& \sum_{n\in\mathcal{N}}\limsup_{K\rightarrow\infty}\frac{1}{K}\mathbb{E}_{\pi} \left[\sum_{k=1}^{K}f_{n}(h_{n}(k))\right]\\ &\text{s.t.}& u_{t,n}(k),u_{e,n}(k),u_{l,n}(k)\in\{0,1 \},\ \forall k\geq 1,\ \forall n\in\mathcal{N},\\ & u_{t,n}(k)+u_{e,n}(k)+u_{l,n}(k)\leq 1,\ \forall k\geq 1,\ \forall n\in\mathcal{N},\\ &\sum_{n\in\mathcal{N}}u_{t,n}(k)\leq M,\ \forall k\geq 1,\\ &\limsup_{K\rightarrow\infty}\frac{1}{K}\mathbb{E}_{\pi}\left[ \sum_{k=1}^{K}E_{n}(k)\right]\leq\overline{E}_{n},\forall n\in\mathcal{N}. \end{split} \tag{2}\] Here, \(\Pi\) is the set of non-preemptive policies. This problem can be formulated as a Constrained Markov Decision Process (CMDP) with \(\Theta(k)\) representing the state of the system. Fig. 2: Evolution of AoI. However, solving this problem exactly is computationally prohibitive. The first reason is that the state space grows exponentially with the number of devices. The second reason is that there are multiple constraints, which renders the standard iteratively tightening approach for CMDP invalid [31]. Therefore, in the following, we begin by investigating the lower bound of the AoI penalty. Building on this, we propose a low-complexity scheduling policy that draws inspiration from the lower bound problem. ## III Lower Bound of The AoI Penalty In this section, we aim to derive a nontrivial lower bound on the AoI penalty given system parameters. This not only aids in evaluating policy performance but also provides valuable insights into how to design a scheduling policy. ### _Lower Bound Derivation_ We first study the AoI penalty of a single device and then extend the result to multiple devices. 
For simplicity, the subscript \(n\) is dropped temporarily. The time horizon can be divided into disjoint time intervals delineated by the event of computation completion, with each interval being referred to as an _update round_, as shown in Fig. 2. Let \(h_{l}^{+}\) and \(h_{t}^{+}\) be the peak age in a local computing round and an edge computing round, respectively. Both are random variables depending on the policy \(\pi\). Furthermore, we introduce \(\rho_{l}(\pi)\) and \(\rho_{t}(\pi)\) to denote the portions of energy spent on local computing and offloading under policy \(\pi\) respectively. The following lemma presents an alternative expression for the average AoI penalty in **P1**, **Lemma 1**.: _Given policy \(\pi\in\Pi\), the average AoI penalty is,_ \[\begin{split}&\limsup_{K\to\infty}\frac{1}{K}\mathbb{E}_{\pi}\left[\sum_{k=1}^{K}f(h(k))\right]\\ &=\frac{\rho_{t}(\pi)\overline{E}}{E_{t}\overline{D}_{t}}\mathbb{E}_{\pi}[F(h_{t}^{+})-F(D_{t}+D_{e}-1)]\\ &+\frac{\rho_{l}(\pi)\overline{E}}{E_{l}\overline{D}_{l}}\mathbb{E}_{\pi}[F(h_{l}^{+})-F(D_{l}-1)],\end{split} \tag{3}\] _where_ \[F(h)\triangleq\sum_{x=0}^{h}f(x). \tag{4}\] Proof.: See Appendix A. Considering the following optimization problem **P2**, \[\begin{split}\textbf{P2}:&\quad\min_{\pi}\quad\frac{\rho_{t}(\pi)\overline{E}}{E_{t}\overline{D}_{t}}\mathbb{E}_{\pi}[F(h_{t}^{+})-F(D_{t}+D_{e}-1)]\\ &\quad\quad+\frac{\rho_{l}(\pi)\overline{E}}{E_{l}\overline{D}_{l}}\mathbb{E}_{\pi}[F(h_{l}^{+})-F(D_{l}-1)]\\ &\quad\quad\text{s.t.}\quad\frac{\rho_{t}(\pi)\overline{E}}{E_{t}\overline{D}_{t}}(\mathbb{E}_{\pi}[h_{t}^{+}]-(\overline{D}_{t}+\overline{D}_{e}-1))\\ &\quad\quad\quad+\frac{\rho_{l}(\pi)\overline{E}}{E_{l}\overline{D}_{l}}(\mathbb{E}_{\pi}[h_{l}^{+}]-(\overline{D}_{l}-1))=1.\end{split} \tag{5}\] Lemma 2 shows that it provides a lower bound for **P1**, **Lemma 2**.: _The minimum value of **P2** is no larger than that of **P1**._ Proof.: See Appendix A. Because \(f(x)\) only takes values at discrete points, we introduce the extended penalty function \(\tilde{f}(x)\) to facilitate analysis. \(\tilde{f}(x)\) is obtained by interpolating \(f(x)\) such that: 1) \(\tilde{f}(x)\) is an increasing function, 2) \(\tilde{f}(x)=f(x)\), when \(x\in\mathbb{N}\). As a result, we have \[\sum_{i=0}^{h}f(i)\geq\int_{0}^{h}\tilde{f}(x)dx. \tag{6}\] Let \(\tilde{F}(h)\) be the integral of \(\tilde{f}(x)\) over \([0,h]\). Then, \(\tilde{F}(h)\leq F(h)\). Because \(\tilde{f}(x)\) is increasing, \(\tilde{F}(h)\) is convex. By Jensen's inequality, we have \[\mathbb{E}_{\pi}[F(h_{l}^{+})]\geq\tilde{F}(\mathbb{E}_{\pi}[h_{l}^{+}]),\ \mathbb{E}_{\pi}[F(h_{t}^{+})]\geq\tilde{F}(\mathbb{E}_{\pi}[h_{t}^{+}]). \tag{7}\] Let \(x\triangleq\mathbb{E}_{\pi}[h_{l}^{+}]\) and \(y\triangleq\mathbb{E}_{\pi}[h_{t}^{+}]\).
Plugging (7) into **P2** leads to the following optimization problem **P3**, \[\begin{split}\textbf{P3}:&\quad\min_{x\geq 0,y\geq 0}\quad\frac{\rho_{t}(\pi)\overline{E}}{E_{t}\overline{D}_{t}}\left(\tilde{F}(y)-\mathbb{E}_{\pi}[F(D_{t}+D_{e}-1)]\right)\\ &\quad\quad+\frac{\rho_{l}(\pi)\overline{E}}{E_{l}\overline{D}_{l}}\left(\tilde{F}(x)-\mathbb{E}_{\pi}[F(D_{l}-1)]\right)\\ &\quad\quad\text{s.t.}\quad\frac{\rho_{t}(\pi)\overline{E}}{E_{t}\overline{D}_{t}}(y-(\overline{D}_{t}+\overline{D}_{e}-1))\\ &\quad\quad\quad+\frac{\rho_{l}(\pi)\overline{E}}{E_{l}\overline{D}_{l}}(x-(\overline{D}_{l}-1))=1.\end{split} \tag{8}\] **P3** relaxes the feasible region of \(\mathbb{E}_{\pi}[h_{l}^{+}]\) and \(\mathbb{E}_{\pi}[h_{t}^{+}]\) to non-negative numbers, and thus it is a relaxation of **P2**. The optimal solution is \[x_{\text{opt}}=y_{\text{opt}}=\frac{1+\frac{\rho_{l}(\pi)\overline{E}(\overline{D}_{l}-1)}{E_{l}\overline{D}_{l}}+\frac{\rho_{t}(\pi)\overline{E}(\overline{D}_{t}+\overline{D}_{e}-1)}{E_{t}\overline{D}_{t}}}{\frac{\rho_{l}(\pi)\overline{E}}{E_{l}\overline{D}_{l}}+\frac{\rho_{t}(\pi)\overline{E}}{E_{t}\overline{D}_{t}}}. \tag{9}\] For simplicity, the optimal solution (9) is denoted as \(G(\rho_{l},\rho_{t})\). Then, with energy proportions \(\rho_{l},\rho_{t}\), the penalty lower bound in the single-device case is \[\begin{split}&\quad\frac{\rho_{t}\overline{E}}{E_{t}\overline{D}_{t}}\left(\tilde{F}(G(\rho_{l},\rho_{t}))-\mathbb{E}_{\pi}[F(D_{t}+D_{e}-1)]\right)\\ &\quad+\frac{\rho_{l}\overline{E}}{E_{l}\overline{D}_{l}}\left(\tilde{F}(G(\rho_{l},\rho_{t}))-\mathbb{E}_{\pi}[F(D_{l}-1)]\right).\end{split} \tag{10}\] Now we zoom out to consider the entire system, and let \(\mathbf{\rho}_{l}\triangleq(\rho_{l,1},\rho_{l,2},\ldots,\rho_{l,N})\), \(\mathbf{\rho}_{t}\triangleq(\rho_{t,1},\rho_{t,2},\ldots,\rho_{t,N})\). Consider the optimization problem **P4**: \[\min_{\mathbf{\rho}_{l},\mathbf{\rho}_{t}} \sum_{n\in\mathcal{N}}\left(\frac{\rho_{l,n}\overline{E}_{n}}{E_{l,n}\overline{D}_{l,n}}+\frac{\rho_{t,n}\overline{E}_{n}}{E_{t,n}\overline{D}_{t,n}}\right)\tilde{F}_{n}(G_{n}(\rho_{l,n},\rho_{t,n})) \tag{11}\] \[-\sum_{n\in\mathcal{N}}\left(\frac{\rho_{t,n}\overline{E}_{n}}{E_{t,n}\overline{D}_{t,n}}\mathbb{E}[F_{n}(D_{t,n}+D_{e,n}-1)]\right.\] \[+\left.\frac{\rho_{l,n}\overline{E}_{n}}{E_{l,n}\overline{D}_{l,n}}\mathbb{E}[F_{n}(D_{l,n}-1)]\right)\] s.t. \[\sum_{n\in\mathcal{N}}\frac{\rho_{t,n}\overline{E}_{n}}{E_{t,n}}\leq M,\] \[\rho_{l,n}+\rho_{t,n}\leq 1,\ \forall n\in\mathcal{N},\] \[\rho_{l,n}\geq 0,\rho_{t,n}\geq 0,\ \forall n\in\mathcal{N}.\] The first constraint is obtained by relaxing the communication constraint, which originally states that at most \(M\) devices can offload simultaneously. It is now relaxed to the time-average number of transmissions, which should not exceed \(M\). Therefore, the optimal value of **P4** provides a lower bound of the time-average AoI penalty.

### _Lower Bound Analysis_

In this subsection, we first show that **P4** is a convex optimization problem, and then study properties of the optimal solution based on KKT conditions. **Lemma 3**.: _The optimization problem **P4** is convex._ Proof.: See Appendix B.
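Section V obtains the reported lower bound by solving **P4** numerically. As a hedged illustration of that step, the sketch below solves a small instance of **P4** with `scipy.optimize`, assuming a linear penalty \(f_{n}(x)=x\) (so \(\tilde{F}_{n}(h)=h^{2}/2\) and \(F_{n}(h)=h(h+1)/2\)) and deterministic delays; the two-device parameters are made-up example values, not the settings used in the paper's simulations.

```python
import numpy as np
from scipy.optimize import minimize

# Two devices, deterministic delays (slots), linear penalty f_n(x) = x.
D_l = np.array([8.0, 12.0])      # local computing delay
D_t = np.array([2.0, 4.0])       # transmission delay
D_e = np.array([1.0, 1.0])       # edge computing delay
E_l = np.array([10.0, 10.0])     # local computing energy per slot
E_t = np.array([1.0, 1.0])       # transmission energy per slot
E_bar = np.array([0.4, 0.4])     # energy budget per slot
M = 1                            # number of sub-channels

a = E_bar / (E_l * D_l)
b = E_bar / (E_t * D_t)
v = (D_l - 1.0) * D_l / 2.0                  # F(D_l - 1)
w = (D_t + D_e - 1.0) * (D_t + D_e) / 2.0    # F(D_t + D_e - 1)

def objective(z):
    rho_l, rho_t = z[:2], z[2:]
    rate = a * rho_l + b * rho_t
    G = (1.0 + a * rho_l * (D_l - 1.0)
         + b * rho_t * (D_t + D_e - 1.0)) / rate
    return np.sum(rate * G ** 2 / 2.0 - (a * rho_l * v + b * rho_t * w))

cons = [{"type": "ineq", "fun": lambda z: M - np.sum(z[2:] * E_bar / E_t)}]
cons += [{"type": "ineq", "fun": lambda z, i=i: 1.0 - z[i] - z[2 + i]}
         for i in range(2)]
bounds = [(1e-6, 1.0)] * 4       # keep rho away from 0 so G stays finite

res = minimize(objective, x0=np.full(4, 0.25), bounds=bounds,
               constraints=cons, method="SLSQP")
print("lower bound on the time-average AoI penalty:", res.fun)
print("energy splits rho_l, rho_t:", res.x[:2], res.x[2:])
```

For general monotonic penalties and random delays, the closed forms above would simply be replaced by \(\tilde{F}_{n}\) and the expectations \(\mathbb{E}[F_{n}(\cdot)]\).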
For simplicity, let's introduce the following auxiliary variables \[a_{n} =\frac{\overline{E}_{n}}{E_{l,n}\overline{D}_{l,n}},\ b_{n}= \frac{\overline{E}_{n}}{E_{t,n}\overline{D}_{t,n}}, \tag{12}\] \[c_{n} =\overline{D}_{l,n}-1,\ d_{n}=\overline{D}_{t,n}+\overline{D}_{e, n}-1,\] \[v_{n} =\mathbb{E}_{x}[F_{n}(D_{l,n}-1)],\ w_{n}=\mathbb{E}_{x}[F_{n}(D_ {t,n}+D_{e,n}-1)],\] \[x_{n} =\rho_{l,n},\ y_{n}=\rho_{t,n}.\] Let \(\alpha,\mathbf{\beta},\mathbf{\gamma},\mathbf{\nu}\) be Lagrange multipliers. the Lagrangian function is \[L(\mathbf{x},\mathbf{y},\alpha,\mathbf{\beta},\mathbf{\gamma},\mathbf{\nu})= \mathbf{\beta}^{T}(\mathbf{x}+\mathbf{y}-\mathbf{1})-\mathbf{\gamma}^{T}\mathbf{x}-\mathbf{ \nu}^{T}\mathbf{y} \tag{13}\] \[+\sum_{n\in\mathcal{N}}(a_{n}x_{n}+b_{n}y_{n})\tilde{F}_{n}(G_{n} (x_{n},y_{n}))\] \[-\sum_{n\in\mathcal{N}}(a_{n}v_{n}x_{n}+b_{n}w_{n}y_{n})\] \[+\alpha\left(\sum_{n\in\mathcal{N}}\frac{y_{n}\overline{E}_{n}}{E _{t,n}}-M\right).\] Because **P4** is convex, the optimal solution and Lagrange multipliers satisfy KKT conditions. Let \(\mathbf{x}^{*},\mathbf{y}^{*},\alpha^{*},\mathbf{\beta}^{*},\mathbf{\gamma}^{*},\mathbf{\nu}^{*}\) be the corresponding optimal solution. Applying KKT conditions to (13) provides the following property, **Theorem 1**.: _The optimal solution specified by KKT conditions satisfies_ \[\frac{W_{t,n}(h_{t,n})}{E_{t,n}\overline{D}_{t,n}}-\frac{W_{l,n}(h_{l,n})}{E_ {l,n}\overline{D}_{l,n}}-\frac{\gamma_{n}^{*}-\nu_{n}^{*}}{\overline{E}_{n}}= \frac{\alpha^{*}}{E_{t,n}}, \tag{14}\] _where_ \[h_{t,n} =G_{n}(x_{n}^{*},y_{n}^{*})-(\overline{D}_{t,n}+\overline{D}_{e, n}-1), \tag{15}\] \[h_{l,n} =G_{n}(x_{n}^{*},y_{n}^{*})-(\overline{D}_{l,n}-1), \tag{16}\] _and_ \[W_{t,n}(x)\] \[=x\tilde{f}_{n}(x+\overline{D}_{t,n}+\overline{D}_{e,n}-1)\] \[-(\tilde{F}_{n}(x+\overline{D}_{t,n}+\overline{D}_{e,n}-1)- \mathbb{E}[F_{n}(D_{t,n}+D_{e,n}-1)]), \tag{17}\] \[W_{l,n}(x)\] \[=x\tilde{f}_{n}(x+\overline{D}_{l,n}-1)\] \[-(\tilde{F}_{n}(x+\overline{D}_{l,n}-1)-\mathbb{E}[F_{n}(D_{l,n} -1)]). \tag{18}\] Proof.: See Appendix C. Here, \(h_{t,n}\) represents the expected AoI when device \(n\) is scheduled to offload, and \(h_{l,n}\) represents the expected AoI when device \(n\) is scheduled to do local computing. An intuitive illustration of \(W_{t,n}\) and \(W_{l,n}\) is shown in Fig. 3. Taking \(W_{l,n}\) as an example, \(h\) is the AoI when the scheduling decisions are made, and \(d\) is the computing latency. When \(x=h\), the first term in (18) is the summation of regions I, II, and III, and the second term is the summation of regions I and II. Thus, \(W_{l,n}\) is the colored region III. With this geometrical interpretation, the influence of latency and penalty function is reduced to the area of the colored region in Fig. 3. _Remark 1_.: Rethinking (14), the first term \(\frac{W_{l,n}(h)}{E_{t,n}\overline{D}_{t,n}}\) is the priority of doing status update by offloading when AoI is \(h\). And the second term \(\frac{W_{l,n}(h)}{E_{t,n}\overline{D}_{l,n}}\) corresponds to the priority of doing local computing. The third term is related to two Lagrange multipliers: \(\gamma_{n}^{*}\) and \(\nu_{n}^{*}\). Due to complementary slackness, \(\gamma_{n}^{*}=0\) if \(\rho_{t,n}^{*}>0\). For the same reason, \(\nu_{n}^{*}=0\) if \(\rho_{t,n}^{*}>0\). Note that \(\gamma_{n}^{*}\) and \(\nu_{n}^{*}\) can not be larger than \(0\) simultaneously. 
When \(\gamma_{n}^{*}\) and \(\nu_{n}^{*}\) equal 0, (14) is reduced to \[\frac{1}{\overline{D}_{t,n}}W_{t,n}(h_{t,n})-\frac{E_{t,n}}{E_{l,n}\overline{D}_{l,n}}W_{l,n}(h_{l,n})=\alpha^{*}. \tag{19}\] Fig. 3: An illustration of \(W(x)\). Taking \(\alpha^{*}\) as the price of using the channel to offload, Equation (19) can be used to determine whether to perform local computing or offload updates.

## IV Scheduling Policy

Based on (14), which characterizes the expected AoI at scheduling instants, a natural and intuitive policy is to schedule local computing for device \(n\) when its AoI is \(h_{l,n}\) and to schedule offloading when the AoI is \(h_{t,n}\). However, the challenge of obtaining the values of \(h_{l,n}\) and \(h_{t,n}\), as well as the parameters \(\gamma_{n}\), \(\nu_{n}\), and \(\alpha\) at runtime, renders this policy impractical. Nevertheless, the insights provided by (14) indicate that a scheduling policy should steer the AoI at scheduling instants towards values that align with (14). This helps to design a scheduling policy for the original problem **P1**. We first introduce an auxiliary variable \(Q_{n}\), and rearrange (14) as \[\left(\frac{W_{t,n}(h_{t,n})}{\overline{D}_{t,n}}-E_{t,n}Q_{n}\right)-\frac{E_{t,n}}{E_{l,n}}\left(\frac{W_{l,n}(h_{l,n})}{\overline{D}_{l,n}}-E_{l,n}Q_{n}\right)\] \[=\alpha+\frac{E_{t,n}}{\overline{E}_{n}}(\gamma_{n}-\nu_{n}). \tag{20}\] If \(Q_{n}\) satisfies \[\frac{W_{l,n}(h_{l,n})}{\overline{D}_{l,n}}-E_{l,n}Q_{n}=0, \tag{21}\] then \[\frac{W_{t,n}(h_{t,n})}{\overline{D}_{t,n}}-E_{t,n}Q_{n}=\alpha+\frac{E_{t,n}}{\overline{E}_{n}}(\gamma_{n}-\nu_{n}). \tag{22}\] If the value of \(Q_{n}\) is known, (21) and (22) provide a heuristic scheduling policy. Firstly, for the edge computing part, since the function \(W_{t,n}(x)\) is increasing, we can sort idle devices in descending order based on the left-hand side of (22) with \(h_{t,n}\) replaced by \(h_{n}(k)\) and select no more than \(m(k)\) devices to offload, where \(m(k)\) is the number of idle channels at time slot \(k\). Then, if device \(n\) is still idle and \(h_{n}(k)\) satisfies \[\frac{W_{l,n}(h_{n}(k))}{\overline{D}_{l,n}}\geq E_{l,n}Q_{n}, \tag{23}\] it will be scheduled to do local computing. By adopting this approach, we can bring the AoI at scheduling instants closer to the values specified by (14). \(Q_{n}\) plays the role of a threshold in this policy, such that devices will not update so frequently that the energy constraints are violated. In other words, \(Q_{n}\) is determined by the energy constraints. Although it is hard to calculate the exact value of \(Q_{n}\), we can approach it at runtime. Based on this insight, we use tools from Lyapunov optimization [29] and introduce a virtual queue \(Q_{n}(k)\): \[Q_{n}(k+1)\triangleq\max\{Q_{n}(k)-\overline{E}_{n}+E_{n}(k),0\}. \tag{24}\] \(Q_{n}(k)\) corresponds to the energy consumption up to time slot \(k\). If updates are too frequent, \(Q_{n}(k)\) will increase and prevent further updates. As the system evolves, \(Q_{n}(k)\) approximates \(Q_{n}\). Let \(\mathcal{N}_{\text{idle}}(k)\) be the set of devices that are idle at the beginning of time slot \(k\).
The two auxiliary sets are defined as \[\mathcal{C}_{l}(k) \triangleq\left\{n\ \left|\ \frac{W_{l,n}(h_{n}(k))}{\overline{D}_{l,n}} \geq VE_{l,n}Q_{n}(k),\ n\in\mathcal{N}_{\text{idle}}(k)\right.\right\},\] \[\mathcal{C}_{t}(k) \triangleq\left\{n\ \left|\ \frac{W_{t,n}(h_{n}(k))}{\overline{D}_{t,n}} \geq VE_{t,n}Q_{n}(k),\ n\in\mathcal{N}_{\text{idle}}(k)\right.\right\},\] where \(V\) is a parameter used to smooth the fluctuation of \(Q_{n}(k)\). The set \(\mathcal{C}_{l}(k)\) consists of devices eligible for local computing, while \(\mathcal{C}_{t}(k)\) consists of devices eligible for edge computing. The intersection of these two sets may not be empty. To simplify the expression of scheduling policy, we introduce index \(I_{n}(k)\) as \[I_{n}(k)= \tag{25}\] Based on the insights from (14), we propose a _Max-Weight scheduling policy_\(\pi_{\text{MW}}\), which makes scheduling decisions at each time slot as shown in algorithm 1. In this algorithm, the scheduler first decides which devices to offload based on their values of \(I_{n}(k)\), which is derived from (22). Subsequently, those devices that are still idle will perform local computing if they fall within the set \(\mathcal{C}_{l}(k)\), as dictated by (21). ``` 1:The number of idle channels \(m(k)\). 2:Sort devices in \(\mathcal{C}_{t}(k)\) in descending order according to the value of \(I_{n}(k)\). The result is \((n_{1},n_{2},\ldots,n_{S})\). \(S\) is the total number of devices in this set. 3:\(s\gets 1\) 4:while\(s\leq S\)do 5:if\(m(k)>0\) and \(I_{n_{s}}(k)\geq 0\)then 6:\(u_{t,n_{s}}(k)\gets 1\), \(m(k)\gets m(k)-1\) 7:elseif\(n_{s}\in\mathcal{C}_{l}(k)\cap\mathcal{C}_{t}(k)\)then 8:\(u_{l,n_{s}}(k)\gets 1\) 9:endif 10:\(s\gets s+1\) 11:endwhile 12:for\(n\in\mathcal{C}_{l}(k)-\mathcal{C}_{t}(k)\)do 13:\(u_{l,n}(k)\gets 1\) 14:endfor ``` **Algorithm 1** Max-Weight scheduling The term \(VQ_{n}(k)\) plays the role of \(Q_{n}\) in (21) and (22). Since we want \(\limsup_{k\to\infty}VQ_{n}(k)\) and \(\liminf_{k\to\infty}VQ_{n}(k)\) to be close to \(Q_{n}\), it is expected that a smaller \(V\) enjoys a better performance, because a small \(V\) can smooth fluctuations in the value of \(Q_{n}(k)\). This conjecture is substantiated in Section V. Theorem 2 demonstrates that algorithm 1 maximizes a term that is linear in \(u_{l,n}\) and \(u_{t,n}\), as follows: **Theorem 2**.: _Algorithm 1 makes scheduling decisions \(\mathbf{u}_{l}(k)\) and \(\mathbf{u}_{t}(k)\) to maximize the following,_ \[\sum_{n\in\mathcal{N}_{\text{div}}(k)}\left(\frac{W_{l,n}(h_{n}(k) )}{\overline{D}_{l,n}}-VE_{l,n}Q_{n}(k)\right)u_{l,n}(k) \tag{26}\] \[+\sum_{n\in\mathcal{N}_{\text{div}}(k)}\left(\frac{W_{t,n}(h_{n}( k))}{\overline{D}_{t,n}}-VE_{t,n}Q_{n}(k)\right)u_{t,n}(k).\] Proof.: See Appendix D. The following theorem establishes that, under mild assumptions, policy \(\pi_{\text{MW}}\) satisfies the energy constraints in (2). **Theorem 3**.: _For any \(n\in\mathcal{N}\), if there exists \(D_{n}^{*}\) such that \(D_{l,n}\), \(D_{t,n}\) and \(D_{e,n}\) are smaller than \(D_{n}^{*}\), then_ \[\limsup_{K\to\infty}\frac{1}{K}\mathbb{E}_{\pi_{\text{MW}}}\left[\sum_{k=1}^{ K}E_{n}(k)\right]\leq\overline{E}_{n},\forall n\in\mathcal{N}. \tag{27}\] Proof.: Let \(k_{i}\) be the time slot at which device \(n\) starts its \(i\)-th round of local computing or offloading. Because the delay is bounded, we have \[Q(k_{i+1})\leq Q(k_{i})+\max(E_{l,n},E_{t,n})D_{n}^{*}. \tag{28}\] We will prove that there exists \(L\) such that \[\limsup_{i\to\infty}Q_{n}(k_{i})\leq L. 
\tag{29}\] Given \(k_{i}\), there exists \(s_{i}\) such that \[Q_{n}(k_{i})\geq Q_{n}(k_{1})+(s_{i}-1)\max(E_{l,n},E_{t,n})D_{n} ^{*}, \tag{30}\] \[Q_{n}(k_{i})<Q_{n}(k_{1})+s_{i}\max(E_{l,n},E_{t,n})D_{n}^{*}.\] Let \(s^{*}=\sup\{s_{i},i\geq 1\}\). If \(s^{*}\) is finite, then (29) holds trivially. Otherwise, consider an \(i^{*}\) such that \[Q_{n}(k_{i^{*}})-t\overline{E}_{n}\geq\max\left(\frac{W_{l,n}(2D_{n}^{*}+t)}{ VE_{l,n}},\frac{W_{t,n}(2D_{n}^{*}+t)}{VE_{t,n}}\right), \tag{31}\] and \[t\overline{E}_{n}+\max\left(\frac{W_{l,n}(2D_{n}^{*}+t)}{VE_{l, n}},\frac{W_{t,n}(2D_{n}^{*}+t)}{VE_{t,n}}\right) \tag{32}\] \[\leq Q_{n}(k_{1})+(s_{i^{*}}-1)\max(E_{l,n},E_{t,n})D_{n}^{*},\] where \(t\triangleq\left[\frac{\max(E_{l,n},E_{t,n})D_{n}^{*}}{E_{n}}\right]\). (31) means that device \(n\) is idle for \(t\) time slots at least, after the completion of the \(i^{*}\)-th status update. Therefore, \[Q_{n}(k_{i^{*}+1})\leq Q_{n}(k_{i^{*}})+\max(E_{l,n},E_{t,n})D_{n}^{*}-t \overline{E}_{n}. \tag{33}\] According to the definition of \(t\), (33) yields that \(Q_{n}(k_{i^{*}+1})\leq Q_{n}(k_{i^{*}})\). If \(Q_{n}(k_{i^{*}+1})\) falls in the range specified in (30) with \(s_{i^{*}}\), repeating the analysis above gives that \(Q_{n}(k_{i^{*}+2})\leq Q_{n}(k_{i^{*}+1})\). If \(Q_{n}(k_{i^{*}+1})<Q_{n}(k_{1})+(s_{i}^{*}-1)\max(E_{l,n},E_{t,n})D_{n}^{*}\), due to (28), we have \[Q_{n}(k_{i^{*}+2})\leq Q_{n}(k_{1})+s_{i^{*}}\max(E_{l,n},E_{t,n})D_{n}^{*}. \tag{34}\] Based on induction, we conclude that \[Q_{n}(k_{j})\leq Q_{n}(k_{1})+s_{i^{*}}\max(E_{l,n},E_{t,n})D_{n}^{*},\forall j \geq i^{*}, \tag{35}\] and thus (29) holds. Recall the definition of \(Q_{n}(k)\) in (24), we have for all \(k\geq 1\): \[\frac{1}{K}\sum_{k=1}^{K}E_{n}(k)\leq\frac{Q_{n}(K+1)}{K}-\frac{Q_{n}(1)}{K}+ \overline{E}_{n}. \tag{36}\] Taking expectations of the above and letting \(K\to\infty\) yields: \[\limsup_{K\to\infty}\frac{1}{K}\mathbb{E}_{\pi_{\text{MW}}}\left[\sum_{k=1}^{K} E_{n}(k)\right]\leq\overline{E}_{n}. \tag{37}\] One important distinction between our work and other studies that use Max-Weight policy for scheduling, such as [9], is that the set of idle devices in our problem varies with time. Thus, conventional approaches based on strongly stable queue techniques cannot be applied to our problem directly. Although a general performance guarantee is difficult to establish, the following proposition provides insight into the performance gap for a special case: **Proposition 1**.: _Let \(J^{\pi_{\text{MW}}}\) be the average AoI penalty under policy \(\pi_{\text{MW}}\). When the penalty function is \(f_{n}(x)=\alpha_{n}x^{p}\), \(p>0\), and \(D_{l,n}=D_{t,n}=1\), \(D_{e,n}=0\), \(\forall n\in\mathcal{N}\), \(J^{\pi_{\text{MW}}}\) satisfies_ \[\left(\frac{J^{\pi_{\text{MW}}}}{p+1}\right)^{p+1}\leq J^{*}\left(\frac{B}{p}+J^ {\pi_{\text{MW}}}\right)^{p}, \tag{38}\] _where \(J^{*}\) is the lower bound from (11), and \(B\) is defined as_ \[B\triangleq\frac{V}{2}\sum_{n\in\mathcal{N}}(\max(E_{l,n},E_{t,n})-\overline{E} _{n})^{2}. \tag{39}\] Proof.: See Appendix E. _Remark 2_.: When \(p=1\), the penalty function is in linear form, and the target becomes the weighted time average age. Let \(p=1\) in (38), we obtain the following inequality: \[(J^{\pi_{\text{MW}}}-2J^{*})^{2}\leq 4J^{*}(B+J^{*}), \tag{40}\] which yields, \[\frac{J^{\pi_{\text{MW}}}}{J^{*}}\leq 2+2\sqrt{\frac{B}{J^{*}}+1}. 
\tag{41}\] This suggests that the weighted average age achieved by the Max-Weight policy is bounded within approximately four times of the lower bound. ## V Numerical Results In this part, we evaluate the performance of the proposed policy under various settings. In addition to extensive simulations on synthetic data, we also apply this policy to a video tracking task and carry out experiments on ILSVRC17-VID dataset. ### _Simulation Results_ Scheduling decisions depend on various factors, including the form of penalty function, computation delay, transmission delay, etc. To facilitate experiments, the set of devices is divided into 2 types. Part of the simulation settings are listed in Table I, where \(U(a,b)\) means taking values uniformly in the set \(\{a,a+1,\dots,b\}\). In the first simulation, the delay distribution of Type-II devices' local computing delay follows \(U(1,x)\) with \(x\) increasing from 10 to 20. Different kinds of penalty functions are considered in the simulation, including linear function, square function, and a special type of composite function, as shown in Table II. By varying the distribution of Type-II devices' local computing delay, we obtain the results shown in Fig. 4. The number of devices is \(30\). Half of them are Type-I, and the left are Type-II. The number of orthogonal channels is \(3\). _Max-Reduction policy_[32] is considered for comparison. In Max-Reduction policy, the terms \(\frac{W_{i,n}(h_{i}(k))}{P_{i,n}}\) and \(\frac{W_{i,n}(h_{i}(k))}{P_{i,n}}\) in (25) are replaced by the expected penalty reduction after scheduling. The lower bound is obtained by solving the optimization problem **P4** numerically. The simulation horizon is \(10^{6}\) slots. \(V\) is set to be \(0.01\) for composite penalty function, and \(1\) for both linear and square penalty functions. The performance of the proposed Max-Weight policy is close to the lower bound. It should be noted that the lower bound is derived by using Jenson's inequality, and thus the estimation error between the lower bound and the minimum average AoI penalty gets larger for higher-order penalty functions. It is also interesting to check whether the proposed policy does steer AoI to be aligned with (14). Considering the case where the local computing delay of Type-II devices follows \(U(1,10)\), we estimate the value of \(\alpha\) by plugging the peak AoI value after each computation into (19) and calculate the average for each device. And thus we obtain \(30\) points, each corresponding to one device. We then calculate the mean value and standard deviation over these \(30\) devices. The Coefficient of Variation (CV) is listed in Table III, which is the ratio of the standard deviation to the mean. The result shows that the CV values of Max-Weight policy are one order of magnitude smaller than that of the Max-Reduction policy. To investigate the influence of parameter \(V\), we first check the average energy consumption, as shown in Fig. 5. These curves are obtained by running the Max-Weight policy and calculating the moving average of energy consumption. We choose the square penalty function case and plot the first \(30000\) time slots. The local computing time for Type-II devices is \begin{table} \begin{tabular}{c|c c} \hline \hline **Function** & **Type-I** & **Type-II** \\ \hline Linear & \(x\) & \(2x\) \\ Square & \(0.1x^{2}\) & \(0.2x^{2}\) \\ Composite & \(1-(0.02x+1)^{-0.4}\) & \(1-(0.14x+1)^{-0.4}\) \\ \hline \hline \end{tabular} \end{table} TABLE II: List of penalty functions. Fig. 
Fig. 4: Performance comparison.
Fig. 5: The average energy consumption under different \(V\) with the square penalty function.

\begin{table}
\begin{tabular}{c|c c} \hline \hline **Parameter** & **Type-I** & **Type-II** \\ \hline Local Comp. Delay (slots) & \(U(1,15)\) & \(U(1,x)\) \\ Transmission Delay (slots) & \(U(1,3)\) & \(U(3,7)\) \\ Edge Comp. Delay (slots) & \(U(1,2)\) & \(U(1,2)\) \\ Local Comp. Energy (J/slot) & 10 & 10 \\ Transmission Energy (J/slot) & 1 & 1 \\ Energy Budget (J/slot) & 0.4 & 0.4 \\ \hline \hline \end{tabular}
\end{table}
TABLE I: Simulation settings.

The local computing time for Type-II devices is \(U(1,10)\). The cyan dashed line corresponds to the energy budget \(0.4\). The first observation is that all three curves converge to the horizontal cyan line, which is in line with Theorem 3. Another observation is that a smaller \(V\) results in slower convergence to the expected value. This might be because a larger \(V\) means fewer rounds are needed to reach the desired \(Q_{n}\) value. However, the faster convergence comes at the price of performance loss. As shown in Fig. 6, increasing \(V\) from \(0.1\) to \(10\) leads to a larger average penalty. This is because a larger \(V\) increases the fluctuation of the virtual queue \(Q(k)\), as discussed in Section IV. The influence of the delay distribution is also studied in Fig. 6. Fixing the mean value, we run simulations where the delay follows a uniform, a Poisson, and a geometric distribution, respectively. The performance under the geometric distribution is the worst, possibly because the geometric distribution has the largest variance among the three in this case.
### _Experimental Results_
To demonstrate the use of the proposed policy, we choose an object tracking application. Object tracking is key to many visual applications [33, 34, 7, 35]. Given an object's initial position, the tracker follows this object as it moves. In this process, tracking errors accumulate, and tracking performance degrades if the tracker is not refreshed. Fig. 7 gives an example of the tracking process. The red dashed box is the position of the target car, and the blue box is the tracking result. After 30 video frames, the blue box drifts away from the true position. To refresh the tracker, an object detection algorithm [36, 37] is called to obtain the current position of the target object. The detection can be done on-device or offloaded to an edge server. Thus, _the object tracking task can be naturally cast as a status update process, where a status update refers to the object detection step_. In this case, the AoI is defined as the number of video frames since the latest frame used for object detection. To evaluate the tracking performance, we first calculate the IoU (Intersection over Union), i.e., the area of the intersection over that of the union. Letting \(B_{1}\) be the tracked position and \(B_{2}\) be the actual position, the IoU is defined in (42). Tracking performance is measured by the probability that the IoU is larger than a given threshold IoU\({}_{\text{th}}\): \(\mathbb{P}\left(\text{IoU}_{\text{curr}}\geq\text{IoU}_{\text{th}}\right)\), where IoU\({}_{\text{curr}}\) is the IoU of the current frame. \[\text{IoU}=\frac{\text{Area}(B_{1}\cap B_{2})}{\text{Area}(B_{1}\cup B_{2})}. \tag{42}\] In this experiment, we choose the CSRT algorithm [38] for video tracking, which is faster than DNN-based methods. To isolate the influence of the detection algorithm, it is assumed that the detection algorithm always returns the accurate position.
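As a concrete illustration of (42), the sketch below computes the IoU of two axis-aligned boxes and evaluates the success criterion \(\text{IoU}_{\text{curr}}\geq\text{IoU}_{\text{th}}\). The \((x,y,w,h)\) box format and the function names are assumptions made for this example, not the exact interface used in the experiments.

```python
# Minimal sketch of the IoU metric in (42) for axis-aligned boxes (x, y, w, h).
def iou(b1, b2):
    x1 = max(b1[0], b2[0])
    y1 = max(b1[1], b2[1])
    x2 = min(b1[0] + b1[2], b2[0] + b2[2])
    y2 = min(b1[1] + b1[3], b2[1] + b2[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    union = b1[2] * b1[3] + b2[2] * b2[3] - inter
    return inter / union if union > 0 else 0.0

def tracking_success(tracked_box, true_box, iou_th=0.5):
    """Success indicator for one frame: IoU compared against the threshold (0.5 or 0.75)."""
    return iou(tracked_box, true_box) >= iou_th
```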
We first profile the ILSVRC17-VID dataset to evaluate the tracking performance as a function of the AoI. The IoU thresholds IoU\({}_{\text{th}}\) are set to 0.5 and 0.75, representing different requirements on tracking accuracy. 30% of the videos in the dataset are chosen for profiling, i.e., 540 videos. For each video, we start from frame 1, initialize the CSRT tracker with bounding boxes, and let it track the following 90 frames. Then, the tracker is refreshed with the actual position in the 91st frame, and the procedure repeats. Fig. 8 shows the profiling result, where the two curves can be fitted by functions of the form \((ax+1)^{-b}\). Table IV shows the fitted parameters. Thus, the penalty function is modeled as \(1-(ax+1)^{-b}\), with the penalty being the tracking failure probability.

Fig. 8: Profiling result of the successful tracking probability as a function of AoI.
Fig. 6: The average penalty under different \(V\) and delay distributions.
Fig. 7: The tracking performance degrades as the object moves.

This experiment is done on a simulator we built on a server. We set the number of tracking devices to 20, half of which are labeled as Type-I devices with IoU\({}_{\text{th}}=0.5\), and the other half as Type-II devices with IoU\({}_{\text{th}}=0.75\). As for the parameter settings, the local computing delay of both types follows a Gaussian distribution \(\mathcal{N}(200,30)\) ms [7], truncated to be positive. The local computing power is set to \(2.5\) W. For the transmission part, the transmission delay of Type-I devices follows \(\mathcal{N}(30,10)\) ms, and that of Type-II devices follows \(\mathcal{N}(60,20)\) ms. The transmission power is set to \(250\) mW. The energy budget is set to \(300\) mW. For the computation delay on the edge side, we test the inference time of the Faster-RCNN network [39] with ResNet50 [40] as the backbone on a Linux server with a TITAN Xp GPU. The computation time distribution is shown in Fig. 9. Two policies based on video content are adopted for comparison. The first is the NCC (Normalized Cross Correlation) policy [41]. NCC refers to the cross-correlation between two regions: a small cross-correlation value suggests that the detected object has changed significantly, and thus the tracker is likely to be inaccurate. The NCC value is plugged into (25), as done in the Max-Reduction policy. The second is the CIB (Current IoU Based) policy. In the CIB policy, we assume that the scheduler knows the IoU between the tracked position and the actual position, and this IoU value is plugged into (25) for scheduling. Note that the CIB policy requires knowledge of the actual position and thus cannot be implemented in a real scenario; we use it only for comparison. The parameter \(V\) is set to \(0.01\). To evaluate the performance on the ILSVRC17-VID dataset, we randomly take 300 videos from those not used for profiling. Fig. 10a compares the successful tracking probability of the Max-Weight policy, NCC, and CIB. As shown in this figure, the Max-Weight policy outperforms the other two for both types of devices and improves the total successful tracking probability by 27% compared with NCC. In Fig. 10b, the cumulative distributions of the IoU under these two policies are presented. As we can see, the Max-Weight policy enjoys better tracking performance. It is surprising to observe in Fig. 10 that the CIB policy is worse than the Max-Weight policy, even though CIB uses knowledge of the actual IoU. This phenomenon might be due to two reasons.
First, the CIB policy does not take the transmission and computation delays into consideration, which might lead to poor resource allocation. Second, the CIB policy only uses the current IoU, while the profiling curves in Fig. 8 incorporate long-term information. This motivates us to investigate how to represent content semantics from a temporal perspective.
## VI Conclusion
To support emerging real-time applications with computation-intensive status updates, it is critical to efficiently manage the communication and computation resources in the network so as to provide status information that is as fresh as possible. To fully utilize the computation resources, we considered a hybrid computation framework where computation tasks can be processed on-device or offloaded to an edge server. Task-specific timeliness requirements were modeled as penalty functions of the AoI. We first analyzed the minimum average AoI penalty and formulated an optimization problem to compute the penalty lower bound. Based on the lower bound, we proposed indices to quantify the priorities of local computing and edge computing, respectively. Combining the energy virtual queue with these indices, we proposed a Max-Weight scheduling policy inspired by the optimality conditions of the lower bound problem. Extensive simulations showed that our proposed policy has close-to-optimal performance under different penalty functions. We also applied the proposed policy to object tracking tasks on the ILSVRC17-VID dataset and improved the tracking accuracy compared with scheduling policies based on video content information.

\begin{table}
\begin{tabular}{c|c c} \hline \hline **IoU Requirement** & \(\mathbf{a}\) & \(\mathbf{b}\) \\ \hline \(0.5\) & \(0.02149158\) & \(0.45788114\) \\ \(0.75\) & \(0.14155363\) & \(0.45766638\) \\ \hline \hline \end{tabular}
\end{table}
TABLE IV: Fitted parameters with different IoU thresholds.
Fig. 10: Performance comparison between two policies.
Fig. 9: Computation time distribution of detection.

## Appendix A Proof of Lemmas 1 and 2
Given a policy \(\pi\), let \(C(k)\) be the number of update rounds finished by time slot \(k\), and \(Z_{i}\) be the time slot when the \(i\)-th round starts. The average AoI penalty by time slot \(K\) is \[\begin{split}&\frac{1}{K}\left(\sum_{k=1}^{K}f(h(k))\right)\\ &=\frac{1}{K}\left(\sum_{c=1}^{C(K)}\left(F(h_{c}^{+})-F(h_{c}^{-}-1)\right)+R(K)\right),\end{split} \tag{43}\] where \(h_{c}^{+}\) is the peak age of the \(c\)-th update round, \(h_{c}^{-}\) is the age at the beginning of this round, and \[R(K)\triangleq\sum_{h=h_{C(K)+1}^{-}}^{K+h_{C(K)+1}^{-}-Z_{C(K)+1}}f(h). \tag{44}\] Update rounds can be further classified based on the type of computation executed during the round. Let \(C_{l}(K)\) be the number of rounds that contain local computing, and \(C_{t}(K)\) be the number of rounds with edge computing. Then \[\begin{split}&\frac{1}{K}\sum_{c=1}^{C(K)}F(h_{c}^{+})=\frac{1}{K} \sum_{c=1}^{C_{l}(K)}F(h_{l,c}^{+})+\frac{1}{K}\sum_{c=1}^{C_{t}(K)}F(h_{t,c}^{+}),\end{split} \tag{45}\] where \(h_{l,c}^{+}\) is the peak age in the \(c\)-th local computing round, and \(h_{t,c}^{+}\) is the peak age in the \(c\)-th edge computing round. As for \(h_{c}^{-}\), note that it equals the total latency in round \(c-1\).
Thus we can shift the summation by one round and obtain \[\begin{split}&\frac{1}{K}\sum_{c=2}^{C(K)+1}F(h_{c}^{-}-1)\\ &=\frac{1}{K}\sum_{c=1}^{C_{l}(K)}F(D_{l,c}-1)+\frac{1}{K}\sum_{ c=1}^{C_{t}(K)}F(D_{t,c}+D_{e,c}-1).\end{split} \tag{46}\] Due to the independence of communication and computation latency in each update round, basic renewal theory yields the following equations: \[\begin{split}&\lim_{K\rightarrow\infty}\frac{C_{l}(K)}{K}=\frac{ \rho_{l}(\pi)\overline{E}}{E_{l}\overline{D}_{l}},\ \text{w.p.1},\\ &\lim_{K\rightarrow\infty}\frac{C_{t}(K)}{K}=\frac{\rho_{t}(\pi) \overline{E}}{E_{t}\overline{D}_{t}},\ \text{w.p.1}.\end{split} \tag{47}\] Obviously, policies with unbounded \(R(K)\) as \(K\rightarrow\infty\) cannot be optimal. Therefore, we only consider policies under which the residual term \(R(K)\) satisfies \[\lim_{K\rightarrow\infty}\frac{R(K)}{K}=0,\ \text{w.p.1}. \tag{48}\] Since these time averages converge with probability 1, the Lebesgue Dominated Convergence Theorem [42] ensures the time average expectations are the same as the pure time averages. Thus, by letting \(K\rightarrow\infty\) in (43), the average AoI penalty under policy \(\pi\) can be written as \[\begin{split}&\frac{\rho_{l}(\pi)\overline{E}}{E_{l}\overline{D}_{l}} \mathbb{E}_{\pi}[F(h_{l}^{+})-F(D_{l}-1)]\\ &+\frac{\rho_{t}(\pi)\overline{E}}{E_{t}\overline{D}_{t}}\mathbb{E }_{\pi}[F(h_{t}^{+})-F(D_{t}+D_{e}-1)].\end{split} \tag{49}\] This concludes the proof for Lemma 1. On the other hand, based on the definition of \(C(k)\), the following two inequalities hold \[\sum_{c=1}^{C(K)}(h_{c}^{+}-h_{c}^{-}+1)\leq K, \tag{50}\] \[\sum_{c=1}^{C(K)+1}(h_{c}^{+}-h_{c}^{-}+1)\geq K. \tag{51}\] Let \(K^{\prime}\) be the time slot when round \(C(K)+1\) finishes, then \[\lim_{K\rightarrow\infty}\frac{K^{\prime}}{K}=1. \tag{52}\] Thus, \[\begin{split}&\lim_{K\rightarrow\infty}\frac{1}{K}\sum_{c=1}^{C(K)}(h_{ c}^{+}-h_{c}^{-}+1)\leq 1,\\ & 1\leq\lim_{K\rightarrow\infty}\frac{K^{\prime}}{K}\frac{1}{K^{ \prime}}\sum_{c=1}^{C(K)+1}(h_{c}^{+}-h_{c}^{-}+1).\end{split} \tag{53}\] Applying renewal theory again yields \[\begin{split}&\frac{\rho_{t}(\pi)\overline{E}}{E_{t}\overline{D}_{t}} (\mathbb{E}_{\pi}[h_{t}^{+}]-(\overline{D}_{t}+\overline{D}_{e}-1))\\ &+\frac{\rho_{l}(\pi)\overline{E}}{E_{l}\overline{D}_{l}}( \mathbb{E}_{\pi}[h_{l}^{+}]-(\overline{D}_{l}-1))=1.\end{split} \tag{54}\] Once the constraints in **P1** are satisfied, so does (54). This concludes the proof for Lemma 2. ## Appendix B Proof of Lemma 3 To show that **P4** is a convex problem, we only need to prove that the following is a convex function of \(\rho_{l,n}\) and \(\rho_{t,n}\), \[\left(\frac{\rho_{l,n}\overline{E}_{n}}{E_{l,n}\overline{D}_{l,n}}+\frac{\rho_ {t,n}\overline{E}_{n}}{E_{t,n}\overline{D}_{t,n}}\right)\tilde{F}_{n}(G_{n}( \rho_{l,n},\rho_{t,n})). \tag{55}\] For simplicity, let's introduce some notations: \[\begin{split}& p\triangleq\overline{D}_{l,n}-1,\ q\triangleq \overline{D}_{t,n}+\overline{D}_{e,n}-1,\\ & x\triangleq\frac{\rho_{l,n}\overline{E}_{n}}{E_{l,n}\overline{D}_ {l,n}},\ y\triangleq\frac{\rho_{t,n}\overline{E}_{n}}{E_{t,n}\overline{D}_{t,n}}. \end{split} \tag{56}\] According to (9), the function \(G_{n}(\rho_{l,n},\rho_{c,n})\) can be expressed as \[G_{n}(\rho_{l,n},\rho_{c,n})=\frac{1+px+qy}{x+y}, \tag{57}\] and the target (55) becomes \[(x+y)\tilde{F}_{n}\left(\frac{1+px+qy}{x+y}\right). \tag{58}\] Let \(Z(x,y)\) be the perspective transform of \(\tilde{F}_{n}\) \[Z(x,y)\triangleq y\tilde{F}_{n}\left(\frac{x}{y}\right),\ y>0. 
\tag{59}\] Because \(\tilde{F}_{n}(x)\) is convex, its perspective transform \(Z(x,y)\) is also convex [43]. Consider the following linear transformation \[\begin{bmatrix}1+px+qy\\ x+y\end{bmatrix}=\begin{bmatrix}p&q\\ 1&1\end{bmatrix}\begin{bmatrix}x\\ y\end{bmatrix}+\begin{bmatrix}1\\ 0\end{bmatrix}, \tag{60}\] let \(\mathbf{x}\triangleq(x,y)^{T}\), and denote the relationship above as \(A\mathbf{x}+\mathbf{b}\). It is easy to see that \[(x+y)\tilde{F}_{n}\left(\frac{1+px+qy}{x+y}\right)=Z(A\mathbf{x}+\mathbf{b}). \tag{61}\] According to the composition rule of convex function, the target (55) is also a convex function. Thus, **P4** is a convex optimization problem. ## Appendix C Proof of Theorem 1 According to KKT conditions, the optimal solution satisfies \[a_{n}\tilde{F}_{n}(G_{n}(x_{n}^{*},y_{n}^{*}))+\frac{a_{n}b_{n} y_{n}^{*}(c_{n}-d_{n})-a_{n}}{a_{n}x_{n}^{*}+b_{n}y_{n}^{*}}\tilde{f}_{n}(G_{n}(x_ {n}^{*},y_{n}^{*}))\] \[-a_{n}v_{n}+\beta_{n}^{*}-\gamma_{n}^{*}=0,\ \forall n\in \mathcal{N},\] \[b_{n}\tilde{F}_{n}(G_{n}(x_{n}^{*},y_{n}^{*}))+\frac{a_{n}b_{n} x_{n}^{*}(d_{n}-c_{n})-b_{n}}{a_{n}x_{n}^{*}+b_{n}y_{n}^{*}}\tilde{f}_{n}(G_{n}(x_ {n}^{*},y_{n}^{*}))\] \[-b_{n}w_{n}+\frac{\overline{E}_{n}}{E_{t,n}}\alpha^{*}+\beta_{n}^ {*}-\nu_{n}^{*}=0,\ \forall n\in\mathcal{N},\] \[\alpha\left(\sum_{n\in\mathcal{N}}\frac{y_{n}^{*}\overline{E}_{n }}{E_{t,n}}-M\right)=0,\ \beta_{n}^{*}(x_{n}^{*}+y_{n}^{*}-1)=0,\] \[\gamma_{n}^{*}x_{n}^{*}=0,\ \nu_{n}^{*}y_{n}^{*}=0,\ \forall n\in \mathcal{N}.\] Combining the first two equations and removing \(\beta_{n}^{*}\) gives \[\frac{a_{n}b_{n}(d_{n}-c_{n})(x_{n}^{*}+y_{n}^{*})+a_{n}-b_{n}}{a _{n}x_{n}^{*}+b_{n}y_{n}^{*}}\tilde{f}_{n}(G_{n}(x_{n}^{*},y_{n}^{*}))\] \[+(b_{n}-a_{n})\tilde{F}_{n}(G_{n}(x_{n}^{*},y_{n}^{*})) \tag{62}\] \[+a_{n}v_{n}+\gamma_{n}^{*}-b_{n}w_{n}-\nu_{n}^{*}=-\frac{ \overline{E}_{n}}{E_{t,n}}\alpha^{*}.\] Recalling the definition of \(G_{n}\), it can be written as \[G_{n}(x_{n},y_{n})=\frac{1+a_{n}c_{n}x_{n}+b_{n}d_{n}y_{n}}{a_{n}x_{n}+b_{n}y_ {n}}. \tag{63}\] With direct computation, we have \[\frac{a_{n}b_{n}(d_{n}-c_{n})(x_{n}+y_{n})+a_{n}-b_{n}}{a_{n}x_{n} +b_{n}y_{n}} \tag{64}\] \[=(a_{n}-b_{n})G_{n}(x_{n},y_{n})+b_{n}d_{n}-a_{n}c_{n}. \tag{65}\] As shown in (9), \(G_{n}(\rho_{l,n},\rho_{t,n})\) is the expected peak age when the energy allocation is \(\rho_{l,n},\rho_{t,n}\). Combining this fact with (64), we can express (62) as \[\frac{W_{t,n}(h_{t,n})}{E_{t,n}\overline{D}_{t,n}}-\frac{W_{l,n}(h_{l,n})}{E_ {l,n}\overline{D}_{l,n}}-\frac{\gamma_{n}^{*}-\nu_{n}^{*}}{\overline{E}_{n}}= \frac{\alpha^{*}}{E_{t,n}}. \tag{66}\] This concludes the proof. ## Appendix D Proof of Theorem 2 First, to maximize (26), the scheduling policy should only consider devices in the set \(\mathcal{C}_{l}\cup\mathcal{C}_{t}\). We start a simple policy under which 1) all devices in \(\mathcal{C}_{l}\) are scheduled to do local computing, 2) devices in \(\mathcal{C}_{t}-\mathcal{C}_{l}\) are sort in descending order according to the value of \(I_{n}(k)\), and at most \(m(k)\) devices from the top are scheduled to offload. Consider two cases: 1. If there are less than \(m(k)\) devices in \(\mathcal{C}_{t}-\mathcal{C}_{l}\), we can reorder devices from \(\mathcal{C}_{l}\) to offload if \(I_{n}(k)\geq 0\) until the channels are all occupied. 2. If all channels are occupied by devices from \(\mathcal{C}_{t}-\mathcal{C}_{l}\), taking the device with the largest \(I_{n}(k)\) in \(\mathcal{C}_{l}\cap\mathcal{C}_{t}\), say device \(x\). 
If the index of device \(x\) is larger than one of the devices scheduled to offload, say device \(y\), then we can replace \(y\) by \(x\) to offload, and improve the sum weight (26). Repeating this process finite times will maximize (26). The process above is equivalent to first sort devices in \(\mathcal{C}_{t}\) in descending by the value of \(I_{n}(k)\), then order at most \(m(k)\) devices with \(I_{n}(k)\geq 0\) to offload, corresponding to Line 1-10 in Alg.1. Devices left in \(\mathcal{C}_{l}\) will be scheduled to do local computing, corresponding to Line 11-13 in Alg.1. ## Appendix E Proof of Proposition 1 The proof is divided into three parts. In the first part, the weight in (26) is derived by computing the drift of an expectation term. Next, we show how to obtain a randomized policy \(\pi_{\text{R}}\). Finally, we prove (38) by comparing \(\pi_{\text{MW}}\) with \(\pi_{\text{R}}\). ### _Drift Expression_ We first consider the drift of the quadratic virtual queue functions, defined as \(\Delta(k)\): \[\Delta(k)\triangleq\frac{1}{2}\mathbb{E}\left[\left.\sum_{n\in\mathcal{N}}Q_{n}^{2 }(k+1)-\sum_{n\in\mathcal{N}}Q_{n}^{2}(k)\ \right|\ \mathcal{H}(k)\right], \tag{67}\] recalling that \(\mathcal{H}(k)\) represents the history up to time slot \(k\). \(\Delta(k)\) satisfies that \[\begin{split}\Delta(k)\leq&\frac{1}{2}\mathbb{E} \left[\sum_{n\in\mathcal{N}}(E_{n}(k)-\overline{E}_{n})^{2}\Bigg{|}\mathcal{H}( k)\right]\\ &+\mathbb{E}\left[\sum_{n\in\mathcal{N}}Q_{n}(k)(E_{n}(k)- \overline{E}_{n})\Bigg{|}\mathcal{H}(k)\right]\\ \leq&\frac{1}{2}\sum_{n\in\mathcal{N}}(\max(E_{l,n },E_{t,n})-\overline{E}_{n})^{2}\\ &+\mathbb{E}\left[\sum_{n\in\mathcal{N}}Q_{n}(k)(E_{n}(k)- \overline{E}_{n})\Bigg{|}\mathcal{H}(k)\right].\end{split} \tag{68}\] As for the age part, it is captured by the following function \[P(k)\triangleq\sum_{n\in\mathcal{N}}((h_{n}(k)-1)f_{n}(h_{n}(k)-1)-\tilde{F}_ {n}(h_{n}(k)-1)). \tag{69}\] When \(D_{l,n}=D_{t,n}=1,D_{e,n}=0,\ \forall n\in\mathcal{N}\), the indices in (17) and (18) are reduced to be \[W_{n}(h_{n}(k))\triangleq h_{n}(k)f_{n}(h_{n}(k))-\tilde{F}_{n}(h_{n}(k)). \tag{70}\] The drift is \[\begin{split}\Gamma(k)\triangleq&\mathbb{E}\left[P(k+ 1)-P(k)|\mathcal{H}(k)\right]\\ =&\sum_{n\in\mathcal{N}}(h_{n}(k)-1)(f_{n}(h_{n}(k) )-f_{n}(h_{n}(k)-1)\\ &+\sum_{n\in\mathcal{N}}(f_{n}(h_{n}(k))+\tilde{F}_{n}(h_{n}(k)- 1)-\tilde{F}_{n}(h_{n}(k)))\\ &-\sum_{n\in\mathcal{N}}W_{n}(h_{n}(k))\mathbb{E}\left[(u_{t,n}( k)+u_{l,n}(k))|\mathcal{H}(k)\right].\end{split} \tag{71}\] Because \[\tilde{F}_{n}(h_{n}(k)-1)-\tilde{F}_{n}(h_{n}(k))\leq-f_{n}(h_{n}(k)-1), \tag{72}\] the drift term \(\Gamma(k)\) satisfies \[\begin{split}\Gamma(k)\leq&\sum_{n\in\mathcal{N}}h_ {n}(k)(f_{n}(h_{n}(k))-f_{n}(h_{n}(k)-1))\\ &-\sum_{n\in\mathcal{N}}W_{n}(h_{n}(k))\mathbb{E}\left[(u_{t,n}( k)+u_{l,n}(k))|\mathcal{H}(k)\right].\end{split} \tag{73}\] Let \(B\triangleq\frac{V}{2}\sum_{n\in\mathcal{N}}(\max(E_{l,n},E_{t,n})-\overline{ E}_{n})^{2}\), combining (68) and (73) yields \[\begin{split}&\Gamma(k)+V\Delta(k)\\ &\leq B-V\sum_{n\in\mathcal{N}}Q_{n}(k)\overline{E}_{n}\\ &+\sum_{n\in\mathcal{N}}h_{n}(k)(f_{n}(h_{n}(k))-f_{n}(h_{n}(k)- 1)\\ &-\sum_{n\in\mathcal{N}}(W_{n}(h_{n}(k))-VE_{l,n}Q_{n}(k))\mathbb{ E}\left[u_{l,n}(k)|\mathcal{H}(k)\right]\\ &-\sum_{n\in\mathcal{N}}(W_{n}(h_{n}(k))-VE_{t,n}Q_{n}(k))\mathbb{ E}\left[u_{t,n}(k)|\mathcal{H}(k)\right].\end{split} \tag{74}\] Based on Theorem 2, the policy \(\pi_{\text{MW}}\) makes decisions to minimize the right-hand-side of (74) at each time slot. 
### _Randomized Policy_ In this part, we show how to construct a randomized policy \(\pi_{\text{R}}\). Since there are \(M\) channels, the candidate scheduling action set is defined as \(\mathcal{A}\) \[\mathcal{A}\triangleq\left\{\left(\mathbf{u}_{l},\mathbf{u}_{t}\right)\left|\sum_{n \in\mathcal{N}}u_{t,n}\leq M;u_{l,n}+u_{t,n}\leq 1,\forall n\in\mathcal{N} \right.\right\}. \tag{75}\] Let's define a randomized policy \(\pi_{\text{R}}\) that takes action \(a\in\mathcal{A}\) with probability \(p_{\pi_{\text{R}}}(a)\) in each time slot. Since \(\pi_{\text{R}}\) is independent of the history \(\mathcal{H}(k)\), \(\mathbb{E}_{\pi_{\text{R}}}[u_{t,n}(k)|\mathcal{H}(k)]\) and \(\mathbb{E}_{\pi_{\text{R}}}[u_{l,n}(k)|\mathcal{H}(k)]\) are stationary, and we denote them as \(p_{l,n}\) and \(p_{t,n}\) respectively. Note that we cannot simply define a randomized policy and state that it would schedule device \(n\) to do local computing with probability \(p_{l,n}\), and to offload with probability \(p_{t,n}\). Because of the channel constraint, such a vanilla policy might be infeasible. Let \(\mathcal{P}_{\text{R}}\triangleq\left\{(\mathbf{p}_{l},\mathbf{p}_{l})\right\}\) be the set of probability distributions achievable by a randomized policy. The associated energy allocation scheme set \(\mathcal{E}_{\text{R}}\) is defined as \[\begin{split}\mathcal{E}_{\text{R}}\triangleq&\left\{ \left(\mathbf{\rho}_{l},\mathbf{\rho}_{t}\right)\left|\rho_{l,n}=\frac{E_{l,n} \overline{D}_{l,n}p_{l,n}}{\overline{E}_{n}},\right.\right.\\ &\left.\rho_{t,n}=\frac{E_{t,n}\overline{D}_{t,n}p_{t,n}}{ \overline{E}_{n}},\left(\mathbf{p}_{l},\mathbf{p}_{t}\right)\in\mathcal{P}_{\text{R}} \right\}.\end{split} \tag{76}\] Let \(\mathcal{E}\) be the set of all possible energy allocation schemes under any stationary policy \(\pi\). According to [29, Lemma 4.17], we have \(\mathcal{E}=\mathcal{E}_{\text{R}}\). Now, let \(\pi_{\text{opt}}\) be the optimal stationary scheduling policy, and its associated energy allocation vectors are \(\mathbf{\rho}_{l}^{*}\) and \(\mathbf{\rho}_{l}^{*}\), plugging \(\mathbf{\rho}_{l}^{*}\) and \(\mathbf{\rho}_{t}^{*}\) into to the optimization target of (11) yields a lower bound of the minimum AoI penalty. Let \(\mathbf{p}_{l}^{*}\) and \(\mathbf{p}_{t}^{*}\) be the corresponding scheduling probability1. According to (11), we obtain AoI penalty lower bound, Footnote 1: Note that we cannot directly using the solution of the lower bound problem (11) to construct randomized policy because energy allocation scheme given by the solution may not be achievable. \[J^{*}=\sum_{n\in\mathcal{N}}\frac{\alpha_{n}}{p+1}\left(\frac{1}{p_{l,n}^{*}+p_ {t,n}^{*}}\right)^{p}. \tag{77}\] Because \(\pi_{\text{MW}}\) minimizes the right-hand-side of (74), we have \[\begin{split}\Gamma(k)+V\Delta(k)\leq& B+\sum_{n\in \mathcal{N}}h_{n}(k)(f_{n}(h_{n}(k))-f_{n}(h_{n}(k)-1))\\ &-V\sum_{n\in\mathcal{N}}Q_{n}(k)(\overline{E}_{n}-p_{l,n}^{*}E_{l,n}-p_{t,n}^{*}E_{t,n})\\ &-\sum_{n\in\mathcal{N}}(p_{l,n}^{*}+p_{t,n}^{*})W_{n}(h_{n}(k)). \end{split} \tag{78}\] Because \(p_{l,n}^{*}E_{l,n}+p_{t,n}^{*}E_{t,n}\leq\overline{E}_{n}\), the inequality above is relaxed to be \[\begin{split}&\Gamma(k)+V\Delta(k)\\ &\leq B+\sum_{n\in\mathcal{N}}h_{n}(k)(f_{n}(h_{n}(k))-f_{n}(h_{n}(k)- 1))\\ &-\sum_{n\in\mathcal{N}}(p_{l,n}^{*}+p_{t,n}^{*})W_{n}(h_{n}(k)). \end{split} \tag{79}\] ### _Performance Derivation_ In this part, we will prove the inequality (38). 
Let the average AoI penalty of device \(n\) under policy \(\pi_{\text{MW}}\) be \(J_{n}^{\pi_{\text{MW}}}\). Note that the AoI penalty function for device \(n\) is \(f_{n}(h)=\alpha_{n}h^{p}\). In the \(i\)-th round of status update, \(h_{n}(k)\), the AoI of device \(n\), will increase from \(1\) to a peak AoI \(\hat{h}_{n}^{(i)}\). The average AoI penalty in this round is \[\alpha_{n}\sum_{h=1}^{\hat{h}_{n}^{(i)}}h^{p}\geq\alpha_{n}\int_{0}^{\hat{h}_{n }^{(i)}}h^{p}dh=\frac{\alpha_{n}}{p+1}(\hat{h}_{n}^{(i)})^{p+1}. \tag{80}\] Let \(\hat{h}_{n}\) be the distribution of the peak AoI of device \(n\), we have \[J_{n}^{\text{max}}=\lim_{K\to\infty}\frac{1}{K}\mathbb{E}_{\pi_{\text{MW}}} \left[\sum_{k=1}^{K}h_{n}^{p}(k)\right]\geq\frac{\alpha_{n}}{p+1}\frac{ \mathbb{E}_{\pi_{\text{MW}}}[\hat{h}_{n}^{p+1}]}{\mathbb{E}_{\pi_{\text{MW}}} [\hat{h}_{n}-1]}. \tag{81}\] As for the second term in (79), we have \[\lim_{K\to\infty}\frac{1}{K}\mathbb{E}_{\pi_{\text{MW}}}\left[ \sum_{k=1}^{K}h_{n}(k)(f_{n}(h_{n}(k))-f_{n}(h_{n}(k)-1))\right] \tag{82}\] \[=\alpha_{n}\frac{\mathbb{E}_{\pi_{\text{MW}}}[\hat{h}_{n}^{p+1}]} {\mathbb{E}_{\pi_{\text{MW}}}[\hat{h}_{n}-1]}-J_{n}^{\pi_{\text{MW}}}\] \[\leq pJ_{n}^{\pi_{\text{MW}}}.\] Taking expectation under \(\pi_{\text{MW}}\) on both sides of (79), taking average up to time slot \(K\) and letting \(K\) to \(\infty\), we have \[\lim_{K\to\infty}\frac{V}{2K}\mathbb{E}_{\pi_{\text{MW}}}\left[ \sum_{n\in\mathcal{N}}(Q_{n}^{2}(k+1)-Q_{n}^{2}(1))\right] \tag{83}\] \[\leq B+pJ^{\pi_{\text{MW}}}-\lim_{K\to\infty}\frac{1}{K}\mathbb{E }_{\pi_{\text{MW}}}\left[P(k)-P(1)\right]\] \[-\lim_{K\to\infty}\frac{1}{K}\mathbb{E}_{\pi_{\text{MW}}}\left[ \sum_{n\in\mathcal{N}}\sum_{k=1}^{K}(p_{l,n}^{*}+p_{t,n}^{*})W_{n}(h_{n}(k)) \right].\] According to Theorem 3, the left-hand-side of (83) is \(0\) as \(K\to\infty\), therefore, \[\lim_{K\to\infty}\frac{1}{K}\mathbb{E}_{\pi_{\text{MW}}}\left[ \sum_{n\in\mathcal{N}}\sum_{k=1}^{K}(p_{l,n}^{*}+p_{t,n}^{*})W_{n}(h_{n}(k))\right] \tag{84}\] \[\leq B+pJ_{n}^{\pi_{\text{MW}}}.\] Next, we study the property of the function \(x\tilde{f}(x)-\tilde{F}(x)\). **Lemma 4**.: _Consider a differentiable injective function \(\tilde{f}:\mathbb{R}\to\mathbb{R}\), its inverse is denoted as \(\tilde{f}^{-1}\), and its integral is \(\tilde{F}(x)\). If \(\tilde{f}(x)\) is increasing, \(y\tilde{f}^{-1}(y)-\tilde{F}(\tilde{f}^{-1}(y))\) is convex._ Proof.: Let \(S(y)\triangleq y\tilde{f}^{-1}(y)-\tilde{F}(\tilde{f}^{-1}(y))\). It's derivative is \[\frac{\text{d}S(y)}{\text{d}y}=y\frac{\text{d}\tilde{f}^{-1}(y)}{\text{d}y}+ \tilde{f}^{-1}(y)-y\frac{\text{d}\tilde{f}^{-1}(y)}{\text{d}y}=\tilde{f}^{-1}( y). \tag{85}\] Because \(\tilde{f}(x)\) is increasing, so is \(\tilde{f}^{-1}(y)\). Therefore, the derivative of \(S(y)\) is increasing, and thus \(S(y)\) is convex. Because \(\tilde{f}(x)=f(x)\) when \(x\in\mathbb{N}\), we have \(W_{n}(h_{n}(k))=S_{n}(f_{n}(h_{n}(k)))\). With this equation, (83) can be written as \[B+pJ^{\pi_{\text{MW}}} \tag{86}\] \[\geq\lim_{K\to\infty}\frac{1}{K}\mathbb{E}_{\pi_{\text{MW}}}\left[ \sum_{n\in\mathcal{N}}\sum_{k=1}^{K}(p_{l,n}^{*}+p_{t,n}^{*})S_{n}(f_{n}(h_{n}( k)))\right]\] \[\overset{(a)}{\geq}\sum_{n\in\mathcal{N}}(p_{l,n}^{*}+p_{t,n}^{* })S_{n}\left(\lim_{K\to\infty}\frac{1}{K}\sum_{k=1}^{K}\mathbb{E}_{\pi_{\text{ MW}}}\left[h_{n}(k)\right]\right)\] \[=\sum_{n\in\mathcal{N}}(p_{t,n}^{*}+p_{t,n}^{*})S_{n}\left(J_{n}^ {\pi_{\text{MW}}}\right),\] where inequality \((a)\) is due to Jenson's inequality. 
Denoting \(\tilde{f}_{n}^{-1}(J_{n}^{\pi_{\text{MW}}})\) by \(\overline{h}_{n}\), we have \[S_{n}\left(J_{n}^{\pi_{\text{MW}}}\right)=W_{n}(\overline{h}_{n})=\frac{\alpha_{ n}p}{p+1}\overline{h}_{n}^{p+1}. \tag{87}\] Then, (86) becomes \[\frac{p}{p+1}\sum_{n\in\mathcal{N}}\alpha_{n}(p_{l,n}^{*}+p_{t,n}^{*})\overline {h}_{n}^{p+1}\leq B+pJ^{\pi_{\text{MW}}}. \tag{88}\] Multiplying both sides by \((J^{*})^{\frac{1}{p+1}}\), the lower bound from (77), yields \[(J^{*})^{\frac{1}{p+1}}\left(\frac{p}{p+1}\sum_{n\in\mathcal{N}} \alpha_{n}(p_{l,n}^{*}+p_{t,n}^{*})\overline{h}_{n}^{p+1}\right)^{\frac{p}{p+1}} \tag{89}\] \[\leq(J^{*})^{\frac{1}{p+1}}\left(B+pJ^{\pi_{\text{MW}}}\right)^{ \frac{p}{p+1}}.\] Applying Holder's inequality to the left hand side of the above gives \[p^{\frac{p}{p+1}}(p+1)^{-1}\sum_{n\in\mathcal{N}}\alpha_{n}\overline{h}_{n}^{p} \leq(J^{*})^{\frac{1}{p+1}}\left(B+pJ^{\pi_{\text{MW}}}\right)^{\frac{p}{p+1}}. \tag{90}\] Because \(\overline{h}_{n}=\tilde{f}_{n}^{-1}(J_{n}^{\pi_{\text{MW}}})\), rearranging (90) yields \[\left(\frac{J^{\pi_{\text{MW}}}}{p+1}\right)^{p+1}\leq J^{*}\left(\frac{B}{p}+J^{ \pi_{\text{MW}}}\right)^{p}. \tag{91}\]
2303.09678
Neural Lyapunov Control for Nonlinear Systems with Unstructured Uncertainties
Stabilizing controller design and region of attraction (RoA) estimation are essential in nonlinear control. Moreover, it is challenging to implement a control Lyapunov function (CLF) in practice when only partial knowledge of the system is available. We propose a learning framework that can synthesize state-feedback controllers and a CLF for control-affine nonlinear systems with unstructured uncertainties. Based on a regularity condition on these uncertainties, we model them as bounded disturbances and prove that a CLF for the nominal system (estimate of the true system) is an input-to-state stable control Lyapunov function (ISS-CLF) for the true system when the CLF's gradient is bounded. We integrate the robust Lyapunov analysis with the learning of both the control law and CLF. We demonstrate the effectiveness of our learning framework on several examples, such as an inverted pendulum system, a strict-feedback system, and a cart-pole system.
Shiqing Wei, Prashanth Krishnamurthy, Farshad Khorrami
2023-03-16T22:46:33Z
http://arxiv.org/abs/2303.09678v1
# Neural Lyapunov Control for Nonlinear Systems ###### Abstract Stabilizing controller design and region of attraction (RoA) estimation are essential in nonlinear control. Moreover, it is challenging to implement a control Lyapunov function (CLF) in practice when only partial knowledge of the system is available. We propose a learning framework that can synthesize state-feedback controllers and a CLF for control-affine nonlinear systems with unstructured uncertainties. Based on a regularity condition on these uncertainties, we model them as bounded disturbances and prove that a CLF for the nominal system (estimate of the true system) is an input-to-state stable control Lyapunov function (ISS-CLF) for the true system when the CLF's gradient is bounded. We integrate the robust Lyapunov analysis with the learning of both the control law and CLF. We demonstrate the effectiveness of our learning framework on several examples, such as an inverted pendulum system, a strict-feedback system, and a cart-pole system. ## I Introduction While the knowledge of a control Lyapunov function (CLF) for a system can enable the implementation of universal controllers [1], designing a CLF for a given real-world nonlinear system can be highly challenging, especially when the system's dynamics are uncertain. Furthermore, when faced with a real-world uncertain system, proving that a controller provides global stabilization might be infeasible or even ill-defined, and one instead needs to estimate the region of attraction (RoA) of the closed-loop system using online data. Motivated by the several advances in learning-based methods for various control design tasks (_e.g._, [2, 3, 4, 5, 6]), we consider the problems of designing a controller for uncertain systems and estimating the achieved RoA from a learning-based or data-driven approach in this paper. Specifically, we address the problem of synthesizing state-feedback controllers and estimating the RoA for control-affine nonlinear systems with unstructured uncertainties. We model these uncertainties as a bounded disturbance but with no assumptions about the source of these uncertainties. They may result from incorrect model parameters or dynamics that are not reflected in the model. Our work takes advantage of the rich literature on Lyapunov theory [7, 8, 9, 10], and a data-driven approach is adopted. Several approaches have been explored in the existing literature to enhance the robustness of Lyapunov analysis and control designs to uncertainties in the underlying dynamics of real-world systems, such as an unknown constant parameter [11], an unknown Gaussian process [12], or an unknown linearly parameterized control-affine system [13]. In particular, the authors of [14] consider the modeling error as a disturbance and propose the projection-to-state stability approach to characterize the tracking error. However, one common point among the above methods is that they either derive controllers [11, 13, 14] or refine the RoA [12] based on a given CLF. In practice, such a CLF may not be readily available for complex nonlinear systems and may provide a conservative estimation of the actual RoA. Another area closely related to this paper is the automated formulation of Lyapunov functions. Lyapunov functions for polynomial systems can be found by solving linear matrix inequalities (LMIs) [15]. Approximation of Lyapunov functions by sum-of-squares (SOS) polynomials can be found through the solution of a semidefinite programming (SDP) problem [16]. 
Computational methods for Lyapunov functions have been reviewed in [17]. More recently, neural networks have been used to approximate a Lyapunov function with SMT (Satisfiability Modulo Theories) solvers being a verification tool [18, 19]. However, these learning-based approaches (_e.g._, [5, 18]) have been developed assuming exact knowledge of the system. In [2], knowledge of closed-loop dynamics is needed when computing the Lipschitz constant. The method proposed in [5], Neural Lyapunov Redesign (NLR), is an offline method that finds a neural Lyapunov function and a stabilizing controller assuming known system dynamics. _Our contributions:_ We consider control-affine nonlinear systems with unstructured uncertainties and propose a learning framework that simultaneously learns the following: a state-feedback controller that seeks to enlarge the RoA, a CLF that can be used to estimate the RoA, and an improved model of the uncertain system dynamics (starting with an initial nominal approximate system model). Inspired by [14], we model the unstructured uncertainties as a bounded disturbance. We prove that when the gradient of the CLF is bounded, and the disturbance is bounded by a particular quantity, a CLF for the nominal system is an input-to-state stable control Lyapunov function (ISS-CLF) for the true system and thus can correctly estimate the RoA for the true system. We apply a machine learning-based data-driven approach for learning the controller, CLF, and system dynamics and demonstrate the effectiveness of our learning framework on three different examples. ## II Preliminaries Let \(\mathcal{X}\subset\mathbb{R}^{n}\) and \(\mathcal{U}\subset\mathbb{R}^{m}\) be the state and control input spaces (in general, subsets of \(\mathbb{R}^{n}\) and \(\mathbb{R}^{m}\) to model physical constraints of real-world systems). Consider the following control-affine system: \[\dot{x}=f(x)+g(x)u \tag{1}\] with drift dynamics \(f:\mathcal{X}\to\mathbb{R}^{n}\) and actuation matrix \(g:\mathcal{X}\to\mathbb{R}^{n\times m}\). To ensure the existence and uniqueness of the solution, we assume that \(f\) and \(g\) are Lipschitz continuous on \(\mathcal{X}\). We further assume \(0\in\mathcal{X}\) and \(f(0)+g(0)u_{0}=0\) for a certain \(u_{0}\in\mathcal{U}\) and also assume controllability of the system. Introducing a disturbance in (1), we consider the perturbed system: \[\dot{x}=f(x)+g(x)u+d \tag{2}\] where \(d\in\mathcal{D}\) is the disturbance assumed to be essentially bounded in time (i.e., bounded everywhere except possibly on a set of measure zero) and \(\mathcal{D}\subset\mathbb{R}^{n}\) is the disturbance space. A natural framework for modeling the effects of perturbations is given by the widely used notions of input-to-state stability (ISS) [20] and input-to-state stable control Lyapunov functions (ISS-CLF) [9]. **Definition 1** (_CLF and ISS-CLF_).: A class \(C^{1}\) function \(V:\mathcal{X}\to\mathbb{R}_{+}\) is a CLF for \((\ref{eq:C})\) on \(\mathcal{X}\) if there exist \(\alpha_{1},\alpha_{2},\alpha_{3}\in\mathcal{K}_{\infty}\) such that for all \(x\in\mathcal{X}\): \[\alpha_{1}(\|x\|)\leq V(x) \leq\alpha_{2}(\|x\|), \tag{3}\] \[\inf_{u\in\mathcal{U}}\dot{V}(x,u) \leq-\alpha_{3}(\|x\|). 
\tag{4}\] \(V\) is an ISS-CLF for (2) on \(\mathcal{X}\) if it satisfies (3) and additionally, there exist \(\alpha_{4},\rho\in\mathcal{K}_{\infty}\) such that \[\|x\|\geq\rho\left(\operatorname*{ess\,sup}_{\tau\geq t_{0}}\|d(\tau)\|\right) \Rightarrow\inf_{u\in\mathcal{U}}\dot{V}(x,u,d)\leq-\alpha_{4}(\|x\|) \tag{5}\] for all \(x\in\mathcal{X}\) and \(d\in\mathcal{D}\). The existence of a CLF implies the existence of a state-feedback controller \(k:\mathcal{X}\to\mathcal{U}\)[8] such that \(\dot{V}\leq-\alpha_{3}(\|x\|)\) in the closed-loop system (i.e., when control inputs are generated using \(k\)). We refer to controllers satisfying this inequality as _admissible_. We note that for the definition of a CLF, \(\alpha_{1}\) and \(\alpha_{2}\) need only to be class \(\mathcal{K}\) functions, and condition (4) can be reduced to \(\inf_{u\in\mathcal{U}}\dot{V}(x,u)<0\) to guarantee local asymptotic stability of the origin [8]. ## III Uncertain Dynamics Let \(\hat{f}:\mathcal{X}\to\mathbb{R}^{n}\) and \(\hat{g}:\mathcal{X}\to\mathbb{R}^{n\times m}\) be Lipschitz continuous functions denoting the estimates of \(f\) and \(g\) in (1). Then, system (1), called _true system_, can be written as \[\dot{x}=\hat{f}(x)+\hat{g}(x)u+\underbrace{(f(x)-\hat{f}(x))+(g(x)-\hat{g}(x) )u}_{d} \tag{6}\] where the estimation errors are written as the disturbance signal \(d\). If \(d\) is essentially bounded in time, system (6) can be seen as the perturbed system of the _nominal system_ \[\dot{x}=\hat{f}(x)+\hat{g}(x)u. \tag{7}\] Finding a CLF for (1) valid on the entire state space \(\mathcal{X}\) is difficult for nonlinear systems, and the conditions (3) and (4) are usually satisfied only on a compact subset \(\mathcal{C}\) of \(\mathcal{X}\). Then, given a continuous state-feedback controller \(k:\mathcal{X}\to\mathcal{U}\), the estimation error \(d\) is indeed bounded on \(\mathcal{C}\) as a result of the continuity of \(f,\hat{f},g,\hat{g}\), and \(k\) on \(\mathcal{X}\). Given a CLF for the nominal system (7) on \(\mathcal{C}\), by (6), the time derivative of \(V\) can be written as \[\dot{V}(x,k(x),d)=\underbrace{\nabla V(x)^{\top}(\hat{f}(x)+\hat{g}(x)k(x))}_ {\hat{V}(x,k(x))}+\underbrace{\nabla V(x)^{\top}d}_{\delta} \tag{8}\] where \(\nabla V\) is the gradient of \(V\) and \(\delta=\nabla V(x)^{\top}d\) can be seen as a disturbance in the dynamics of \(V\). Since \(V\) is \(C^{1}\) on \(\mathcal{X}\), \(\|\nabla V\|\) is bounded on \(\mathcal{C}\) (with bound \(L_{V}>0\)), and the disturbance term \(\delta\) is also bounded. **Property 1**.: _Let \(V\) be a CLF with \(\alpha_{1},\alpha_{2},\alpha_{3}\in\mathcal{K}_{\infty}\) for system (7) associated with an admissible continuous controller \(k:\mathcal{X}\to\mathcal{U}\) over a compact set \(\mathcal{C}\subset\mathcal{X}\). Let \(\Omega\subset\mathcal{C}\) be a sublevel set of \(V\), and \(\alpha_{p}\) and \(\alpha_{q}\) be class \(\mathcal{K}_{\infty}\) functions such that \(\alpha_{p}+\alpha_{q}=\alpha_{3}\). If \(\|x\|\geq\alpha_{q}^{-1}\left(L_{V}\max_{x\in\Omega}\|d\|\right)\) for all \(x\in\partial\Omega\) (the boundary of set \(\Omega\)), then \(V\) is an ISS-CLF for system (6) with controller \(k\) over \(\Omega\)._ Proof.: Let \(c=V(x)\) for any \(x\in\partial\Omega\). Then, for all \(x\in\partial\Omega\), \[\dot{V}(x,k(x),d)\leq-\alpha_{p}(\|x\|)-\alpha_{q}(\|x\|)+\delta\leq-\alpha_{p }(\|x\|)<0,\] and by Nagumo's Theorem, \(V(x(t))\in[0,c]\) for \(t\geq t_{0}\) if \(V(x(t_{0}))\in[0,c]\). 
Therefore, the set \(\Omega\) is forward invariant, which means that for any trajectory starting within \(\Omega\), the disturbance along the trajectory will be bounded by \(\max_{x\in\Omega}\|d\|\). Note that the inverse of a class \(\mathcal{K}_{\infty}\) function is defined on \(\mathbb{R}_{+}\) and belongs to class \(\mathcal{K}_{\infty}\), \(\rho:r\mapsto\alpha_{q}^{-1}(L_{V}r)\) is therefore a class \(\mathcal{K}_{\infty}\) function. \(V\) is an ISS-CLF on \(\Omega\) since if \(\|x\|\geq\rho\left(\sup_{\tau\geq t_{0}}\|d(\tau)\|\right)\), by (4) and (8), \[\dot{V}(x,k(x),d)\leq-\alpha_{p}(\|x\|)-\alpha_{q}(\|x\|)+\delta\leq-\alpha_{p }(\|x\|)\] where \(\alpha_{p}\in\mathcal{K}_{\infty}\), and condition (5) is satisfied. The next property is a direct result of Nagumo's Theorem. **Property 2**.: _Let \(V\) be an ISS-CLF for (6) under a state-feedback controller \(k:\mathcal{X}\to\mathcal{U}\) on a compact set \(\mathcal{C}\) and \(\Omega\subseteq\mathcal{C}\) be a sublevel set of \(V\). \(\Omega\) is forward invariant if \(\|x\|\geq\rho\left(\|d\|\right)\) for all \(x\in\partial\Omega\)._ Finally, based on [9, Theorem 1], we have: **Property 3**.: _If \(V\) is an ISS-CLF for system (6) under an admissible state-feedback controller \(k:\mathcal{X}\to\mathcal{U}\) on \(\mathcal{C}\), then (6) is ISS with the controller \(k\) on \(\mathcal{C}\)._ ## IV Learning the Controller and ISS-CLF ### _Neural Network Structures_ As part of our learning framework, we update the nominal dynamics during training. For this purpose, we decompose the estimates \(\hat{f}\) and \(\hat{g}\) into two parts: \[\hat{f}=f_{0}+f_{\theta_{1}},\quad\hat{g}=g_{0}+g_{\theta_{2}} \tag{9}\] where \(f_{0}\) and \(g_{0}\) depend on the initial knowledge of the system, and the residual parts \(f_{\theta_{1}}\) and \(g_{\theta_{2}}\) are neural networks (parameterized by \(\theta_{1}\) and \(\theta_{2}\), respectively). If the system is completely unknown, we may start with \(f_{0}=g_{0}=0\). Let \(V_{\theta_{3}}:\mathcal{X}\rightarrow\mathbb{R}_{+}\), called _Lyapunov candidate_, be a neural network parameterized by \(\theta_{3}\). To ensure that \(V_{\theta_{3}}\) satisfies (3) in Definition 1, its structure is defined as: \[V_{\theta_{3}}(x)=x^{\top}(MM^{\top}+\gamma I)x+\phi(x)^{\top}\phi(x) \tag{10}\] where \(M\in\mathbb{R}^{n\times n}\) is a trainable lower triangular matrix, \(I\in\mathbb{R}^{n\times n}\) is the identity matrix, \(\gamma>0\) is a constant and \(\phi:\mathcal{X}\rightarrow\mathbb{R}^{d}\) is a neural network. \(\theta_{3}\) includes both \(M\) and weights in \(\phi\). Following [2], \(\phi\) is a composition of linear layers (with no bias term) and activation functions (Lipschitz continuous functions with a trivial null space), and its layer dimensions are increasing. Let \(d_{\ell}\) and \(W_{\ell}\) be the output dimension and layer weights of layer \(\ell\), and \[W_{\ell}=\begin{bmatrix}G_{\ell 1}^{\top}G_{\ell 1}+\varepsilon I_{d_{\ell-1}} \\ G_{\ell 2}\end{bmatrix} \tag{11}\] where \(G_{\ell 1}\in\mathbb{R}^{q_{\ell}\times d_{\ell-1}}\) for some integer \(q_{\ell}\geq 1\), \(G_{\ell 2}\in\mathbb{R}^{(d_{\ell}-d_{\ell-1})\times d_{\ell-1}}\), \(I_{d_{\ell-1}}\in\mathbb{R}^{d_{\ell-1}\times d_{\ell-1}}\) is the identity matrix, and \(\varepsilon>0\) is a constant. Note that (11) is enforced during training to ensure that \(\phi\) has a trivial null space, implying that \(\phi(x)^{\top}\phi(x)\) is positive definite. Our Lyapunov candidate differs from that of [2] in two aspects. 
Firstly, the \(\phi(x)^{\top}\phi(x)\) term is not necessarily lower bounded by a class \(\mathcal{K}_{\infty}\) function; so we have added the \(\gamma x^{\top}x\) term to achieve this property. Secondly, we added the \(x^{\top}MM^{\top}x\) term to better capture quadratic behaviors and expect the \(\phi(x)^{\top}\phi(x)\) term to capture other nonlinear behaviors. The upper bounding function \(\alpha_{2}\in\mathcal{K}_{\infty}\) can be constructed based on \(V_{\theta_{3}}\). The state-feedback controller \(u_{\theta_{4}}:\mathcal{X}\rightarrow\mathcal{U}\) is also a neural network (parameterized by \(\theta_{4}\)). The activation functions of \(u_{\theta_{4}}\) are chosen such that it is continuous. ### _Learning Algorithm_ Let \(\mathcal{X}_{\tau}\subset\mathcal{X}\) be a discretization of \(\mathcal{X}\) with \(\|x-[x]_{\tau}\|_{2}\leq\tau/2\), where \([x]_{\tau}\) denotes the closest point in \(\mathcal{X}_{\tau}\) to \(x\in\mathcal{X}\). Based on the structure and our choice of activation functions of the neural networks, the estimated time derivative of \(V_{\theta_{3}}\) \[\hat{V}_{\theta_{3}}(x,u_{\theta_{4}}(x))=\nabla V_{\theta_{3}}(x)^{\top}(\hat {f}(x)+\hat{g}(x)u_{\theta_{4}}(x)) \tag{12}\] is locally Lipschitz continuous on \(\mathcal{X}\). For \(V_{\theta_{3}}\) to be a CLF for the nominal system (7), condition (4) in Definition 1 has to be satisfied on a certain set \(\mathcal{C}\) (to be determined), i.e., \[\hat{\hat{V}}_{\theta_{3}}(x,u_{\theta_{4}}(x))\leq-\kappa\|x\|^{2}\text{ on }\mathcal{C} \tag{13}\] where we have chosen \(\alpha_{3}(\|x\|)=\kappa\|x\|^{2}\) and \(\kappa\) is a positive constant. The _Lyapunov loss_ is defined as \[\mathcal{L}_{\theta_{3},\theta_{4}} =\frac{\lambda_{\text{RoA}}}{N_{i}}\sum_{x\in\mathcal{S}_{i}} \text{ReLU}[\hat{\hat{V}}_{\theta_{3}}(x,u_{\theta_{4}}(x))+\kappa\|x\|^{2}+\epsilon]\] \[+\frac{\lambda_{\text{Lip}}}{N_{i}}\sum_{x\in\mathcal{S}_{i}}\| \nabla V_{\theta_{3}}(x)\| \tag{14}\] where \(\lambda_{\text{RoA}},\lambda_{\text{Lip}}\), and \(\epsilon\) are positive constants, \(\mathcal{S}_{i}\) is the training set at iteration \(i\), \(N_{i}\) is the number of training samples in \(\mathcal{S}_{i}\), and \(\text{ReLU}(x)=\max(0,x)\) stands for the Rectified Linear Unit. The first term of (14) accounts for condition (13), and \(\epsilon\) is a positive constant offset. The second term aims to limit the norm of \(\nabla V_{\theta_{3}}\). Compared with [2], our loss function has a more general structure with a regularization term that is designed to limit \(L_{V_{\theta_{3}}}\) (upper bound of \(\|\nabla V_{\theta_{3}}\|\)), as required by part of our methodology. Our learning framework learns the nominal dynamics, the Lyapunov candidate, and the controller at the same time. Instead of directly working on full-state dynamics, we update the nominal dynamics using the dynamics of the CLF during learning. During each iteration, our approach first updates \(f_{\theta_{1}}\) and \(g_{\theta_{2}}\) by minimizing the mean squared error (MSE) between \(\hat{V}_{\theta_{3}}\) (by (12)) and \(\hat{\hat{V}}_{\theta_{3}}\) (approximated value of \(\hat{V}_{\theta_{3}}\) obtained by numerical differentiation), and then updates \(V_{\theta_{3}}\) and \(u_{\theta_{4}}\) by minimizing the Lyapunov loss (14). Let \(\mathcal{V}(c)=\{\,x\in\mathbb{R}^{n}\mid V_{\theta_{3}}(x)\leq c\,\}\) denote the \(c\)-sublevel set of \(V_{\theta_{3}}\). 
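To make the training objective concrete, the following is a minimal PyTorch-style sketch of the Lyapunov candidate in (10) and the Lyapunov loss in (14) for a batch of state samples. The layer sizes, the omission of the weight constraint (11) and of any re-projection of \(M\) to lower-triangular form after updates, and the placeholder `xdot_hat` (the nominal closed-loop dynamics \(\hat{f}(x)+\hat{g}(x)u_{\theta_{4}}(x)\) evaluated at the samples) are illustrative simplifications, not the exact implementation used in this work.

```python
# Minimal PyTorch sketch of the Lyapunov candidate (10) and the Lyapunov loss (14).
import torch
import torch.nn as nn

class LyapunovCandidate(nn.Module):
    def __init__(self, n, hidden=(16, 32), gamma=1e-6):
        super().__init__()
        # Trainable M, initialized lower-triangular; M M^T is PSD regardless,
        # and the gamma * I term keeps the quadratic part positive definite.
        self.M = nn.Parameter(torch.tril(torch.randn(n, n)))
        self.gamma = gamma
        dims = (n,) + tuple(hidden)
        # phi: bias-free layers with increasing widths and tanh activations;
        # the weight constraint (11) is omitted in this sketch.
        self.layers = nn.ModuleList(
            [nn.Linear(dims[i], dims[i + 1], bias=False) for i in range(len(hidden))]
        )

    def phi(self, x):
        for layer in self.layers:
            x = torch.tanh(layer(x))
        return x

    def forward(self, x):                        # x: (batch, n)
        A = self.M @ self.M.T + self.gamma * torch.eye(x.shape[-1], device=x.device)
        quad = ((x @ A) * x).sum(dim=-1)         # x^T (M M^T + gamma I) x
        nonlin = (self.phi(x) ** 2).sum(dim=-1)  # phi(x)^T phi(x)
        return quad + nonlin

def lyapunov_loss(V, x, xdot_hat, kappa=0.1, eps=0.01, lam_roa=1000.0, lam_lip=0.1):
    """Empirical version of (14): hinge penalty on the decrease condition (13)
    plus a gradient-norm regularizer; xdot_hat = f_hat(x) + g_hat(x) u(x)."""
    x = x.clone().requires_grad_(True)
    Vx = V(x)
    gradV = torch.autograd.grad(Vx.sum(), x, create_graph=True)[0]
    Vdot = (gradV * xdot_hat).sum(dim=-1)
    hinge = torch.relu(Vdot + kappa * (x ** 2).sum(dim=-1) + eps)
    return lam_roa * hinge.mean() + lam_lip * gradV.norm(dim=-1).mean()
```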
At iteration \(i\), we collect from the system the trajectories initialized at \(\mathcal{X}_{\tau}\), determine the set of stable initial states1, and denote it by \(\mathcal{X}_{\text{stable}}\subset\mathcal{X}_{\tau}\). The estimate of the RoA \(R_{i}\) is given by the largest sublevel set of \(V_{\theta_{3}}\) contained in \(\mathcal{X}\) where (13) is satisfied, i.e., \(R_{i}=\mathcal{V}(c_{i})\), where \(c_{i}=\max_{c>0}c\) subject to \[\hat{\hat{V}}_{\theta_{3}}(x,u_{\theta_{4}}(x))\leq-\kappa\|x\|^{2}\text{ for all }x\in\mathcal{X}_{\text{stable}}\cap\mathcal{V}(c) \tag{15}\] and \(\mathcal{V}(c)\subset\mathcal{X}\).
Footnote 1: We assume that the algorithm starts with an initial locally stable controller.
To enlarge the estimated RoA, we identify an exploration region by defining a level multiplier \(\eta_{i}>1\) to include more states in the training set. The training set at iteration \(i+1\) is \(\mathcal{S}_{i+1}=\mathcal{V}(\eta_{i}c_{i})\). The details are provided in Algorithm 1.
### _Verification Condition_
The Lyapunov candidate \(V_{\theta_{3}}\) minimizing (14) is called _Lyapunov-like_. Define the following first-order logic formula \[\Phi_{\zeta}(x) =(\|x\|\geq\zeta,V_{\theta_{3}}(x)\leq c)\] \[\wedge\left(\hat{\hat{V}}_{\theta_{3}}(x,u_{\theta_{4}}(x))+ \kappa\|x\|^{2}\geq 0\right) \tag{16}\] to check the violation of the Lyapunov condition (13), where \(\zeta>0\) is a small constant that rules out a small region around the origin to avoid numerical instabilities, and \(c\) is found from Algorithm 1. We use dReal [21], an SMT solver for nonlinear constraints, to solve (16). dReal runs a delta-complete algorithm whose numerical error bound is specified by the user. If dReal cannot find any solution satisfying (16), then condition (13) is certified on \(R=\mathcal{V}(c)\). In this case, \(V_{\theta_{3}}\) is a CLF on \(R\) for the nominal system (7) under controller \(u_{\theta_{4}}\) by Definition 1 and is an ISS-CLF for (6) by Property 1. Further, \(R\), as a sublevel set of \(V_{\theta_{3}}\), is forward invariant by Property 2, and system (6) is ISS with \(u_{\theta_{4}}\) by Property 3. However, for computational tractability (specifically, due to the limit on the number of parameters in the dReal SMT solver that we use), we can only run the SMT solver on a simplified setting, an example of which is provided in Section V-A. Note that even if condition (13) is violated on \(R\), it is still possible for the states to remain bounded under \(u_{\theta_{4}}\). Let \(\delta^{\prime}=\max_{x\in R}\left[\hat{\hat{V}}_{\theta_{3}}(x,u_{\theta_{4}}(x))+\kappa\|x\|^{2}\right]\) denote the upper bound of the violation on \(R\). With \(d\) defined in (6) and \(\delta=\nabla V_{\theta_{3}}(x)^{\top}d\), we have \(\hat{V}_{\theta_{3}}(x,u_{\theta_{4}}(x))\leq-\kappa\|x\|^{2}+\delta+\delta^{\prime}\) on \(R\). Let \(\alpha_{p}\) and \(\alpha_{q}\) be class \(\mathcal{K}_{\infty}\) functions such that \(\alpha_{p}+\alpha_{q}=\kappa\|x\|^{2}\). If \(\|x\|\geq\alpha_{q}^{-1}\left(\|\delta+\delta^{\prime}\|\right)\) for all \(x\in\partial R\), we have \(\hat{V}_{\theta_{3}}(x,u_{\theta_{4}}(x))\leq-\alpha_{p}(\|x\|)\) for all \(x\in\partial R\). Let \(y(x)=V_{\theta_{3}}(x)\) and \(c\) be the corresponding level value of \(R\). Then, by Nagumo's Theorem, \(y(x(t))\in[0,c]\) for \(t\geq t_{0}\) if \(y(x(t_{0}))\in[0,c]\). Hence, the states will stay bounded.
## V Experiments
In this section, we study the efficacy of our proposed learning framework.
The Lyapunov candidate \(V_{\theta_{3}}\) is randomly initialized and then pretrained to \[V_{0}(x)=0.1x^{\top}x \tag{17}\] on \(\mathcal{X}\). The controller \(u_{\theta_{4}}\) has two parts: \[u_{\theta_{4}}(x)=\text{LS}_{a,b,m_{a},m_{b}}(u_{0}(x)+\psi(x)) \tag{18}\] where \(u_{0}\) is an initial locally stabilizing controller (fixed during training), \(\psi\) is the trainable part, and \(\text{LS}_{a,b,m_{a},m_{b}}\) is the loose saturation filter (see Fig. 1). The thresholds \(a\) and \(b\) are fixed, but the slopes \(m_{a}\) and \(m_{b}\) (initialized to zero) are trainable and included in the controller parameters \(\theta_{4}\). The motivation for using a loose saturation filter is to more clearly demonstrate that our algorithm is indeed learning to enlarge the RoA, since for some systems a locally stable controller could automatically be a globally stable controller in the absence of such a filter. Also, a similar setup is used in reference [5], and we use the loose saturation filter in our work for a fair comparison. In (18), \(u_{0}\) is the linear controller given by the Linear-Quadratic Regulator (LQR) solution using the nominal dynamics (linearized at \(x=0\)). Furthermore, it is to be noted that our algorithm can still work without any initial stabilizing controller. In this case, we need to ensure that the training set always has a sufficient amount of data during training, and it could take longer to stabilize the learning in the beginning. We test our approach on three examples: an inverted pendulum, a third-order strict-feedback system, and a cart-pole system. The hyperparameters and the structures of the neural networks are listed in Tables I and II. The experimental results are reported in Table III along with a baseline given by the RoA estimated from the LQR solution. The LQR Lyapunov function \(V_{\text{LQR}}=x^{\top}Px\) is obtained using the nominal dynamics before training. Then, the estimated RoA given by \(V_{\text{LQR}}\) is \(\mathcal{V}(c^{\prime})\), where \(c^{\prime}=\max_{c>0}c\) subject to \(\hat{V}_{\text{LQR}}(x,u_{\theta_{4}}(x))\leq-\kappa\|x\|^{2}\) for all \(x\in\mathcal{X}_{\text{stable}}\cap\mathcal{V}(c)\) and \(\mathcal{V}(c)\subset\mathcal{X}\). Our method enlarges the true RoA by 740%, 219%, and 173% on the three examples and increases the estimated RoA by at least 200% compared with the baseline. For examples 1 to 3, the average training times for each iteration (as in Algorithm 1) are 147, 90, and 109 s, respectively, on an i7-5930K CPU. The sample sizes are 10000, 15625, and 10000, respectively.

\begin{table}
\begin{tabular}{l c c c} \hline \hline & **Inverted Pendulum** & **Strict Feedback Form** & **Cart-pole** \\ \hline \(\lambda_{\text{RoA}}\) & 1000 & 500 & 500 \\ \(\lambda_{\text{Lip}}\) & 0.1 & 0.01 & 0.01 \\ \(\eta_{0}\) & 5 & 2 & 9 \\ \(k_{\eta}\) & 15 & - & - \\ \(a\) & -2 & -1 & -5 \\ \(b\) & 2 & 1 & 5 \\ \hline \hline \end{tabular}
\end{table}
TABLE I: Hyperparameters used for training (a dash means that the level multiplier \(\eta_{i}\) is kept fixed during training). For all examples, \(\gamma=10^{-6}\), \(\kappa=0.1\), \(\epsilon=0.01\).
Fig. 1: Loose saturation function.

### _Stationary Inverted Pendulum_
The stationary inverted pendulum is a second-order nonlinear system with dynamics: \[ml^{2}\ddot{\theta}-mgl\sin\theta=u\] where \(\theta=0\) is the upward position of the pendulum, and \(\theta\) is positive in the counter-clockwise direction.
The states are \((\theta,\omega)\) with \(\omega=\dot{\theta}\) and the state space is \[\mathcal{X}=[-\pi,\pi]\times[-\pi,\pi].\] The true parameters of the system are \(m=1\ \mathrm{kg}\) and \(l=0.5\ \mathrm{m}\), while the nominal parameters are \(m^{\prime}=0.8\ \mathrm{kg}\) and \(l^{\prime}=0.4\ \mathrm{m}\). Thus, our initial knowledge of the system is not accurate. When determining the output dimensions of \(f_{\theta_{1}}\) and \(g_{\theta_{2}}\), we have taken some known relationships into account, _e.g._, \(\dot{\theta}=\omega\) for the inverted pendulum. Therefore, for this example, the output dimensions of \(f_{\theta_{1}}\) and \(g_{\theta_{2}}\) should both be one. The same reasoning has been applied to the other example. Figure 2 demonstrates an overestimation of the true RoA generated by the offline method called Neural Lyapunov Redesign (NLR, [5]), which does not consider uncertainties in the system dynamics. Figure 3 shows the percentages of RoA, forward invariant RoA, and estimated RoA over the entire state space. The enlargement of the true RoA and the estimated RoA based on our method are reflected in Fig. 4. Next, we consider using an SMT solver to verify the CLF. \begin{table} \begin{tabular}{c c c c} \hline \hline & **Inverted** & \multicolumn{2}{c}{**Strict**} & \multirow{2}{*}{**Cart-pole**} \\ & **Pendulum** & & **Feedback Form** & \\ \hline \(f_{\theta_{1}}\) & [16,16,16,1] & [16,16,16,3] & [16,16,16,2] \\ & [tanh,tanh,tanh,id ] & [tanh,tanh,tanh,id] & [tanh,tanh,tanh,id] \\ \(g_{\theta_{2}}\) & scalar & scalar & [16,16,16,2] \\ \(\phi\) & [64,64,64] & [64,64,64] & [64,64,64] \\ (in \(\psi_{\theta_{3}}\)) & [tanh,tanh,tanh] & [tanh,tanh,tanh] & [tanh,tanh,tanh] \\ \(\psi\) & [16,16,16,1] & [16,16,16,1] & [16,16,16,1] \\ (in \(u_{\theta_{4}}\)) & [tanh,tanh,tanh,id] & [tanh,tanh,tanh,id] & [tanh,tanh,tanh,id] \\ \hline \hline \end{tabular} \end{table} TABLE II: Network structures and activation functions (id stands for the identity mapping). Fig. 4: Evolution of the estimated RoA, sampling area \(\mathcal{S}_{l}\), forward invariant RoA, and true RoA (see legend in the first figure). The number of iterations is 0, 25, 50, 75, 100, and 200 from top left to bottom right. Fig. 3: Left: true RoA and estimated RoA ratios. Right: phase plot and 20 randomly sampled trajectories starting from the boundary of the estimated RoA. Fig. 2: Estimated RoA by NLR [5] without accounting for the dynamic uncertainty during training. Left: true RoA (green), false RoA based on the nominal model (gray), and estimated RoA (blue contour). Right: phase plot and four divergent trajectories starting from within the estimated RoA. \begin{table} \begin{tabular}{c c c c} \hline \hline & **Inverted** & \multicolumn{2}{c}{**Strict**} & \multirow{2}{*}{**Cart-pole**} \\ & **Pendulum** & & **Feedback Form** & \\ \hline True RoA\({}^{*}\) & \multirow{2}{*}{11.9/100} & \multirow{2}{*}{31.1/99.0} & \multirow{2}{*}{27.8/76.0} \\ (before/after training) & & & & \\ Forward invariant RoA\({}^{**}\) & & & & \\ (before/after training) & & & & \\ Estimated RoA (ours) & & & & \\ Estimated RoA (LQR) & & & & \\ \hline \hline \end{tabular} * The percentages presented in Table III are approximate estimations based on a mesh. For example, the percentage of the true RoA is defined as the ratio of the number of stable mesh points over the total number of mesh points. * Forward invariant RoA refers to the set of initial states from which the trajectory never leaves \(\mathcal{X}\) before converging to the origin. 
\end{table} TABLE III: Percentages of RoA, forward invariant RoA, and estimated RoAs over the state space. Due to the SMT solver's limit on the number of parameters, we consider a simpler setting for (16). We reduce \(f_{\theta_{1}}\), \(\phi\), and \(\psi\) to neural networks of three, two, and three layers, respectively. The nonlinear constraint (16) is solved with \(\zeta=0.3\) and a precision of \(10^{-3}\), and no counter-example is found. This means that (13) is verified on the estimated RoA, which is around \(34.6\%\). ### _A Third-Order Strict Feedback Form_ Consider a third-order system of the strict feedback form: \[\dot{x}_{1}=e_{1}x_{2},\quad\dot{x}_{2}=e_{2}x_{3},\quad\dot{x}_{3}=e_{3}x_{1}^{ 2}+e_{4}u.\] The states are \((x_{1},x_{2},x_{3})\) and the state space is \[\mathcal{X}=\{\,(x_{1},x_{2},x_{3})\mid|x_{1}|\leq 1.5,|x_{2}|\leq 1.5,|x_{3}| \leq 2\,\}.\] Again, we assume that our initial knowledge of the system is not accurate: the true parameters are \(e_{1}=1\), \(e_{2}=1\), \(e_{3}=1\), and \(e_{4}=1\), while the nominal parameters are \(e_{1}^{\prime}=0.9\), \(e_{2}^{\prime}=0.8\), \(e_{3}^{\prime}=0.9\), and \(e_{4}^{\prime}=0.8\). Figure 5 shows the evolution of the percentages of RoA, forward invariant RoA, and estimated RoA along with a 3D visualization. We note that as our learning framework tries to stabilize more states, there can be transition periods where the size of the RoA drops, which might result from the sensitivity of the nonlinear controller to its weights. Figure 6 demonstrates ten randomly sampled trajectories starting from the boundary of the estimated RoA. ### _Cart-Pole System_ Finally, our method is tested on the cart-pole system: \[(M+m)\ddot{x}-ml\ddot{\theta}\cos\theta+ml\dot{\theta}^{2}\sin \theta+b_{c}\dot{x}=u,\] \[ml^{2}\ddot{\theta}-mgl\sin\theta=ml\ddot{x}\cos\theta.\] The states are \((\theta,\omega,x,v)\), where \(\omega=\dot{\theta}\) and \(v=\dot{x}\). The state space is defined as \[\mathcal{X}=\{\,(\theta,\omega,x,v)\mid|\theta|\leq\pi/6,|\omega|\leq 1,|x| \leq 1,|v|\leq 1.5\,\}.\] Note that \(\theta=0\) corresponds to the upward position of the pole, and \(\theta\) is positive in the counter-clockwise direction. The true parameters are \(M=1\)\(\mathrm{kg}\), \(m=0.3\)\(\mathrm{kg}\), \(l=1\)\(\mathrm{m}\), and \(b_{c}=0\)\(\mathrm{kg}\,\mathrm{s}^{-1}\), while the nominal parameters are \(M^{\prime}=0.8\)\(\mathrm{kg}\), \(m^{\prime}=0.27\)\(\mathrm{kg}\), \(l^{\prime}=0.8\)\(\mathrm{m}\), and \(b_{c}^{\prime}=0\)\(\mathrm{kg}\,\mathrm{s}^{-1}\). Figure 7 shows the evolution of the percentages of RoA, forward invariant RoA, and estimated RoA. Ten randomly sampled trajectories starting from the boundary of the estimated RoA are plotted in Fig. 8. The offline method NLR ([5]) generates an estimated forward invariant RoA ratio of 1.7% using the true parameters, which demonstrates the difficulty of this task. Finally, we test the robustness of the RoA estimation to sudden perturbations, where \(b_{c}\) is increased to \(9.1\)\(\mathrm{kg}\,\mathrm{s}^{-1}\). As seen in Fig. 9, our estimated RoA remains valid while the NLR estimated RoA no longer holds. ## VI Conclusion We proposed a learning framework that can synthesize state-feedback controllers and CLF for control-affine nonlinear systems with unstructured uncertainties. Our approach initializes the system at different initial conditions and observes the system trajectories. Exact knowledge of system dynamics is not required. 
Based on a regularity condition, we model the uncertainties as bounded and structured disturbances. Experiments show that our method can find a controller that enlarges the RoA and a CLF that estimates it. In addition, our method produces RoA estimates that account for the uncertainties, avoiding overestimation (Section V-A), and is more robust to sudden changes in the dynamics (Section V-C). Future work includes relaxing the assumptions, generalizing our method to higher-dimensional systems, and exploring further combinations of the proposed approach with neural network verification tools (such as SMT solvers).
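For readers who want to reproduce the kind of numbers reported in Table III, the sketch below illustrates the mesh-based estimate of the (forward invariant) RoA for the inverted pendulum example. It is only a rough stand-in under stated assumptions: the frictionless pendulum model \(\ddot{\theta}=(g/l)\sin\theta+u/(ml^{2})\) is not spelled out in the text, and the hand-tuned linear feedback plays the role of the learned controller \(u_{\theta_{4}}\), whose trained weights are not reproduced here, so the printed percentage will not match the paper's values.

```python
import numpy as np

# Assumed pendulum model (not spelled out in the text): no friction,
#   theta_ddot = (g / l) * sin(theta) + u / (m * l**2)
g = 9.81
m_true, l_true = 1.0, 0.5          # true parameters
m_nom, l_nom = 0.8, 0.4            # nominal parameters (would be used to design the controller)

# Placeholder linear feedback around the upright equilibrium; it stands in
# for the learned neural controller, purely for illustration.
K = np.array([25.0, 8.0])

def closed_loop(x):
    theta, omega = x
    u = -K @ x
    return np.array([omega, g / l_true * np.sin(theta) + u / (m_true * l_true**2)])

def converges(x0, dt=0.01, T=8.0, tol=1e-2):
    """True if the trajectory stays inside X = [-pi, pi]^2 and ends near the origin."""
    x = np.array(x0, dtype=float)
    for _ in range(int(T / dt)):
        if np.any(np.abs(x) > np.pi):
            return False               # left the state space: not forward invariant
        k1 = closed_loop(x)
        k2 = closed_loop(x + 0.5 * dt * k1)
        k3 = closed_loop(x + 0.5 * dt * k2)
        k4 = closed_loop(x + dt * k3)
        x = x + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
    return bool(np.linalg.norm(x) < tol)

# Percentage over a mesh of the state space, as defined in the Table III footnote.
n = 41
grid = np.linspace(-np.pi, np.pi, n)
stable = sum(converges((th, om)) for th in grid for om in grid)
print(f"forward invariant RoA: {100.0 * stable / n**2:.1f}% of mesh points")
```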
2302.06662
Proposal for Observing Yang-Lee Criticality in Rydberg Atomic Arrays
Yang-Lee edge singularities (YLES) are the edges of the partition function zeros of an interacting spin model in the space of complex control parameters. They play an important role in understanding non-Hermitian phase transitions in many-body physics, as well as characterizing the corresponding nonunitary criticality. Even though such partition function zeroes have been measured in dynamical experiments where time acts as the imaginary control field, experimentally demonstrating such YLES criticality with a physical imaginary field has remained elusive due to the difficulty of physically realizing non-Hermitian many-body models. We provide a protocol for observing the YLES by detecting kinked dynamical magnetization responses due to broken PT symmetry, thus enabling the physical probing of nonunitary phase transitions in nonequilibrium settings. In particular, scaling analyses based on our nonunitary time evolution circuit with matrix product states accurately recover the exponents uniquely associated with the corresponding nonunitary CFT. We provide an explicit proposal for observing YLES criticality in Floquet quenched Rydberg atomic arrays with laser-induced loss, which paves the way towards a universal platform for simulating non-Hermitian many-body dynamical phenomena.
Ruizhe Shen, Tianqi Chen, Mohammad Mujahid Aliyu, Fang Qin, Yin Zhong, Huanqian Loh, Ching Hua Lee
2023-02-13T19:48:40Z
http://arxiv.org/abs/2302.06662v2
# Proposal for observing Yang-Lee criticality in Rydberg atomic arrays ###### Abstract Yang-Lee edge singularities (YLES) are the edges of the partition function zeros of an interacting spin model in the space of complex control parameters. They play an important role in understanding non-Hermitian phase transitions in many-body physics, as well as characterizing the corresponding non-unitary criticality. Even though such partition function zeroes have been measured in dynamical experiments where time acts as the imaginary control field, experimentally demonstrating such YLES criticality with a physical imaginary field has remained elusive due to the difficulty of physically realizing non-Hermitian many-body models. We provide a protocol for observing the YLES by detecting kinked dynamical magnetization responses due to broken \(\mathcal{PT}\) symmetry, thus enabling the physical probing of non-unitary phase transitions in non-equilibrium settings. In particular, scaling analyses based on our non-unitary time evolution circuit with matrix product states (tMPS) accurately recover the exponents uniquely associated with the correspond-on-unitary CFT. We provide an explicit proposal for observing YLES criticality in Floquet quenched Rydberg atomic arrays with laser-induced loss, which paves the way towards an universal platform for simulating non-Hermitian many-body dynamical phenomena. _Introduction.-_ In 1952, Yang and Lee established a relationship between phase transitions and special points where the partition function vanish, also known as Yang-Lee zeros [1; 2]. For a spin model in the thermodynamic limit, non-unitary critical points known as Yang-Lee edge singularities (YLES) [3; 4; 5; 6] lie at the ends of a dense line of partition function zeros in the space of complex control parameters such as the magnetic field or inverse temperature [1; 2]. To observe YLES, one can examine a non-Hermitian quantum ferromagnetic many-body Hamiltonian involving complex magnetic fields [5; 6; 7; 8]. The YLES of such non-Hermitian ferromagnetic models lead to anomalous critical scaling behaviours associated with their governing non-unitary conformal field theories (CFTs) [6; 8; 9]. For a long time, Yang-Lee edge singularities have been deemed as purely theoretical constructs since it is experimentally challenging to realize the requisite imaginary field. Recently, it was realized that the partition function of a classical spin model can also be mathematically simulated by real-time evolution, and Yang-Lee zeros were finally observed through probing spin coherence in a series of landmark experiments involving externally coupled local spins [10; 11; 12]. However, what these experiments achieved was the measurement of partition function zeros through a dynamical process, not the physical observation of the YLES and their associated non-unitary phase transition. It is still difficult to realize such esoteric phase transitions in a finite-size quantum ferromagnetic model with physical complex fields. While various single-body non-Hermitian phenomena have already been demonstrated [13; 14; 15], experimental demonstrations in _interacting_ many-body non-Hermitian models have just begun, primarily with cold atoms [16; 17; 18; 19; 20]. 
Indeed, ultracold atomic systems have lately proven to be ideal platforms for simulating many-body physics due to their excellent tunability and high controllability [21; 22; 23; 24], with demonstrated successes in topological physics [25; 26], strongly correlated matter [27; 28; 29] and thermalization phenomena [30; 31]. Rydberg atomic arrays are particularly promising, with Rydberg interactions successfully deployed to simulate many-body Hamiltonians [32; 33; 34; 35; 36; 37; 38]. Rydberg-dressing techniques enjoy great tunability in shaping interactions, leading to new prospects in engineering complicated many-body phases [40; 41; 42; 43; 44; 45; 46; 47]. To introduce non-Hermiticity in an ultracold atomic lattice, an increasingly established approach [48; 49; 50; 19; 20; 51; 52] is laser-induced loss, i.e., the application of lasers onto trapped atoms such that they are excited to "external" states [18; 19; 53]. Encouraged by these recent rapid advances in ultracold atoms, we combine Rydberg-dressing techniques with laser-induced loss to realize a ferromagnetic chain with an imaginary effective field, such as to observe the YLES and its critical scaling in a genuinely physical complex parameter space. In this work, we first introduce a transverse-field Ising model with imaginary field and discuss signatures of its non-unitary YLES criticality associated with spontaneous \(\mathcal{PT}\) breaking [54; 55; 56]. We next devise a Floquet quench for observing the YLES phase boundary by measuring kinked dynamical responses in a non-equilibrium setting, before describing our proposed experiment involving a dissipative Rydberg-dressed optical tweezer array [6; 41; 42; 46; 57], where the imaginary field is implemented through laser-induced atom loss [18; 19; 53]. _Model for Yang-Lee edge singularities.-_ For a con crete platform for observing Yang-Lee criticality, we consider the prototypical non-Hermitian ferromagnetic transverse-field Ising chain [58; 59], which we will later show how to realize with Rydberg atoms: \[\hat{H}_{\text{TFI}}=-\sum_{j}(h_{x}\hat{\sigma}_{j}^{x}+J\hat{\sigma}_{j}^{z} \hat{\sigma}_{j+1}^{z})+\sum_{j}i\gamma\hat{\sigma}_{j}^{z}, \tag{1}\] where \(J\) sets the strength of the interaction, \(\gamma\) is the imaginary field strength and \(\hat{\sigma}_{j}^{\alpha}(\alpha=x,z)\) are Pauli matrices.The YLES for this model form a curve in the plane of real and imaginary magnetic fields \((h_{x},\gamma)\) in FIG. 1 (a) [6], which we denote as \(\gamma_{YL}\) at each value of \(h_{x}\). When approaching \(\gamma_{YL}\), the ground states of our model experience spontaneous \(\mathcal{PT}\) symmetry breaking [6], with real ground-state eigenenergies \(E_{g}\) splitting into complex eigenenergies with equal and opposite \(\text{Im}E_{g}\), which demarcate the paramagnetic and ferromagnetic ground states in our model, as shown in FIG. 1 (a) for \(\hat{H}_{\text{TFI}}\): the paramagnetic (ferromagnetic) phases are characterized by vanishing (non-vanishing) \(\text{Im}E_{g}\). Even though YLES were originally defined for classical systems in the thermodynamic limit, they equivalently exist in finite-size quantum systems due to a quantum-classical mapping [6; 60; 61; 9], as shown in FIGs. 1 (a),(b), which were computed through exact diagonalization (ED) with \(L=8\) sites. In FIGs. 
1 (c),(d), the associated ground state magnetization \(M_{z}=\left|\left\langle\sum_{j}\hat{\sigma}_{j}^{z}/L\right\rangle\right|\) and \(M_{x}=\left|\left\langle\sum_{j}\hat{\sigma}_{j}^{x}/L\right\rangle\right|\) also exhibit kinks at these YLES locations. Indeed, comparing the derivatives \(\frac{d[(\text{Im}E_{g}(\gamma))]}{d\gamma}\) against that of the x-magnetization \(|\frac{dM_{x}(\gamma)}{d\gamma}|\) (FIG. 1 (d)), we observe divergences at the same \(\gamma=\gamma_{YL}=0.1837\) where the YLES is located (for \(J=1,h_{x}=1.5\)). _Dynamical response from Yang-Lee edge singularities.-_ Since Eq. 1 is non-Hermitian, any physical realization, cold atom or otherwise, will undergo non-equilibrium evolution. That makes it difficult to directly probe ground state properties such as Yang-Lee criticality via a static ensemble measurement. Detecting a specific critical ground state transition through dynamics requires different dynamical behaviors across its two phases. In particular, the ground state of one phase can dominate the dynamics and be isolated [62; 63; 64; 6]. To design a dynamical means to probe the YLES, we turn to the spectral flow across the critical transition. The under-appreciated but crucial observation is that due to \(\mathcal{PT}\)-symmetry breaking, imaginary eigenenergies appear and that leads to markedly different non-unitary dynamics across the transition [6]. From FIG. 2 (a), the ground state energy \(E_{g}\) (with smallest \(\text{Re}E_{g}\)) is seen to rapidly acquire larger \(\pm\text{Im}E_{g}\) immediately after \(\gamma\) is tuned to be greater than \(\gamma_{YL}\) (inset). By contrast, other non-ground state eigenenergies in FIG. 2 (a) are largely stationary. This drastic real-to-complex ground state eigenenergy transition does not just imply that ground state observables i.e. \(M_{x}\) exhibit a kink at the YLES - more importantly, the rapidly increasing \(\text{Im}E_{g}\) at \(\gamma>\gamma_{YL}\) suggests that upon time evolution by \(\hat{H}_{\text{TFI}}\), _any_ initial state with significant ground state overlap will converge towards the ground state and dominate. As such, our proposal to experimentally detect non-unitary critical YLES involves measuring the dynamical \(x\)-magnetization order parameter \(M_{x}(T)=\left\langle\psi(t)\right|\sum_{j}\hat{\sigma}_{j}^{x}/L\left|\psi(t)\right\rangle\), which we henceforth expect to exhibit similar kinks as the ground state \(M_{x}\) (FIG. 1 (c)). The edge of such a non-unitary phase transition can be mapped out in parameter space by plotting the kink locations of \(M_{x}(T)\). With our Rydberg array implementation in mind, we propose the following protocol: 1. Prepare a ferromagnetic initial state \(\left|\psi(0)\right\rangle=\left|\downarrow\downarrow......\right\rangle\), which in a Rydberg array, can be achieved by first optically pumping all atoms into either \(\left|\uparrow\right\rangle\) or \(\left|\downarrow\right\rangle\). Next, a microwave field which couples the two ground states can pump all atoms to \(\left|\downarrow\right\rangle\)[65]. Figure 1: Yang-Lee edge singularities (YLES) of \(\hat{H}_{\text{TFI}}\) (Eq. (1)) and their associated ground state discontinuities (which cannot be directly measured yet): (a) Phase diagram of \(\hat{H}_{\text{TFI}}\) in the space of real and complex fields \(h_{x}\) and \(\gamma\). 
The YLES (dashed curve) is the phase boundary demarcating the paramagnetic (PM) \(\text{Im}E_{\text{g}}=0\) phase and the ferromagnetic (FM) \(|\text{Im}E_{\text{g}}|>0\) phase, where \(E_{g}\) is the ground state eigenenergy with the minimal real part. (b) Plots of \(\text{Im}E_{\text{g}}\) vs. \(\gamma\) at the four values of \(h_{x}\) indicated in (a), revealing that \(E_{g}\) is non-analytic at YLES \(\gamma_{YL}\). (c) Identification of Yang-Lee phase transitions by the ground state magnetization order parameter \(M_{x}=\left|\left\langle\sum_{j}\hat{\sigma}_{j}^{x}/L\right\rangle\right|\). Magnetization kinks (dashed lines) occur at the same \(\gamma_{YL}\) as in (b). (d) The critical point (at \(\gamma_{YL}=0.1837\) for \(h_{x}=1.5\)) can be equivalently extracted from divergences in either the derivative of the imaginary ground state energy \(\frac{d[\text{Im}(E_{g}(\gamma))]}{d\gamma}\) (blue), or that of the ground-state x-magnetization \(\frac{dM_{x}}{d\gamma}\) (red). All results are obtained from exact diagonalization (ED) with open boundary conditions, with interaction strength \(J=1\) and system size \(L=8\). 2. Apply a dynamical quench on this ferromagnetic initial state by evolving it under \(\hat{H}_{\rm TFI}\): \(\left|\psi(t)\right\rangle=e^{-it\hat{H}_{\rm TFI}}\left|\psi(0)\right\rangle/ \left|\right|\)\(-e^{-it\hat{H}_{\rm TFI}}\left|\psi(0)\right\rangle\left|\right|\), where \(\left|\left|\left|\psi(t)\right\rangle\right|\right|=\sqrt{\left|\left\langle \psi(t)\right|\psi(t)\right\rangle}\) gives the normalization. 3. Measure the x-magnetization order parameter \(M_{x}(T)=\left|\left\langle\psi(T)\right|\sum_{j}\hat{\sigma}_{j}^{x}\left| \psi(T)\right\rangle\left|/L\right.\) after a sufficiently long stipulated time \(T\) for different \(\gamma\), keeping \(J\) and \(h_{x}\) fixed. As shown in FIGs. 2 (b),(c), the initial ferromagnetic state \(\left|\psi(0)\right\rangle\) already overlaps with the ground state of \(\hat{H}_{\rm TFI}\) more than most other eigenstates. Due to PT-symmetry breaking, after evolving for \(TJ\sim\mathcal{O}(10^{1})\), it is dominated by the ground state for \(\gamma>\gamma_{YL}\), but not when \(\gamma<\gamma_{YL}\). As such, we expect to observe a kink in \(M_{x}(T)\) across the critical YLES \(\gamma=\gamma_{YL}\), even though there is considerable ground state overlap at one side of the transition. Note that even though the ground state \(M_{x}(T)\) is extracted through a dynamical quench, what we are measuring is not mathematically a dynamical phase transition, as previously measured in Refs. [10; 11; 66; 12] through partition function zeroes. Instead, our proposal allows for the demonstration of non-unitary criticality by providing access to a non-Hermitian Ising chain with complex fields from atom loss in an optical array. _Nonunitary criticality of Yang-Lee edge singularities.-_ Next, to show how our protocol above can be used to extract the critical exponents of our YLES through finite-size scaling. After fixing the quenching duration \(T\), the critical values \(\gamma=\gamma_{YL}^{L}\) at different system sizes \(L\) can be extracted from the peak divergences of the plots of the derivative of \(M_{x}(T)\) with respect to \(\gamma\). This is illustrated in FIG. 3 (a) with data from our tMPS simulation (described shortly after), computed with \(T=20/J\) with \(J=1\)[6]. 
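For readers who want to reproduce the kind of ED data behind Fig. 1, the following dense-matrix sketch builds \(\hat{H}_{\rm TFI}\) of Eq. (1) for \(L=8\) with open boundaries, identifies the ground state as the eigenvalue with the smallest real part, and prints \({\rm Im}E_{g}\) and \(M_{x}\) while \(\gamma\) is scanned across \(\gamma_{YL}\approx 0.1837\) (for \(J=1\), \(h_{x}=1.5\)). Evaluating \(M_{x}\) with the normalized right eigenvector is our own simplification; the paper may use a different (e.g. biorthogonal) convention.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def site_op(op, j, L):
    """Single-site operator acting on site j of an L-site chain."""
    out = np.array([[1.0 + 0j]])
    for k in range(L):
        out = np.kron(out, op if k == j else I2)
    return out

def H_TFI(L, J, hx, gamma):
    """Eq. (1): non-Hermitian transverse-field Ising chain, open boundaries."""
    H = np.zeros((2**L, 2**L), dtype=complex)
    for j in range(L):
        H += -hx * site_op(sx, j, L) + 1j * gamma * site_op(sz, j, L)
    for j in range(L - 1):
        H += -J * site_op(sz, j, L) @ site_op(sz, j + 1, L)
    return H

L, J, hx = 8, 1.0, 1.5
Sx_total = sum(site_op(sx, j, L) for j in range(L))
for gamma in (0.10, 0.16, 0.18, 0.20, 0.25):
    evals, evecs = np.linalg.eig(H_TFI(L, J, hx, gamma))
    gs = np.argmin(evals.real)                  # ground state: smallest Re(E)
    psi = evecs[:, gs] / np.linalg.norm(evecs[:, gs])
    Mx = abs(np.vdot(psi, Sx_total @ psi)) / L  # right-eigenvector expectation
    print(f"gamma={gamma:.2f}  Im(E_g)={evals[gs].imag:+.4f}  M_x={Mx:.4f}")
```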
Here, for different system sizes \(L=8,10,...,16\), the critical values of \(\gamma=\gamma_{YL}^{L}\) are marked by dashed vertical lines, where the derivatives peak. In an experiment, \(\gamma_{YL}^{L}\) can be extracted by adjusting \(\gamma\) and measuring \(M_{x}(T)\) in separate spin chains of various lengths \(L\). The characteristic critical exponents for the Yang-Lee singularity can be extracted via the following universal critical scaling law, which holds for its non-unitary \(c=-22/5\) conformal field theory [69; 9]: \[\gamma_{YL}^{L}-\gamma_{Y}^{\infty}\propto L^{-\alpha}=L^{-(\beta_{1}\delta_ {1}/\nu_{1})}, \tag{2}\] where \(\gamma_{YL}^{L}\) is the location of the YLES obtained from our protocol at finite size \(L\). \(\gamma_{YL}^{\infty}\) represents the YLES Figure 2: (a) Flow of the complex spectrum of \(\hat{H}_{\rm TFI}\) (Eq. (1)) as \(\gamma\) is tuned across \(\gamma_{YL}\approx 0.1837\). The ground state eigenenergies (boxed) undergo spontaneous \(\mathcal{PT}\) symmetry breaking and rapidly acquires imaginary parts (green to yellow, red arrow) as \(\gamma\) increases slightly above \(\gamma_{YL}\), much more so than other eigenenergies. (b) and (c): Evolution of the overlap \(\left|\left\langle\psi(T)|\psi_{i}\right\rangle\right|\) between the dynamically evolved state \(\left|\psi(T)\right\rangle=e^{-iT\hat{H}_{\rm TFI}}\left|\psi(0)\right\rangle/ \left|\right|=i^{-iT\hat{H}_{\rm TFI}}\left|\psi(0)\right\rangle\left|\right|\) and all eigenstates \(\left|\psi_{i}\right\rangle\) (blue) of \(\hat{H}_{\rm TFI}\), with initial state being \(\left|\psi(0)\right\rangle=\left|\downarrow\downarrow......\right\rangle\). For \(\gamma\) below the YLES \(\gamma_{YL}\) (b), the ground state overlap (red dashed) decreases rapidly due to mixing. But for \(\gamma>\gamma_{YL}\), the ground state component becomes dominant beyond time \(TJ\sim\mathcal{O}(10)\) due to the large \({\rm Im}E_{g}\) of the ground state, leading to kinked magnetization responses detectable in our proposed Rydberg atom system of FIG. 4 below. All results are obtained from ED with \(J=1\), \(h_{x}=1.5\) and \(L=8\). as \(L\rightarrow\infty\), which can be obtained from our finite-sized data by extrapolating \(\gamma_{YL}^{\mathcal{U}}\) with respect to \(1/L\), as performed by polynomial fitting in FIG. 3 (b). Upon obtaining \(\gamma_{YL}^{\mathcal{S}}\), one can further plot \(\log\bigl{(}\gamma_{YL}^{\mathcal{S}}-\gamma_{YL}^{\mathcal{S}}\bigr{)}\) against \(\log(1/L)\), such that the critical exponent \(\alpha\) in Eq. 2 can be extracted from the gradient of the fitted line in FIG. 3 (c). From our tMPS calculations, we obtain \(\alpha_{\text{MPS}}\approx 2.423\), which is in excellent agreement with its theoretical value from the non-unitary CFT with central charge \(c=-22/5\) for the YLES: \(\alpha_{\text{CFT}}=\beta_{1}\delta_{1}/\nu_{1}=2.40\) with \(\beta_{1}=1,\delta_{1}=-6,\nu_{1}=-5/2\)[67, 68]. Before proposing how our protocol for finite-size systems can be experimentally implemented, we briefly describe the numerical approach of time evolution with matrix product states (tMPS) for computing the results in FIG. 3, which is a state-of-art tool for handling generic one-dimensional quantum many-body systems with nearest-neighbor couplings [70, 71, 72]. We implement the discretized non-unitary time evolution operator \(e^{-i\delta t\hat{H}_{\text{TFII}}}\) for our non-Hermitian Hamiltonian through a non-unitary circuit [73, 74, 6, 75]. As sketched in FIG. 
3 (d) [6], for each time step, we utilize a second-order Suzuki-Trotter decomposition as \[U(\delta t)=U_{\text{odd}}(\delta t/2)U_{\text{even}}(\delta t)U_{\text{odd}} (\delta t/2)+\mathcal{O}(\delta t^{3}), \tag{3}\] where the non-unitary components are embedded in every even and odd bond [6]. At the end of each time step, normalization is applied to suppress numerical divergences: this gives a direct implementation of non-unitary dynamics more efficient than ancilla-based approaches with significant information wastage [76, 73]. In this work, we only consider open boundary conditions [77], which is less costly for MPS calculations. _Experimental proposal with Rydberg atoms.-_ To observe YLES in Rydberg atomic arrays through our driving protocol, what needs to be implemented is the time evolution operator \(e^{-iT\hat{H}_{\text{TFII}}}\). For this, we note that in a Rydberg system [46], it is feasible to implement a transverse field Hamiltonian \(\hat{H}_{X}(F)=-\sum_{j}F\hat{\sigma}_{j}^{x}\), an atomic loss \(\hat{H}_{Z}(g)=i\sum_{j}g\hat{\sigma}_{j}^{z}\) and, as we shall show, a ferromagnetic interaction \(\hat{H}_{ZZ}\) that is proportional to a coupling constant \(J_{0}\). As such, we can Trotter-approximate \(e^{-iT\hat{H}_{\text{TFII}}}\) by a two-step Floquet driving protocol (FIG. 4) \[e^{-iT\hat{H}_{\text{TFII}}} \approx[(e^{i\frac{T}{K}\sum_{j}(h_{x}\hat{\sigma}_{j}^{x}-i \gamma\hat{\sigma}_{j}^{z})})(e^{i\frac{T}{K}\sum_{j}J\hat{\sigma}_{j}^{z} \hat{\sigma}_{j+1}^{z}})]^{N}\] \[=\left[e^{+i\tau_{h_{x}}\hat{H}_{X}(\not{F})+i\tau_{h}\hat{H}_{Z} \not{\partial}}e^{+i\tau_{J}\hat{H}_{ZZ}}\right]^{N} \tag{4}\] \[=[U_{2}^{\text{Floq}}U_{1}^{\text{Floq}}]^{N},\] which consists of \(N\) Floquet periods, each period being a two-step quench governed by \(U_{1}^{\text{Floq}}=e^{i\tau_{J}\hat{H}_{ZZ}}\) followed by \(U_{2}^{\text{Floq}}=e^{i\tau_{h_{x}}\hat{H}_{X}(\not{F})+i\tau_{j}\hat{H}_{Z} \not{\partial}}\). As elaborated below, \(F\), \(g\) and \(J_{0}\) are respectively the magnitudes of the physical interactions, transverse fields and atom loss, and are related to their corresponding quenching durations \(\tau_{J}\), \(\tau_{h_{x}}\) and \(\tau_{\gamma}\) via \((\tau_{h_{x}}F,\tau_{\gamma}g,\tau_{J}J_{0})=\frac{T}{N}(h_{x},\gamma,J)\). They should be chosen such that the number of Floquet cycles \(N=\frac{TJ}{J_{0}\tau_{J}}\gg 1\), so as to minimize the Trotterization error [78]. We next detail how the Floquet quenches \(U_{1}^{\text{Floq}}\) and \(U_{2}^{\text{Floq}}\) can be implemented in our Rydberg setup. As shown in FIG. 4, we encode the two hyperfine ground states of trapped Caesium atoms to form a pseudospin-1/2: \(\ket{\downarrow}=\ket{6S_{1/2},F{=}3,m_{F}{=}0}\) and \(\ket{\uparrow}=\ket{6S_{1/2},F{=}4,m_{F}{=}0}\)[46]. To engineer interactions in the first step \(U_{1}^{\text{Floq}}\), we employ Rydberg-dressing [41, 42, 46, 79] by coupling the state \(\ket{\uparrow}\) with the Rydberg state \(\ket{R}=\ket{43P_{3/2}}\). As derived in [6], in the limit of large detuning \(\Delta\) vs. Rabi frequency \(\Omega\) (blue arrow in FIG. 4), the interplay between the bare Rydberg interactions and \(\ket{R}\) leads to an energy shift of \(J_{0}\approx\frac{\Omega^{4}}{8\Delta^{3}}\)[40]. 
In the subspace \(\ket{\uparrow\uparrow}_{i,i+1}\) of the \(\ket{\uparrow}\) states of nearest neighbor atoms, \(J_{0}\) behaves as an effective interaction [6, 40, 80] \[\hat{H}_{\text{int}}(J_{0})=-\sum_{i}J_{0}\hat{P}_{i}\hat{P}_{i+1}, \tag{5}\] where \(\hat{P}_{i}=\frac{\hat{\sigma}_{i}^{z}+\hat{I}_{i}}{2}\) with \(\hat{\sigma}_{i}^{z}=\ket{\uparrow}_{i}\bra{\uparrow}_{i}-\ket{\downarrow}_{ i}\bra{\downarrow}_{i}\) and \(\hat{I}_{i}=\ket{\uparrow}_{i}\bra{\uparrow}_{i}+\ket{\downarrow}_{i}\bra{ \downarrow}_{i}\). While this is still not the ferromagnetic interaction required in \(\hat{H}_{\text{TFI}}\) of Eq.(4), inspired by the operator identity \(e^{-i\frac{\pi}{2}\hat{\sigma}^{x}}e^{i\tau_{J}J_{0}\hat{\sigma}^{z}}e^{-i\frac{ \pi}{2}\hat{\sigma}^{x}}e^{i\tau_{J}J_{0}\hat{\sigma}^{z}}=-\hat{I}\), one can convert it into a clean ferromagnetic interaction \(\hat{H}_{ZZ}(J_{0})=-\sum_{j}J_{0}\hat{\sigma}_{i}^{x}\hat{\sigma}_{i+1}^{z}\) by applying two transverse \(\hat{\sigma}^{x}\) field kicks: \[(e^{-i\frac{\pi}{2}\sum_{j}\hat{\sigma}_{j}^{x}}e^{+i2\tau_{j}\hat{H}_{\text{ inst}}(J_{0})})^{2}\approx e^{+i\tau_{j}\hat{H}_{ZZ}}=U_{1}^{\text{Floq}}, \tag{6}\] as elaborated in the supplemental materials [6]. Such transverse-field kicks can be generated by microwave fields, shown as the red circle in FIG. 4[46]. To realize the next evolution step \(U_{2}^{\text{Floq}}\), the Rydberg dressing for \(U_{1}^{\text{Floq}}\) is turned off immediately, and another microwave field (brown circle in FIG. 4) is turned on to generate the transverse field \(\hat{H}_{X}(F)=-\sum_{j}F\hat{\sigma}_{j}^{x}\)[46]. At the same time, a strong laser is shone on \(\left|\uparrow\right\rangle\) such as to excite it to another state \(\left|6P_{3/2},F=5\right\rangle\), leading to effective laser-induced loss \(\hat{H}_{Z}(g)=\sum_{j}ig\hat{\sigma}_{j}^{x}\) with imaginary field/decay rate \(g\)[6, 48, 50, 51, 82, 81]. After repeatedly alternating between Floquet steps \(U_{1}^{\text{Floq}}\) and \(U_{2}^{\text{Floq}}\) (FIG. 4) over \(N=\frac{TJ}{J_{0}\tau_{j}}\) iterations, the dynamically evolved magnetization \(\hat{M}_{x}(T)\) can be obtained by measuring the normalized populations in the \(\left|\uparrow\right\rangle\) and \(\left|\downarrow\right\rangle\) levels [6, 14]. The YLES can be observed as kinks in \(M_{x}(T)\) as \(\tau_{\gamma}\) or \(g\) are tuned (Fig. 3). From that, the associated anomalous scaling behavior and exponents (Eq. 2) can be simply be extracted by controlling the number of trapped atoms \(L\)[36, 83]. _Discussions.-_ The ground state properties of non-Hermitian quantum systems are often deemed experimentally inaccessible due to overwhelming decoherence or the lack of thermal equilibrium. Yet, for the Yang-Lee phase transition in our model, we found that the spontaneously broken \(\mathcal{PT}\)-symmetry can give rise to a pronounced kink in the dynamical magnetization \(M_{x}(T)\)_without_ the need for reaching thermal equilibrium. As such, we provide a realistic Floquet evolution protocol for observing the YLES and its associated non-unitary phase transition in a Rydberg chain, distinct from the observation of partition function zeroes in previous experiments [10, 11, 12]. Our proposal paves the way for future experimental observation of not just the YLES, but also other non-unitary phase transitions [84, 82, 85, 86, 87]. 
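To make the quench protocol and the Floquet factorization of Eq. (4) concrete, the following sketch emulates the circuit with dense matrices for a small chain (a stand-in for the tMPS simulation, feasible only for small \(L\)): it prepares the all-down state, alternates the interaction step and the transverse-field/loss step with a renormalization after every cycle, and scans \(\gamma\) so that the kink of \(M_{x}(T)\) near \(\gamma_{YL}\) can be read off. The parameters \(J=1\), \(h_{x}=1.5\) and \(T=20/J\) follow the text, while the number of Floquet cycles \(N\) and the scan range are illustrative choices.

```python
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def site_op(op, j, L):
    out = np.array([[1.0 + 0j]])
    for k in range(L):
        out = np.kron(out, op if k == j else I2)
    return out

L, J, hx, T, N = 8, 1.0, 1.5, 20.0, 200
dt = T / N
Sx = sum(site_op(sx, j, L) for j in range(L))
Sz = sum(site_op(sz, j, L) for j in range(L))
Szz = sum(site_op(sz, j, L) @ site_op(sz, j + 1, L) for j in range(L - 1))

def Mx_after_quench(gamma):
    # One Floquet cycle of Eq. (4): interaction pulse, then transverse field
    # plus the laser-induced loss (the anti-Hermitian i*gamma*sigma_z term).
    U1 = expm(1j * dt * J * Szz)
    U2 = expm(1j * dt * hx * Sx + dt * gamma * Sz)
    psi = np.zeros(2**L, dtype=complex)
    psi[-1] = 1.0                      # |down,down,...,down>, taking |up> = (1,0)
    for _ in range(N):
        psi = U2 @ (U1 @ psi)
        psi /= np.linalg.norm(psi)     # non-unitary evolution -> renormalize
    return abs(np.vdot(psi, Sx @ psi)) / L

for gamma in np.linspace(0.10, 0.30, 9):
    print(f"gamma={gamma:.3f}  M_x(T)={Mx_after_quench(gamma):.4f}")
```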
The rapid development of hardware and algorithms in universal quantum computation also opens up the possibility of implementing our YLES measurement protocol in quantum computers via ancilla-based methods [88, 89, 90, 91]. Moreover, our MPS implementation, which is related to mid-circuit measurements [92, 93, 94, 95], provides an approach to improve the current ancilla-based methods for dynamically simulating various non-Hermitian many-body phenomena [96, 97, 98, 99, 100, 101, 102, 103] and unconventional non-Hermitian topology [104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117], on quantum circuits. _Acknowledgements.-_ T. C. thanks Bo Yang for fruitful discussions. The exact diagonalization is numerically computed with QuSpin Python library [118, 119], and the MPS results are calculated with ITensor [120]. T. C. acknowledges support from the National Research Foundation, Singapore under the NRF fellowship award (NRF-NRFF12-2020-005). F. Q. is supported by the National Research Foundation, Singapore under its QEP2.0 programme (NRF2021-QEP2-02-P09).
2308.13552
A Lens to Pandemic Stay at Home Attitudes
We describe the design process and the challenges we met during a rapid multi-disciplinary pandemic project related to stay-at-home orders and social media moral frames. Unlike our typical design experience, we had to handle a steeper learning curve, emerging and continually changing datasets, as well as under-specified design requirements, persistent low visual literacy, and an extremely fast turnaround for new data ingestion, prototyping, testing and deployment. We describe the lessons learned through this experience.
Andrew Wentzel, Lauren Levine, Vipul Dhariwal, Zahra Fatemi, Barbara Di Eugenio, Andrew Rojecki, Elena Zheleva, G. Elisabeta Marai
2023-08-23T21:21:16Z
http://arxiv.org/abs/2308.13552v1
# A Lens to Pandemic Stay at Home Attitudes ###### Abstract We describe the design process and the challenges we met during a rapid multi-disciplinary pandemic project related to stay-at-home orders and social media moral frames. Unlike our typical design experience, we had to handle a steeper learning curve, emerging and continually changing datasets, as well as under-specified design requirements, persistent low visual literacy, and an extremely fast turnaround for new data ingestion, prototyping, testing and deployment. We describe the lessons learned through this experience. ## 1 Introduction In early 2020, in light of the emergence of the global COVID-19 pandemic, the U.S. National Science Foundation (NSF) began accepting proposals for non-medical, non-clinical research that could be used right away to explore how to model and understand the spread of COVID-19, how to inform and educate about the science of virus transmission and prevention, and how to encourage the development of procedures and actions to address the pandemic [9]. NSF invited researchers to apply for funding through the Rapid Response Research (RAPID) mechanism. This mechanism enables the NSF to receive and review proposals that have a high priority in terms of the availability of or access to data, facilities, or specialized equipment, as well as quick-response research on natural or anthropogenic disasters and similar unforeseen events. Requests for RAPID proposals could be for up to $200K and up to one year in duration. Our collaborative idea was for a RAPID project analyzing people's moral take on and consequent response to government mandated stay-at-home (SAH) orders at the start of the pandemic, leading to potential insights into effective governmental messaging of such orders. Shared values, beliefs, and understandings were often reflected at the time in social media posts, therefore, we were interested in identifying the underlying values expressed in such posts, and how they related to attitudes with respect to SAH messaging. Concretely, the funded project aimed to analyze these values and attitudes using Moral Foundations (MF) theory, a psychological model for describing the different dimensions underlying social discourse, such as Care or Liberty or Loyalty. Building from the strengths of the team, the project focused on applying MF theory, in collaboration with social-science researchers, Causal Inference (CI) Figure 1: Analysis of moral frames (MF) in social media posts related to Stay-at-Home orders during the COVID-19 pandemic. (A) Tweet summary showing several MF tweet features, sorted here by tweets with a negative stance. (B) Inference panel showing partial dependence plots derives from generalized linear models (GAMS), here showing the relationship between county voting history and COVID-19 cases, and Care tweets. (C) Timeline of Care tweets, along with popularity, COVID-19 incidence, and sentiment. Note the negative spikes in April/June, after BLM protests, and the more negative sentiment (lower bar). A tooltip shows the text of a tweet supporting SAH orders. (D) Glyph-map of counties showing political party (color) and population (width) vs. Care tweets (height); major cities and rural white areas stand out. researchers, and Natural Language Processing (NLP) researchers to help build and analyze a MF-annotated tweet corpus related to SAH orders. 
Additionally, the project aimed to leverage geospatial data to analyze datasets at multiple levels of spatial aggregation, and to compare temporal and spatial differences to enhance participation and promote positive public health outcomes. Analyzing social media from a moral-framing perspective posed a number of challenges. First, while collaborators had clear insights into MF theory, applying these frames to social media was a difficult and poorly defined task. an example, a tweet stating "We are not staying home!" could be an expression of the "Liberty" frame in opposition to government lockdowns if it was tweeted at the end of March 2020. However, if the tweet occurred two months later during a major civil movement protest, it could be a tweet about solidarity with other protesters and thus would be an expression of the "Loyalty" frame. Therefore, meaningful analysis of the data requires understanding the tweeter's situational context, such as time, location, and major recent events. Second, since developing a corpus to analyze was part of the proposed work, design requirements had to constantly be updated as the expected scope of the data and resulting expectations changed week-to-week. We also found that annotating tweets with moral frames is a difficult task that requires trained experts. These issues resulted in rapid changes in the expected number of tweets that were both annotated and geotagged, the ability to tie multiple tweets to individual users, and the kinds of textual features that were deemed relevant. This resulted in us moving to an agile design approach, with rapid prototyping cycles that constantly changed. Finally, there were many design challenges that were imposed by the collaboration itself. For example, we found that our collaborators, who had a diverse range of backgrounds and experience levels, tended to have limited visual literacy. While the resulting design process was difficult, the collaboration did result in usable insights and results, with results being published at conferences [15, 38]. ## 2 Related Work and Background Our team leveraged Moral Foundation Theory (MFT) for this project. MFT is a psychological model for describing the different dimensions underlying social discourse. MFT is widely used to study how values differ between individuals, and has been used to explain differences in political affiliation and belief systems [26]. Our version of MFT considered 6 foundation frames with opposing pairwise "virtues" and "vices" qualities, such as "Care" (virtue), which pertains to "the need to help or protect oneself or others", and the corresponding "Harm" (vice), which deals with "fear of damage or destruction to oneself or others" [17]. Recent visual analysis studies had looked at attitudes on social media related to public health, including attitudes towards the coronavirus [1], public health interventions [12, 21], climate change [11], and popular topics of discourse on social media [5, 6, 22]. Other systems have looked at information spread on social media, journalism, and misinformation [10, 16, 24, 35, 37], or focused on real-time information spread without geolocation [25, 8, 3]. Systems that focus on event detection [2, 13] tend to not analyze stance or moral foundations. Systems have also focused on temporal progression of topics [4, 14, 44, 20, 27, 43, 42], but none of these systems tie discourse to demographics or MFT framing. 
## 3 Methods Our group had to rely on remote collaboration due to the pandemic lockdowns, with weekly meetings, and extensive use of collaborative tools leveraging our laboratory's expertise [32]. The core group spanned four research labs, evenly distributed: communications, NLP, causal inference in social media, and visual computing. Per our initial agreement, all group members are listed as co-authors. Our design process relied heavily on rapid prototyping using an activity-centered approach [31] and an agile framework, a combination which had been very successful in our work with domain experts across the disciplines [34, 33, 39, 42]. The project featured two stages, foraging for data and features, and hypothesis testing, which focused on establishing the cause of trends. Our collaborators' research activities were focused on identifying and explaining SAH-related phenomena and how Moral Framing factors into them. In short, we used MF as a lens to create a SAH-relevant dataset, and then to analyze and describe larger social trends. This agile, distributed design was similar to the distributed prototyping methods described by Losev et al [28]. However, our process focused more heavily on rapid prototypes without interaction design due to a larger emphasis on the data foraging stage. ### Overall Design and Data Sources The overall top-design emerged gradually and organically, and featured multiple coordinated views (Fig. 1). We gradually added data sources to our system, starting with a continually updated and expanded set of manually labeled geotagged U.S. tweets that expressed a moral frame and stance regarding stay-at-home (SAH) orders between March and May 2020. Tweets were manually labelled for SAH relevance, moral framing, and stance by expert annotators. Non-relevant tweets or tweets with no moral frames were excluded. The resulting corpus is described by Fatemi et al. [15]. During processing, each tweet was mapped to one of the 3113 U.S. counties, not including Alaska as we did not have election data for those areas. We then integrated several geospatial datasets: 2018 census information [36, 41], voting results from the 2016 US presidential election results [36], results from a New York Times survey on estimated mask usage [40], and daily COVID-19 cases and deaths for each county [23]. We also gradually extracted tweet features such as stance (pro SAH or against SAH), virality, sentiment, and vividness, and aggregated each moral frame stance within each county, resulting, over the length of the project, in 14 different continuous values for each county. ### Four Custom Views One view was dedicated to data summarization, and supported foraging activities at the start of the project. It gives an overview of the tweet features of interest, such as sentiment or vividness, broken down by Moral Frame. Due to limited visual literacy among the group at the start of the project, the view made extensive use of rotated, colored bar charts. To alleviate the cognitive load due to the extensive use of color, the colors were mapped as intuitively as possible (e.g., gray for not vivid, purple for vivid) based on perceptual principles and common media interpretation. Figure 2: Design of timelines using aggregate tweets and small multiples for each tweets for each moral frame used on a larger non-geotagged dataset. A custom Timeline panel (Fig. 2) showed the temporal distribution of the tweets, along with changes in COVID-19 rates, overall sentiment, and tweet popularity. 
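The county-level table described in Section 3.1 can be assembled with a few standard pandas operations. The sketch below is only a schematic of that aggregation step: the file names and column names (county_fips, frame, stance, sentiment, and so on) are hypothetical placeholders, since the actual schema of the annotated corpus is not given here.

```python
import pandas as pd

# Hypothetical inputs: one row per annotated, geotagged tweet, and one row per
# county with census, voting, mask-usage, and COVID-19 covariates.
tweets = pd.read_csv("sah_tweets_labeled.csv")
counties = pd.read_csv("county_covariates.csv")

# Per-county share of tweets expressing each moral frame, split by stance.
frame_counts = (tweets
                .groupby(["county_fips", "frame", "stance"])
                .size()
                .unstack(["frame", "stance"], fill_value=0))
frame_share = frame_counts.div(frame_counts.sum(axis=1), axis=0)
frame_share.columns = [f"{frame}_{stance}" for frame, stance in frame_share.columns]

# Per-county averages of the continuous tweet features.
feature_means = (tweets
                 .groupby("county_fips")[["sentiment", "virality", "vividness"]]
                 .mean())

county_table = (frame_share
                .join(feature_means)
                .join(counties.set_index("county_fips")))
print(county_table.head())
```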
The timeline panel grew with the project, with additional features being mapped to it. Again due to limited visual literacy, the timeline used basic cues such as layout, height, and color (hue and saturation) to encode time-varying multi-dimensional data. Because of scalability issues with this rich custom encoding, later applications to larger datasets used sparklines as an alternative. Later on during the foraging stage, a Geospatial map panel showed the geographic distribution of tweets with a given Moral Frame, overlaid with demographic data. The design of this panel featured several costly trial-and-error design cycles (Fig. 3), where texture-blending based approaches which worked on other scientific datasets [18], and which seemed acceptable during lo-fi prototyping, backfired within our group in the hi-fi prototyping stage. We gradually developed an alternative, far more successful glyph encoding. Finally, during the hypothesis testing stage, providing support for inference analysis became important. An Inference panel was created (Fig. 1), which allows for building predictive models within the front-end of the system, to support inferences about how different factors influence average tweet features. ## 4 Evaluation Our system aimed to support the development of insights related to SAH policy application in the U.S. We report here one of the case studies that our team performed over the course of several months, with results published and presented in several venues [38, 15]. The case study (Fig. 1) was performed remotely, with the team piloiting the investigation via Zoom meetings, and the lead author operating the front-end of the system accordingly. Collaborators were also given independent access to the front end, which they used for additional analyses between meetings. For brevity, we distill the main insights here: **Major Frames** The dominant frames were Care, followed by Harm, which share the same foundation. Care tweets were overwhelmingly in support of SAH orders, while Harm were more mixed. Tweets with Harm also increased after the first month of Lockdowns, around times when SAH orders were removed, and had lower sentiment overall, which may reflect pandemic fatigue. Care tweets were significantly correlated with areas that self-reported higher mask use. An important implication of this analysis is that Care-targeted messaging of SAH orders helps. **Political Polarization** The frames most associated with Democratic areas were Betrayal and Loyalty, which share a foundation, while the libertarian frames Subversion and Freedom were most associated with Republican areas. Betrayal tweets all happened in the start of May, largely in response to unmasked protests, and the mandated ending of SAH orders in major left-leaning cities located within right-leaning states. Notably, no Betrayal tweets came from Chicago, which may be due to the Mayor's continuation of SAH orders. Subversion and Freedom tweets increased after the first renewal of SAH orders in April. An important implication is that higher-granularity political analysis would help with targeted SAH messaging, for example including a Libertarian perspective in addition to Republican or Democrat views. **Spatial-Temporal Trends** Tweets regarding SAH orders peaked at the start of the pandemic, with recurrences focused on major cities when SAH orders were renewed or ended. 
We also noted a large dip at the end of May, likely due to the George Floyd protests occupying the public zeitgeist at this time, with a short increase at the start of June in response to updates in public mask mandates in several cities. An important finding was that rural areas were underrepresented in the social media dataset. Qualitative feedback from our collaborators across the disciplines was enthusiastic: _"Excellent for data exploration"_, _"Great for investigating anomalies in the data."_, _"Amazing work, the encodings work well together."_, _"This interface does very nicely with the data and domain knowledge we've been given."_, _"It's nice, pleasant to look at. And extremely informative."_ Figure 3: Examples of the progression of map design through the design process. (Top-left) Choropleth map using texture weaving [18] to encode two features. (Top-right) Choropleth map with a single color and glyphs showing tweet distribution. (Bottom-left) Choropleth map using textures and glyphs. Areas are aggregated by the intersection of district voting maps and counties for demographic data to make each section more evenly distributed in terms of population. Glyphs show tweets aggregated at the county level. (Bottom-right) Glyph-based map where shape encodes tweet features and color encodes demographics. All maps are zoomable. In terms of ratings, components were rated favorably, with two requests for an additional on-demand pictorial explanation of the glyphs. Experts found the system met these goals well: understanding the MF distribution, temporal MF distribution, MF political context, and verifying MF hypotheses. Most also found the system useful in analyzing MF tweet features, the corpus, exploring sentiment and topic relationships, and MF geographical distribution. The Causal Inference and Communications specialists found the system very useful for understanding the emerging corpus, whereas some of the NLP experts continued to rely on their standard approach (LDA, topic clustering) for corpus analysis tasks. ## 5 Discussion Our original design for this project was to create a large-scale analysis tool for analyzing networks of Twitter stance and how different moral framing affected the propagation of certain posts. In reality, we found that there were many difficulties with assessing a developing, real-time problem while also incorporating a rich set of features in a collaborative setting. Despite this, we did manage to create a unique system, in spite of issues related to data availability and a short-term collaboration with changing goals. Although our project did not seek to advance Moral Framing theory, our results point to a need for careful consideration of how the MF data is collected, for example using more granular political affiliation measures. **Generalizability and Scalability** In response to collaborator and reviewer requests, we expanded our system to be usable for other datasets. Specifically, we relied on an earlier dataset of tweets related to the Black Lives Matter Movement [19], where we used hashtag content to estimate stance. The resulting dataset was slightly larger and covered a longer time period, with 1901 geotagged tweets spanning 2 years. The main challenge was adapting the timeline to encode textual features instead of COVID-19 rates. Our results suggest that our approach generalizes well, with the main issue being the availability of geotagged data with sufficient quality annotation.
In terms of scalability, we mainly faced issues with adapting the timeline to larger tweets. In a separate analysis using a non-geotagged dataset, we relied instead on a design that relied on small multiples, and shown in Fig. 2. Our collaboration was a learning process in producing rapid, evolving designs, leading to several design lessons, beyond visual scaffolding [30] for improved visual literacy: 1. Minimize assumptions about the data when designing during the data collection process. This allows us to gradually specialize designs while keeping earlier progress in the event that the data changes. For example, initially, collaborators' main interest was in the relationship between moral framing and political affiliation. However, data availability made collecting multiple relevant tweets from each individual difficult. Thus we had to rely on location to infer demographics for tweets from each region, and reason about moral framing on an aggregate scale. Given the necessity of context, we further found that automatic annotation of moral frames performed too poorly. As a result, obtaining a pool of geotagged tweets that expressed a moral stance regarding stay at home orders was difficult, and yielded a much smaller dataset than what was originally expected at the start of the design process. 2. Collaboration issues are exacerbated by tight project timelines. We found that in the early stages our collaborators were unable to articulate what they wanted out of the designs: their earliest requirements were a basic COVID-19 dashboard with political associations overlayed on the map. These findings were similar to those described by Sondag et al. [39]. However, we found that true requirements were more easily assessed by analyzing what points were discussed during meetings and what researchers were most focused on in their analysis. Collaborators also had insufficiently defined goals, and we found that there was not enough time to allow proper collaboration practices to mature (for example, a social science collaborator published project joint results without crediting the whole team). If time allows it, we would recommend relying on "ethographic" methods rather than interviews for requirement gathering. We would also recommend repeatedly revisiting and enforcing collaborative practices during group meetings. 3. Keep designs generalizable. Because our study changed so rapidly, by the time the data collection and design process was finished, the original topic was considered "obsolete", with different topics taking over the online debate. In addition, our original data came from Twitter, which now has stricter limits on the amount of data scraping that can be used. In turn, our work needs to adapt to different topics, as well as different data sources to stay relevant. Our end design generalized well across case studies and datasets. 4. Dissemination issues stemming from data. Disappointingly, during the publication process we met resistance due to a misalignment of review expectation and design practice that couldn't be rectified within the project timeline. Most of these issues resulted from the unexpected difficulty in gathering a dataset of sufficient size and quality, largely due to the fact that automatic annotation proved to be ineffective, and the project timeline didn't provide sufficient time to manually collect an annotated sufficiently sized dataset. This was a common review issue in both CI/NLP-centered and vis-centric venues. 
As a result, several reviewers either claimed we had an insufficient dataset when using manual data, or we had an insufficient algorithm when using automatic annotation. ## 6 Conclusion In conclusion, our project was successful in producing insights, and thus in achieving its proposed goals. It was also satisfying to be able to contribute to a better understanding of the pandemic policies, in particular at a time when vaccines were not yet available. At the same time, it was a difficult, extremely intensive experience, leading to several cases of burnout and dissatisfaction on the team, which were then aggravated by repeated difficulties in getting the methods published. If at all possible, we would rather embark on projects where (most of) the data has been already collected, and (most of) the requirements can be safely established during requirements engineering. ###### Acknowledgements. We thank Juan Trelles and our other colleagues at the Electronic Visualization Laboratory for their technical and emotional support. This work was partially supported by awards from the U.S. National Science Foundation (IIS-2031095, CNS-1828265, CDSE-1854815) and the U.S. National Institutes of Health (NLM R01LM012527, NCI R01CA258827).
2302.08897
Forecasting the Turkish Lira Exchange Rates through Univariate Techniques: Can the Simple Models Outperform the Sophisticated Ones?
Throughout the past year, Turkey's central bank policy of decreasing the nominal interest rate has caused episodes of severe fluctuations in Turkish lira exchange rates. Under these conditions, the daily returns of the USD/TRY pair have attracted the attention of risk-taking investors. Therefore, the uncertainty about the rates has pushed algorithmic traders toward finding the best forecasting model. While there is a growing tendency to employ sophisticated models to forecast financial time series, in most cases simple models can provide more precise forecasts. To examine that claim, the present study utilized several models to predict daily exchange rates over a short horizon. Interestingly, the simple exponential smoothing model outperformed all other alternatives. Besides, in contrast to the initial inferences, the time series neither had a structural break nor exhibited signs of ARCH or leverage effects. Despite that behavior, there was undeniable evidence of a long-memory trend, meaning that the series tends to sustain a movement, at least for a short period. Finally, the study concluded that simple models provide better forecasts of exchange rates than complicated approaches.
Mostafa R. Sarkandiz
2023-02-12T01:01:36Z
http://arxiv.org/abs/2302.08897v1
**Forecasting the Turkish Lira Exchange Rates through Univariate Techniques: Can the Simple Models Outperform the Sophisticated Ones?1** ## Abstract Throughout the past year, Turkey's central bank policy to decrease the nominal interest rate has caused episodes of severe fluctuations in Turkish lira exchange rates. According to these conditions, the daily return of the USD/TRY have attracted the risk-taker investors' attention. Therefore, the uncertainty about the rates has pushed algorithmic traders toward finding the best forecasting model. While there is a growing tendency to employ sophisticated models to forecast financial time series, in most cases, simple models can provide more precise forecasts. To examine that claim, present study has utilized several models to predict daily exchange rates for a short horizon. Interestingly, the simple exponential smoothing model outperformed all other alternatives. Besides, in contrast to the initial inferences, the time series neither had structural break nor exhibited signs of the ARCH and leverage effects. Despite that behavior, there was undeniable evidence of a long-memory trend. That means the series tends to keep a movement, at least for a short period. Finally, the study concluded the simple models provide better forecasts for exchange rates than the complicated approaches. ## Key Words: Exchange Rate, Forecasting, Autoregressive, Exponential Smoothing, Structural Break ## JEL Classification: C51; C53; C58 ## I) Introduction In most macroeconomic analyses, the exchange rate has always been an integral part of the macro models because the rate plays a crucial role in determining the export/import ratio, which is one of the fundamental parameters in GDP formation and inflation fluctuations [1]. If there are no transaction costs and trade barriers, the purchasing power parity (PPP) hypothesis states the exchange rate of two currencies equals the ratio of their inflation rates. The hypothesis has been subjected to numerous investigations; however, it has been rejected in most cases. In fact, a PPP-based exchange rate time series that is more stable than the market series can be calculated, and in most cases, two series are cointegrated. For instance, [2] found out there is no short-term co-movement between those time series, but in the long run, the market rates tend to move toward the PPP rates. Actually, the short-term decoupling happens because more parameters other than inflation rate influence the currency's ratio. In this regard, [3], by conducting several diagnostic tests, concluded there is a negative long-term nexus between the balance of trade and exchange rate. In contrast, [4] found a strong positive relationship between them; however, in the short run, the correlation could be insignificant or non-linear. In addition to those mentioned factors, the foreign debts and the credit risk are two examples of other
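Since the paper's headline result is that simple exponential smoothing (SES) beat the more sophisticated alternatives over a short horizon, a minimal SES baseline against a random-walk (no-change) benchmark can be set up as in the sketch below. The input file, column names, and the 10-day holdout are assumptions made for illustration; they are not the paper's actual data or evaluation design.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.holtwinters import SimpleExpSmoothing

# Hypothetical input: a CSV of daily USD/TRY closing rates.
rates = pd.read_csv("usdtry_daily.csv", parse_dates=["date"], index_col="date")["close"]

h = 10                                        # short out-of-sample horizon (days)
train, test = rates.iloc[:-h], rates.iloc[-h:]

ses = SimpleExpSmoothing(train).fit()         # smoothing level alpha is optimized
ses_pred = ses.forecast(h).to_numpy()
naive_pred = np.full(h, train.iloc[-1])       # no-change (random-walk) benchmark

rmse = lambda y, yhat: float(np.sqrt(np.mean((y - yhat) ** 2)))
print("alpha      :", round(ses.params["smoothing_level"], 4))
print("SES   RMSE :", rmse(test.to_numpy(), ses_pred))
print("naive RMSE :", rmse(test.to_numpy(), naive_pred))
```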
2304.12599
Normal forms for quasi-elliptic Enriques surfaces and applications
We work out normal forms for quasi-elliptic Enriques surfaces and give several applications. These include torsors and numerically trivial automorphisms, but our main application is the completion of the classification of Enriques surfaces with finite automorphism groups started by Kondo, Nikulin, Martin and Katsura-Kondo-Martin.
Toshiyuki Katsura, Matthias Schütt
2023-04-25T06:12:19Z
http://arxiv.org/abs/2304.12599v2
# Normal forms for quasi-elliptic Enriques surfaces and applications ###### Abstract. We work out normal forms for quasi-elliptic Enriques surfaces and give several applications. These include torsors and numerically trivial automorphisms, but our main application is the completion of the classification of Enriques surfaces with finite automorphism groups started by Kondo, Nikulin, Martin and Katsura-Kondo-Martin. Research of the first author is partially supported by JSPS Grant-in-Aid for Scientific Research (C) No.23K03066. _In each case, we only require that \((a_{1},a_{2})\not\equiv(0,0)\)._ One can also give a combined equation covering both cases to view the supersingular case as a specialization of the classical one, see (11.1) in Theorem 11.1. These equations are very useful for explicit computations, similar to the Weierstrass form of an elliptic surface (with section). We shall demonstrate this with three major applications. Our first application concerns the Enriques torsors above a given quasi-elliptic rational surface \(X\): **Theorem 1.2**.: _A general quasi-elliptic rational surface \(X\) admits an irreducible 4-dimensional family of torsors of classical Enriques surfaces and an irreducible 3-dimensional family of torsors of supersingular Enriques surfaces._ _More precisely, given any \(X\), the families of torsors have dimension one resp. two less than in the general case if and only if \(X\) has only two resp. one reducible fibre(s)._ In comparison, in characteristic zero there are 2-dimensional families of torsors of Enriques surfaces above a given rational elliptic surface (but they are not necessarily irreducible), and the same holds true for very general rational elliptic surfaces in any characteristic for moduli dimension reasons. Theorem 1.2 thus shows once again how special quasi-elliptic Enriques surfaces are. In the proof of Theorem 1.2, we will explicitly exhibit the torsors in question. This will also put us in the position to complete the classification of Enriques surfaces with finite automorphism groups, our second application. After the work of Kondo [14], Nikulin [21], and Martin [18], there are only the cases of classical and supersingular Enriques surfaces left where the possible graphs \(\Gamma\) of smooth rational curves have been computed in [12], but the automorphism groups and the moduli involved have not been determined completely yet. Using Theorem 1.1, we can remedy this with our second main result: **Theorem 1.3**.: _Let \(S\) be an Enriques surface with finite automorphism group. Then \(S\) appears in [12] or [18]._ _In particular, classical or supersingular Enriques surfaces with finite automorphism group form irreducible families, depending on \(\Gamma\), of the dimension and automorphism group stated in [12]._ It also follows that the subgroups of cohomologically or numerically trivial automorphisms are as stated in [12]. By [8], this leaves open only the case of Enriques surfaces with a cohomologically trivial automorphism of order \(3\). Using Theorem 1.1 and 1.2, we can also solve this: **Theorem 1.4**.: _Let \(S\) be an Enriques surface (in any characteristic) with a cohomologically trivial automorphism of order \(3\). Then \(S\) is a supersingular Enriques surface in characteristic \(2\) and belongs to the following family:_ \[S:\quad y^{2}=tx^{4}+\alpha t^{5}x^{2}+t^{7}x+t^{3}\quad(\alpha\in k). 
\tag{1.3}\] As a consequence, the full picture of numerically trivial automorphisms of Enriques surfaces follows (and similar for cohomologically trivial automorphisms): **Corollary 1.5**.: \(G\) _appears as group of numerically trivial automorphisms of some Enriques surfaces over \(k\) if and only if_ 1. \(G\in\{\{1\},\mathbb{Z}/2\mathbb{Z},\mathbb{Z}/4\mathbb{Z}\}\) _if char_\((k)\neq 2\)_;_ 2. \(G\in\{\{1\},\mathbb{Z}/2\mathbb{Z}\}\) _if char_\((k)=2\) _and the Enriques surfaces are singular;_ 3. \(G\in\{\{1\},\mathbb{Z}/2\mathbb{Z},(\mathbb{Z}/2\mathbb{Z})^{2}\}\) _if char_\((k)=2\) _and the Enriques surfaces are classical;_ 4. \(G\in\{\{1\},\mathbb{Z}/2\mathbb{Z},\mathbb{Z}/3\mathbb{Z},\mathbb{Z}/5\mathbb{ Z},\mathbb{Z}/7\mathbb{Z},\mathbb{Z}/11\mathbb{Z},Q_{8}\}\) _if char_\((k)=2\) _and the Enriques surfaces are supersingular._ One can also apply our results to the study of maximal root types supported on Enriques surfaces (i.e. rank 9 root lattices whose vertices correspond to smooth rational curves). In [26], there was given a complete classification of the maximal root types for singular and classical Enriques surfaces. This can now be complemented for many types on supersingular Enriques surfaces. For another direction of applications of our methods, see Remark 11.4. The paper is organized as follows. After reviewing the basics of Enriques surfaces needed for our work in Section 2, we start by developing an equation for nodal Enriques surfaces valid in any characteristic (Section 3). Then we specialize to the quasi-elliptic setting. Using the relative Jacobian and a detailed analysis of the singularities and their resolutions, including the impact on the canonical divisor, we derive the equations from Theorem 1.1; this line of argument covers Sections 4 through 11. The final three sections are concerned with the applications given above. ## 2. Basics on Enriques surfaces Let \(S\) be an Enriques surface, i.e. a smooth algebraic surface with \[b_{2}(S)=10,\ \ K_{S}\equiv 0\] regardless of the characteristic. Outside characteristic two, Enriques surfaces form an irreducible 10-dimensional family, but in characteristic two there are three classes of Enriques surfaces by [2] which depend on the Picard scheme \(\text{Pic}^{\tau}(S)\) as follows: \[\begin{array}{ll}\text{classical:}&\text{Pic}^{\tau}(S)=\mathbb{Z}/2\mathbb{Z} \\ \text{singular:}&\text{Pic}^{\tau}(S)=\mu_{2}\\ \text{supersingular:}&\text{Pic}^{\tau}(S)=\alpha_{2}\end{array}\] Each classical and singular Enriques surfaces form irreducible 10-dimensional families; their closures intersect in the 9-dimensional supersingular locus (cf. [17]). Singular Enriques surfaces behave as in characteristic zero in the sense that they are quotients of K3 surfaces by free involutions. Hence many ideas and results carry over; for instance the classification of singular Enriques surfaces with finite automorphism group is completely known thanks to [18] while there are a few open questions for the other types of Enriques surfaces which we will answer in this paper. By [9] we know that \(\rho(S)=10\) and \(\text{Num}(S)\cong U\oplus E_{8}\). 
In particular, \(\text{Num}(S)\) represents zero non-trivially, and by Riemann-Roch, \(S\) admits a genus one fibration \[f:\quad S\to\mathbb{P}^{1} \tag{2.1}\] Outside characteristic two, this comes with two multiple fibres (thus with no section), but in characteristic two the picture depends on the class of the Enriques surface. In each case, the multiple fibres (of multiplicity two) behave as follows by [5, Thms 5.7.5, 5.7.6]: \[\begin{array}{ll}\text{classical}&\text{two multiple fibers, both ordinary or additive}\\ \text{singular}&\text{one multiple fiber, ordinary or multiplicative}\\ \text{supersingular}&\text{one multiple fiber, supersingular or additive}\end{array}\] Many properties of an Enriques surface are governed by the question whether it contains a smooth rational curve \(C\) (often called a nodal curve or \((-2)\)-curve, for \(C^{2}=-2\)). Nodal Enriques surfaces have codimension 1 in moduli, and by [4, 16] one can arrange for \(C\) to appear as a bisection of (2.1) (or as a fibre component if that is preferable). In the next section, we will start by briefly considering nodal Enriques surfaces in full generality. These considerations will apply subsequently to quasi-elliptic Enriques surfaces in characteristic two, because these always contain a nodal curve, namely the curve of cusps. ## 3. The defining equation In this section, we let \(k\) denote an arbitrary algebraically closed field. Let \(S\) be a nodal Enriques surface and fix a genus one fibration (2.1) with nodal bisection \(C\). Let \(t\) be a coordinate of \(\mathbf{A}^{1}\subset\mathbf{P}^{1}\). Then, \(t\) has a pole of order one at the point \(P_{\infty}\) at infinity. We may assume that \(f:S\longrightarrow\mathbf{P}^{1}\) has a multiple fiber at \(P_{\infty}\). We set \(f^{-1}(P_{\infty})=2F_{\infty}\). Then, we have \[C^{2}=-2,\ (C\cdot F_{\infty})=1,\ F_{\infty}^{2}=0.\] By a suitable Mobius translation, we may assume that the fiber defined by \(t=0\) is a regular fiber. We have the following vanishing theorem. **Theorem 3.1**.: \((\)_Cossec, Dolgachev and Liedtke_[6, Theorem 2.1.15]\()\) _Let \(S\) be an Enriques surface defined over \(k\) and \(D\) be a nef and big divisor on \(S\). Then we have \(\mathrm{H}^{1}(S,\mathcal{O}_{S}(D))=0\)._ **Lemma 3.2**.: _(i) For any integer \(n\geq 2\), \(C+nF_{\infty}\) is nef and big._ _(ii) For any integer \(n\geq 4\), \(2C+nF_{\infty}\) is nef and big._ _(iii) For any integer \(n\geq 8\), \(4C+nF_{\infty}\) is nef and big._ Proof.: To prove (i), we use \((C+nF_{\infty})^{2}=-2+2n\). If \(n\geq 2\), then we get \((C+nF_{\infty})^{2}>0\), so \(C+nF_{\infty}\) is big. For any irreducible component \(E\) of \(F_{\infty}\), we have \((F_{\infty}\cdot E)=0\). Therefore, we have \(((C+nF_{\infty})\cdot E)\geq 0\). We also have \(((C+nF_{\infty})\cdot C)=-2+n\geq 0\) by assumption. For any irreducible curve \(C^{\prime}\), this gives \(((C+nF_{\infty})\cdot C^{\prime})\geq 0\). We conclude that \(C+nF_{\infty}\) is nef as claimed. The proofs of (ii) and (iii) are similar and thus omitted for brevity.
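The omitted cases (ii) and (iii) rest on the same intersection-number bookkeeping; a short plain-Python sketch, using only \(C^{2}=-2\), \((C\cdot F_{\infty})=1\) and \(F_{\infty}^{2}=0\) as stated above, records the numbers behind all three cases.

```python
# Intersection numbers behind Lemma 3.2, with C^2 = -2, C.F = 1, F^2 = 0 (F = F_infty).
def square(a, n):      # (aC + nF)^2
    return -2 * a * a + 2 * a * n

def dot_with_C(a, n):  # (aC + nF).C
    return -2 * a + n

# For n >= 2, 4, 8 respectively, aC + nF has positive square (big) and meets C
# non-negatively; against fibre components E the F-part contributes nothing since (F.E) = 0.
for a, n in [(1, 2), (2, 4), (4, 8)]:
    print(f"({a}C + {n}F)^2 = {square(a, n)},  ({a}C + {n}F).C = {dot_with_C(a, n)}")
    assert square(a, n) > 0 and dot_with_C(a, n) >= 0
```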
**Lemma 3.3**.: _(i) For \(n\geq 2\), \(\dim L(C+nF_{\infty})=n\)._ _(ii) For \(n\geq 4\), \(\dim L(2C+nF_{\infty})=2n-3\)._ _(iii) For \(n\geq 8\), \(\dim L(4C+nF_{\infty})=4n-15\)._ Proof.: Since we have \(\chi(\mathcal{O}_{S})=1\), by the Riemann-Roch theorem we have \[\chi(\mathcal{O}_{S}(C+nF_{\infty}))=\{(C+nF_{\infty})^{2}-(C+nF_{\infty})\cdot K_{S}\}/2+1.\] By Theorem 3.1, Lemma 3.2 and the Serre duality theorem, we have \[\mathrm{H}^{i}(S,\mathcal{O}_{S}(C+nF_{\infty}))=0\quad(i=1,2)\quad\text{for $n\geq 2$}.\] Therefore, we have \(\dim L(C+nF_{\infty})=n\). The proofs of (ii) and (iii) are similar. We will use Lemma 3.3 to calculate a defining equation of \(S\). Since \(\dim L(2F_{\infty})=2\), \(\dim L(C+2F_{\infty})=2\) and \(L(2F_{\infty})\subset L(C+2F_{\infty})\), we have \[L(2F_{\infty})=L(C+2F_{\infty}).\] Therefore, \(\{1,t\}\) gives a basis of \(L(C+2F_{\infty})\). Since \(\dim L(C+3F_{\infty})=3\), there exists an element \(x\in L(C+3F_{\infty})\) such that \(\{1,t,x\}\) gives a basis of \(L(C+3F_{\infty})\). We have \[(x)_{\infty}=C+G_{1} \tag{3.1}\] with an effective divisor \(G_{1}\) such that \(G_{1}\subset 3F_{\infty}\) and \(G_{1}\not\subset 2F_{\infty}\). We have \(L(2C+3F_{\infty})\supset L(C+3F_{\infty})\). Suppose that there exists an element \(f\in L(2C+3F_{\infty})\) such that \((f)_{\infty}=2C+G_{0}\) with \(G_{0}\subset aF_{\infty}\)\((0\leq a\leq 3)\). Then, we have \((f)=D-(2C+G_{0})\). Here, \(D\) is an effective divisor which does not contain any irreducible component of \(2C+G_{0}\). Therefore, we have \[0=((f)\cdot C)\geq(D\cdot C)+(4-a)>0,\] a contradiction. Therefore, we have \[L(2C+3F_{\infty})=L(C+3F_{\infty}).\] By Lemma 3.3 we have \(\dim L(2C+4F_{\infty})=5\). We see \(1,t,t^{2},x\in L(2C+4F_{\infty})\), and so there exists an element \(y\in L(2C+4F_{\infty})\) such that \(\{1,t,t^{2},x,y\}\) is a basis of \(L(2C+4F_{\infty})\). We have \[(y)_{\infty}=2C+G_{2} \tag{3.2}\] with an effective divisor \(G_{2}\) such that \(G_{2}\subset 4F_{\infty}\) and \(G_{2}\not\subset 3F_{\infty}\). We also have \[(t)_{\infty}=2F_{\infty}. \tag{3.3}\] We consider a vector space \(L(4C+16F_{\infty})\). By (3.1), (3.2) and (3.3), we see that the following 50 functions are contained in \(L(4C+16F_{\infty})\): \[\begin{array}{l}t^{i}\;(0\leq i\leq 8),\;t^{i}x\;(0\leq i\leq 6),\;t^{i}x^{2}\;(0\leq i\leq 5),\\ t^{i}x^{3}\;(0\leq i\leq 3),\;t^{i}x^{4}\;(0\leq i\leq 2),\\ t^{i}y\;(0\leq i\leq 6),\;t^{i}y^{2}\;(0\leq i\leq 4),\;t^{i}xy\;(0\leq i\leq 4),\;t^{i}x^{2}y\;(0\leq i\leq 3)\end{array}\] On the other hand, by Lemma 3.3, we have \(\dim L(4C+16F_{\infty})=49\). Therefore, these 50 functions are linearly dependent over \(k\). We denote by \(g_{i}(t)\), \(h_{i}(t)\), \(k_{i}(t)\) polynomials of degree less than or equal to \(i\) with variable \(t\). A non-trivial linear relation between these functions is expressed as \[\begin{array}{l}h_{4}(t)y^{2}+k_{3}(t)x^{2}y+k_{4}(t)xy+h_{6}(t)y\\ \quad=g_{2}(t)x^{4}+g_{3}(t)x^{3}+g_{5}(t)x^{2}+g_{6}(t)x+g_{8}(t)\end{array} \tag{3.4}\] with indices indicating the degree of the respective polynomial in \(k[t]\). Now, we consider the generic fiber \(E\) of \(f:S\longrightarrow\mathbf{P}^{1}\). Then, this is a curve of genus 1 over \(k(t)\) and the bisection \(C\) gives a point \(P\) of degree 2 on the curve \(E\). By the Riemann-Roch theorem for curves, we have \(\dim L(nP)=2n\) for \(n\geq 1\).
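The count of 50 functions in \(L(4C+16F_{\infty})\) can also be checked mechanically from the pole bounds \((t)_{\infty}=2F_{\infty}\), \((x)_{\infty}\subset C+3F_{\infty}\), \((y)_{\infty}\subset 2C+4F_{\infty}\); the short enumeration sketch below (plain Python, assuming only these bounds) reproduces the list above.

```python
# Enumerate monomials t^i * x^j * y^k contained in L(4C + 16F):
# the pole divisor of t^i x^j y^k is bounded by (j + 2k) C + (2i + 3j + 4k) F,
# so membership requires j + 2k <= 4 and 2i + 3j + 4k <= 16.
monomials = [
    (i, j, k)
    for k in range(3)
    for j in range(5)
    for i in range(9)
    if j + 2 * k <= 4 and 2 * i + 3 * j + 4 * k <= 16
]
print(len(monomials))      # 50, one more than dim L(4C + 16F) = 49,
assert len(monomials) == 50  # which forces the linear relation (3.4)
```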
By the consideration above, we see \(\langle 1,x\rangle\) is a basis of \(L(P)\), and \(\langle 1,x,x^{2},y\rangle\) is a basis of \(L(2P)\). It is easy to see that \(1,x,x^{2},x^{3},x^{4},y,y^{2},xy,x^{2}y\) are elements of \(L(4P)\). Since \(\dim L(4P)=8\), these 9 elements are linearly dependent over \(k(t)\). Therefore, the equation (3.4) is nothing but the desired linear relation over \(k(t)\). We continue to argue with \(2P\). Since this is very ample on \(E\), \(E\) is embedded into \(\mathbb{P}^{3}\) via \([X_{0},X_{1},X_{2},X_{3}]=[1,x,x^{2},y]\). But then the image lies on the conic \(\{X_{0}X_{2}=X_{1}^{2}\}\), so we can eliminate \(X_{2}\) and obtain (3.4) as an affine equation over \(k(t)\). This means that our Enriques surface is birationally equivalent to the surface defined by the equation (3.4). Hence, we have the following theorem. **Theorem 3.4**.: _Any nodal Enriques surface is birationally expressed by (3.4)._ In the next section, we will specialize to the quasi-elliptic setting in characteristic two to derive a much more convenient equation (as ultimately displayed in Theorem 1.1) which will also lend itself to several applications. ## 4. Queen type equations We now specialize to the situation in characteristic two where the genus one fibration is quasi-elliptic and the nodal curve \(C\) is the curve of cusps. We will use this setting to simplify the equation (3.4). Our results should be compared to those of Queen [23], [24]. The main difference is that Queen works over a field, so he can simplify further, while we prefer to preserve some polynomial shape with good control over the degrees involved. We let \(K=k(t)\) and consider the degree two extension \(K(E)/K(x)\). In what follows, we distinguish whether this extension is separable or not. We argue in complete analogy with Section 3 using the point \(P\) at infinity corresponding to the curve of cusps. ### Inseparable case As above, \(\{1,x,x^{2},y\}\) is a basis of \(L(2P)\), and we have \(K(E)=K(x,y)\). Since \(K(E)/K(x)\) is a purely inseparable extension of degree 2, we see that \(y^{2}\in K(x)\). On the other hand, we have \(y^{2}\in L(4P)\) and \[K+Kx+Kx^{2}+Kx^{3}+Kx^{4}=K(x)\cap L(4P).\] Therefore, there exist elements \(a,b,c,d,e\in K\) such that \[y^{2}=ax^{4}+bx^{3}+cx^{2}+dx+e.\] Suppose that \(b\) is not zero. Then, differentiating this equation with respect to \(x\), we have the singular locus of \(E\) defined by \(bx^{2}+d=0\), which contradicts our assumption that the infinite point \(P\) is the cusp. Thus \(b=0\) and equation (3.4) becomes \[h_{4}(t)y^{2}=g_{2}(t)x^{4}+g_{5}(t)x^{2}+g_{6}(t)x+g_{8}(t). \tag{4.1}\] ### Separable case We consider \(L(4P)\). Then, by the Riemann-Roch theorem we have \(\dim L(4P)=8\). Since \(1\), \(x\), \(x^{2}\), \(x^{3}\), \(x^{4}\), \(y\), \(xy\), \(x^{2}y\) and \(y^{2}\) are elements of \(L(4P)\), we have a linear relation \[y^{2}+(ax^{2}+bx+c)y=dx^{4}+ex^{3}+fx^{2}+gx+h \tag{4.2}\] with \(a,b,c,d,e,f,g,h\in K\). Note that by considering the pole order at \(P\), the coefficients of \(y^{2}\) and of \(x^{4}\) are non-zero, so we can take the coefficient of \(y^{2}\) to be 1 and assume \(d\neq 0\). By the change of coordinates \(X=1/x\) and \(Y=y/x^{2}\), we have \[Y^{2}+(a+bX+cX^{2})Y=d+eX+fX^{2}+gX^{3}+hX^{4}. \tag{4.3}\] By our assumption, the point \(P\) of degree 2 defined by \(X=0\) is the cusp singularity.
Therefore, differentiating with respect to \(X\) and \(Y\), we infer that \(X=0\) must be a solution of the equations \[a+bX+cX^{2}=0,\ \ \ bY=e+gX^{2}.\] Therefore, we have \(a=0\) and \(bY=e\). Suppose \(b\neq 0\). Then, we have \(Y=e/b\). Substituting these results to the equation (4.3), we have \((e/b)^{2}=d\), and equation (4.2) becomes \[y^{2}+(bx+c)y=(e/b)^{2}x^{4}+ex^{3}+fx^{2}+gx+h.\] By the change of coordinates \(\tilde{y}=y+(e/b)x^{2}\), this equation is converted to a cubic in \(x,y\). By inspection, it attains a section at infinity. In particular, this quasi-elliptic surface cannot have multiple fibers, which contradicts our assumption. Hence, we see \(b=0\) and \(e=0\), and our equation becomes \[y^{2}+cy=dx^{4}+fx^{2}+gx+h.\] Since \(K(x,y)/K(x)\) is separable, we have \(c\neq 0\). Applying this calculation to the equation (3.4), we obtain \[h_{4}(t)y^{2}+h_{6}(t)y=g_{2}(t)x^{4}+g_{5}(t)x^{2}+g_{6}(t)x+g_{8}(t). \tag{4.4}\] Note that this contains (4.1) as a subfamily, though in what follows the two equations will sometimes display quite a different behaviour. ## 5. General normal form We aim to convert equations (4.1) and (4.4) alike to a general normal form. To this end, we multiply both sides of (4.4) by \(h_{4}\). Replacing \(h_{4}y\) by \(y\), we obtain the equation \[y^{2}+h_{6}y=h_{4}g_{2}x^{4}+h_{4}g_{5}x^{2}+h_{4}g_{6}x+h_{4}g_{8}.\] Writing \(h_{4}g_{2}=h_{3}^{2}+th_{2}^{2}\), we can translate \(y\) by \(h_{3}x^{2}\) to get \[y^{2}+h_{6}y=th_{2}^{2}x^{4}+(h_{4}g_{5}+h_{3}h_{6})x^{2}+h_{4}g_{6}x+h_{4}g_{8}.\] Dividing \(x\) and \(y\) by \(h_{2}\), this leads to \[y^{2}+h_{2}h_{6}y=tx^{4}+(h_{4}g_{5}+h_{3}h_{6})x^{2}+h_{2}h_{4}g_{6}x+h_{2}^{2 }h_{4}g_{8}. \tag{5.1}\] We could continue by analysing this equation (for instance the special fibres at the zeroes of \(h_{2}\), or the purported multiple fibre at \(\infty\)), but for the sake of a unified treatment we will content ourselves with the overall shape of a complete model. To this end, we attach the weights \(9\) to \(y\) and \(4\) to \(x\) and homogenize (5.1) as an equation of degree \(18\) in \(\mathbb{P}[1,1,4,9]\): \[y^{2}+a_{9}y=stx^{4}+a_{10}x^{2}+a_{14}x+a_{18}. \tag{5.2}\] Here and in what follows, the \(a_{i}\) will be regarded as homogenous polynomials in \(k[s,t]\) of degree (exactly!) given by the index, though we will take the liberty to suppress \(s\) from notation for ease of presentation. If necessary, a complete model of the surface can be described by 4 affine charts, namely 1. the chart in (5.1) and those obtained from it as follows: 2. the chart with affine coordinates \(X=1/x,Y=y/x^{2},t\) as in Section 4.2; 3. the standard chart at \(t=\infty\) with affine coordinates \(s=1/t,u=x/s^{4},v=y/s^{9}\); 4. the chart analogous to the second one with coordinates \(U=1/u,V=v/u^{2},s\). Note that the shape of (5.2) is preserved by the admissible coordinate transformations \[(x,y)\mapsto(x+b_{4},y+b_{1}x^{2}+b_{5}x+b_{9}) \tag{5.3}\] where \(b_{i}\in k[t]\) of degree \(i\). **Lemma 5.1**.: _There is an admissible transformation converting (5.2) to_ \[S:\quad y^{2}+a_{9}y=tx^{4}+ta_{4}^{2}x^{2}+a_{14}x+t^{3}a_{3}^{4}. \tag{5.4}\] _Remark 5.2_.: (i) The shape of (5.4) is symmetric in \(t\) and the suppressed homogenizing variable \(s\). This will be quite useful later when we locate the multiple fibres (in the classical case) at \(0\) and \(\infty\), cf. (1.1). 
(ii) Note that we have not made any assumption on the special fibres at \(t\) (or \(\infty\)) yet, so (5.4) is universally valid locally at any given fibre (but with coefficients depending on the choice of fibre). Proof.: The proof of Lemma 5.1 relies on the following general easy result: **Lemma 5.3**.: _Let \(k\) be an algebraically closed field. Let \(n\in\mathbb{N}\) and \(h_{1},\dots,h_{n}\in k[z_{1},\dots,z_{n}]\) such that for each \(i\)_ \[h_{i}=z_{i}^{d_{i}}+(\text{terms of total degree }<d_{i}).\] _Then there is a common zero of all \(h_{i}\) in \(k^{n}\)._ Proof of Lemma 5.3.: Homogenizing the equations by an additional variable \(z_{0}\), we deduce that there is a solution in \(\mathbb{P}^{n}(k)\). The degree assumptions directly imply that there is no solution in the hyperplane \(z_{0}=0\), so the claim follows. To continue the proof of Lemma 5.1, we spell out the coefficients of (5.2) and (5.3) as \[a_{i}=\sum_{j=0}^{i}\alpha_{i,j}t^{i},\quad b_{i}=\sum_{j=0}^{i}\beta_{i,j}t^{i}\] Converting (5.2) to (5.4) by way of the admissible transformation (5.3) amounts to \(b_{1}\equiv 0\) and solving the following system of 21 equations: \[\alpha_{10,2j} = \beta_{5,j}^{2}+\sum_{l=0}^{2j}\alpha_{9,l}\beta_{5,2j-l}\;\;\;(j =0,\ldots,5)\] \[\alpha_{18,2j} = \beta_{9,j}^{2}+\sum_{l=0}^{2j}\alpha_{9,l}\beta_{9,2j-l}\;\;\;(j =0,\ldots,9)\] \[\alpha_{18,4j+1} = \beta_{4,j}^{4}+\sum_{l=0}^{2j}\alpha_{10,4j+1-2l}\beta_{4,l}^{2} +\sum_{l=0}^{14}\alpha_{14,l}\beta_{4,4j+1-l}\] \[\qquad\qquad+\sum_{l=0}^{4j+1}\alpha_{9,l}\beta_{9,4j+1-l}\;\;\; (j=0,\ldots,4)\] Considering the \(\beta_{i,j}\) as variables, the system obviously satisfies the conditions of Lemma 5.3, so the claim follows. **Corollary 5.4**.: _(i) The set of admissible transformations (5.3) converting (5.2) to (5.4) is finite. (ii) Given a quasi-elliptic fibration \(S\to\mathbb{P}^{1}\) on an Enriques surface, the normal form (5.4) is rigid up to scaling \(x\) by some non-zero constant and \(y\) by its square._ Proof.: _(i)_ This is implicit in the proof of Lemma 5.1. Since \(b_{1}\equiv 0\), we can consider the zero set \(Z\subset\mathbb{A}^{21}\) given by the 21 equations above. If \(Z\) were positive-dimensional, then \(\bar{Z}\subset\mathbb{P}^{21}\) would intersect any hyperplane non-trivially. However, we argued that \(\bar{Z}\cap Z(z_{0})=\emptyset\), contradiction. _(ii)_ This follows directly from _(i)_. Philosophically, one should consider (5.4) as a replacement of the Weierstrass equation of a (smooth) genus one curve with a rational point. Indeed we shall see soon that in the case of (quasi-elliptic) Enriques surfaces, it shares many convenient features with the standard Weierstrass form. For instance, we will see this in action when working out explicit linear systems in sections 13.8, 13.9. The analogous equations for general nodal Enriques surfaces (without the assumption of being quasi-elliptic, and in fact in any characteristic) are to be exploited in future work. ## 6. Relative Jacobian In this section, we work out the Weierstrass form of the relative Jacobian of the quasi-elliptic fibration (5.4). By Queen [24], the relative Jacobian of (5.4) is given by omitting the constant term: \[\operatorname{Jac}(S):\quad y^{2}+a_{9}y=tx^{4}+ta_{4}^{2}x^{2}+a_{14}x. \tag{6.1}\] **Lemma 6.1**.: _The relative Jacobian admits the Weierstrass form_ \[Y^{2}=X^{3}+(a_{9}^{2}t+a_{4}^{4}t^{2})\,X+a_{14}^{2}t \tag{6.2}\] Proof.: To convert (6.1) to Weierstrass form, we formally have to distinguish whether \(a_{9}\not\equiv 0\) or not. 
### \(a_{9}\not\equiv 0\) We convert to Queen's second standard from [23] by the change of coordinates \[y=y_{1}+\frac{a_{14}}{a_{9}}x+\left(\frac{a_{4}^{2}t}{a_{9}}+\frac{a_{14}^{2}} {a_{9}^{3}}\right)x^{2},\] as we get \[y_{1}^{2}+a_{9}y_{1}=\underbrace{\left(t+\frac{a_{4}^{4}t^{2}}{a_{9}^{2}}+ \frac{a_{14}^{4}}{a_{9}^{6}}\right)}_{h}x^{4}.\] The change of coordinates \(x=x_{2}/y_{2},\ \ y_{1}=x_{2}/y_{2}^{2}\) gives the cubic \[x_{2}+a_{9}y_{2}^{2}=hx_{2}^{3}.\] Writing \(y_{2}=y_{3}a_{9}/h,\ \ x_{2}=x_{3}a_{9}^{3}/h\), we derive the following equation which is monic in \(x_{3}\) and in \(y_{3}\): \[y_{3}^{2}=x_{3}^{3}+(a_{4}^{4}a_{9}^{4}t^{2}+a_{9}^{6}t+a_{14}^{4})x_{3}.\] This simplifies further by setting \[x_{3}=a_{9}^{2}X+a_{14}^{2},\ \ \ y_{3}=a_{9}^{3}Y+a_{14}a_{9}^{2}X+a_{14}a_{4}^{2 }a_{9}^{2}t\] and results exactly in the Weierstrass form (6.2). ### \(a_{9}\equiv 0\) In this case, the affine chart at \(\infty\) with coordinates \(Y=y/x^{2},X=1/x\) readily returns a cubic starting from (6.1). This is easily transformed into Weierstrass form - and exactly yields (6.2) with \(a_{9}\equiv 0\) ## 7. Rationality vs. minimality By [2], if \(S\) is an Enrique surface, then the relative Jacobian \(\operatorname{Jac}(S)\) is a rational surface. Therefore, the degree of discriminant of the minimal equation (as a homogeneous polynomial, i.e. including contributions at \(\infty\)) is 8 by [10]. Presently, the discriminant \(\Delta\) is expressed as \[\Delta=(a_{9}^{2}t+a_{4}^{4}t^{2})a_{9}^{4}+a_{14}^{4},\] a homogeneous polynomial of degree \(56\). Thus the Weierstrass form (6.2) is highly non-minimal; in particular, this implies that there is a degree 4 polynomial \(g\) such that \[g^{12}\mid\Delta.\] As one of the special features of characteristic two, we have the same divisibility property for the formal derivative: \[g^{12}\mid\Delta^{\prime}=a_{9}^{6}\quad\Longrightarrow\quad g^{2}\mid a_{9} =g^{2}a_{1}.\] In turn, the shape of \(\Delta\) then implies that \(g^{2}\mid a_{14}=g^{2}a_{6}\), and moreover \[g^{4}\mid a_{4}^{4}t^{2}a_{1}^{4}+a_{6}^{4}\quad\Longrightarrow\quad g^{2} \mid ta_{4}^{2}a_{1}^{2}+a_{6}^{2}.\] Since the last sum decomposes into an even and odd part, we deduce as before using the formal derivative that \[g\mid a_{4}a_{1}\quad\text{and}\quad g\mid a_{6}\ \ (\text{so}\ \ g^{3}\mid a_{14}). \tag{7.1}\] In view of the degrees of the polynomials involved, these divisibility properties are quite restrictive, especially the left-most one. We will make use of this to prove the following important simplification: **Lemma 7.1**.: _In the above setting, we have \(g\mid a_{4}\) (in addition to \(g^{2}\mid a_{9},\ g^{3}\mid a_{14}\))._ Proof.: Assuming the contrary, there is a linear form \(\ell\) dividing \(g\) with multiplicity \(m\) such that \(\ell^{m}\nmid a_{4}\). Then (7.1) implies that \(\ell^{2m+1}\mid a_{9}\). By the universality of the normal form (5.4) (cf. Remark 5.2 (ii)), we may as well assume that \(\ell=t\). Thus the Weierstrass form of the relative Jacobian reads \[Y^{2}=X^{3}+(a_{9}^{2}t+a_{4}^{4}t^{2})\,X+a_{14}^{2}t \tag{7.2}\] We first assume that \(a_{9}\not\equiv 0\). Then (7.1) gives in fact exact divisibilities \[t^{2m+1}\mid\mid a_{9}=t^{2m+1}b_{9},\quad t^{m-1}\mid\mid a_{4}=t^{m-1}b_{4}\] for degree reasons (and by assumption). 
Then the Weierstrass form (7.2) can be minimalized \(m-1\) times by setting \(X=t^{2m-2}X^{\prime},\ \ Y=t^{3m-3}Y^{\prime}\), but the resulting Weierstrass form \[Y^{\prime 2}=X^{\prime 3}+(t^{7}b_{9}^{2}+t^{2}b_{4}^{4})X^{\prime}+t^{7}b_{14}^{ 2} \tag{7.3}\] is minimal since Tate's algorithm immediately returns fibre type \(\mathrm{I}_{n}^{*}\) for some \(n>0\). In fact, since (7.3) has discriminant still divisible by \(t^{12}\) by construction, we infer \(n\geq 8\), but then the contribution of this fibre to Euler-Poincare characteristic already prevents \(\mathrm{Jac}(S)\) from being rational. For \(a_{9}\equiv 0\), the argument is completely analogous, as we have to minimize at most \(m-1\) times to arrive at the same kind of fibre type, so we skip the details. We remark that, with the divisibilities of Lemma 7.1 in effect, the relative Jacobian \(\mathrm{Jac}(S)\) is indeed verified to be rational as it can be minimalised at each zero of \(g\) (counted with multiplicity). Indeed, in terms of the factorizations \[a_{9}=g^{2}a_{1},\quad a_{4}=a_{0}g,\quad a_{14}=g^{3}a_{2}\] implied by Lemma 7.1, the minimal model is given by \[\mathrm{Jac}(S):\quad Y^{2}=X^{3}+\left(a_{1}^{2}t+a_{0}^{4}t^{2}\right)X+a_{2 }^{2}t. \tag{7.4}\] This has discriminant \[\Delta(t)=(a_{1}^{2}t+a_{0}^{4}t^{2})a_{1}^{4}+a_{2}^{4}\] of degree 8. For later use, we record the possible configurations of reducible fibres of \(\mathrm{Jac}(S)\). **Lemma 7.2**.: _The reducible fibres of \(\mathrm{Jac}(S)\) are determined as follows:_ 1. _If_ \(a_{1}\equiv 0\)_, then there are_ 1. _two fibres of type_ \(\mathrm{I}_{0}^{*}\) _at the zeroes of_ \(a_{2}\) _if_ \(a_{2}\) _is not a square;_ 2. _one fibre of type_ \(\mathrm{I}_{4}^{*}\) _at the zero of_ \(a_{2}\) _if_ \(a_{2}\) _is a square and_ \(a_{0}\neq 0\)_;_ 3. _one fibre of type_ \(\mathrm{II}^{*}\) _at the zero of_ \(a_{2}\) _if_ \(a_{2}\) _is a square and_ \(a_{0}=0\)_._ 2. _If_ \(a_{1}\not\equiv 0\) _and_ \(a_{1}\nmid a_{2}\)_, then there are eight fibres of type_ \(\mathrm{III}\)_._ 3. _If_ \(a_{1}\not\equiv 0\) _and_ \(a_{1}\mid a_{2}\)_, then write_ \(a_{2}=a_{1}b_{1}\) _and_ \(h=a_{1}^{2}t+a_{0}^{4}t^{2}+b_{1}^{4}\)_, so that_ \(\Delta=a_{1}^{4}h\)_._ 1. _If_ \(h\) _has four different roots, then there are one fibre of type_ \(\mathrm{I}_{0}^{*}\) _(at the zero of_ \(a_{1}\)_) and four fibres of type_ \(\mathrm{III}\) _(at the roots of_ \(h\)_);_ 2. _if_ \(h\) _has a double root and two simple roots, then there are one fibre of type_ \(\mathrm{I}_{2}^{*}\) _(at the zero of_ \(a_{1}\)_) and two fibres of type_ \(\mathrm{III}\) _(at the simple roots of_ \(h\)_);_ 3. _if_ \(h\) _has a triple root and a simple root, then there are one fibre of type_ \(\mathrm{III}^{*}\) _(at the root of_ \(a_{1}\)_) and one fibre of type_ \(\mathrm{III}\) _(at the simple root of_ \(h\)_)._ Proof.: Reducible fibres are encoded in multiple roots of \(\Delta\). If \(a_{1}\not\equiv 0\), then considering the formal derivative \(\Delta^{\prime}=a_{1}^{6}\) again shows that only the root of \(a_{1}\) may be multiple. The claims then follow from an easy case-by-case analysis in parallel to the results from Ito [10]. The case \(a_{1}\equiv 0\) follows similarly. ## 8. Singularity analysis I We turn to the model of \(S\) which we have derived so far: \[S:\quad y^{2}+a_{1}g^{2}y=tx^{4}+a_{0}tg^{2}x^{2}+a_{2}g^{3}x+t^{3}a_{3}^{4}. 
\tag{8.1}\] We start by analysing the singularities outside the fibres above the zeroes of \(g\): **Lemma 8.1**.: _Let \(t_{0}\in\mathbb{P}^{1}\) be a point which is not a zero of \(g\). Then (8.1) has at worst ADE-singularities in the fibre above \(t_{0}\)._ Proof.: After a Mobius transformation, if necessary, we may assume that \(t_{0}=0\), using again the universality of (8.1) as explained in Remark 5.2 (i). By the Jacobi criterion, the fibre at \(t_{0}\) in the affine chart (8.1) contains a singularity if and only if \(t\mid a_{1},a_{2}\). That is, we are outside case (ii) of Lemma 7.2, and (the strict transform of) the double fibre component \(\Theta=\{t=y=0\}\) is part of the starred fibre. One directly verifies that this fibre arises by resolving the ADE-singularities at the points \((x,y,t)=(\alpha,0,0)\) where \(\alpha\) runs through the roots of the auxiliary polynomial \[r=x^{4}+a_{0}g(0)^{2}x^{2}+(a_{2}/t)(0)g(0)^{3}x.\] We will give a few more details in the proof of Lemma 8.2. It remains to check the point at infinity - the cusp, if the fibre is irreducible. Here the partial derivative with respect to \(t\) always returns \(1\), so the point is never a surface singularity(!). With a view towards our goal of determining when \(S\) is an Enriques surface (so that it has exactly one or two multiple fibres), we record the following useful consequence: **Lemma 8.2**.: _If \(g(t_{0})\neq 0\), then the fibre of \(S\) at \(t_{0}\) is simple._ Proof.: If the fibre contains no surface singularity, then it is automatically reduced of Kodaira type II or III, and the claim follows. Otherwise, we may assume that \(t_{0}=0\) as before and derive \(a_{1}=tc_{0},a_{2}=tb_{1}\) from the proof of Lemma 8.1. We now analyse the resolution of the ADE singularities. To this end, we blow-up along \(\Theta\); in the affine chart \(y=ty^{\prime}\), we obtain the strict transform \[ty^{\prime 2}+c_{0}tg^{2}y^{\prime}=x^{4}+a_{0}g^{2}x^{2}+b_{1}g^{3}x+t^{2}a_{3}^{4}. \tag{8.2}\] The fibre components at \(t=0\) are given by the zeroes of the polynomial \(r\) from the proof of Lemma 8.1. In particular, if \(r\neq x^{4}\), then the strict transform of \(\Theta\) has at least two adjacent fibre components inside the starred fibre. Since only terminal components of starred fibres may be simple by the work of Kodaira [13] and Tate [29], it follows that \(\Theta\) is a multiple component of the underlying Kodaira divisor. But then the fibre has to be simple since \(\Theta\) has multiplicity two. (In more detail, the fibre has type I\({}_{0}^{*}\) if \(t\nmid b_{1}\); I\({}_{2}^{*}\) if \(t\mid b_{1}\), but \(c_{0}\neq 0\); and I\({}_{4}^{*}\) if \(t\mid b_{1}\) and \(c_{0}=0\) (but \(b_{1}\not\equiv 0\)).) It remains to consider the case \(r=x^{4}\) which translates as \[a_{0}=0,\ \ \ a_{2}=b_{0}t^{2},\ \ \ a_{1}=c_{0}t.\] Then (8.2) reveals the 4-fold fibre component \(\Theta^{\prime}=\{t=x=0\}\). We continue to blow up along \(\Theta^{\prime}\). In the affine chart \(t=xt^{\prime}\), the strict transform reads \[t^{\prime}y^{\prime 2}+c_{0}t^{\prime}g^{2}y^{\prime}=x^{3}+t^{\prime}b_{0}g^{3}x+t^{\prime 2}a_{3}^{4}x. \tag{8.3}\] At \(x=0\) (which describes the fibre at \(t=xt^{\prime}=0\)), we obtain two simple fibre components if \(c_{0}\neq 0\). Thus the fibre itself is simple as claimed, and two analogous further blow-ups add another two components each, of multiplicity two resp. three, to make for the fibre of Kodaira type III\({}^{*}\).
Meanwhile, if \(c_{0}=0\), then (8.3) returns another double fibre component given by \(\Theta^{\prime\prime}=\{x=y^{\prime}=0\}\). It contains an \(A_{5}\)-singularity at \((0,0,0)\) but more importantly, there is also an \(A_{1}\)-singularity in the other chart \(x=tx^{\prime}\). Its resolution results in a simple fibre component, confirming the claim of the lemma. (The overall configuration of exceptional curves gives a fibre of Kodaira type II\({}^{*}\).) ## 9. Singularity analysis II Let \(\alpha\) be a root of \(g\). In analogy with the case of (quasi-)elliptic surfaces, we call \(\alpha\) non-minimal (for the model (8.1)) if \(a_{3}(\alpha)=0\). Indeed, at a non-minimal root, we can apply a change of variables \[x=(t-\alpha)x^{\prime},\ \ \ y=(t-\alpha)^{2}y^{\prime},\] reducing the degree of (8.1) by \(4\) to \(14\) (and those of all coefficients accordingly) while embedding the surface in \(\mathbb{P}[1,1,3,7]\). In analogy, the degree of the discriminant drops by \(12\) - exactly as in the proof of Lemma 7.1 (or as in the last step of Tate's algorithm [29]). **Proposition 9.1**.: _At the roots of \(g\), either (8.1) is non-minimal or \(S\) has a double fibre._ Since \(S\) is an Enriques surface, the proposition has the following important consequence: **Corollary 9.2**.: \(g\) _has exactly two minimal roots, if \(S\) is classical, resp. one minimal root, if \(S\) is supersingular._ Proof of Proposition 9.1.: Let \(\alpha\) be a root of \(g\) (of multiplicity \(m\geq 1\)). By the universality of (5.4), we may assume that \(\alpha=0\) and write \[a_{4}=t^{m}b_{4},\ \ \ a_{9}=t^{2m}b_{9},\ \ \ a_{14}=t^{3m}b_{14}\] as before. We proceed by resolving the singularity at \((0,0,0)\) until we reach a model with ADE singularities at worst. The first two steps where we blow up along smooth curves rather than in single points are exactly as in the proof of Lemma 8.2 (but with the additional divisibilities of coefficients provided by the root of \(g\)). Indeed, the fibre of (8.1) at \(t=0\) is the double component \(\Theta=\{t=y=0\}\), and we set out by blowing up \(S\) along \(\Theta\). It suffices to consider the following chart: ### 1st blow-up: \(y=ty^{\prime}\) Then the strict transform of (8.1) is \[ty^{\prime 2}+t^{2m}b_{9}y^{\prime}=x^{4}+t^{2m}b_{4}^{2}x^{2}+t^{3m-1}b_{14} x+t^{2}a_{3}^{4} \tag{9.1}\] with 4-fold fibre component \(\Theta_{1}=\{t=x=0\}\) and singular point at \((0,0,0)\). We continue to blow-up along \(\Theta_{1}\). Again, one affine chart suffices to investigate the exceptional curve: ### 2nd blow-up: \(t=xt^{\prime}\) The strict transform of (9.1) reads \[t^{\prime}y^{\prime 2}+t^{\prime 2m}b_{9}(xt^{\prime})x^{2m-1}y^{ \prime} = x^{3}+t^{\prime 2m}b_{4}(xt^{\prime})^{2}x^{2m+1}\] \[\ \ \ +t^{\prime 3m-1}b_{14}(xt^{\prime})x^{3m-1}+t^{\prime 2}a_{3 }(xt^{\prime})^{4}x.\] At \(x=0\), we recover the 4-fold fibre component \(\Theta_{1}=\{x=t^{\prime}=0\}\) as well as the double fibre component \(\Theta_{2}=\{x=y^{\prime}=0\}\), intersecting \(\Theta_{1}\) transversely in the surface singularity \((0,0,0)\). By inspection of the threefold vanishing order of each monomial, this is not an ADE singularity, so we blow it up in the usual manner. 
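The bookkeeping in these blow-up charts can be spot-checked symbolically. The sketch below (sympy) verifies, for the case \(m=1\) only and with generic degree-one stand-ins for \(b_{4},b_{9},b_{14},a_{3}\), that the substitution \(t=xt^{\prime}\) applied to (9.1) is divisible by \(x\) and that the quotient has exactly the shape of the chart equation displayed above for the 2nd blow-up (cited as (9.2) in the sequel).

```python
# Spot-check of the 2nd blow-up chart (t = x*t') applied to (9.1), for m = 1.
# The coefficient polynomials are generic degree-1 stand-ins; signs are immaterial mod 2.
import sympy as sp

x, t, t1, y1 = sp.symbols("x t t1 y1")            # t1 = t', y1 = y'
p0, p1, q0, q1, r0, r1, s0, s1 = sp.symbols("p0 p1 q0 q1 r0 r1 s0 s1")
b9, b4, b14, a3 = p0 + p1*t, q0 + q1*t, r0 + r1*t, s0 + s1*t

m = 1
F = (t*y1**2 + t**(2*m)*b9*y1                      # equation (9.1), all terms on one side
     + x**4 + t**(2*m)*b4**2*x**2 + t**(3*m - 1)*b14*x + t**2*a3**4)

sub = {t: x*t1}
Q = (t1*y1**2 + x**(2*m - 1)*t1**(2*m)*b9.subs(sub)*y1     # claimed strict transform
     + x**3 + t1**(2*m)*b4.subs(sub)**2*x**(2*m + 1)
     + t1**(3*m - 1)*b14.subs(sub)*x**(3*m - 1)
     + t1**2*a3.subs(sub)**4*x)

# The substituted equation equals x times the claimed strict transform, term by term.
assert sp.expand(F.subs(sub) - x*Q) == 0
print("strict transform under t = x*t' matches the displayed chart equation (m = 1)")
```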
Again, one affine chart suffices to detect the remaining singularities: ### 3rd blow-up: \(x=t^{\prime}x^{\prime\prime},y^{\prime}=t^{\prime}y^{\prime\prime}\) The strict transform of (9.2) is given by \[y^{\prime\prime 2}+t^{\prime 4m-3}b_{9}(x^{\prime\prime}t^{\prime 2})x^{\prime\prime 2m-1}y^{\prime\prime}=x^{\prime\prime 3}+t^{\prime 4m-2}b_{4}(x^{\prime\prime}t^{\prime 2})^{2}x^{\prime\prime 2m+1}+t^{\prime 6m-5}b_{14}(x^{\prime\prime}t^{\prime 2})x^{\prime\prime 3m-1}+a_{3}(x^{\prime\prime}t^{\prime 2})^{4}x^{\prime\prime}. \tag{9.3}\] Assuming that the original equation was minimal at \(\alpha\), we have \(a_{3}(0)\neq 0\), so we can rescale to assume that \(a_{3}(0)=1\). At \(t^{\prime}=0\), we thus obtain the cuspidal cubic \[\Theta_{3}=\{t^{\prime}=y^{\prime\prime 2}+x^{\prime\prime 3}+x^{\prime\prime}=0\}\] from (9.3). Since \(t=x^{\prime\prime}t^{\prime 2}\), this has multiplicity two as a fibre component (and this multiplicity will persist throughout the resolution process to give the claim of the proposition). ### Interlude: contraction of fibre components For completeness, we briefly deviate from the resolution of singularities to explain how the fibre components \(\Theta_{0},\Theta_{1},\Theta_{2}\) behave. Note that the components \(\Theta_{1}\) and \(\Theta_{2}\) intersect \(\Theta_{3}\) transversely in two distinct points (different from the cusp). On the partial resolution \(\hat{S}\) given by (9.3) where we have blown up \(S\) three times, we thus obtain the following configuration of curves in the fibre \(\hat{F}\) above \(t=0\): [Diagram: configuration of the fibre components \(\Theta_{0},\Theta_{1},\Theta_{2},\Theta_{3}\) in \(\hat{F}\).] In the resulting coordinates, the argument of each of \(b_{4},b_{9},b_{14}\) is \(t^{\prime 2}(\hat{x}+b_{3}^{2})\). Looking closely, the lowest order terms exactly resemble the equation of an elliptic curve in Weierstrass form, but it can be minimalized \((m-1)\) times. Hence, without considering any further intermediate partial resolutions, we apply the variable change \[(t^{\prime},\hat{x},\hat{y})=(\tilde{t},\tilde{t}^{2m-2}\tilde{x},\tilde{t}^{3m-3}\tilde{y}) \tag{9.6}\] which leads to the following strict transform of (9.5): \[\tilde{y}^{2}+\tilde{t}b_{9}\,(\tilde{t}^{2m-2}\tilde{x}+b_{3}^{2})^{2m-1}(\tilde{t}^{m-1}\tilde{y}+b_{3}\tilde{x}+\tilde{t}b_{3}b_{4}\,(\tilde{t}^{2m-2}\tilde{x}+b_{3}^{2})^{m})=\tilde{x}^{3}+\tilde{t}^{2}b_{4}^{2}\,(\tilde{t}^{2m-2}\tilde{x}+b_{3}^{2})^{2m}\tilde{x}+\tilde{t}b_{14}\,(\tilde{t}^{2m-2}\tilde{x}+b_{3}^{2})^{3m-1}. \tag{9.7}\] At \(\tilde{t}=0\) this again describes a cuspidal cubic, say \(\Theta^{\prime}=\{\tilde{t}=\tilde{y}^{2}+\tilde{x}^{3}=0\}\).
Since \(t=\tilde{t}^{2}(\tilde{t}^{2m-2}\tilde{x}+b_{3}^{2})\) with unit \(b_{3}\), the cuspidal cubic has multiplicity two as a fibre of \(S\). On \(\Theta^{\prime}\), a singularity of this surface may exist only at the cusp point if at all. Indeed, the singularity is necessarily rational and the types are given as follows: (i) If \(t\nmid b_{14}\), then nonsingular. (ii) If \(t\mid b_{14}\), \(t\nmid b_{9}\), then \(A_{1}\). (iii) If \(t\mid\mid b_{14}\), \(t\mid b_{9}\), then \(D_{4}\). (iv) If \(t^{2}\mid b_{14}\), \(t\mid\mid b_{9}\) and \(t\nmid b_{4}\), then \(D_{6}\). (v) If \(t^{2}\mid b_{14}\), \(t\mid\mid b_{9}\) and \(t\mid b_{4}\), then \(E_{7}\). (vi) If \(t^{2}\mid\mid b_{14}\), \(t^{2}\mid b_{9}\) and \(t\nmid b_{4}\), then \(D_{8}\). (vii) If \(t^{2}\mid\mid b_{14}\), \(t^{2}\mid b_{9}\) and \(t\mid b_{4}\), then \(E_{8}\). In particular, the multiplicity of the fibre continues to be two since the strict transform of \(\Theta^{\prime}\) (of multiplicity two) presents a simple component of the underlying Kodaira type. This completes the proof of Proposition 9.1. _Remark 9.3_.: Note that the above computations are of purely local nature. Hence, if \(S\) is an Enriques surface, then the divisibility properties from Lemma 7.1 limit the possible shapes of \(b_{4}\) and \(b_{9}\). In practice, this means that \(b_{4}\equiv 0\) in cases (v) and (vii) and \(b_{9}\equiv 0\) in cases (vi) and (vii). ## 10. Canonical divisor In order to decide when (8.1) (or (5.4)) defines an Enriques surface, we now investigate the canonical divisor. Since the resolution of ADE-singularities does not affect it and non-minimal roots of \(g\) can be dealt with by lowering the degree, this amounts to analysing the minimal roots of \(g\) (which are one or two in number by Corollary 9.2). Throughout, we argue with the standard rational 2-form \[\omega=dx\wedge dt/a_{9}\quad\text{if }a_{9}\not\equiv 0,\quad\text{or}\quad\omega=dy\wedge dt/a_{14}\quad\text{if }a_{14}\not\equiv 0.\] (Note that \((a_{9},a_{14})\not\equiv(0,0)\) since otherwise (5.4) and (8.1) would be geometrically reducible.) **Proposition 10.1**.: _Let \(\alpha\neq\infty\) denote a minimal root of \(g\) of multiplicity \(m\). Then \(\omega\) extends over a minimal resolution of (8.1) with a pole of order \(m/2\) along the fibre at \(\alpha\)._ _Remark 10.2_.: At \(\infty\) we have to be more careful since our choice of \(\omega\) tends to have a zero there (as we shall discuss below around Proposition 10.3). Of course, the assumption \(\alpha\neq\infty\) in Proposition 10.1 can always be achieved by some Mobius transformation, so it should not be viewed as a restriction. Proof.: As before, we may assume that \(\alpha=0\). Then we simply trace back \(\omega\) through the resolution of the singularity in the proof of Proposition 9.1. For brevity, we only discuss the case \(a_{9}\not\equiv 0\). The other case is completely analogous.
Step by step, we obtain \[\omega=\frac{dx\wedge dt}{a_{9}}=\frac{dx\wedge dt}{t^{2m}b_{9}}=\frac{dx\wedge dt^{\prime}}{x^{2m-1}t^{\prime 2m}b_{9}}=\frac{dx^{\prime\prime}\wedge dt^{\prime}}{x^{\prime\prime 2m-1}t^{\prime 4m-2}b_{9}}=\frac{d\hat{x}\wedge dt^{\prime}}{(\hat{x}+b_{3}^{2})^{2m-1}t^{\prime 4m-2}b_{9}}=\frac{d\tilde{x}\wedge d\tilde{t}}{(\tilde{t}^{2m-2}\tilde{x}+b_{3}^{2})^{2m-1}\tilde{t}^{2m}b_{9}},\] following the successive coordinate changes of the resolution. Since (9.7) has at worst ADE-singularities in the fibre at \(\tilde{t}=0\) by the proof of Proposition 9.1, there is the standard 2-form \[\tilde{\omega}=\frac{d\tilde{x}\wedge d\tilde{t}}{(\tilde{t}^{2m-2}\tilde{x}+b_{3}^{2})^{2m-1}\tilde{t}^{m}b_{9}}.\] This extends to a 2-form on the minimal desingularization which is regular and non-zero along the fibre. Comparing \(\omega\) and \(\tilde{\omega}\), we deduce that \(\omega\) has a pole given by \(\tilde{t}^{m}\). In particular, this gives the claimed order. **Proposition 10.3**.: 1. _For_ \(S\) _to be a classical Enriques surface,_ \(g\) _has to have exactly two minimal roots, each of multiplicity one._ 2. _For_ \(S\) _to be a supersingular Enriques surface,_ \(g\) _has to have exactly one minimal root, of multiplicity two._ Proof.: By Corollary 9.2, \(g\) has exactly the claimed number of minimal roots. Denote their multiplicities by \(m_{1},m_{2}\). Since \(g\) has degree \(4\), there are \(M=4-m_{1}-m_{2}\) non-minimal roots, counted with multiplicity. We now go through the individual cases. If \(M=0\), then (8.1) is minimal of degree \(18\). The standard rational 2-forms \(\omega=dx\wedge dt/a_{9}\) or \(dy\wedge dt/a_{14}\) are regular outside the fibres above the minimal roots of \(g\), attaining a zero of order 3 at \(\infty\) (verified by considering the third chart, with \(s=1/t\), from Section 5). By Proposition 10.1, \(\omega\) extends over the minimal desingularization with \[\text{div}(\omega)=3F_{\infty}-\frac{m_{1}}{2}F_{1}-\frac{m_{2}}{2}F_{2}.\] Since \(m_{1}+m_{2}=4\), this is numerically equivalent to \(F\), so \(S\) cannot be an Enriques surface. If \(M=1\), then the minimal equation relative to (8.1) has degree \(14\), and the analogous standard 2-form \(\omega\) has divisor \[\text{div}(\omega)=2F_{\infty}-\frac{m_{1}}{2}F_{1}-\frac{m_{2}}{2}F_{2}\] on the minimal desingularization. This is numerically equivalent to \(F/2\not\equiv 0\). The case \(M=2\) exactly leads to the claimed multiplicities (and the canonical divisor ends up being numerically trivial as we will check below) while the remaining case with \(M=3\) and \(m_{1}=1\), necessarily supersingular, gives \(\text{div}(\omega)\equiv-F/2\not\equiv 0\). This contradiction completes the proof of Proposition 10.3. ## 11. Proof of Theorem 1.1 ### Uniform normal form With a view towards moduli and our applications, we first consider the following normal form of quasi-elliptic Enriques surfaces where the multiple fibres are not yet fixed at \(t=0\) and \(\infty\) (in the classical case): **Theorem 11.1**.: _Let \(S\) be a classical or supersingular Enriques surface. Then \(S\) is given by an equation_ \[S: y^{2}+g_{2}^{2}a_{1}y=tx^{4}+tg_{2}^{2}a_{0}x^{2}+g_{2}^{3}a_{2}x+t^{3}c_{1}^{4}. \tag{11.1}\] _Here the coefficients are polynomials of degree at most the index with the conditions that \(c_{1},g_{2}\not\equiv 0\), \((a_{1},a_{2})\not\equiv(0,0)\) and \(c_{1}\nmid g_{2}\) (resp.
\(\text{deg}(g_{2})=2\) if \(\text{deg}(c_{1})=0\))._ _The Enriques surface is classical, if \(g_{2}\) has two different roots (possibly including \(\infty\)), resp. supersingular, if \(g_{2}\) is a square._ One of the benefits of the theorem is that the codimension one condition for being supersingular inside the (closure of the) moduli space of classical Enriques surfaces becomes transparent in a very instructive and explicit way. We will exploit this extensively in applications studied in the remainder of this paper. Proof.: By Proposition 10.3, the polynomial \(g\) of degree \(4\) that has been central to many of our considerations has two non-minimal roots (counted with multiplicity). Hence (8.1) can be minimalized at these two roots, and we obtain exactly the normal form (11.1) where \(g_{2}\) has either two simple roots (classical case) or one double root (supersingular case). The non-vanishing and non-divisibility conditions follow directly from what we have seen before. The resulting surface \(S\) is indeed an Enriques surface because the canonical divisor is trivial in case \(m_{1}=2\) resp. numerically trivial in case \(m_{1}=m_{2}=1\) as \(K_{S}=F_{\infty}-F_{1}/2-F_{2}/2\). In either case, the Euler-Poincare characteristic equals that of \(\operatorname{Jac}(S)\), i.e. \(e(S)=12\). Together this identifies \(S\) as an Enriques surface by virtue of the Enriques-Kodaira classification [2]. ### Proof of Theorem 1.1 (i) In the classical case, we apply Mobius transformations and rescale \(x,y,t\) to normalize \(g_{2}=t\) and \(c_{1}=1+t\) to obtain (1.1) from (11.1). The conditions given are all immediate. ### Proof of Theorem 1.1 (ii) In the supersingular case, we normalize \(g_{2}=t^{2}\) and \(c_{1}=1\) to obtain (1.2) from (11.1). The conditions are again immediate. _Remark 11.2_.: In the supersingular case, there is one normalization left, amounting to the scalings \((x,y,t)\mapsto(\alpha x,\alpha^{3}y,\alpha^{2}t)\) for \(\alpha\in k^{\times}\), to comply with the fact that the supersingular locus has codimension one inside the closure of the classical locus. _Remark 11.3_.: The normal forms in Theorems 1.1 and 11.1 are very well suited for explicit computations, similar to the Weierstrass form of an elliptic curve. We will see this in action when computing linear systems in 13.8, 13.9 and automorphism groups in Section 14. _Remark 11.4_.: The techniques of this paper have also applications beyond Enriques surfaces. For instance, one can use them to construct explicit simply connected projective surfaces with \(p_{g}=1\) which provide counterexamples to the Torelli theorem (compare [3], [22]). ## 12. Torsor interpretation We continue the paper with some interesting applications of our results and techniques. In this section we consider the general quasi-elliptic picture, but as a motivation we first recall the usual set-up. While the standard construction of an Enriques surface outside characteristic two nowadays probably is that as a quotient of a K3 surface by a free involution, there is also another approach using the Jacobian of any genus one fibration on the Enriques surface. This is a rational elliptic surface, and over \(\mathbb{C}\), one can recover the Enriques surface by a suitable logarithmic transformation (cf. [1, V.13]). In essence, this depends on the two ramified fibres, but it also involves a choice of 2-torsion points on the ramified fibres. 
This implies that a given rational elliptic surface \(X\) admits a two-dimensional family of Enriques surfaces whose Jacobian is \(X\), but the family is only irreducible if \(X\) has no two-torsion section. In the algebraic category, there is an alternative interpretation of this construction in terms of torsors [6, SS4.10]. Naturally this also applies to the quasi-elliptic fibrations on which we are focussing in this paper, as featured in Theorem 1.2. ### Proof of Theorem 1.2 Let \(X\) be a rational quasi-elliptic surface. By [10], it can be given by the standard Weierstrass form \[X:\quad y^{2}=x^{3}+Ax+B \tag{12.1}\] where \(A,B\in k[t]\) have degree \(4\) resp. \(8\). In order to compare this with the relative Jacobian of a classical or supersingular Enriques surface as determined in (7.4), we spell out \[A=d_{1}^{4}+tb_{1}^{2}+t^{2}b_{0},\quad B=b_{3}^{2}+tb_{2}^{2}\] where the \(b_{i},d_{j}\) denote polynomials of degree given by the index as usual. Applying the linear transformation \[(x,y)\mapsto(u+d_{1}^{2},v+d_{1}u+b_{3}+b_{0}^{1/2}td_{1}),\] the original equation (12.1) is converted to \[X:\quad v^{2}=u^{3}+(tb_{1}^{2}+t^{2}b_{0})u+t(b_{1}d_{1}+b_{2})^{2}.\] By Theorem 1.1 and (7.4), the Enriques surfaces with relative Jacobian \(X\) are thus given by (11.1) with \[a_{0}=b_{0}^{1/4},\quad a_{1}=b_{1},\quad a_{2}=b_{1}d_{1}+b_{2},\] but with \(c_{1}\) and \(g_{2}\) arbitrary (under the condition that \(c_{1}\nmid g_{2}\)). Since we can always rescale \(x,y\) in (11.1) to normalize some coefficient of \(c_{1}\) or \(g_{2}\), this leads to a 4-dimensional irreducible family of classical Enriques surfaces and a 3-dimensional irreducible subfamily of supersingular Enriques surfaces, but this holds only if there are three or more reducible fibres. Otherwise there are additional Mobius transformations preserving the relative Jacobian, thus confirming Theorem 1.2. _Remark 12.1_.: The results of Theorem 1.2 agree with the predictions in the context of a conjecture of W. Lang (cf. [6, SS4.8]). ### Example: Quasi-elliptic Enriques surfaces with two fibres of Kodaira type \(\mathrm{I}_{0}^{*}\) By [10], rational quasi-elliptic surfaces with two fibres of Kodaira type \(\mathrm{I}_{0}^{*}\) (located at \(t=0,\infty\)) form a one-dimensional family, given by the Weierstrass equations \[X:\quad y^{2}=x^{3}+\alpha^{4}t^{2}x+t^{3}\quad(\alpha\in k).\] By the proof of Theorem 1.2, the Enriques surfaces with \(X\) as relative Jacobian are given by \[y^{2}=tx^{4}+\alpha tg_{2}^{2}x^{2}+tg_{2}^{3}x+t^{3}c_{1}^{4}. \tag{12.2}\] Since there are two independent scalings left to normalize two non-zero coefficients (say the top coefficients of \(g_{2}\) and \(c_{1}\)) while preserving the shape of (12.2), these Enriques surfaces depend on 4 moduli in total. We can arrange for one of the \(\mathrm{I}_{0}^{*}\) fibres to be multiple by requiring that \(t\mid g_{2}\) (or \(\text{deg}(g_{2})<2\)). More precisely, we are in the classical case if \(g_{2}\neq t^{2}\) and in the supersingular case if \(g_{2}=t^{2}\). For both cases, we obtain quasi-elliptic Enriques surfaces with two fibres of Kodaira type \(\mathrm{I}_{0}^{*}\), one of them multiple. ## 13. Finite automorphism groups of Enriques surfaces in characteristic two Enriques surfaces with finite automorphism groups are notorious because a general Enriques surface has infinite automorphism group (contrary to the case of K3 surfaces, for instance). 
The property of having finite automorphism group can be described combinatorially, namely in the incidence graph \(\Gamma\) of smooth rational curves - which in particular has to be finite! This was employed to give a full classification over \(\mathbb{C}\) by Kondo [14] and Nikulin [21], and in odd characteristic by Martin [18]. In fact, Martin's work also covers the case of singular Enriques surfaces, while all possible graphs \(\Gamma\) have been determined for the cases of classical and supersingular Enriques surfaces in [12]. In loc. cit. the authors also provide examples for each \(\Gamma\), but no uniqueness or irreducibility of moduli is claimed. In particular, the precise finite automorphism groups could not be classified. Theorem 1.1 remedies this based on our normal form arguments. ### Overall idea Given a graph \(\Gamma\) of smooth rational curves, we identify a divisor of Kodaira type which induces a quasi-elliptic fibration on the Enriques surface. In particular, this determines the shape of the relative Jacobian, so Ito's equations in [10] can be translated back to our Enriques surfaces using Theorem 1.2, and in particular, using the explicit calculations from 12.1. Ideally, this quasi-elliptic fibration already fixes all the curves in \(\Gamma\), but in three cases we also have to consider a second fibration to verify the findings from [12] (see 13.8, 13.9). In particular, this will prove that the examples and automorphism groups in [12] are complete, and thus verify Theorem 1.3. In what follows, we go through all possible graphs \(\Gamma\) one by one; throughout we employ the notation and findings of [12]. ### \(\Gamma=\tilde{\boldsymbol{E}}_{\boldsymbol{7}}+\tilde{\boldsymbol{A}}_{\boldsymbol{1}}^{(1)}\) These Enriques surfaces admit a quasi-elliptic fibration with reducible fibres of types \(\text{III}^{*}\) (multiple) and \(\text{III}\) (simple). Locating these fibres at \(t=0\) and \(t=\infty\), respectively, [10] and the formulas from 12.1 lead to the normal form \[y^{2}+t^{3}g_{1}^{2}y=tx^{4}+t^{3}c_{1}^{4}\] where \(\text{deg}(g_{1})=1\) (with classical Enriques surfaces if \(t\nmid g_{1}\), and supersingular Enriques surfaces otherwise). More precisely, the fibre of type III is simple if and only if \(\text{deg}(g_{1})=1\). Hence we normalize \(g_{1}=t+a,c_{1}=b(t+1)\) such that \(b\neq 0\); the 2-dimensional classical family is given by \(a\neq 0\), and the 1-dimensional supersingular family occurs at \(a=0\). This confirms that the examples of [12] are complete, with anticipated moduli dimensions, and so are the possible automorphism groups computed. ### \(\boldsymbol{\Gamma=\tilde{E}_{7}+\tilde{A}_{1}^{(2)}}\) The Enriques surfaces with this graph also admit a quasi-elliptic fibration with reducible fibres of types III\({}^{*}\) and III, but contrary to 13.2 the special graph \(\Gamma=\tilde{E}_{7}+\tilde{A}_{1}^{(2)}\) implies that also the III fibre is multiple. Thus this only concerns the classical case and occurs precisely at \(g_{1}=1\neq 0\), again confirming [12]. ### \(\boldsymbol{\Gamma=\tilde{E}_{8}}\) There is a quasi-elliptic fibration with multiple II\({}^{*}\) fibre (which we locate at \(t=0\)). By [10], the relative Jacobian is given by \[y^{2}=x^{3}+t^{5}.\] Using (12.1), we obtain a two-dimensional family of Enriques surfaces as torsors given by \[y^{2}=tx^{4}+t^{4}g_{1}^{2}x+t^{3}c_{1}^{4}\ \ \ (t\nmid g_{2}).
\tag{13.1}\] We derive a one-dimensional family of classical Enriques surfaces where \(t\nmid g_{1}\) (whence we normalize \(g_{1}=1+\beta t,\ c_{1}=1+t\)), and a single supersingular Enriques surface at \(g_{1}=t\). This is again in perfect agreement with [12]. ### \(\boldsymbol{\Gamma=\tilde{D}_{8}}\) This graph induces a quasi-elliptic fibration with a multiple fibre of type I\({}_{4}^{*}\) (which we locate at \(t=0\)). The relative Jacobian is given by [10] as \[y^{2}=x^{3}+t^{2}x+t^{5}.\] Hence 12.1 yields a two-dimensional family of classical Enriques surfaces defined by \[y^{2}=tx^{4}+t^{3}g_{1}^{2}x^{2}+t^{5}g_{1}^{3}x+t^{3}c_{1}\] where \(t\nmid g_{1}\), and a one-dimensional family of supersingular Enriques surfaces at \(g_{1}=\gamma t\ (\gamma\neq 0)\). ### \(\Gamma=\tilde{\boldsymbol{D_{4}}}+\tilde{\boldsymbol{D_{4}}}\) These Enriques surfaces admit a quasi-elliptic fibration with two multiple fibres of type \(\mathrm{I}_{0}^{*}\). In particular, they are automatically classical, and we use (12.2) by setting \(g_{2}=\gamma t\ (\gamma\neq 0)\). This results in the 2-dimensional family from [12]. ### \(\boldsymbol{\Gamma=\text{VII}}\) Among all possible configurations of smooth rational curves on classical and supersingular Enriques surfaces with finite automorphism group, this graph is singled out by the property that it only induces elliptic fibrations. It follows from the classification of the fibrations in [12, 14.1] that the universal cover has 12 \(A_{1}\) singularities, so the minimal resolution is the supersingular K3 surface of Artin invariant \(\sigma=1\). Its Enriques quotients (classical and supersingular) have been determined in [15]. In particular, this confirms the findings of [12]. ### \(\boldsymbol{\Gamma=\tilde{E_{6}}+\tilde{A_{2}}}\) In the classical case, these surfaces admit a quasi-elliptic fibration with multiple fibres of type III at \(t=0\) and II at \(t=\infty\), plus a simple fibre of type III\({}^{*}\) at \(t=1\). Using [10] we compute the equation \[S:\quad y^{2}+(t+1)t^{2}y=tx^{4}+t^{3}x^{2}+(t+1)t^{4}x+at^{3}(t+b)^{4} \tag{13.2}\] where \(ab\neq 0\), from 12.1. This has curve of cusps \(C\) at \(\infty\), making for the following graph of smooth rational curves together with the fibre components: Figure 1. Smooth rational fibre components and curve of cusps for \(f\). Note the divisor \(D=C+2\Theta_{0}+3\Theta_{4}+2(\Theta_{3}+\Theta_{3}^{\prime})+\Theta_{2}+\Theta_{2}^{\prime}\) of Kodaira type IV\({}^{*}\) central to the diagram. We continue by exhibiting the linear system \(|2D|\); by inspection of the diagram, this will induce an elliptic fibration \[f^{\prime}:\quad S\to\mathbb{P}^{1}\] with multiple fibre of type IV\({}^{*}\), nodal bisections \(C_{0},\Theta_{1},\Theta_{1}^{\prime}\) and another reducible fibre containing the curve \(C_{1}\). To make all of this explicit, we resolve the singularity in the fibre at \(t=1\) as in the proof of Lemma 8.2. First we substitute \(a=c^{4}\) and change variables \(y=(t+1)y^{\prime}+x^{2}+tx+c^{2}t(t+b)^{2}\), amounting to the blow-up along \(\Theta_{0}\) from 9.1. This produces the 4-fold fibre component \[\Theta_{4}=(t+1=x+\gamma=0)\] where \(\gamma^{2}=(bc+c+1)(b+1)c\). After changing variable \(x=x^{\prime}+\gamma\), the remaining pairs of fibre components \(\Theta_{i},\Theta_{i}^{\prime}\) of multiplicity \(i=1,2,3\) are successively uncovered in the three blow-ups \(t=x^{\prime i}t_{i}+1\).
This shows that the function \[w=\frac{(x+t\gamma)^{2}}{t(t+1)^{2}}=\frac{(1+\gamma t_{1})^{2}}{(x^{\prime}t_ {1}+1)t_{1}^{2}} \tag{13.3}\] has pole divisor exactly \(2D\) outside the multiple fibres at \(t=0,\infty\). Meanwhile, on the two multiple fibres of the original quasi-elliptic fibration, the resolution of singularities as in Section 9 shows that \(w\) is regular and non-constant on the fibre component met by the curve of cusps (in particular on \(C_{0}\), confirming that it gives a bisection), and that \(w=c^{2}b^{2}\) on the other fibre component \(C_{1}\) at \(t=0\). That is, \(C_{1}\) is a component of the (reducible) fibre of \(f^{\prime}\) at \(w=c^{2}b^{2}\). **Lemma 13.1**.: _The general member of the family of Enriques surfaces given by (13.2) does not have finite automorphism group._ Proof.: For the surface to fall into the finite automorphism group case \(\Gamma=\tilde{E}_{6}+\tilde{A}_{2}\), the second reducible fibre of the fibration \(f^{\prime}\) (at \(w=c^{2}b^{2}\)) has to have type \(\mathrm{I}_{3}\) or \(\mathrm{IV}\). (Otherwise the Jacobian would have positive Mordell-Weil rank by [16], [28], thus inducing an infinite order automorphism on \(S\).) We therefore continue by calculating the Kodaira type of this very fibre, depending on \(b\) and \(c\). To this end, we substitute (13.3) for \(x^{\prime}\) in the equation corresponding to (9.2). Rescaling the coordinate \(y^{\prime}\) by \(h^{2}/t_{1}^{5}\) for \(h=b^{2}\alpha^{2}t_{1}+b\alpha t_{1}+\alpha^{2}t_{1}+\alpha t_{1}+1\) where \(\alpha=\sqrt{c/(b+1)}\), we obtain an equation \[y^{\prime\prime 2}+wt_{1}h^{2}y^{\prime\prime}=r\ \ \ \text{with}\ h\in k[t_{1},w]\ \ \text{of degree}\ 6\ \text{in}\ t_{1}.\] This is not quite visibly a curve of arithmetic genus one, but writing \(r=r_{0}^{2}+r_{1}\) where \(r_{1}\) is odd with respect to \(t_{1}\) or \(w_{1}\), the coordinate change \(y^{\prime\prime}=hu+r_{0}\) gives a curve of arithmetic genus one over \(k(w)\): \[u^{2}+wt_{1}hu=wR\ \ \ \text{with}\ R\in k[t_{1},w]\ \ \text{of degree}\ 4\ \text{in}\ t_{1}. \tag{13.4}\] This has discriminant \[\Delta^{\prime}=w^{12}(w^{2}+w+c^{4})(w+c^{2}b^{2})^{2}\] vanishing generally to order two at \(w=c^{2}b^{2}\); since we already know that the fibre is reducible as it contains the smooth rational component \(C_{1}\), this verifies the Kodaira type \(\mathrm{I}_{2}\). The lemma follows. In order to determine the subfamily with finite automorphism group, it remains to check when the fibre at \(w=c^{2}b^{2}\) degenerates to Kodaira type \(\mathrm{I}_{3}\). The discriminant \(\Delta^{\prime}\) acquires a triple root at \(w=c^{2}b^{2}\) if and only if \(c=b/(1+b)^{2}\). One verifies that the fibre type is indeed \(\mathrm{I}_{3}\) by checking that the fibre of (13.4) at \(w=c^{2}b^{2}\) always contains exactly one singular point (a surface singularity of type \(A_{1}\) where \(h\) vanishes), but it becomes reducible on the given subfamily. (Alternatively, both fibre types \(\mathrm{I}_{2}\) and \(\mathrm{I}_{3}\) follow generally from the classification of wild ramification of additive fibres in characteristic \(2\) in [27] as this implies that \(\Delta^{\prime}\) has vanishing order at least four at any additive fibre.) With fibres of type \(\mathrm{IV}^{*}\) and \(\mathrm{I}_{3}\), the Jacobian of \(f^{\prime}\) has \(\mathrm{MW}(\mathrm{Jac}(f^{\prime}))\cong\mathbb{Z}/3\mathbb{Z}\). 
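The condition \(c=b/(1+b)^{2}\) derived above can be double-checked symbolically. The following is a minimal SymPy sketch (an illustration only): substituting this value of \(c\), clearing the denominator \((1+b)^{8}\), and reducing mod 2 confirms that \(w=c^{2}b^{2}\) is then a root of the factor \(w^{2}+w+c^{4}\) of \(\Delta^{\prime}\).

```python
# Sketch: check that w = c^2*b^2 is a root of w^2 + w + c^4 (a factor of Delta')
# in characteristic 2 when c = b/(1+b)^2.
import sympy as sp

b, c, w = sp.symbols('b c w')

factor = w**2 + w + c**4
val = factor.subs(w, c**2*b**2).subs(c, b/(1 + b)**2)

num = sp.fraction(sp.together(sp.expand(val)))[0]   # clear the denominator (1+b)^8
print(all(k % 2 == 0 for k in sp.Poly(sp.expand(num), b).coeffs()))  # expect True
```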
But then the symmetry imposed by the induced order 3 automorphism on \(S\) implies that the curves in Figure 1 augmented by the two additional components of the \(\mathrm{I}_{3}\) fibre of \(f^{\prime}\) produce exactly the graph of smooth rational curves for type \(\Gamma=\tilde{E}_{6}+\tilde{A}_{2}\) from [12]. This gives the claimed irreducible one-dimensional family of classical Enriques surfaces with finite automorphism group from Theorem 1.3. #### 13.8.1. Supersingular case In the supersingular case, the above set-up degenerates to a quasi-elliptic fibration with a single multiple fibre of type III at \(t=0\) and another reducible fibre, simple as before, of type \(\mathrm{III}^{*}\) at \(t=1\). Starting from [10] we compute the equation \[S:\quad y^{2}+(t+1)t^{4}y=tx^{4}+t^{5}x^{2}+(t+1)t^{7}x+ct^{3}(t+1)^{4} \tag{13.5}\] where \(c\neq 0\), using the additional normalizations alluded to in 12.1. (Note that we used a different normalization than in (1.2) because the given one turns out much more convenient as it automatically leads to a unique Enriques surface.) The graph of smooth rational curves in Figure 1 is still standing, and so is the divisor \(D\) of Kodaira type \(\mathrm{IV}^{*}\), but now the linear system \(|2D|\) is generated by \[u=\frac{x^{2}+c(t+1)^{3}t}{t^{2}(t+1)^{2}}.\] In terms of the induced fibration \(f^{\prime}\), the smooth rational curve \(C_{1}\) is a component of the fibre at \(u=c\). The fibre contains a single singular point which is never a surface singularity for any \(c\neq 0\); its projectivized tangent cone consists of a single line unless \(c=1\). We now combine Kodaira's and Tate's classification of singular fibres [13], [29], with an Euler-Poincare characteristic reasoning taking into account the multiple fibre of type \(\mathrm{IV}^{*}\) at \(u=\infty\). Presently this implies that the fibre can only have type III for \(c\neq 0,1\). But then \(S\) inherits an automorphism of infinite order from \(\mathrm{Jac}(f^{\prime})\) by the Shioda-Tate formula, ruling out all values \(c\neq 0,1\). On the other hand, for \(c=1\), the fibre acquires three smooth rational irreducible components which meet in the singular point (with different tangent directions). This verifies that the fibre has type \(\mathrm{IV}\). In particular, we have \(\text{MW}(\text{Jac}(f^{\prime}))=\mathbb{Z}/3\mathbb{Z}\) by [28], and the symmetry imposed by the induced automorphism on \(S\) exactly leads to the required graph \(\Gamma\) of smooth rational curves. Thus, at \(c=1\), the Enriques surface \(S\) has finite automorphism group. ### \(\boldsymbol{\Gamma=\text{VIII}}\) By [12], Enriques surfaces of this type ought to have a quasi-elliptic fibration with two multiple fibres of type III and a simple fibre of type \(\text{I}_{2}^{*}\). By [10] and 12.1, the normal form can be given as \[S:\ \ y^{2}+t^{2}(t+1)y=tx^{4}+t^{3}(at+b)^{4}\ \ \ (a,b\in k^{*}). \tag{13.6}\] The diagram of reducible fibre components enriched by the curve of cusps \(C\) features a divisor \(D=C_{0}+C_{\infty}+2C+2\Theta_{0}+\Theta_{1}+\Theta_{1}^{\prime}\) of Kodaira type \(\text{I}_{1}^{*}\). It follows that \(|2D|\) induces an elliptic fibration \[f^{\prime}:\ \ S\to\mathbb{P}^{1}\] with multiple fibre of type \(\text{I}_{1}^{*}\) and at least 4 smooth rational bisections as well as two smooth rational four-sections. 
Explicitly, the fibration can be exhibited through the elliptic parameter \[u=\frac{(x^{2}+tx+a^{2}t^{3}+at^{2}+bt^{2}+b^{2}t)^{2}}{t^{3}(t+1)^{2}}\] which has pole divisor exactly \(2D\). The fibration \(f^{\prime}\) visibly has a double fibre at \(u=0\), with smooth support. In addition to the double \(\text{I}_{1}^{*}\) fibre at \(u=\infty\), there are generally two reducible fibres, of Kodaira type \(\text{I}_{2}\) each, at \(u=a^{2}\) and \(u=b^{2}\). It follows that \(S\) has infinite automorphism group (induced from \(\text{MW}(\text{Jac}(f^{\prime}))\) unless \(a=b\) which exactly recovers the family from [12] (after an easy variable transformation following the ideas from Sections 3, 4). ### Conclusion With all possible graphs of smooth rational curves covered, each type yields irreducible families of classical and supersingular Enriques surfaces of the expected dimensions. In consequence, these families agree with the examples exhibited in [12], and the proof of Theorem 1.3 is complete, except for the computation of the automorphism groups where the examples of [12] and our equations have not been identified explicitly. This applies exactly to type \(\tilde{E}_{6}+\tilde{A}_{2}\) ((c4) and (s3) in the tables below) and will be covered in 14.2. ### Equations For the convenience of the reader, we collect normal forms for all quasi-elliptic Enriques surfaces with finite automorphism group. **Theorem 13.2**.: _The normal forms of quasi-elliptic classical Enriques surfaces with finite automorphism group and their numbers \(n\) of moduli are given as follows:_ \begin{tabular}{|c|c|c|c|} \hline & Type & normal form & \(n\) \\ \hline (c1) & \(E_{8}\) & \(y^{2}=tx^{4}+at^{5}x+t^{3}(1+t)^{4}\) & \((a\neq 0)\) & \(1\) \\ \hline (c2) & \(\tilde{E_{7}}+\tilde{A_{1}}^{(1)}\) & \(y^{2}+at^{3}y=tx^{4}+bt^{5}x+t^{3}(1+t)^{4}\) & \((a\neq 0,b\neq 0)\) & \(2\) \\ \hline (c3) & \(\tilde{E_{7}}+\tilde{A_{1}}^{(2)}\) & \(y^{2}+at^{3}y=tx^{4}+t^{3}(1+t)^{4}\) & \((a\neq 0)\) & \(1\) \\ \hline (c4) & \(\tilde{E}_{6}+\tilde{A}_{2}\) & \(y^{2}+(t+1)t^{2}y=tx^{4}+t^{3}x^{2}+(t+1)t^{4}x+t^{3}(t+b)^{4}b^{4}/(b+1)^{8}\) & \(1\) \\ & & & \((b\neq 0,1)\) & \\ \hline (c5) & \(D_{8}\) & \(y^{2}=tx^{4}+at^{3}x^{2}+bt^{3}x+t^{3}(1+t)^{4}\) & \((b\neq 0)\) & \(2\) \\ \hline (c6) & \(D_{4}+D_{4}\) & \(y^{2}=tx^{4}+at^{3}x^{2}+bt^{4}x+t^{3}(1+t)^{4}\) & \((b\neq 0)\) & \(2\) \\ \hline (c7) & VII & only elliptic fibration & \(1\) \\ \hline (c8) & VIII & \(y^{2}+t^{2}(t+1)y=tx^{4}+ct^{3}(1+t)^{4}\) & \((c\neq 0)\) & \(1\) \\ \hline \end{tabular} **Theorem 13.3**.: _The normal forms of quasi-elliptic supersingular Enriques surfaces with finite automorphism group and their numbers \(n\) of moduli are given as follows:_ \begin{tabular}{|c|c|c|c|} \hline & Type & normal form & \(n\) \\ \hline (s1) & \(\tilde{E}_{8}\) & \(y^{2}=tx^{4}+x+t^{7}\) & \(0\) \\ \hline (s2) & \(\tilde{E_{7}}+\tilde{A_{1}}^{(1)}\) & \(y^{2}+y=tx^{4}+ax+t^{7}\) & \((a\neq 0)\) & \(1\) \\ \hline (s3) & \(\tilde{E}_{6}+\tilde{A}_{2}\) & \(y^{2}+t^{4}y=tx^{4}+t^{3}\) & \(0\) \\ \hline (s4) & \(D_{8}\) & \(y^{2}=tx^{4}+tx^{2}+ax+t^{7}\) & \((a\neq 0)\) & \(1\) \\ \hline (s5) & VII & only elliptic fibration & \(0\) \\ \hline \end{tabular} Proof.: All equations have been derived before with the exception of (s3). But this results from \(c=1\) in (13.5) by moving the III\({}^{*}\) fibre to \(\infty\) combined with an easy coordinate transformation. ## 14. 
Automorphisms of quasi-elliptic Enriques surfaces It is a classical problem to investigate automorphisms of Enriques surfaces - especially finite automorphism groups and those which act trivially on cohomology (or on Num). Over \(\mathbb{C}\), the solution for the latter from [20] was later corrected in [19]. In positive characteristic, there are extensive results in [8], but the order three case was left open, and the results also depended on the classification of Enriques surfaces with finite automorphism groups which we are about to complete with Theorem 1.3. To prepare for this, we start with some general results on automorphisms of quasi-elliptic surfaces. ### Automorphisms preserving a quasi-elliptic fibration For independent use, we discuss the shape of automorphisms of quasi-elliptic Enriques surfaces in some generality (extending results from [12]). **Lemma 14.1**.: _Let \(\varphi\) be an automorphism of an Enriques surface \(S\) which preserves some quasi-elliptic fibration. Then \(\varphi\) preserves some fibre of (11.1), say at \(t=\infty\), and is given by_ \[(x,y,t)\mapsto(\beta x+b_{2},\delta y+\sqrt{\gamma}x^{2}+d_{3}x+d_{5},\alpha t +\gamma), \tag{14.1}\] _with scalars \(\alpha,\beta,\gamma,\delta\) (non-zero except possibly for \(\gamma\)) and polynomials \(b_{2}\), \(d_{3}\), \(d_{5}\) in \(k[t]\) of degree bounded by the index._ Proof.: The induced action of \(\varphi\) on the base \(\mathbb{P}^{1}\) of the quasi-elliptic fibration has a fixed point, so \(\varphi\) preserves the corresponding fibre. By Mobius transformation, we can map this fibre to \(t=\infty\) and deduce the claimed action on \(\mathbb{P}^{1}\) with \(\alpha\in k^{\times}\). In addition, \(\varphi\) automatically preserves the curve of cusps. Then \(\varphi\) takes the shape \[(x,y,t)\mapsto(\beta x+b_{2},\delta y+d_{1}x^{2}+d_{3}x+d_{5},\alpha t+\gamma), \tag{14.2}\] where the degrees follow from comparing the different charts introduced in Section 5. (Alternatively one may argue with the model in weighted projective space.) Comparing the coefficient of \(x^{4}\) before and after the transformation, we deduce that \(d_{1}=\sqrt{\gamma}\). One can also take the multiple fibre(s) into consideration: in the supersingular case, it is obviously also fixed by \(\varphi\); in the classical case, the multiple fibres are either interchanged (if \(\gamma\neq 0\)) or each is preserved (whence \(\gamma=0\) as either the multiple fibres are located at \(0,\infty\) or \(\varphi\) acts as identity on the base). We consider one of these special cases in more detail: **Lemma 14.2**.: _Assume that the automorphism \(\varphi\) preserves a quasi-elliptic fibration and acts on the base \(\mathbb{P}^{1}\) with exactly two fixed points which we locate at \(0,\infty\). Then it takes the shape_ \[(x,y,t)\mapsto(\beta x+b_{2},\delta y+d_{5},\alpha t) \tag{14.3}\] _for some non-zero scalars \(\alpha,\beta,\delta\) where \(\alpha\neq 1\). Moreover the polynomials \(a_{1},a_{2},g_{2}\) in (11.1) are all monomial (or zero for \(a_{1},a_{2}\))._ Proof.: We may start with the shape of \(\varphi\) given by (14.1), but then with exactly two fixed points on \(\mathbb{P}^{1}\), we have \(\gamma=0\) and \(\alpha\neq 1\) as stated (so the polynomial \(d_{1}\) from (14.2) is zero as stated). As argued above, this locates the multiple fibres at \(t=0\), say, and \(\infty\) in the classical case. That is, up to normalizing, \(g_{2}=t\) resp. \(g_{2}=t^{2}\); in particular, \(g_{2}\) is monomial as claimed. 
We now substitute (14.1) into (11.1) and compare coefficients. Focussing on even and odd terms and using the action on the base, we deduce subsequently * from the coefficient of \(y\) that \(a_{1}\) is monomial or zero, * from the coefficient of \(x^{2}\) that \(d_{3}\equiv 0\), and * from the coefficient of \(x\) that \(a_{2}\) is monomial or zero. This completes the proof of Lemma 14.2 One can retrieve further information from studying the constant term of the normal form and then imposing the invariance of the normal form under \(\varphi\), but we will only pursue this for some special cases needed to prove Theorems 1.3 and 1.4. ### Proof of Theorem 1.3 After 13.10, it remains to study the (finite) automorphism groups of the Enriques surfaces \(S\) of type \(\Gamma=\tilde{E}_{6}+\tilde{A}_{2}\). The symmetry group of the graph is \(\mathfrak{S}_{3}\), and these automorphisms are always induced from the Mordell-Weil groups of the Jacobians of genus one fibrations on \(S\) (it suffices to consider those fibrations denoted by \(f\) and \(f^{\prime}\) in 13.8). It remains to compute the numerically trivial automorphisms. Necessarily they preserve any genus one fibration on \(S\), and they fix any reducible fibre. We can thus apply Lemma 14.2 to the quasi-elliptic fibrations from 13.8. For the classical Enriques surfaces (family (c4) in Theorem 13.2), comparing the coefficients of \(y\) and \(x^{2}\) gives \(\delta=\beta=1\). This leaves the constant coefficient where vanishing orders at \(0\) and at \(\infty\) yield \[b_{2}=b^{\prime}t,\ \ \ d_{5}=dt^{2}+d^{\prime}t^{3}\ \ \ (b^{\prime},d,d^{ \prime}\in k).\] The three constants satisfy a system of three equations which is seen to have exactly two solutions: \[(b^{\prime},d,d^{\prime})\in\{(0,0,0),\ (0,1,1)\}\] The automorphism \(\varphi\) corresponding to the second involution acts non-trivially on \(\operatorname{Num}(S)\) as it interchanges the two branches of the \(\operatorname{III}^{*}\) fibre at \(t=1\) (it is induced by translation by the two-torsion section on \(\operatorname{MW}(\operatorname{Jac}(f))\)). Thus we conclude that \(\operatorname{Aut}_{nt}(S)=\{\operatorname{id}\}\), confirming the examples from [12]. For the supersingular Enriques surface \(S\) denoted by (s3) in Theorem 13.3, the same approach gives \[\alpha=\beta^{2},\ \ \delta=\beta^{3},\ \ \beta^{5}=1,\ \ b_{2}\equiv 0\ \ \text{ and }\ \ d_{5}=\gamma t^{4}\ \ \ \text{where}\ \ \ \gamma(\gamma+\beta^{3})=0.\] Here \(\gamma=0\) and \(\beta\neq 1\) yields a cyclic group of order \(5\) of numerically trivial automorphisms while the automorphisms with \(\gamma=\beta^{3}\) again interchange the two branches of the III\({}^{*}\) fibre (at \(t=\infty\)), i.e. they act non-trivially on \(\text{Pic}(S)\). Hence \(\text{Aut}_{nt}(S)=\text{Aut}_{ct}(S)=\mathbb{Z}/5\mathbb{Z}\), again confirming the findings of [12]. This completes the proof of Theorem 1.3. ### Proof of Theorem 1.4 Let \(\varphi\) be a cohomologically trivial automorphism of odd order \(n>1\) acting on an Enriques surface \(S\). It was proved in [8, Thm. 7.1] that \(S\) can only be classical if \(\#\text{Aut}(S)<\infty\), and otherwise \(S\) is supersingular. So let us assume that \(S\) is supersingular and \(\#\text{Aut}(S)=\infty\). The cohomologically trivial action implies that \(\varphi\) preserves any genus one fibration on \(S\); more precisely, by [8, Lem. 7.5], \(\varphi\) acts non-trivially on the base of the fibration. In the case of order \(n=3\), it was also shown in [8, Prop. 
7.9] that \(S\) admits a configuration of rational curves amounting to a quasi-elliptic fibration with two fibres of type I\({}_{0}^{*}\), exactly one of which is multiple. This puts us in the setting of 12.2. By assumption, \(\varphi\) preserves the curve of cusps and each irreducible fibre component of the two I\({}_{0}^{*}\) fibres. By Lemmata 14.2, \(g_{2}=\eta t\) resp. \(\eta t^{2}\)\((\eta\in k)\) and the automorphism is given by \[\varphi:\ \ (x,y,t)\mapsto(\beta x+b_{2},\delta y+d_{5},\zeta t)\] where \(\zeta^{3}=1\neq\zeta\). Since \(S\) is supersingular, we normalize \(c_{1}=1+\eta t\) and substitute \(\varphi\) into (12.2). If \(\text{deg}(c_{1})=1\), then we can normalize \(\eta=1\) as well. Comparing bottom and top coefficients of the constant term of (12.2) before and after the substitution, we find that \(b_{2}\in kt^{2},d_{5}\in kt^{4}\). But then the coefficients of \(t^{3}\) and \(t^{7}\) combine for \(\zeta^{3}t^{3}(1+\zeta t)^{4}\). This is proportional to the original term \(t^{3}c_{1}^{4}\) if and only if \(\zeta=1\) which we ruled out by appealing to [8, Lem. 7.5]. Hence \(\eta=0\) which leads exactly to (1.3) from Theorem 1.4 (with the additional normalization \(g_{2}=t^{2}\)). To complete the proof of the theorem, it remains to verify that the order \(3\) automorphism \[\varphi:\ \ (x,y,t)\mapsto(\zeta^{2}x,y,\zeta t)\] on the Enriques surfaces given by (1.3) is indeed cohomologically trivial. This is easily checked by resolving the singularities as explained in Sections 8, 9. _Remark 14.3_.: We note again that the surfaces given by (1.3) have infinite automorphism group. ### Proof of Corollary 1.5 The results of Corollary 1.5 can be found in [8] with two restrictions: 1. the possible lists of numerically trivial subgroups of the automorphism group in _(iii)_ and _(iv)_ rely on the validity of the computations of [12] for all classical and supersingular Enriques surfaces with finite automorphism groups which we confirmed in Theorem 1.3; 2. the appearance of \(\mathbb{Z}/3\mathbb{Z}\) as a group of numerically trivial automorphisms depends on Theorem 1.4. Thus Corollary 1.5 is proved in its entirety. ## 15. Appendix In this appendix, we construct some Enriques surfaces by using the theory of vector fields, and determine the parameters for Enriques surfaces with finite automorphism group. For the details of the theory of vector fields, refer to Rudakov-Shafarevich [25]. We consider the Enriques surface \(S\) defined by (13.5). Replacing \(y\) (resp. \(t\), resp. \(x\)) by \(y/t^{5}\) (resp. \(1/t\), resp. \(x/t^{2}\)), we have an equation \[y^{2}+(t+1)y=tx^{4}+tx^{2}+(t+1)x+ct^{3}(1+t)^{4}\quad(c\neq 0).\] Considering a change of coordinates \(y=w+x\), we have an equation \[w^{2}+(t+1)w=tx^{4}+(t+1)x^{2}+ct^{3}(1+t)^{4}.\] By the base change \(t=s^{2}\), we have \[\left(\frac{w+sx^{2}+(s+1)x+\sqrt{c}s^{3}(1+s)^{4}}{s+1}\right)^{2}=w.\] We set \[z=\frac{w+sx^{2}+(s+1)x+\sqrt{c}s^{3}(1+s)^{4}}{s+1}.\] Then we have \[\left\{\begin{array}{l}w=(s+1)z+sx^{2}+(s+1)x+\sqrt{c}s^{3}(1+s)^{4},\\ w=z^{2}.\end{array}\right. \tag{15.1}\] Therefore, we have an equation \[z^{2}+(s+1)z+sx^{2}+(s+1)x+\sqrt{c}s^{3}(1+s)^{4}=0, \tag{15.2}\] and we have a rational point \[(x,z)=(\sqrt[16]{c}+\sqrt[8]{c}+(\sqrt[16]{c}+\sqrt[8]{c}+\sqrt[4]{c})s+\sqrt[ 4]{c}s^{3},\sqrt[16]{c}+(\sqrt[16]{c}+\sqrt[8]{c})s+\sqrt[8]{c}s^{2})\] of the equation (15.2) over \(k(s)\). 
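The rational point just displayed can be confirmed by direct substitution. A minimal SymPy sketch, writing \(q\) for \(\sqrt[16]{c}\) (so \(\sqrt{c}=q^{8}\)) and working with coefficients in \(\mathrm{GF}(2)\):

```python
# Sketch: confirm that the stated point is a k(s)-rational point of (15.2).
# q plays the role of c^(1/16); coefficients are reduced mod 2.
import sympy as sp

s, q = sp.symbols('s q')

x = q + q**2 + (q + q**2 + q**4)*s + q**4*s**3
z = q + (q + q**2)*s + q**2*s**2

lhs = z**2 + (s + 1)*z + s*x**2 + (s + 1)*x + q**8*s**3*(1 + s)**4
print(sp.Poly(sp.expand(lhs), s, q, modulus=2).is_zero)   # expect True
```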
We consider the change of coordinates \[\left\{\begin{array}{l}\tilde{x}=x+\sqrt[16]{c}+\sqrt[8]{c}+(\sqrt[16]{c}+ \sqrt[8]{c}+\sqrt[4]{c})s+\sqrt[4]{c}s^{3}\\ \tilde{z}=z+\sqrt[16]{c}+(\sqrt[16]{c}+\sqrt[8]{c})s+\sqrt[8]{c}s^{2}.\end{array}\right. \tag{15.3}\] Then, our equation becomes \[\tilde{z}^{2}+(s+1)\tilde{z}+s\tilde{x}^{2}+(s+1)\tilde{x}=0.\] Since we have \[\frac{\tilde{z}+(s+1)}{\tilde{x}}=\frac{s\tilde{x}+(s+1)}{\tilde{z}},\] we consider a birational mapping \[\left\{\begin{array}{l}X=\frac{\tilde{z}+(s+1)}{\tilde{x}}\\ Z=\frac{s\tilde{x}+(s+1)}{\tilde{z}}.\end{array}\right.\] Then, the inverse mapping is given by \[\left\{\begin{array}{l}\tilde{x}=\frac{(s+1)(Z+1)}{XZ+s}\\ \tilde{z}=\frac{(s+1)(X+s)}{XZ+s}\end{array}\right. \tag{15.4}\] and our equation becomes \[X=Z. \tag{15.5}\] We have \[k(x,y,t)=k(x,w,t)\subset k(x,w,s)=k(x,z,s)=k(\tilde{x},\tilde{z},s)=k(X,Z,s)=k(X,s)\] by (15.5). We consider a rational vector field \[D=(X^{2}+s)^{2}\left\{\frac{\partial x}{\partial s}\frac{\partial}{\partial X }+\frac{\partial x}{\partial X}\frac{\partial}{\partial s}\right\}\] on \(\mathbf{P}^{1}\times\mathbf{P}^{1}\supset\mathbf{A}^{1}\times\mathbf{A}^{1}= \mathrm{Spec}\ k[s,X]\). This vector field \(D\) is 2-closed. Since \[\left\{\begin{array}{l}x=\frac{(s+1)(Z+1)}{XZ+s}+\sqrt[16]{c}+\sqrt[8]{c}+( \sqrt[16]{c}+\sqrt[8]{c}+\sqrt[4]{c})s+\sqrt[4]{c}s^{3}\\ =\frac{(s+1)(X+1)}{X^{2}+s}+\sqrt[16]{c}+\sqrt[8]{c}+(\sqrt[16]{c}+\sqrt[8]{c}+ \sqrt[4]{c})s+\sqrt[4]{c}s^{3},\\ y=\ x+w=x+z^{2},\\ t=\ s^{2}\end{array}\right.\] by (15.3), (15.4) and (15.5), we have \[D=\{(X+1)^{3}+(\sqrt[16]{c}+\sqrt[8]{c}+\sqrt[4]{c})(X^{2}+s)^{2}+\sqrt[4]{c}s ^{2}(X^{2}+s)^{2}\}\frac{\partial}{\partial X}+(s+1)(X^{2}+s)\frac{\partial}{ \partial s},\] and \[D(x)=D(y)=D(t)=0.\] Therefore, we have \(k(X,s)^{D}=k(x,y,t)\) and we see that \((\mathbf{P}^{1}\times\mathbf{P}^{1})^{D}\) is birationally equivalent to our Enriques surface \(S\). To make the calculation simpler, we replace \(s\) (resp. \(X\)) by \(s+1\) (resp. \(X+1\)) and we set \(\alpha=\sqrt[16]{c}+\sqrt[4]{c}\). Then, we have \[D=\{X^{3}+(\alpha+\sqrt[4]{c}s^{2})(X^{2}+s)^{2}\}\frac{\partial}{\partial X} +s(X^{2}+s)\frac{\partial}{\partial s}.\] Note that under the condition \(c\neq 0\) we have \(\alpha=0\) if and only if \(c=1\). Let \(pr:\mathbf{P}^{1}\times\mathbf{P}^{1}\longrightarrow\mathbf{P}^{1}\) be the first projection: \[pr:\begin{array}{ccc}\mathbf{P}^{1}\times\mathbf{P}^{1}&\longrightarrow& \mathbf{P}^{1}\\ \cup&&\cup\\ \mathbf{A}^{1}\times\mathbf{A}^{1}&\longrightarrow&\mathbf{A}^{1}=\operatorname{ Spec}\,k[s]\\ (s,X)&\mapsto&s.\end{array}\] We denote by \(C_{\infty}\) (resp. \(F_{\infty}\)) the irreducible curve in \(\mathbf{P}^{1}\times\mathbf{P}^{1}\) defined by \(X=\infty\) (resp. \(t=\infty\)). Then, we have \(\mathbf{P}^{1}\times\mathbf{P}^{1}=\operatorname{Spec}\,k[s,X]\cup C_{\infty}\cup F _{\infty}\). We also denote by \(C_{0}\) (resp. \(F_{0}\), resp. \(C\)) the irreducible curve in \(\mathbf{P}^{1}\times\mathbf{P}^{1}\) defined by \(X=0\) (resp. \(s=0\), resp. \(s=X^{2}\)). A canonical divisor of \(\mathbf{P}^{1}\times\mathbf{P}^{1}\) is given by \[K_{\mathbf{P}^{1}\times\mathbf{P}^{1}}=-2C_{\infty}-2F_{\infty}.\] The isolated singularities of the vector field \(D\) are \[P_{\alpha}=\left(0,\frac{1}{\alpha}\right),\ P_{0}=(0,0),\ Q_{\infty}=(\infty, \infty).\] We blow-up these isolated singular points to get the minimal resolution of the vector field \(D\). Assume \(c\neq 1\), i.e., \(\alpha\neq 0\). 
Then, there is a graph of curves given as follows: In Figure 2, "dotted line" means an integral curve and "line" means a non-integral curve with respect to the vector field \(D\). The self-intersection numbers of the curves are given as follows: \[\begin{array}{l}C_{0}^{2}=C_{\infty}^{2}=E_{2}^{2}=E_{4}^{2}=E_{5}^{2}=E_{6} ^{2}=E_{7}^{2}=G_{4}^{2}=G_{5}^{2}=-1\\ F_{\infty}^{2}=G_{1}^{2}=G_{2}^{2}=G_{3}^{2}=-2\\ F_{0}^{2}=E_{1}^{2}=E_{3}^{2}=C^{2}=-4,\end{array}\] Figure 2. Resolution of the isolated singular points of \(D\) and in Figure 2 the intersection of two curves are transversal whenever the two curves intersect. We denote this surface by \(\widetilde{\mathbf{P}^{1}\times\mathbf{P}^{1}}\) and the blowing-up by \[\varphi:\widetilde{\mathbf{P}^{1}\times\mathbf{P}^{1}}\longrightarrow\mathbf{P} ^{1}\times\mathbf{P}^{1}.\] Then, a canonical divisor of \(\widetilde{\mathbf{P}^{1}\times\mathbf{P}^{1}}\) is given by \[\begin{array}{rl}K_{\widetilde{\mathbf{P}^{1}\times\mathbf{P}^{1}}}=&-2C_{ \infty}-2F_{\infty}-3G_{1}-4G_{2}-3G_{3}-2G_{4}-G_{5}\\ &+E_{1}+2E_{2}+2E_{3}+4E_{4}+3E_{5}+3E_{6}\end{array} \tag{15.6}\] and the divisorial part of \(D\) on \(\widetilde{\mathbf{P}^{1}\times\mathbf{P}^{1}}\) is given by \[\begin{array}{rl}(D)=&-2C_{\infty}-4F_{\infty}-5G_{1}-6G_{2}-5G_{3}-2G_{4}- G_{5}\\ &+E_{1}+2E_{2}+2E_{3}+4E_{4}+3E_{5}+3E_{6}+E_{7}\end{array} \tag{15.7}\] Since the vector field \(D\) has no isolated singular points on \(\widetilde{\mathbf{P}^{1}\times\mathbf{P}^{1}}\), the quotient surface \((\widetilde{\mathbf{P}^{1}\times\mathbf{P}^{1}})^{D}\) is non-singular; we denote it by \(\tilde{S}\). Let \(\psi:\widetilde{\mathbf{P}^{1}\times\mathbf{P}^{1}}\longrightarrow\tilde{S}\) be the projection; for simplicity, we use the same symbol for the image of a curve on \(\widetilde{\mathbf{P}^{1}\times\mathbf{P}^{1}}\). Then, the configuration of the images of curves above is given in Figure 3. As for the self-intersection numbers of the curves, we have \[F_{\infty}^{2}=G_{1}^{2}=G_{3}^{2}=-1,G_{2}^{2}=G_{4}^{2}=-4\] For the other curves the self-intersection numbers are \(-2\). If two curves intersect, then the intersection of the two curves is transversal except the intersection of \(G_{4}\) and \(G_{5}\), which is \(G_{4}\cdot G_{5}=2\). We blow-down \(G_{1}\), \(G_{3}\), \(F_{\infty}\) and \(G_{2}\) successively. Then we have a relatively minimal quasi-elliptic surface \(f:S\longrightarrow\mathbf{P}^{1}\) with the cusp locus \(C\) and a diagram \[\mathbf{P}^{1}\times\mathbf{P}^{1}\stackrel{{\varphi}}{{ \longleftarrow}}\widetilde{\mathbf{P}^{1}\times\mathbf{P}^{1}}\stackrel{{ \psi}}{{\longrightarrow}}(\widetilde{\mathbf{P}^{1}\times \mathbf{P}^{1}})^{D}=\tilde{S}\stackrel{{\pi}}{{\longrightarrow}}S.\] As for canonical divisor \(K_{\widetilde{\mathbf{P}^{1}\times\mathbf{P}^{1}}}\) of \(\widetilde{\mathbf{P}^{1}\times\mathbf{P}^{1}}\), we have the formula by Rudakov-Shafarevich [25]: \[K_{\widetilde{\mathbf{P}^{1}\times\mathbf{P}^{1}}}\sim\psi^{*}K_{\widetilde{S} }+(D).\] Therefore, we have \[K_{\widetilde{\mathbf{P}^{1}\times\mathbf{P}^{1}}}\sim(\pi\circ\psi)^{*}K_{S} +2G_{1}+2G_{2}+2F_{\infty}+(D). \tag{15.8}\] Combining (15.6), (15.7) and (15.8), we have \((\pi\circ\psi)^{*}K_{S}\sim 0\). Therefore, \(K_{S}\) is numerically trivial. Since the second Betti number \(b_{2}(\widetilde{S})=b_{2}(\mathbf{P}^{1}\widetilde{\times}\mathbf{P}^{1})=2+ 12=14\), we have \(b_{2}(S)=10\). Therefore, \(S\) is an Enriques surface, confirming what we already knew. 
On \(S\), we obtain the configuration of curves given in Figure 4 (configuration of curves, \(c\neq 1\)). We have \(C_{0}^{2}=C_{\infty}^{2}=0\) and the other curves are nodal curves. Since \(C_{\infty}^{2}=0\), we have a genus 1 fiber space such that \(C_{\infty}\) is contained in a fiber. Since \(F_{0}\cdot C_{\infty}=1\) and any Enriques surface has no section for any genus 1 fiber space, we see that the fiber is the multiple fiber \(2C_{\infty}\). Since \(C_{\infty}\) is a rational curve, we see that the multiple fiber \(2C_{\infty}\) is of type \(\mathrm{II}\). However, if the configuration of the nodal curves is given by Figure 5, then we have no such genus 1 fibration by [12]. Hence, if \(c\neq 1\), the automorphism group of \(S\) is infinite. If \(c=1\), then \(\alpha=0\). Therefore, on \(\mathbf{P}^{1}\times\mathbf{P}^{1}\), \(P_{\alpha}\) sits on \(C_{\infty}\), and after one blowing-up at \(P_{0}\) a singular point sits on \(C_{0}\). Therefore, on \(\widetilde{\mathbf{P}^{1}\times\mathbf{P}^{1}}\) we have \(C_{0}^{2}=C_{\infty}^{2}=-4\). Therefore, on \(\tilde{S}\) both \(C_{0}\) and \(C_{\infty}\) become nodal curves and the configuration of the nodal curves is given in Figure 5. Hence, as in [12] the automorphism group of the Enriques surface \(S\) is finite. _Remark 15.1_.: Let \(\lambda\) be a solution of the equation \(x^{2}+x+\sqrt{a}=0\) (\(a\in k^{*}\)). For the Enriques surface \(S\) defined by (13.6) (resp. (13.2)), we take the vector field \[\begin{array}{ll}D=&\{s^{2}X(X+1)^{2}+(\sqrt{a}s+\sqrt{b})^{2}(sX^{2}+1)^{2}\}\frac{\partial}{\partial X}\\ &+s^{2}(s+1)(sX^{2}+1)\frac{\partial}{\partial s}\\ (\text{resp. }D=&\{s^{2}(X+1)^{3}+s^{2}(s+1)^{2}+(\lambda s^{2}+\sqrt[4]{a}b)(X^{2}+s)^{2}\}\frac{\partial}{\partial X}\\ &+s^{2}(s+1)(X^{2}+s)\frac{\partial}{\partial s})\end{array}\] on \(\mathbf{P}^{1}\times\mathbf{P}^{1}\). Then, \((\mathbf{P}^{1}\times\mathbf{P}^{1})^{D}\) is birationally equivalent to \(S\). Using this vector field, in a similar way to the above we can show that \(S\) has a finite automorphism group if and only if \(a=b\) (resp. \(a=\frac{b^{4}}{(1+b)^{8}}\)). ### Acknowledgements We thank S. Kondo for helpful discussions and I. Dolgachev and G. Martin for interesting comments.
2308.00248
Gapless superconducting state and mirage gap in altermagnets
The interplay between spin-orbit interaction (SOI) and magnetism produces interesting phenomena in superconductors. When a two-dimensional (2D) system with strong SOI is coupled to an $s$-wave superconductor, an in-plane magnetic field can drive the system into a gapless superconducting state and induce a mirage gap at finite energies for an Ising superconductor. In this work, we demonstrate that when an $s$-wave superconductor is proximitized to an altermagnet, the intrinsic anisotropic spin splitting of the altermagnet can result in a gapless superconducting state and a pair of mirage gaps at finite energy. The gapless superconductivity exhibits spin-polarized segmented Fermi surfaces, with coexisting spin-singlet and spin-triplet pairings that have a $d$-wave character. Importantly, the gapless superconducting and mirage gap features are quantified through quantum transport. Our results suggest that altermagnet is an ideal platform for studying gapless superconducting states and mirage gap physics.
Miaomiao Wei, Longjun Xiang, Fuming Xu, Lei Zhang, Gaomin Tang, Jian Wang
2023-08-01T03:06:24Z
http://arxiv.org/abs/2308.00248v2
# Gapless superconducting state and mirage gap in altermagnets ###### Abstract The interplay between Rashba spin-orbit interaction (SOI) and superconductivity can give rise to many interesting effects in which an in-plane magnetic field is essential. For instance, for a 2D system with strong Rashba SOI proximity coupled to an s-wave superconductor, the in-plane magnetic field can drive the system into a gapless superconducting state, and it can also induce a mirage gap at finite energies for an Ising superconductor while keeping the main gap at the Fermi level intact. We show that when an s-wave superconductor is proximitized to an altermagnet, in the absence of SOI and in-plane magnetic field, a gapless superconducting state with a mirage gap can emerge showing a d-wave signature, due to the anisotropic spin splitting of the altermagnet. When the Rashba SOI is added, the system can turn into a gapped superconductor with a mirage gap. The pairing mechanism and the transport properties of the mirage gap are investigated. Our result suggests that the altermagnet is an ideal platform for studying the gapless superconducting state and the mirage gap. _Introduction_ -- The interplay of magnetism and superconductivity is an important research arena in condensed matter physics [1; 2; 3]. While magnetism can hamper conventional superconducting pairing, and superconductivity ceases to exist if the magnetic field exceeds the Pauli limit [4], magnetism can also enable finite-momentum and/or triplet pairing for unconventional superconductivity, giving rise to interesting physics. For instance, an in-plane magnetic field can partially destroy the pairing for a 2D system with strong spin-orbit interaction (SOI) proximity coupled with an s-wave superconductor. This in turn leads to a segmented Fermi surface that can be used to create Majorana bound states, reveal information on the spin textures of the electron Fermi surface in the normal state, and characterize the Fulde-Ferrell-Larkin-Ovchinnikov state in unconventional superconductors [5; 6; 7]. Recently, this gapless superconducting state has been observed experimentally [7]. Note that besides the gapless superconducting states discussed here, there are two other gapless superconducting states: the first one features a Bogoliubov Fermi surface, in which the gapless superconducting states are due to the form factor in the excitation spectrum, like that in p-wave and d-wave superconductors [8; 9; 10], and the second one is also created by applying an external magnetic field (under the Pauli limit) to the superconductor, but in which the gap is fully closed along the whole Fermi surface, as observed in Ref. [11]. In addition, for an Ising superconductor [12; 13; 14; 15; 16; 17] with an in-plane magnetic field, the presence of equal-spin triplet pairing at finite energy leads to a mirage gap that coexists with a quasi-particle density of states [18; 19]. The interplay of finite-momentum and finite-energy superconducting pairing was investigated in [20]. Recently, in addition to the ferromagnetic and antiferromagnetic phases, a third magnetic phase dubbed the altermagnetic phase has been identified [21; 22; 23; 24; 25; 26; 27; 28; 29].
The altermagnet (AM) has a collinear antiferromagnetic structure with a large non-relativistic anisotropic spin splitting (ASS), which leads to a number of interesting physical effects unique to the AM, including giant and tunneling magnetoresistance [22], the anomalous spin Hall effect [25; 31; 32; 33], spin splitting torque and the T-odd spin Hall effect [26; 27; 28; 29], pronounced thermal transport [30], and the spin Seebeck and spin Nernst effects of magnons in the absence of Berry curvature as a result of the giant spin splitting of the magnonic band [34; 35]. Moreover, there are abundant materials that exhibit the AM phase, such as RuO\({}_{2}\), MnTe, CrO, and CrSb, ranging from insulators, semiconductors, and semimetals to metallic systems [24], making it an ideal platform for material engineering [36; 37; 38; 39]. When an AM is sandwiched between two superconducting leads, \(0\)-\(\pi\) oscillation was predicted due to the finite-momentum pairing [40; 41; 42]. Andreev reflection from the interface of an AM and a superconductor was studied to explore its dependence on the orientation of the AM relative to the interface, impurity disorder, and the tunneling barrier [34; 43]. In addition, it was shown that first- and second-order topological superconductivity can emerge in 2D AM metals [45; 46]. For a 2D system proximitized to an s-wave superconductor, one normally requires an in-plane magnetic field and effective SOI to achieve a gapless superconducting state and a mirage gap. Since the in-plane magnetic field may destroy the proximitized superconducting state before creating the gapless superconducting state, there is only a very narrow window within which the in-plane magnetic field can be tuned, making it difficult to control and manipulate the gapless state. Our work shows that the use of an in-plane magnetic field is not necessary. By tuning the anisotropic spin splitting (ASS), the AM proximitized to an s-wave superconductor (AM-SC) can change from an s-wave superconductor to a gapless superconductor with a d-wave-like segmented Fermi surface. At the same time, the mirage gap emerges due to the finite-energy pairing, which can be identified by the quantized Andreev reflection coefficient. Turning on the SOI destroys the gapless superconducting state but enriches the physics of the mirage gap. For instance, varying the strength of the SOI can lead to a transition from a d-wave AM-SC state to an s-wave AM-SC state, while the mirage gap can become anisotropic with \(C_{4}\) symmetry. _Hamiltonian_ -- The Hamiltonian of the altermagnet is given by (\(\hbar=e=2m=1\)) \[H_{0}={\bf k}^{2}+t_{J}(k_{x}^{2}-k_{y}^{2})\sigma_{z}+\lambda(k_{x}\sigma_{y}-k_{y}\sigma_{x})-\mu\] where \(\mu\) is the chemical potential and \(t_{J}\) is a coupling constant responsible for the anisotropic spin splitting. Since this Hamiltonian has \(C_{4}\) symmetry, we can also rotate one of the principal axes by an angle \(\theta\), \[k_{x}=k_{x}^{\prime}\cos\theta+k_{y}^{\prime}\sin\theta,\qquad k_{y}=-k_{x}^{\prime}\sin\theta+k_{y}^{\prime}\cos\theta,\] to find \[H_{0}(\theta)=k^{2}+t_{1}(k_{x}^{2}-k_{y}^{2})\sigma_{z}+t_{2}k_{x}k_{y}\sigma_{z}+\lambda_{1}(k_{x}\sigma_{y}-k_{y}\sigma_{x})+\lambda_{2}(k_{x}\sigma_{x}+k_{y}\sigma_{y})-\mu \tag{1}\] where \(t_{1}=t_{J}\cos 2\theta\), \(t_{2}=t_{J}\sin\theta\cos\theta\), \(\lambda_{1}=\lambda\cos\theta\), and \(\lambda_{2}=\lambda\sin\theta\). In the following calculations, our energy unit is eV.
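To make the anisotropy concrete, the normal-state Hamiltonian above can be diagonalized numerically. Below is a minimal sketch for \(\theta=0\) with illustrative parameter values (an assumption, not the paper's specific settings): for \(\lambda=0\) the spin splitting equals \(2t_{J}k^{2}\) along the \(k_{x}\) and \(k_{y}\) axes, with opposite spin ordering, and vanishes along the diagonals.

```python
# Sketch: the normal-state AM Hamiltonian (theta = 0) as a 2x2 matrix,
# illustrating the anisotropic (d-wave-like) spin splitting for lambda = 0.
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.diag([1.0, -1.0]).astype(complex)
s0 = np.eye(2, dtype=complex)

def H0(kx, ky, tJ=0.4, lam=0.0, mu=0.05):
    k2 = kx**2 + ky**2
    return (k2 - mu)*s0 + tJ*(kx**2 - ky**2)*sz + lam*(kx*sy - ky*sx)

k = 0.3
for kx, ky, label in [(k, 0.0, 'along kx'), (0.0, k, 'along ky'),
                      (k/np.sqrt(2), k/np.sqrt(2), 'diagonal')]:
    e = np.linalg.eigvalsh(H0(kx, ky))
    print(f'{label:>8}: splitting = {e[1] - e[0]:.4f}')
# along kx and ky the splitting is 2*tJ*k^2 (with opposite spin ordering),
# along the diagonal it vanishes -- the hallmark of the altermagnetic spin splitting.
```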
If AM is proximitized to a s-wave superconductor with a gap function \(\Delta\), the Hamiltonian of this AM-SC becomes [18; 47] \[H=\begin{pmatrix}H_{0}(k)&\Delta i\sigma_{y}\\ -\Delta i\sigma_{y}&-H_{0}^{*}(-k)\end{pmatrix} \tag{2}\] It is easy to show that the operator \(\sigma_{z}\) commutes with \(H\)[50]. _Pairing mechanism_ -- Defining the general pairing correlation function [18; 48; 51] \[{\cal F}({\bf k},\epsilon)=\Delta(F_{0}\sigma_{0}+{\bf F}\cdot{\mathbf{\sigma}})i \sigma_{y} \tag{3}\] where \(F_{0}\) and \({\bf F}\) denote the singlet and triplet pairing correlations. For instance, the triplet pairing wave function corresponding to \(F_{z}\) is \(|\psi\rangle=F_{z}(|\uparrow\downarrow\rangle+|\downarrow\uparrow\rangle)\). From the Gorkov equation [18; 48; 51; 52], \[\begin{bmatrix}\varepsilon-H_{0}(k)&-\Delta i\sigma_{y}\\ \Delta i\sigma_{y}&\varepsilon+H_{0}^{*}(-k)\end{bmatrix}\begin{bmatrix}{\cal F }(k,\varepsilon)\\ \bar{G}(k,\varepsilon)\end{bmatrix}=\begin{bmatrix}0\\ 1\end{bmatrix} \tag{4}\] where \(\varepsilon\) is energy, \({\cal F}\) and \(\bar{G}\) are anomalous and regular Green's functions, respectively. \({\cal F}\) is determined by the following equation, \[[\Delta^{2}i\sigma_{y}-i(\varepsilon+H_{0}^{*}(-k))\sigma_{y}( \varepsilon-H_{0}(k))]{\cal F}=\Delta. \tag{5}\] Assuming \(\lambda=0\) and \(a\equiv t_{1}(k_{x}^{2}-k_{y}^{2})+t_{2}k_{x}k_{y}\), the Hamiltonian is expressed as \(H_{0}=k^{2}\sigma_{0}+a\sigma_{z}\). In this case, \({\cal F}\) is solved from Eq.(5), \[F_{0}({\bf k},\varepsilon) = \big{(}\varepsilon^{2}-\Delta^{2}-{\bf k}^{4}+a^{2}\big{)}/M({\bf k },\varepsilon)\,\] \[F_{z}({\bf k},\varepsilon) = 2\varepsilon a/M({\bf k},\varepsilon)\, \tag{6}\] with \[M({\bf k},\varepsilon)=4\varepsilon^{2}a^{2}-\big{(}\varepsilon^{2}-\Delta^{ 2}-{\bf k}^{4}+a^{2}\big{)}^{2}. \tag{7}\] Hence both singlet and triplet pairing are present at finite energy while at \(\varepsilon=0\) where only singlet pairing survives. The pairing at finite energy leads to a pseudo-gap which was termed as mirage gap [18]. To find the location and width of mirage gap, we diagonalize the Hamiltonian Eq.(2) and obtain four eigenvalues \(E_{-\pm}=-a\pm\sqrt{\Delta^{2}+k^{4}}\) and \(E_{+\pm}=a\pm\sqrt{\Delta^{2}+k^{4}}\). The main gap is determined by \(E_{-\mp}-E_{+-}=-2a+2\sqrt{\Delta^{2}+k^{4}}\). Hence \(\sqrt{\Delta^{2}+k^{4}}=a\) gives the condition for closing of the main gap at particular (\({\bf k},\theta\)), giving rise to the segmented Fermi surface (the graphical solution is shown in Fig.1b for \(\theta=0\)). The evolution of band structure of mirage gap (\(E\) versus \(k_{x}\) for \(k_{y}=0\)) is shown in Fig.1a, from which we see that the main gap at \(E_{F}=0\) is opened for small \(t_{J}\). At a critical value of \(t_{J}\), e.g., \(t_{J}=0.36\) for \(k_{y}=0\), we have \(E_{+-}=E_{-+}=0\) and the main gap is closed at \(k_{y}=0\). However, it does not mean that the system becomes a normal state. In Fig.1b, we plot the Fermi surface at \(E_{F}=0\) with chemical potential \(\mu=0.05\), which clearly shows that it is a gapless superconducting state with segmented Fermi surface [5]. Upon further increasing \(t_{J}\), the mirage gap is formed while the main gap remains closed, suggesting that the existence of main gap and mirage gap are mutual exclusive at fixed \(k_{y}\). When \(t_{J}>2.0\) the system turns into a normal state and the Fermi surface becomes a circle. 
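The quoted spectrum can be reproduced numerically from Eq. (2). A minimal sketch for \(\theta=0\), \(\lambda=0\) (with \(\mu\) absorbed into \(k^{2}\) as in the text, and illustrative parameter values) builds the \(4\times 4\) BdG matrix in the Nambu basis and compares its eigenvalues with \(\pm a\pm\sqrt{\Delta^{2}+k^{4}}\).

```python
# Sketch: numerical check of the BdG spectrum for lambda = 0,
# H0 = k^2*sigma_0 + a*sigma_z with a = tJ*(kx^2 - ky^2),
# in the Nambu basis (e_up, e_down, h_up, h_down).
import numpy as np

s0 = np.eye(2, dtype=complex)
sz = np.diag([1.0, -1.0]).astype(complex)
isy = np.array([[0, 1], [-1, 0]], dtype=complex)   # i*sigma_y

def bdg(kx, ky, tJ, Delta):
    a = tJ*(kx**2 - ky**2)
    h0 = (kx**2 + ky**2)*s0 + a*sz
    # for lambda = 0, H0 is real and even in k, so -H0*(-k) = -H0(k)
    return np.block([[h0, Delta*isy], [-Delta*isy, -h0.conj()]])

kx, ky, tJ, Delta = 0.25, 0.0, 0.4, 0.001
a, xi = tJ*(kx**2 - ky**2), kx**2 + ky**2
analytic = np.sort([s1*a + s2*np.sqrt(Delta**2 + xi**2)
                    for s1 in (1, -1) for s2 in (1, -1)])
numeric = np.linalg.eigvalsh(bdg(kx, ky, tJ, Delta))
print(np.allclose(numeric, analytic))   # expect True: E = +-a +- sqrt(Delta^2 + k^4)
# the main gap closes where sqrt(Delta^2 + k^4) = |a|, producing the
# segmented Fermi surface once tJ is large enough.
```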
Hence during the increasing of \(t_{J}\), the system changes from the s-wave superconducting states to a d-wave like gapless superconducting state and finally becomes a normal state. This is different from the mirage gap investigated in Ref.[18] where both main gap and mirage gap can be present at the same time. The width of mirage gap is \[\delta=E_{++}-E_{+-}=2\sqrt{\Delta^{2}+k^{4}} \tag{8}\] which is independent of \(t_{J}\) while the location of the mirage gap (mid point of the gap) is \((E_{++}+E_{+-})/2=a\) which is linearly proportional to \(t_{J}\). Since the system has \(C_{4}\) symmetry we expect that the mirage gap enjoys the same symmetry as well. If we turn on the SOI, spin is not a good quantum number anymore and additional features emerge. It is easy to show that all four components of the pairing correlation function are nonzero. Moreover, mirage gap and main gap can open up at the same time similar to the case of Ising superconductor with in-plane magnetic field [18]. As will be seen below that these conclusions agree with the quantum transport calculation. To reveal the nature of mirage gap, in the following we perform quantum transport calculation for AM system with one normal lead and one AM-superconducting lead. _Quantum transport formalism_ -- The schematic plot of the system we considered is shown in Fig.4a where an AM-nanoribbon is in contact with an AM-superconducting lead. In the presence of mirage gap, the transmission coefficient in general consists of the Andreev reflection coefficient \(T^{A}\) and quasi-particle transmission coefficient \(T^{Q}\) which can be calculated using the nonequilibrium Green's function. In the Nambu representation (\(e\uparrow,e\downarrow,h\uparrow,h\downarrow\)), the Andreev reflection and quasi-particle transmission coefficients are defined as (assuming \(E_{F}\geq 0\)) \[T^{A} = {\rm Tr}[\Gamma_{L\epsilon}G^{r}\Gamma_{L\hbar}G^{a}],\] \[T^{Q} = {\rm Tr}[\Gamma_{L\epsilon}G^{r}\Gamma_{R\epsilon}G^{a}],\] where e and h denote the electron and hole. In addition, \(T_{\sigma}^{A}=\mathrm{Tr}[\Gamma_{L\epsilon}G^{r}\Gamma_{L\hbar}G^{a}]_{\sigma\sigma}\) and \(T_{\sigma}^{Q}=\mathrm{Tr}[\Gamma_{L\epsilon}G^{r}\Gamma_{R\epsilon}G^{a}]_{\sigma\sigma}\) are the Andreev reflection and quasi-particle transmission coefficients with spin \(\sigma=\uparrow,\downarrow\)[49]. The linewidth function is defined as \(\Gamma_{L/R}=i\left[\Sigma_{L/R}^{r}-\Sigma_{L/R}^{a}\right]\) where \(\Sigma_{L/R}^{r}\) is the retarded self-energy describing the coupling between the left/right lead and the central scattering region. Here \(G^{r}=[E_{F}-H-\Sigma_{L}^{r}-\Sigma_{R}^{r}]\) is the retarded Green's function, where \(E_{F}\) is the Fermi energy and \(H\) is the Hamiltonian of the central scattering region. The advanced Green's function is given by \(G^{a}=[G^{r}]^{\dagger}\). In the numerical calculation, we discretize the Hamiltonian in a \(20\times 20\) mesh and set \(\mu=0.05\) and \(\Delta=0.001\). _Numerical results_ -- We first discuss the case of \(\lambda=0\). Note that in Eq.(1) the principal axis makes an angle \(\theta\) with normal of the normal metal-superconductor interface. We first give an example of the Andreev reflection at \(\theta=0\) and establish the fact that the spin resolved Andreev reflection coefficient \(T_{\sigma}^{A}\) is an integer within the gap (note that the main gap and mirage gap are mutual exclusive at a particular angle). In Fig.1c, we plot the Andreev reflection coefficient versus \(t_{J}\). 
Typical values of \(t_{J}\) with the corresponding band structures is shown in Fig.1a. As long as the main gap is not closed, i.e., \(t_{J}<0.36\), we find \(T_{\sigma}^{A}=1\) within the gap while away from the gap \(T_{\sigma}^{A}\) decays to zero. Fig.1d depicts \(T_{\sigma}^{A}\) and \(T_{\sigma}^{Q}\) versus \(E_{F}\) at \(t_{J}=0.4\) for the mirage gap. It shows that, by increasing \(t_{J}\), \(T^{A}\) at the main gap splits into two spin resolved \(T_{\sigma}^{A}\) below and above \(E_{F}=0\) with \(T_{\sigma}^{A}=1\) within the mirage gap. Therefore the energy dependence of \(T_{\sigma}^{A}\) for the main gap and the mirage gap have the same behavior. However, if we plot the total Andreev reflection coefficient, \(T^{A}\) is not a constant value with the mirage gap since \(T_{\sigma}^{A}\) is nonzero outside of the mirage gap. Similar behavior is found at \(E_{F}=0\). When the main gap is closed, \(T^{A}\) at \(E_{F}=0\) is not equal to \(2\) and its nonzero value is contributed from Andreev reflection of the mirage gap at \(E_{F}\neq 0\). From Fig.1d, we also see that both charge and spin Andreev reflections are nonzero, confirming the existence of singlet and triplet (\(F_{z}\)) pairing because if there was only singlet pairing the spin Andreev reflection would not be allowed. Note that in the presence of mirage gap, quasi-particle transmission is allowed as seen from Fig.1d since there is no global gap. It is easily confirmed that \(\sum_{\sigma}(T_{\sigma}^{A}+T_{\sigma}^{Q})=2\) from Fig.1d. Therefore perfect quasi-particle transmission occurs when Andreev reflection coefficient vanishes. Now we study the angular dependence of Andreev reflection coefficients which are plotted in Fig.2a,b for different \(t_{J}\) at \(E_{F}=0\) and \(E_{F}=2\Delta\), respectively. We see that the Andreev reflection from the main gap is isotropic (s-wave superconducting AM) for small \(t_{J}\) until \(t_{J}>0.36\) where \(T^{A}\) versus \(\theta\) becomes anisotropic with d-wave signature, indicating formation of the mirage gap. Therefore a critical value for \(t_{J}\) exists, separating s-wave and d-wave behaviors of superconducting AM. Even for \(t_{J}=1.0\), the main gap still exists for certain range of angles. We note that the both the main gap and mirage gap show \(C_{4}\) symmetry with principal axis at \(\theta=\pi/4\) and \(\pi/2\), respectively, which confirms that the main gap and mirage gap are mutually exclusive at particular angle. In Fig.2c, we display \(T^{A}\) versus \(E_{F}\) for different \(\theta\) while fixing \(t_{J}=1.0\). At \(\theta=0\), there are two mirage gaps with spin resolved \(T^{A}=1\) within individual gap indicating that the mirage gap is spin resolved. As we increase the angle \(\theta\) to \(\pi/16\) (not shown in the figure) or \(\pi/8\), the mirage gaps moves towards \(E_{F}=0\) while an additional pair of mirage gaps appear with a much narrow width. At \(\theta=3\pi/16\), there Figure 1: (a). Energy for different \(t_{J}\). Expressions of \(E_{\pm\pm}\) are given in text main text. Here we only show the energy band in the region \(k_{x}=(-\pi,0)\) while fixing \(k_{y}=0\). Red and blue curves denote spin up and down, respectively. Note that the band structure is symmetric when \(k_{x}\) changes to \(-k_{x}\). (b). Segmented Fermi surface for \(t_{J}=0.45\) showing d-wave signature. (c). The Andreev reflection coefficient \(T^{A}=T_{+}^{A}+T_{+}^{A}\) versus \(t_{J}\) at \(E_{F}=0\). (d). 
The spin resolved Andreev reflection \(T_{\sigma}^{A}\) and quasi-particle transmission coefficient \(T_{\sigma}^{Q}\) versus Fermi energy at a fixed \(t_{J}=0.4\) where the mirage gap emerges. In (c) and (d), we set \(\lambda=0\) and \(\theta=0\). is only one pair of mirage gap left and at \(\theta=\pi/4\) the mirage gap disappears and the main gap opens up with \(T^{A}=2\) within the gap. In Fig.2d, we depict Andreev reflection coefficient versus \(E_{F}\) for different \(t_{J}\) while fixing \(\theta=0\). At \(t_{J}=0\), we have the main gap with \(T^{A}=2\) and \(t_{J}=0.4\) the Andreev reflection coefficient is obtained by adding two spin resolved \(T^{A}_{\sigma}\) in Fig.1d. At \(t_{J}=1.0\) we find that the mirage gap is moving away from \(E_{F}=0\) with the center of the gap shifting linearly in \(t_{J}\) while the width of the mirage gap is independent of \(t_{J}\) which agrees with the analytic analysis in Eq.(8). Next we investigate the effect of SOI on the main gap, mirage gap, and Andreev reflection. In the presence of SOI, the spin is not a good quantum number and we use total Andreev reflection coefficient instead of spin resolved one. Once again, we show numerical results for \(\theta=0\) unless specified otherwise. In Fig.3a, we show the evolution of band structure for different \(t_{J}\) at \(\lambda=0.07\). Several observations are in order. (1). The presence of SOI will shift the band horizontally and therefore there are two main gaps at different momenta. The mirage gap opens up for small \(t_{J}\) and can coexist with the main gap in the presence of SOI. We see that the mirage gap and main gaps are located at different momenta as well. At this stage, the system shows s-wave superconducting character. (2). As we increase \(t_{J}\) the width of main gap and the position of the mirage gap remain almost the same while the width of mirage gap increases slowly. When \(t_{J}=0.6\), the second pair of mirage gap occurs at a larger energy \(|E_{F}|\) with a much narrow width (we only show one of them here). At this point, the number of transmission channel is two. We also notice that along with the occurrence of the mirage gap there is also a huge insulating local gap \(\Delta_{\rm ins}\) marked in Fig.3a whose width decreases with increasing of \(t_{J}\). (3). When \(t_{J}\) is increase further, the second pair of mirage gap moves towards \(E_{F}=0\) and the insulating gap is closed at a critical value of \(t_{J}\). When \(t_{J}\) is larger than the critical value, the gap is reopened forming a pair of superconducting main gap adjacent to the original main gap along \(k_{x}\)-axis. At the same time, the maximum number of transmission channel can be three. Note that the number of transmission channel depends on \(\mu\), \(t_{J}\), \(\lambda\), and \(E_{F}\). Fig.3b depicts the energy band at \(t_{J}=1.0\) and \(\lambda=0.07\) for other angles, showing that this new gap is highly anisotropic with \(C_{4}\) symmetry. The mirage gap also exhibits anisotropy similar to Fig.2b. As will be discussed below that the angular dependence of this new gap is the same as \(T^{A}(\theta)\) shown in Fig.3b. In this sense, the system displays d-wave character for the main gap at \(E_{F}=0\). In Fig.3c, we show the angular dependence of Andreev reflection coefficient \(T^{A}(\theta)\) for different \(\lambda\) and fixing \(t_{J}=1.0\). It shows that for both zero and small SOI, \(T^{A}(\theta)\) displays a d-wave like character. 
However, as soon as the SOI is turned on, the symmetry axis of the d-wave is rotated by \(\pi/4\). For larger SOI, for example \(\lambda=0.1\), \(T^{A}(\theta)\) changes from d-wave-like to s-wave-like. At \(t_{J}=1.0\), the maximum number of transmission channels reaches three as long as \(T^{A}(\theta)\) is d-wave-like. Fig.3d plots the Andreev reflection versus \(E_{F}\) for different \(t_{J}\) at fixed \(\lambda=0.07\). Due to the existence of two main gaps of different widths, the Andreev reflection coefficient close to \(E_{F}=0\) is three, and when \(E_{F}\) lies outside the narrow main gap but within the wide main gap, we have \(T^{A}=2\) and \(T^{Q}=1\). A similar situation occurs for the mirage gap, since there are also two pairs of mirage gaps that overlap with each other near \(E_{F}=0.012\) (see also Fig.3a), while near \(E_{F}=0.018\) there is only one pair of mirage gaps and therefore \(T^{A}=1\) within the gap. We also see that the width of the wide main gap and the position of the first mirage gap remain the same for different \(t_{J}\), while the width of the first mirage gap increases with \(t_{J}\), in agreement with the observation made in Fig.3a. Interestingly, although \(T^{A}\) remains symmetric when \(E_{F}\) changes sign, for \(\lambda\neq 0\) the quasi-particle transmission coefficient is no longer a symmetric function, because the number of transmission channels across \(E_{F}=(-0.02,0.02)\) can vary from two to three. Now we show that in the gapless superconducting state studied in Refs.[5] and [6] the mirage gap occurs as well. The Hamiltonian is that of the 2D surface of a topological insulator with an in-plane magnetic field or Zeeman energy \(\mathbf{V}\), defined as \[H_{0}=v_{F}(k_{x}\sigma_{y}-k_{y}\sigma_{x})-\mu+\mathbf{V}\cdot\mathbf{\sigma} \tag{9}\] which is proximity coupled to an s-wave superconductor, with the full Hamiltonian given by Eq.(2). In Fig.4b, we depict the band evolution of this model, which clearly shows that the superconducting gap is closed at \(\theta=\pi/2\), i.e., along the \(k_{y}\)-axis. In Fig.4c, the angular dependence of \(T^{A}\) is plotted for different \(V\), from which we see that the system is an s-wave-like superconducting state at \(V=0\) and changes to a p-wave-like gapless superconducting state, which has been studied in detail in Ref.[5]. In Fig.4d, the mirage gap is manifested in the integer Andreev transmission coefficient, similar to what we just discussed for the AM-superconductor. _Conclusion_ -- In summary, we show that in the absence of SOI the AM-superconductor can exhibit a d-wave gapless superconducting state by tuning the ASS, and that singlet and triplet pairing occur at finite energy at the same time, leading to the mirage gap, which is quantified by an integer Andreev reflection coefficient together with a nonzero quasi-particle transmission coefficient. When SOI is present, both the main gap and the mirage gap are nonzero, and the system changes from a d-wave superconducting state to an s-wave superconducting state when the strength of the SOI exceeds a critical value. _Acknowledgments_ -- This work was supported by the National Natural Science Foundation of China (Grant No. 12034014, 12074230, and 12174262).
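The proximitized Dirac-surface model of Eq.(9) is simple enough that its Bogoliubov-de Gennes bands can be scanned numerically. The sketch below is not the authors' code: the parameter values and the choice of an in-plane field along \(x\) are illustrative assumptions. It builds the 4x4 BdG matrix for Eq.(9) with an s-wave pair potential and compares the smallest quasiparticle energy along the \(k_x\) and \(k_y\) directions, mimicking the gap closing along \(k_y\) discussed around Fig.4b.

```python
# Minimal numerical sketch (not the paper's code): BdG bands for the proximitized
# topological-insulator surface of Eq.(9) with an in-plane Zeeman field V along x.
# All parameter values below are illustrative assumptions.
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
s0 = np.eye(2, dtype=complex)

vF, mu, Delta, Vx = 1.0, 0.5, 0.1, 0.15   # illustrative units

def h_normal(kx, ky):
    """Normal-state Hamiltonian of Eq.(9): vF(kx*sy - ky*sx) - mu + V.sigma."""
    return vF * (kx * sy - ky * sx) - mu * s0 + Vx * sx

def h_bdg(kx, ky):
    """4x4 BdG Hamiltonian with a singlet s-wave pair potential Delta*(i*sy)."""
    pair = Delta * 1j * sy
    top = np.hstack([h_normal(kx, ky), pair])
    bot = np.hstack([pair.conj().T, -h_normal(-kx, -ky).conj()])
    return np.vstack([top, bot])

def min_gap(direction):
    """Smallest |E| of the BdG spectrum along a given k-space direction."""
    ks = np.linspace(-2.0, 2.0, 2001)
    return min(np.min(np.abs(np.linalg.eigvalsh(h_bdg(*(k * direction))))) for k in ks)

# Along kx the quasiparticle gap stays open, while along ky (theta = pi/2) it can
# close once the in-plane field is strong enough, as described for Fig.4b.
print("min |E| along kx:", min_gap(np.array([1.0, 0.0])))
print("min |E| along ky:", min_gap(np.array([0.0, 1.0])))
```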
2303.15251
Inertial effects in ultrafast spin dynamics
The dynamics of magnetic moments consist of a precession around the magnetic field direction and a relaxation towards the field to minimize the energy. While the magnetic moment and the angular momentum are conventionally assumed to be parallel to each other, at ultrafast time scales their directions become separated due to inertial effects. The inertial dynamics give rise to additional high-frequency modes in the excitation spectrum of magnetic materials. Here, we review the recent theoretical and experimental advances in this emerging topic and discuss the open challenges and opportunities in the detection and the potential applications of inertial spin dynamics.
Ritwik Mondal, Levente Rózsa, Michael Farle, Peter M. Oppeneer, Ulrich Nowak, Mikhail Cherkasskii
2023-03-27T14:35:31Z
http://arxiv.org/abs/2303.15251v2
# Inertial effects in ultrafast spin dynamics ###### Abstract The dynamics of magnetic moments consists of a precession around the magnetic field direction and a relaxation towards the field to minimize the energy. While the magnetic moment and the angular momentum are conventionally assumed to be parallel to each other, at ultrafast time scales their directions become separated due to inertial effects. The inertial dynamics gives rise to additional high-frequency modes in the excitation spectrum of magnetic materials. Here, we review the recent theoretical and experimental advances in this emerging topic and discuss the open challenges and opportunities in the detection and the potential applications of inertial spin dynamics. ## I Introduction The increasing challenge of processing and storing a rapidly growing amount of digital information requires novel technological solutions operating at smaller length scales and at increased speed, yet in a more energy-efficient manner. While current magnetic devices enable data storage on short length scales with a low energy consumption, reading and rewriting the bits using magnetic field pulses [1] is not possible below the nanosecond time scale. To manipulate the spins on shorter time scales, electrical currents and ultrafast optical laser pulses have been employed. These methods enable ultrafast demagnetization within femtoseconds [2] and magnetization switching within picoseconds in a broad variety of magnetic materials [3; 4; 5; 6; 7; 8]. Many aspects of ultrafast demagnetization and switching can be successfully described either phenomenologically [9; 10], or microscopically based on the Landau-Lifshitz-Gilbert (LLG) equation [11; 12] in its stochastic form [13; 14; 15]. While the latter approach is widely applied to modelling magnetization dynamics in the presence of thermal fluctuations, it relies on the crucial assumption that the spin degrees of freedom are coupled to a heat bath responsible for the dissipation as well as the thermal noise, while details of the considerably faster electronic and lattice degrees of freedom constituting the heat bath are neglected [13; 16]. Recent derivations of the LLG equation based on a relativistic theory [17; 18] have proven that this approximation is no longer justified if the spin directions significantly vary over the course of femtoseconds. At ultrashort time scales, the LLG equation has to be corrected by accounting for the fact that the magnetization direction can no longer instantaneously follow the angular momentum. This delay can be described by appending an inertial term including the second time derivative of the magnetization to the LLG equation [19; 20; 21; 22]. This phenomenological consideration is supported by various derivations of the inertial term based on microscopic relativistic quantum theories [17; 18; 23; 24]. There are numerous theoretical predictions on how the signatures of inertial dynamics can be detected, but experimental observations are limited so far. Most likely this can be attributed to the fact that conventional magnetic measurements focus on the low-frequency regime, typically on the GHz range in ferromagnets, where the inertia plays little role and its effects may alternatively be explained based on the conventional LLG equation. However, the magnetic moments not only experience precession around the effective field in the presence of the inertial term, but they also perform a high-frequency nutation around the angular momentum, see Fig.1. 
Hence, the nutation gives rise to an additional peak in the ferromagnetic resonance spectrum in the high-frequency regime [25], which is typically found in the THz range in contrast to the conventional precession resonance at GHz frequencies. The most convincing experimental signatures of inertial dynamics to date are based on the observation of this high-frequency response in NiFe, CoFeB [26] and Co [27] films. In this perspective, we first describe the inertial LLG equation by motivating the precession, damping and inertial terms. We discuss the consequences of inertial dynamics on resonance spectra, on the spin-wave dispersion and on switching processes not only in ferromagnets, but also in antiferromagnets and ferrimagnets. We also outline the challenges and opportunities concerning the experimental observation of inertial spin dynamics, paving the way towards a microscopic understanding and possible technological applications of the evolution of magnetic moments on ultrafast time scales. ## II Magnetization dynamics Here, we summarize the main aspects of LLG dynamics, and point out in which aspects it has to be modified at ultrashort time scales, culminating in the formulation of the inertial LLG equation.
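To make the role of the inertial term concrete, one commonly used form of the inertial LLG (ILLG) equation, \(\dot{\mathbf{m}}=-\gamma\,\mathbf{m}\times\mathbf{H}+\alpha\,\mathbf{m}\times\dot{\mathbf{m}}+\eta\,\mathbf{m}\times\ddot{\mathbf{m}}\), can be integrated numerically. The sketch below is a minimal illustration with arbitrary parameter values (not taken from any experiment): using \(|\mathbf{m}|=1\), the implicit equation is recast as an explicit second-order ODE and integrated, so that the fast nutation oscillation appears on top of the slow precession.

```python
# Minimal sketch (illustrative parameters) of the inertial LLG equation
#   dm/dt = -gamma m x H + alpha m x dm/dt + eta m x d2m/dt2.
# With |m| = 1 (so m.dm/dt = 0 and m.d2m/dt2 = -|dm/dt|^2), crossing with m gives
#   d2m/dt2 = ( -m x dm/dt - gamma*(m (m.H) - H) - alpha*dm/dt ) / eta - |dm/dt|^2 m.
import numpy as np
from scipy.integrate import solve_ivp

gamma, alpha, eta = 1.0, 0.05, 0.2        # units where |H| = 1 (assumed values)
H = np.array([0.0, 0.0, 1.0])

def rhs(t, y):
    m, v = y[:3], y[3:]                   # magnetization direction and its velocity
    a = (-np.cross(m, v) - gamma * (m * np.dot(m, H) - H) - alpha * v) / eta \
        - np.dot(v, v) * m
    return np.concatenate([v, a])

m0 = np.array([np.sin(0.3), 0.0, np.cos(0.3)])   # tilted from the field, initially at rest
sol = solve_ivp(rhs, (0.0, 60.0), np.concatenate([m0, np.zeros(3)]),
                max_step=0.01, rtol=1e-8, atol=1e-10)

mz = sol.y[2]
print("mean m_z:", mz.mean())
print("nutation amplitude in m_z (max - min):", mz.max() - mz.min())
# The fast oscillation of m_z is the nutation (the high-frequency mode); the slow
# rotation of (m_x, m_y) around H is the ordinary precession.
```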
2310.12406
FinEntity: Entity-level Sentiment Classification for Financial Texts
In the financial domain, conducting entity-level sentiment analysis is crucial for accurately assessing the sentiment directed toward a specific financial entity. To our knowledge, no publicly available dataset currently exists for this purpose. In this work, we introduce an entity-level sentiment classification dataset, called \textbf{FinEntity}, that annotates financial entity spans and their sentiment (positive, neutral, and negative) in financial news. We document the dataset construction process in the paper. Additionally, we benchmark several pre-trained models (BERT, FinBERT, etc.) and ChatGPT on entity-level sentiment classification. In a case study, we demonstrate the practical utility of using FinEntity in monitoring cryptocurrency markets. The data and code of FinEntity is available at \url{https://github.com/yixuantt/FinEntity}
Yixuan Tang, Yi Yang, Allen H Huang, Andy Tam, Justin Z Tang
2023-10-19T01:38:40Z
http://arxiv.org/abs/2310.12406v1
# FinEntity: Entity-level Sentiment Classification for Financial Texts + ###### Abstract In the financial domain, conducting entity-level sentiment analysis is crucial for accurately assessing the sentiment directed toward a specific financial entity. To our knowledge, no publicly available dataset currently exists for this purpose. In this work, we introduce an entity-level sentiment classification dataset, called **FinEntity**, that annotates financial entity spans and their sentiment (positive, neutral, and negative) in financial news. We document the dataset construction process in the paper. Additionally, we benchmark several pre-trained models (BERT, FinBERT, etc.) and ChatGPT on entity-level sentiment classification. In a case study, we demonstrate the practical utility of using FinEntity in monitoring cryptocurrency markets. The data and code of FinEntity is available at [https://github.com/yixuantt/FinEntity](https://github.com/yixuantt/FinEntity). ## 1 Introduction _"We see ChatGPT's prowess and traction with consumers as a near-term threat to Alphabet's multiple and a boost for Microsoft and Nvidia."1_ In this Wall Street Journal article, multiple financial entities are mentioned, but their sentiments are contrasting (Positive for Microsoft and Nvidia, and Negative for Alphabet ). In fact, a considerable portion of real-world financial text (such as news articles, analyst reports, and social media data) contains multiple entities with varying sentiment (Malo et al., 2014; Huang et al., 2023; Sinha and Khandait, 2021; Shah et al., 2023). Nevertheless, most existing sentiment classification corpora in the financial domain are sequence-level, i.e., the sentiment label is associated with the entire text sequence. Consequently, these sequence-level sentiment datasets are unsuitable for the entity-level sentiment classification task. Footnote 1: [https://www.wsj.com/articles/microsoft-and-google-will-both-have-to-bear-ais-costs-11674006102](https://www.wsj.com/articles/microsoft-and-google-will-both-have-to-bear-ais-costs-11674006102) Developing a natural language processing (NLP) system for entity-level sentiment classification necessitates the availability of a dataset with entity tagging and sentiment annotation. To our knowledge, no such public dataset currently exists. In this paper, we fill this gap by _constructing a new dataset that annotates both the financial entity spans and their associated sentiments within a text sequence_. We outline the development of a high-quality entity-level sentiment classification dataset for the financial domain, called **FinEntity**. Subsequently, we benchmark several pre-trained language models (PLMs) and a zero-shot ChatGPT model (gpt-3.5-turbo) on entity-level sentiment classification tasks. The results demonstrate that fine-tuning PLMs on FinEntity outperforms the zero-shot GPT model. This finding suggests that manually collecting a high-quality domain-specific dataset and fine-tuning PLMs is more suitable than relying on the zero-shot GPT model. We further demonstrate the practical utility of FinEntity in investment and regulatory applications, extending the work of Ashwin et al. (2021); Wong et al. (2022). Collaborating with a regulatory agency, we apply the fine-tuned PLMs to a unique cryptocurrency news dataset. 
Experimental results indicate that the individual cryptocurrency sentiment, inferred using the FinEntity fine-tuned PLM, exhibits a stronger correlation with cryptocurrency prices than traditional sequence-level sentiment classification models. Furthermore, the inferred individual cryptocurrency sentiment can better forecast future cryptocurrency prices - leading to enhanced risk monitoring for regulatory agencies and investors. We make the FinEntity dataset publicly available and hope it will be a valuable resource for financial researchers and practitioners in developing more accurate financial sentiment analysis systems. Related Work Financial Sentiment Classification.NLP techniques have gained widespread adoption in the finance domain Huang et al. (2023); Yang et al. (2023). One of the essential applications is financial sentiment classification Kazemian et al. (2016); Yang et al. (2022); Frankel et al. (2022); Chuang and Yang (2022). However, prior literature on financial sentiment classification focuses on the entire text sequence Kazemian et al. (2016); Yang et al. (2022); Frankel et al. (2022). If a text paragraph contains multiple financial entities with opposing sentiment (as common in financial news or analyst reports), sentiment analysis for the entire text sequence may no longer be accurate. Consequently, a more fine-grained sentiment analysis approach is needed, one that is specific to individual financial entities. Financial Entity Tagging and Sentiment Dataset.Existing financial sentiment classification datasets, such as Financial Phrase Bank Malo et al. (2014), SemEval-2017 Cortis et al. (2017), AnalystTone Dataset Huang et al. (2023), Headline News Dataset Sinha and Khandait (2021) and Trillion Dollar Words Shah et al. (2023), are based on entire text sequence (sentence or article). FiQA, an open challenge dataset 2, features aspect-level sentiment; however, it does not include entity annotations. SEntriN Sinha et al. (2022) is a dataset for financial entity analysis in short news headlines. However, this dataset uses a pre-defined entity list to match entities and does not have entity tagging, so it is still not applicable to recognize financial entities from text. Moreover, most of the headlines contain only one entity, which makes it close to a sequence-level sentiment dataset. For financial entity tagging dataset, FiNER Shah et al. (2023) and FNXL Sharma et al. (2023) are created for financial entity recognition and numeral span tagging respectively, but both lacks sentiment annotation. Therefore, we aims to bridge this gap by constructing a high-quality, entity-level sentiment classification dataset, which not only label the financial entity spans in sentences, but also annotate their associated sentiments. Footnote 2: [https://sites.google.com/view/figa/home](https://sites.google.com/view/figa/home) ## 3 Dataset Construction Initial Dataset.We obtain a financial news dataset from Refinitiv Reuters Database. In the prescreening step, we utilize a pre-trained Named Entity Recognition model 3 and a sequence-level sentiment classification model 4 to infer the number of ORG entities and sequence sentiment. Subsequently, we compile a dataset with a balanced distribution of positive/negative/neutral articles, ensuring that 80% of the sequences contain more than one entity. Following prescreening, we obtain a dataset comprising 4,000 financial news sequences5. 
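The prescreening step described above can be approximated with off-the-shelf pipelines. The snippet below is a rough sketch rather than the authors' exact code: it assumes the public NER and FinBERT-tone checkpoints referenced in the footnotes and simply counts ORG entities and assigns a sequence-level sentiment to decide whether a news sequence is kept.

```python
# Rough sketch of the prescreening pass (assumed checkpoints; thresholds are illustrative).
from transformers import pipeline

ner = pipeline("ner", model="dslim/bert-base-NER", aggregation_strategy="simple")
tone = pipeline("text-classification", model="yiyanghkust/finbert-tone")

def prescreen(sequence: str):
    """Return (number of ORG entities, sequence-level sentiment label)."""
    orgs = [e for e in ner(sequence) if e["entity_group"] == "ORG"]
    # Crude character-level truncation so long sequences stay within the model limit.
    sentiment = tone(sequence[:512])[0]["label"]
    return len(orgs), sentiment

n_org, label = prescreen("Microsoft and Nvidia rallied while Alphabet slid after the report.")
keep = n_org > 1   # aim for most kept sequences containing more than one entity
print(n_org, label, keep)
```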
Footnote 3: [https://huggingface.co/dslim/bert-base-NER](https://huggingface.co/dslim/bert-base-NER) Footnote 4: [https://huggingface.co/yiyanghkust/finbert-tone](https://huggingface.co/yiyanghkust/finbert-tone) Footnote 5: A sequence consists of multiple sentences. **Label.** Entity-level sentiment classification is a sequence labeling task. As such, we employ the BILOU annotation scheme. Specifically, each token in an input sequence is assigned one of the BILOU labels, indicating the beginning, inside, last, outside, and unit of an entity span in the sequence. Additionally, each annotated BILU entity is tagged with a sentiment label (positive/neutral/negative), while the O entity does not receive a sentiment label. Consequently, each token in the input sequence is assigned to one of thirteen labels (BILU-positive/neutral/negative and one O label). Annotators.A total of 12 annotators are recruited, all of whom are senior-year undergraduate students majoring in Finance or Business at an English-speaking university. Three annotators label the same example to ensure data quality and perform cross-check validation. Therefore, each annotator is assigned 1,000 examples. We employ the LightTag platform 6 for the annotation job. Annotators are instructed to tag all named entities in the sequence and the sentiment associated with each entity. Focusing on the financial domain, we limit the named entity to companies (such as Apple Inc.), organizations (such as The Fed), and asset classes (such as equity, REIT, and Crude Oil). Annotators are advised not to tag persons, locations, events, or other named entities. A screenshot of the annotation interface is shown in Appendix A. Footnote 6: [https://www.lighttag.io/](https://www.lighttag.io/) **Annotation Consistency.** A total of 28,631 entities are annotated by the annotators, and we conduct cross-checks in order to ensure data quality. Initially, we employ the Jaccard similarity coefficient to measure entity-level consistency between pairs of annotators. The overall Jaccard similarity of the dataset is 0.754. Furthermore, the number of examples with a Jaccard similarity equal to 1.0 is 44.35%, indicating that 44.35% of examples in the dataset have exactly the same [begin, end] span by all three annotators. We filter this subset for further sentiment consistency checks. Subsequently, we utilize Fleiss' Kappa to measure each example's sentiment annotation consistency Gwet (2014). We select examples with a Fleiss' Kappa higher than 0.8 and then use majority voting to obtain the entity's sentiment, ensuring high consistency in sentiment annotations. **Final Dataset: FinEntity.** The final FinEntity dataset contains 979 example paragraphs featuring 503 entities classified as Positive, 498 entities classified as Negative, and 1,130 entities classified as Neutral, resulting in a total of 2,131 entities. Table 1 and Table 2 are detailed distrutions of FinEntity. The sentiment label distribution of entities is fairly balanced. Moreover, About 60% of the financial text in the dataset contains multiple entities. A sample of FinEntity is shown in Appendix B. Our ensuing analysis is based on the FinEntity dataset. ## 4 Benchmarking Entity-level Sentiment Classification **PLMs.** We benchmark several PLMs for entity-level sentiment classification. We consider fine-tuning BERT (bert-base-cased) Devlin et al. (2018) and a finance-domain specific FinBERT Yang et al. (2020) by incorporating a linear layer on top of each token's hidden output. 
In order to account for token label dependency, we replace the linear layer with a conditional random field (CRF) layer, yielding BERT-CRF and FinBERT-CRF respectively. We implement those PLMs using the transformers library Wolf et al. (2019). **ChatGPT.** For comparison with state-of-the-art generative LLMs, we examine the zero-shot and few-shot in-context learning performance of ChatGPT by querying OpenAI's gpt-3.5-turbo model with a 0.0 temperature value. Following previous literature Shah et al. (2023), we construct a prompt designed to elicit structured responses. The detailed prompt for zero-shot and few-shot learning is shown in Appendix C. **Evaluation.** We randomly partition the dataset into 80% training dataset and 20% testing dataset. We employ Seqeval Nakayama (2018) for evaluation, reporting F1-scores, which include the negative, positive, and neutral classifications, respectively. It is important to note that for entity-level sentiment classification, a testing example is considered correctly classified only if all the named entities in the example are correctly tagged and their associated sentiments are correctly classified. This implies that this task is much more challenging than traditional sequence-level sentiment classification tasks. **Results.** We present the benchmarking results in Table 3. These results demonstrate that fine-tuning PLMs exceeds the performance of ChatGPT model, in line with Rogers et al. (2023) which suggests that zero-shot ChatGPT is not a strong baseline for many NLP tasks. The results provide important implications that for domain-specific, customized NLP tasks, manually collecting a high-quality dataset, through more labor-intensive, indeed achieves better performance than the current state-of-the-art generative LLM. ## 5 Case Study: Cryptocurrency Market In this section, we demonstrate the practical utility of the FinEntity dataset for investment and regulatory applications. As a case study, we focus on the cryptocurrency market, which is notoriously volatile and has been plagued by instances of manipulation and fraud, as evidenced by the recent FTX crash. As such, it is crucial for regulators and investors to closely monitor the market sentiment toward different cryptocurrencies. In this case study, we collaborate with a regulatory agency responsible for overseeing the monetary and financial systems in both local and global markets. 
The team at the regulatory agency shares a cryptocurrency news dataset (non-overlapping with our Reuters data) that they have internally collected. \begin{table} \begin{tabular}{c c c c c} \hline & Positive & Negative & Neutral & Total \\ \hline Number & 503 & 498 & 1,130 & 2,131 \\ Percentage & 23.60\% & 23.37\% & 53.03\% & 100\% \\ \hline \end{tabular} \end{table} Table 1: Sentiment Label Distribution of Entities \begin{table} \begin{tabular}{c c c c} \hline & Single Entity & Multiple Entity & Total \\ \hline Number & 390 & 589 & 979 \\ Percentage & 39.83\% & 60.16\% & 100\% \\ \hline \end{tabular} \end{table} Table 2: Single/Multiple Entity Distribution \begin{table} \begin{tabular}{c c c c c c c} \hline & BERT & BERT-CRF & FinBERT & FinBERT-CRF & ChatGPT (zero-shot) & ChatGPT (few-shot) \\ \hline Negative & 0.73 & 0.82 & 0.83 & **0.83** & 0.58 & 0.62 \\ Positive & 0.81 & 0.81 & 0.81 & **0.84** & 0.39 & 0.73 \\ Neutral & 0.82 & 0.81 & **0.81** & 0.82 & 0.71 & 0.61 \\ Micro Avg & 0.80 & 0.81 & 0.83 & **0.84** & 0.59 & 0.67 \\ Macro Avg & 0.80 & 0.81 & 0.83 & **0.85** & 0.56 & 0.65 \\ Weighted Avg & 0.80 & 0.81 & 0.83 & **0.84** & 0.59 & 0.68 \\ \hline \end{tabular} \end{table} Table 3: Entity-level Sentiment Classification Results. The dataset comprises 15,290 articles spanning from May 20, 2022 to February 1, 2023. Each article is associated with a timestamp, enabling time-series analysis. We select four cryptocurrencies (Bitcoin, Ethereum, Dogecoin, Ripple) as the target entities for analysis, owing to their substantial market dominance and the attention they receive from investors. ### Sentiment Classification **Traditional approach: sequence-level.** To facilitate comparison, we utilize a sequence-level pre-trained financial sentiment classification model (Huang et al., 2023). For each focal cryptocurrency, such as Bitcoin, we extract sentences containing the target word (e.g., Bitcoin, BTC) from articles on the same date and feed them into the model to obtain sentiment labels (positive/negative/neutral). **Our approach: entity-level.** We employ the FinBERT-CRF model, which is fine-tuned on the FinEntity dataset, to extract daily entity-level sentiment. Specifically, we feed an article into FinBERT-CRF and obtain a set of named entities along with their associated sentiments. Subsequently, we group entities and aggregate sentiments to derive daily sentiment labels. ### Contemporaneous Correlation Analysis We begin by measuring the contemporaneous correlation between daily focal cryptocurrency prices and inferred sentiments. To obtain a numerical value for the sentiment score, we code positive labels as +1, neutral as 0, and negative as -1. Then, we calculate the sum of sentiment scores contained in each day's articles and normalize them using min-max normalization. For illustration purposes, we present the correlation graph for Bitcoin in Figure 1. The graph indicates a positive correlation between the price and the sentiment score inferred from the entity-level sentiment model FinBERT-CRF. Additionally, we observe that both the sentiment and the price of Bitcoin experienced a sharp decline in early November 2022, attributable to the bankruptcy of FTX. Next, we compute the maximum information coefficient (MIC) between the cryptocurrency price and the inferred sentiment. Figure 2 displays a moderate positive correlation. Furthermore, entity-level sentiment exhibits higher correlations than sequence-level sentiment.
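The daily aggregation used for the correlation analysis is straightforward to reproduce. Below is a small sketch under stated assumptions (the column names and toy data are made up; the MIC itself would be computed with an external estimator such as the minepy package): entity-level labels are coded as +1/0/-1, summed per day, and min-max normalized.

```python
# Sketch of the daily sentiment aggregation (toy data; column names are assumptions).
import pandas as pd

labels = pd.DataFrame({
    "date":   ["2022-11-07", "2022-11-07", "2022-11-08", "2022-11-09"],
    "entity": ["Bitcoin", "Bitcoin", "Bitcoin", "Bitcoin"],
    "label":  ["positive", "negative", "negative", "neutral"],
})

score_map = {"positive": 1, "neutral": 0, "negative": -1}
daily = (labels.assign(score=labels["label"].map(score_map))
               .groupby(["date", "entity"])["score"].sum()
               .reset_index())

# Min-max normalization of the daily scores, as in the correlation analysis.
lo, hi = daily["score"].min(), daily["score"].max()
daily["score_norm"] = (daily["score"] - lo) / (hi - lo) if hi > lo else 0.5
print(daily)
# The normalized series can then be compared with daily prices, e.g. via Pearson
# correlation or a maximal-information-coefficient estimator.
```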
Cryptocurrency markets are highly interconnected and frequently covered together in the press. Upon examining this dataset, we find that 37.53% of the examples contain more than one entity. Among these examples, 12.50% include entities that have opposing sentiments. Thus, the entity-level sentiment model can more accurately infer the sentiment of a focal cryptocurrency compared to traditional sequence-level models. ### Prediction Experiment We also conduct a forecasting task where we predict the next day's Bitcoin price using its price time series and the inferred sentiment. For the price features, we utilize the Open-High-Low-Close (OHLC) prices. For the sentiment features, we use the percentage of the three sentiment labels on each day. We chronologically divide the dataset into three parts: 60% for training, 20% for validation, and 20% for testing. We employ an LSTM as the prediction model. It incorporates a time step of 10 and a hidden size of 128. We consider an LSTM model that uses the OHLC price only and LSTM models that incorporate additional sentiment features inferred from either sequence-level or entity-level models. Table 4 reveals that the model incorporating entity-level sentiment features exhibits better accuracy than those utilizing sequence-level sentiment features or excluding sentiment features altogether. This further emphasizes that the sequence-level approach is insufficient due to the presence of multiple entities within a single sequence, highlighting the practical utility of our manually curated FinEntity dataset.

Figure 1: Daily Bitcoin prices and the sentiments inferred from the FinBERT-CRF model. Figure 2: Correlation between sequence-level and entity-level sentiment and different cryptocurrency prices.

## 6 Conclusion In this paper, we present the construction of FinEntity, a dataset with financial entity tagging and sentiment annotation. We benchmark the performance of PLMs and ChatGPT on entity-level sentiment classification and showcase the practical utility of FinEntity in a case study related to the cryptocurrency market. We believe that FinEntity can be a valuable resource for financial sentiment analysis in investment and regulatory applications. ## Limitations First, our dataset construction relies on Reuters news. We have not extensively investigated the transferability of this news dataset to other financial corpora, such as corporate reports or social media posts. Second, since the dataset is in English, its applicability to non-English financial texts may be limited. Besides, financial decisions should consider multiple factors, and relying solely on the above-mentioned forecasting methods may entail risks. ## Ethics Statement The study was conducted in accordance with the ACL Ethics Policy. The annotators were recruited via a University Research program. All the activities, including the data annotation, comply with the University and program policy. Confidentiality and anonymity were ensured throughout the study by using unique participant identifiers and securely storing the data.
2308.08614
Boosting Logical Reasoning in Large Language Models through a New Framework: The Graph of Thought
Recent advancements in large-scale models, such as GPT-4, have showcased remarkable capabilities in addressing standard queries. However, when facing complex problems that require multi-step logical reasoning, their accuracy dramatically decreases. Current research has explored the realm of \textit{prompting engineering} to bolster the inferential capacities of these models. Our paper unveils a pioneering prompting technique, dubbed \textit{Graph of Thoughts (GoT)}. Through testing on a trio of escalating challenges: the 24-point game, resolution of high-degree polynomial equations, and derivation of formulas for recursive sequences, our method outperformed GPT-4, achieving accuracy improvements of $89.7\%$, $86\%$, and $56\%$ for each respective task. Moreover, when juxtaposed with the state-of-the-art (SOTA) prompting method, \textit{Tree of Thought (ToT)}, our approach registered an average accuracy boost of $23\%$, $24\%$, and $15\%$.
Bin Lei, pei-Hung Lin, Chunhua Liao, Caiwen Ding
2023-08-16T18:13:27Z
http://arxiv.org/abs/2308.08614v1
# Boosting Logical Reasoning in Large Language Models through a New Framework: The Graph of Thought ###### Abstract Recent advancements in large-scale models, such as GPT-4, have showcased remarkable capabilities in addressing standard queries. However, when facing complex problems that require multi-step logical reasoning, their accuracy dramatically decreases. Current research has explored the realm of _prompting engineering_ to bolster the inferential capacities of these models. Our paper unveils a pioneering prompting technique, dubbed _Graph of Thought (GoT)_. Through testing on a trio of escalating challenges: the 24-point game, resolution of high-degree polynomial equations, and derivation of formulas for recursive sequences, our method outperformed GPT-4, achieving accuracy improvements of \(89.7\%\), \(86\%\), and \(56\%\) for each respective task. Moreover, when juxtaposed with the state-of-the-art (SOTA) prompting method, _Tree of Thought (ToT)_, our approach registered an average accuracy boost of \(23\%\), \(24\%\), and \(15\%\). ## Introduction While large language models (LLMs) have revolutionized productivity with their ability to address a plethora of basic queries Dwivedi et al. (2023), rooted in their extensive knowledge, they still grapple with inherent limitations in cognitive Chen et al. (2021) and reasoning skills Sap et al. (2022). This deficit becomes evident in tasks demanding multi-step considerations Creswell et al. (2022); Paranjape et al. (2023); Nye et al. (2021); Kojima et al. (2022); Shridhar et al. (2023), even when employing cutting-edge models like GPT-4 (OpenAI 2023). Many prior works have tried to address the weaker logical reasoning capabilities of LLMs using prompting engineering Chang (2023); Zhou et al. (2022); Strobelt et al. (2022); Jiang et al. (2022); Wu et al. (2022) approaches, such as the _Chain-of-Thought (CoT)_Wei et al. (2022), _Self-Consistency of Chain-of-Thought SC-CoT_Wang et al. (2022), and _Tree-of-Thought (ToT)_Yao et al. (2023) methods. Among these, the ToT method performs the best. It achieves an accuracy of \(74\%\) in certain logical reasoning tasks like the 24-point game Yao et al. (2023), which is significantly higher than the default GPT-4's \(7.3\%\)Yao et al. (2023). These four methods are illustrated in (a) to (d) on the left side of Figure 1. Despite the success, these methods are still far away from practical usage. For instance, using the ToT method achieves a \(74\%\) accuracy rate in the 24-point game, which still lags behind the human reasoning capabilities, with the human performance baseline in this game being approximately \(98.5\%\) (4nums.com 2023). _Can we further enhance the reasoning capability of large models to achieve or even surpass human-level performance?_ In light of this, this paper introduces a method named _Graph of Thought (GoT)_. Our approach is inspired by the emulation of human cognitive processes Wang and Chiew (2010); Kamijo et al. (2007); Estes (2022). Let's consider one of the most renowned mathematical logic problem, the Goldbach's Conjecture Wang (2002); Carbo-Dorca (2016), mathematicians do not attempt to enumerate all possible techniques and theorems. Instead, they reason backward from the conclusion Carbo-Dorca (2016); Bamberg et al. (2003); Wang (2022). They identify promising avenues of research, and ascertain the essential foundational knowledge required to pursue a particular line of thought. 
Importantly, different lines of thought are not isolated; they are interconnected and collaborative contribute towards forming the final solution Granville (2007); Oliveira e Silva et al. (2014). Consequently, diverging from the previously established works, this paper introduces a distinct problem-solving approach. It mainly contributes in three aspects: 1. Introduction of a novel **graph structure** to enhance the connections between different lines of thought (nodes). 2. Implementation of a **checking mechanism** to ensure the accuracy of the connections between different lines of thought. 3. Proposal of a new **graph updating** method, designed to facilitate rapid iteration for further reasoning. These points will be further detailed in the _Graph of Thought_ Section. On the right side of Figure 1, we present the sturcture of our Graph. It distinguishes itself from previous ToT or CoT in the following two aspects: * Our prompting approach initiates from the outcome rather than the conditions. * Graph-based structure eliminate any inherent hierarchy among the intermediate nodes. This design allows for potential relationships between any pair of intermediate nodes. We test our method on three logic reasoning tasks of increasing difficulty. The accuracy of our method surpass the current SOTA in all cases. In the 24-point game, its accuracy reaches \(97\%\), which is \(23\) percentage points higher than the current SOTA, approaching human reasoning levels (\(98.5\%\) (4nums.com 2023)). ## Related Work _I-O_ Prompting: The most prevalent prompting method is the Input-Output prompting. In this approach, one provides the conditions to the large model, which then produces answers following a token-level, left-to-right decision-making process [14]. This method is the default mode utilized by GPT-4. _CoT_ Prompting [13]: CoT prompting aims to guide the model in generating coherent text by establishing logical continuity. It is based on the assumption that by progressively expanding and supplementing chains of viewpoints and arguments, the model can generate more coherent and reasoning-based outputs. This method encourages the model to follow a sequential line of thought while answering questions, ensuring that the generated text is logically connected and coherent. _SC-CoT_ Prompting [12]: SC-CoT prompting is an extension of the Chain-of-Thought method. It requires the model to maintain self-consistency and logical coherence while generating text. In other words, the generated content should be internally consistent and semantically connected. This consistency and logical flow can be ensured by leveraging large models to vote or score, followed by a selection process. By emphasizing the self-consistency of the model's output, this prompting method reduces logical errors and inconsistencies, thereby improving the quality of the generated text. _ToT_ Prompting [14]: ToT prompting employs structured prompts to assist the model in generating hierarchical and structured text. This method utilizes a tree-like structure to represent the relationships between different concepts, which serves as input prompts. The model can organize and reason about the text based on the structure of the tree, resulting in more accurate and structured answers. Tree of Thought also employs a voting or scoring mechanism to filter the generated results, thereby not only reducing errors but also decreasing computational costs. 
Tree-of-Thought prompting aims to provide richer semantic representation and improved logical organization, enhancing the model's reasoning capabilities and generation quality. These are the four prompting methods, each offering guidance and constraints at different levels to facilitate language models in generating accurate, coherent, and reasoning-based text. ## Graph of Thought Our design contains three key innovations: 1. **Graph Structure**: By constructing thought graphs, we model various aspects and concepts of the problem as nodes, while their relationships and connections are represented by edges. This graphical representation facilitates the LLM in capturing and comprehending complex logical relationships, thereby improving its reasoning capabilities and accuracy in answering questions. 2. **Inspection Mechanism**: We address the challenge of reducing errors in LLMs by implementing a rechecking process for potentially correct results. In this process, we provide a more robust estimation of result accuracy. By calculating the confidence or probability of different candidate answers, we can effectively assess and weigh their reliability. This allows us to select the final answer based on the probabilities associated with each candidate result. Our re-evaluation mechanism is executed using our Checker function, which offers greater precision compared to traditional scoring or voting systems. This enhanced accuracy arises primarily because: i) we filter for a single, optimal result rather than selecting several decent outcomes, and ii) our checker function comprises multiple linked inspectors, ensuring a more rigorous review. 3. **Convenient Graph Updating**: Throughout the graph traversal process, thoughts that pass the Checker function are continuously added to our condition sequence. For complex problems that may not be resolved in a single graph traversal, there's no need to keep track of all previously traversed paths. We can simply re-input our updated condition sequence. This significantly reduces potential redundant reasoning. A prime example of this can be seen in our experimental section's third task, with the graph update process illustrated in Figure 5. Our first key point is detailed in the _Graph Construction_ Section. Here, we delve into the process of creating our graph, its distinct features, and how to interpret it, elucidating with a toy example. The second and third focal points are elaborated in the _Graph Updating and Path Finding_ Section. This section encompasses a comparative probabilistic analysis between our Checker function and traditional scoring mechanisms, as well as our graph updating algorithm. ### Graph Construction We first need to establish a directed cognitive graph, which possesses the following characteristics: Figure 1: Comparison of various prompting approaches. * The graph's creation originates from our final target. * The graph contains some AND-Crossroad nodes. These AND-Crossroad nodes can only be returned if all paths from the intersection are unobstructed. * The graph includes some Condition nodes, which can return to any node as long as there's a path between them. * For the remaining nodes, as long as there is an unobstructed path from the node, it can be returned. * Whether there's an obstacle on a path is determined by the Checker function. * If a path can begin with certain condition nodes and return uninterrupted to the final target, then it is considered a valid path. This path can be used as the final output. 
A toy example is illustrated in Figure 2. In this instance, the graph can be represented as: { A: {(B, C), (D, E), (F)}, B: {(2, 3)}, C: {(1, 2)}, D: {(C, G)}, E: {(F, H)}, F: {(5)}, G: {(1, )}, H: {}, I: {}. Our ultimate destination is Node A. The creation of the graph begins here. Two promising paths, namely \([(1,2),(2,3),(C,B),(A)]\) and \([(5),(F),(A)]\), are highlighted in green. In this example, there are five condition nodes: Node 1, Node 2, Node 3, Node 4, and Node 5. These nodes can return to any AND-Crossroad or intermediate node without any prerequisite conditions. Node 5 leads to Node F, hence Node F is considered as a returnable node. Both Node 1 and Node 2 are returnable nodes, therefore, they can pass through the AND-crossroad and return to Node C. Similarly, Nodes B and A can be reached. The construction of the graph is according to the Algorithm 1. Our algorithm adopts a depth-first traversal approach to recursively create the mental graph. In this process, we call the LLM twice. The first call to LLM is to find paths based on the known node information, and the second call is to find new nodes that can reach the current node. We can manually set the number of searches to control the size of the graph, or continue searching until all new nodes are included in the Condition nodes or no new nodes can be found. ``` 1:Input: Conditions \(Cs\), Nodes \(Ns\), Graph \(G\) 2:OUTPUT: Updated Graph \(G\) 3:functionCreate_Graph(\(Cs,Ns,G\)) 4:for\(N\)in\(Ns\)do 5:if\(N\)in\(Cs\)then 6:continue 7:endif 8:\(N\rightarrow\) LLM \(\rightarrow\)\(paths\)\(\triangleright\) Create paths by LLM 9:\(G[T]\leftarrow\{\}\)\(\triangleright\) Create new node 10:for\(path\)in\(paths\)do 11:\(path\rightarrow\) LLM \(\rightarrow\)\(ns\)\(\triangleright\)\(ns\) : new nodes 12:\(G[T]\).add(tuple(\(ns\)))\(\triangleright\) update the Graph 13:Create_Graph(\(Cs\), \(ns\), \(G\)) 14:endfor 15:endfor 16:endfunction ``` **INPUT**: Conditions \(Cs\), Nodes \(Ns\), Graph \(G\) [MISSING_PAGE_POST] For the original scoring function \(F_{scoring}\): \[F_{\text{scoring}}(P_{\text{LLM}},C)(s)=1[s_{max}\ \geq\ s\ \geq\ s_{t}]\] Where score \(s\ \sim\ P_{\text{LLM}}(s|C)\), \(P_{LLM}\) is the probability distribution of the LLM, \(C\) represents the current set of conditions, \(s_{max}\) is the maximum score, \(s_{t}\) is the selection threshold score. The probability of selecting a certain path as a promising path in the scoring mechanism is: \[P_{F_{scoring}=1}=\sum_{s=s_{t}}^{s_{max}}P_{LLM}(s|C)\] For our inspection function \(F_{inspection}\): \[F_{\text{inspection}}(P_{LLM},C)(s)=1[r=True]\] Where result \(r\ \sim\ P_{checker}(r|C)\), and each check is independent and identically distributed. The probability of selecting a certain path as a correct path in our inspections mechanism is: \[P_{F_{inspection}=1}=P_{checker}(True|C)=(P_{LLM}(s_{max}|C))^{n}\] Where \(n\) is the number of inspectors. The comparison between the two selection methods is as follows: \[(P_{LLM}(s_{max}|C))^{n}<P_{LLM}(s_{max}|C)<\sum_{s=s_{t}}^{s_{max}}P_{LLM}(s|C)\] Evidently, the selection criteria of our inspection method are more stringent. This is one of the key reasons contributing to the accuracy improvement. The specific comparison of accuracy is demonstrated in our _Experiment_ Section. Path finding represents the final step in our algorithm, the primary objective of which is to utilize the conditions to compute our ultimate results. 
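Before turning to path finding in detail, the reachability rule above can be made concrete with the toy graph from Figure 2. The sketch below is not the paper's implementation; it simply iterates to a fixed point, marking a node as returnable once any one of its AND-groups has all of its members returnable (condition nodes 1-5 are returnable by definition).

```python
# Fixed-point sketch of the toy example from Figure 2 (not the authors' code).
# A node is "returnable" if at least one of its AND-groups consists entirely of
# returnable nodes; condition nodes are returnable by definition.
graph = {
    "A": [("B", "C"), ("D", "E"), ("F",)],
    "B": [("2", "3")], "C": [("1", "2")],
    "D": [("C", "G")], "E": [("F", "H")],
    "F": [("5",)],     "G": [("1",)],
    "H": [],           "I": [],
}
conditions = {"1", "2", "3", "4", "5"}

returnable = set(conditions)
changed = True
while changed:
    changed = False
    for node, groups in graph.items():
        if node not in returnable and any(all(m in returnable for m in g) for g in groups):
            returnable.add(node)
            changed = True

print(sorted(returnable - conditions))
# -> ['A', 'B', 'C', 'D', 'F', 'G'];  E stays blocked because H has no path.
# Two valid paths to A are [(1,2)->C, (2,3)->B, (B,C)->A] and [(5)->F, (F)->A],
# matching the example in the text (D also becomes returnable via (C, G)).
```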
After the graph has been updated, promising intermediate nodes have already been incorporated into the conditions list, thus simplifying the path finding process. This operation can be accomplished with a single loop, which we will not belabor here. Notably, due to our stringent selection criteria, there may be instances when we need to rebuild the graph to locate suitable paths. In these scenarios, by utilizing the updated conditions list, we can retain information from all previous graphs. This approach allows us to avoid duplication of paths without maintaining the prior graph information. A good example is the third task in our _Experiment_ Section as shown in Figure 5. ## Experiment We conduct three distinct experimental sets to validate the effectiveness of our methodology: the 24-point game, the resolution of high-order equations, and the computation of arithmetic sequence formulas. The complexity of these tasks escalates in the given order. For each experiment, we begin by illustrating how the LLM leverages our method to generate results, using a specific example for clarity. In detailing this procedure, key outcomes linked to pivotal intermediate nodes crafted by the LLM were accentuated using [1] All experiments were executed using the Chat Completion mode of GPT-4 between July 1st and July 31st, 2023, with a set sampling temperature of \(0.7\). **INPUT**: Graph \(G\), Conditions \(Cs\), distance \(D\), number of inspectors \(n\) **OUTPUT**: Updated Graph \(G\) ``` 1:functionUpdate_Graph(\(G,Cs,D\)) 2:if\(D==0\)then 3:return\(G\) 4:endif 5:\(new\_Cs\gets list()\)\(\triangleright\) Initialize new conditions 6:for\(node\in G\)do 7:for\(path\in G[node]\)do 8:ifall\(items\)in\(path\)are in\(Cs\)AND\(\textsc{Checker}(node,path,n)\)then 9:\(new\_Cs.\text{append}(node)\) 10:break 11:endif 12:endfor 13:endfor 14:\(Cs\gets Cs\cup new\_Cs\)\(\triangleright\) Update the condition set 15:\(G\gets G\setminus new\_Cs\)\(\triangleright\) Update the graph 16: Update_Graph(\(G,Cs,D-1\)) 17:endfunction 18:functionChecker(\(node,path,n\))\(\triangleright\) Multiple verification 19:for\(i\in\text{range}(1,n+1)\)do 20:\(result(True/False)\gets LLM(node,path)\) 21:if\(\negresult\)then 22:returnFalse 23:endif 24:endfor 25:returnTrue 26:endfunction ``` **Algorithm 2**Update the Graph Therefore, using such problems to test multi-step logical reasoning capabilities is a good choice. Figure 3 illustrates the process of solving this problem using our GoT method. The number outside the bracket in the circle represents the current value, and the number inside the bracket represents the numbers that have not been used yet. The equations beside the paths indicate the specific computational steps of that path. Every path needs to be checked by the Checker function. The inspection process for two paths is shown in the figure. In each Checker function, several inspectors are connected in series. The final Checker function only returns True if all inspectors return True. If the final result returns False, then the path cannot be traversed. As shown in the figure: The operation \(\framebox{12/4=3}\) generated by LLM passes the Checker function and is traversable, while \(\framebox{18-12=3}\) fails and is not traversable. Calculate starting from the final result, passing through traversable paths, aided by condition nodes, and reaching the endpoint via AND-Crosrosroad. Therefore, the correct answer to this problem is: \(\framebox{13-10=3;12/3=4;4*6=24}\). The same 24-points game is also conducted in the ToT study. 
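The serial-inspector idea behind Algorithm 2 can be sketched in a few lines. The code below is a hedged illustration rather than the authors' implementation: `ask_llm_to_verify` is a stand-in for a real GPT-4 call, and the update loop promotes a node into the condition list only when one of its paths is fully covered by the current conditions and all \(n\) inspectors agree.

```python
# Illustrative sketch of the Checker / graph-update logic of Algorithm 2.
# `ask_llm_to_verify` is a placeholder for an actual LLM call (an assumption here).
import random

def ask_llm_to_verify(node, path) -> bool:
    """Stand-in for one 'inspector' judging whether `path` really yields `node`."""
    return random.random() > 0.1   # pretend the model answers True ~90% of the time

def checker(node, path, n_inspectors: int) -> bool:
    # Serial inspection: every one of the n inspectors must answer True.
    return all(ask_llm_to_verify(node, path) for _ in range(n_inspectors))

def update_graph(graph, conditions, depth, n_inspectors):
    """Promote nodes whose verified paths are fully covered by the condition set."""
    for _ in range(depth):
        newly_promoted = [
            node
            for node, paths in graph.items()
            if any(all(m in conditions for m in path) and checker(node, path, n_inspectors)
                   for path in paths)
        ]
        if not newly_promoted:
            break
        conditions |= set(newly_promoted)
        for node in newly_promoted:      # promoted nodes leave the open graph
            graph.pop(node, None)
    return graph, conditions
```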
We compare our results with theirs and presented the outcomes in Table 1. In the CoT-SC experiment, \(k=100\) denotes the success rate calculated using the best of \(k\) samples. In the ToT experiment, the authors conduct a breadth-first search, where \(b\) in the table stands for 'breadth'. In the GoT experiment, \(n\) denotes the number of inspectors in the Checker function. The results show that even when the number of inspectors \(n=0\), our accuracy in this game surpasses that of the Tree-of-Thought prompting method when \(b=5\). As the number of inspectors increases, more errors in the intermediate computation process are corrected, gradually improving the accuracy. When \(n=3\), our accuracy rate reaches \(93\%\), which is a \(\times\)**12.73** increase compared to the initial standard I-O prompting. When \(n=5\), our accuracy rate reaches \(97\%\), which is a **23\(\%\)** higher compared to the ToT when \(b=5\). ### Solving High-Degree Polynomial Equations Next, we increase the difficulty of the task by deploying GoT to tackle higher-degree equations. While standard formulas exist for the roots of low-degree equations, their counterparts for higher degrees demand more intricate solutions, such as Newton's method or the Durand-Kerner method. Consider equations like \(x^{6}+3x^{4}-2018x^{3}+3x^{2}+1=0\) or \(x^{4}-3x^{3}+3x+1=0\); they don't lend themselves to straightforward solutions. We apply our GoT approach to this problem and compared its performance with other methods. Our dataset is derived from the Mathematics Dataset [10]. Here is an example problem: \[\boxed{\begin{array}{l}\text{Solve the equation: }3x^{4}-69x^{3}+1284x^{2}-4212x-\\ 3888=0\end{array}}\] This problem is more complex than the previous 24-point problem. Figure 4 illustrates the process of our GoT method in addressing this kind of problem. We start with the goal of solving the equation, and in this example, the LLM provides us with three possible approaches: the formula method, Prime factoring, and numerical substitution. In the figure, we mark them with red arrows, green arrows, and yellow arrows respectively. We list some examples of the LLM's attempts, such as for the factoring method, where the LLM tried -9, 2, and -3 - Among these, -9 is a solution to the equation, and when substituting \(x=2\) and \(x=-3\) into the left side of the original equation, the values of the equation are negative and integer, respectively. Therefore, the LLM suggests using the numerical method to try some potential solutions between these two numbers. First, it tries \(\framebox{x=-\frac{7}{6}}\) but when this value of \(x\) is substituted into the left side of the original equation, the value is still negative. Afterward, the LLM suggests contin \begin{table} \begin{tabular}{c c} \hline **Method** & **Accuracy** \\ \hline IO [17] & 7.3\% \\ CoT [17] & 4.0\% \\ CoT-SC (k = 100) [17] & 9.0\% \\ ToT (b = 1) [17] & 45\% \\ ToT (b = 5) [17] & 74\% \\ \hline **GoT (n = 0)** & 77\% \\ **GoT (n = 1)** & **85\%** \\ **GoT (n = 3)** & **93\%** \\ **GoT (n = 5)** & **97\%** \\ \hline \end{tabular} \end{table} Table 1: GoT vs. Other Methods in 24-Point Game Figure 4: An example of GoT in Solving Polynomial Functions. Black oval: Intermediate node; Red oval: Final target; Blue oval: Condition nodes; Black dot: AND-Crosroad; Green arrow: Prime Factorization related path; Blue arrow: Condition related path; Red arrow: Formula Method related path; Yellow arrow: Substitution Method related path. Figure 3: An example of GoT in 24-points game. 
\(n_{1}\),\(n_{2}\),\(n_{3}\): Inspectors \(1\),\(2\),\(3\). using to try possible factors between \(\begin{array}{|c|c|}\hline\hline\text{\bf Method}&\text{\bf Accuracy}\\ \hline\text{IO}&3.0\%\\ \text{CoT}&21\%\\ \text{ToT}&\text{(b = 5)}&25\%\\ \text{ToT}&\text{(with Calculator)}&65\%\\ \hline\text{\bf GoT}&\text{\bf(n=0)}&\text{\bf 31\%}\\ \text{\bf GoT}&\text{\bf(n=1)}&\text{\bf 45\%}\\ \text{\bf GoT}&\text{\bf(n=5)}&\text{\bf 73\%}\\ \text{\bf GoT}&\text{\bf(with Calculator)}&\text{\bf 89\%}\\ \hline\end{array}\) From the table, we can see that although the large model's algebraic calculation ability is quite worrying (the accuracy rate of the ToT method surprisingly increased by 40% after providing a calculator), our Checker function plays a very good error correction role. When the number of inspectors \(n\) is set to \(5\), the accuracy of our GoT method has surpassed that of ToT with a calculator. Moreover, if a calculator is available to the GoT method, its accuracy can reach \(89\%\). ### Deriving Formulas for Recursive Sequences The final set of experiments aimed to evaluate the LLM's reasoning abilities in a more challenging scenario: deriving recurrence relations for sequences. We have collected sequence-related problems from mathematics competitions spanning nearly 20 years, and our dataset is provided in the supplementary material. Here is an example problem: In the sequence \(a_{n}\), \(a_{1}=1\), and for \(n\geq 1\), \(a_{n+1}=(1+\frac{1}{n})\cdot a_{n}+\frac{n+1}{2^{n}}\). Find the general formula for the sequence \(\{a(n)\}\). For complex mathematical problems like this, a single round of simple graph traversal search is often insufficient to provide solutions. Multiple rounds of graph updates are needed, continually supplementing known conditions, in an attempt to obtain the eventual correct outcome. The Figure 5 below demonstrates an example of our approach to solving such problems. In this example, we go through three rounds of graph traversal. After each traversal, new information is added to the condition sequence. In Graph 1, the large model provides three possible directions: 1. Consider using mathematical induction. 2. Computing the difference between adjacent terms. 3. Try to construct a new equation using existing conditions. Combining 2 and 3, along with the original equation in the condition sequence \(a_{n+1}=\left(1+\frac{1}{n}\right)\cdot a_{n}+\frac{n+1}{2^{n}}\) the model derives a new equation: \(\begin{array}{|c|}\hline\frac{a_{n+1}}{n+1}=\frac{a_{n}}{n}+\frac{1}{2^{n}} \\ \hline\end{array}\) This equation successfully pass our verification mechanism and is added to the condition sequence after the first round of graph traversal. In Graph 2, due to changes in the condition sequence, the output of the large model also differs from the first round of graph traversal. This time, it still provides three possible solutions: 1. Try to replace the existing variables. 2. Calculating the difference between adjacent terms. 3. Try to construct a new equation using existing conditions. Next, the large model, first by combining points 2 and 3, changes the form of the equation in the condition sequence from \(\begin{array}{|c|}\hline\hline\frac{a_{n+1}}{n+1}=\frac{a_{n}}{n}+\frac{1}{2^ {n}}\\ \hline\end{array}\)to \(\begin{array}{|c|}\hline\frac{a_{n+1}}{n+1}-\frac{a_{n}}{n}=\frac{1}{2^{n}} \\ \hline\end{array}\). 
Subsequently, the large model combines this equation with point 1 and replaces \(\frac{a_{n}}{n}\) with \(b_{n}\), eventually obtaining \(\begin{array}{|c|}\hline\hline b_{n+1}-b_{n}=\frac{1}{2^{n}}\\ \hline\end{array}\) In Graph 3, Due to the sufficiency of the conditions, the large model directly suggests adopting the mathematical induction method to solve the problem. Through condition \begin{table} \begin{tabular}{c|c} \hline \hline **Method** & **Accuracy** \\ \hline IO & 3.0\% \\ CoT & 21\% \\ ToT (b = 5) & 25\% \\ ToT (with Calculator) & 65\% \\ \hline \hline **GoT (n = 0)** & **31\%** \\ **GoT (n = 1)** & **45\%** \\ **GoT (n = 5)** & **73\%** \\ **GoT (with Calculator)** & **89\%** \\ \hline \hline \end{tabular} \end{table} Table 2: GoT vs. Other Methods in Solving Polynomial Equations \(b_{n+1}-b_{n}=\frac{1}{2^{n}}\) and the mathematical induction method, the large model derives the result \(\left[\begin{array}{c}b_{n}=2-\frac{1}{2^{n-1}}\end{array}\right]\) Subsequently, utilizing condition \(\left[\begin{array}{c}b_{n}=\frac{a_{n}}{n}\end{array}\right]\) the model arrives at the result \(a_{n}=(2-\frac{1}{2^{n-1}})\times n\). In this experiment, we test the dataset using IO, CoT, and ToT methods respectively and compare with GoT. * In the **IO** method, the prompt formulation is: "Please help me solve the following problem: " + _problem_. * Using the **CoT** method, the prompt structure is: "Given that" + _the condition part of the problem_ + "we want to determine " + _the question part of the problem_ + "What might be the next step?" Once the model suggests a subsequent step, this response is appended to the condition segment of the problem. We then consult the model repeatedly until it offers a perceived correct solution or indicates uncertainty. * For the **ToT** method, the prompt reads: "Considering " + _the condition part of the problem_ + "we seek to find out" + _the question part of the problem_ + "What could be the potential next steps?" Upon the model's recommendation of subsequent steps, its response is integrated into the condition segment of the problem. We then treate each recommended step as a distinct node, with a capped traversal depth of 5. We utilize a 5-shots prompting approach. Subsequently, we provide the large model with two mathematical tools 1. Variable Transformations: e.g., Transform \(a_{n+1}=\left(1+\frac{1}{n}\right)\cdot a_{n}+\frac{n+1}{2^{n}}\) to \(a_{n}=\left(1+\frac{1}{n-1}\right)\cdot a_{n-1}+\frac{n}{2^{n-1}}\). 2. Formula Simplification: e.g., Simplify \(a_{n+1}=\left(1+\frac{1}{n}\right)\cdot a_{n}+\frac{n+1}{2^{n}}\) to \(\frac{a_{n+1}}{n+1}-\frac{a_{n}}{n}=\frac{1}{2^{n}}\). and conducted the tests again. The experimental results are shown in the Table 3. Using direct IO input, the accuracy is only 1%. Testing with the CoT method yield a comparable accuracy of about 3%. The methods ToT and GoT see improved accuracy rates, rising to 17% and 20% respectively. Before granting the large model access to mathematical tools, the best-performing model is the GoT model with number of inspections \(n\) set to 5, achieving an accuracy of 55%. After enabling the mathematical tools, the accuracy for the GoT model increases to 57%, while the ToT model rise to 42%. ## Conclusion In this study, we introduce _Graph of Thoughts_, a novel prompting technique that significantly enhances the inferential capabilities of large language models. 
The experimental results on three tasks of increasing difficulty (the 24-point game, solving higher-degree equations, and deriving recursive formulas for sequences) demonstrate its superiority. When compared to the SOTA language model, GPT-4, our method boosts accuracy by \(89.7\%\), \(86\%\), and \(56\%\), respectively. Against the current best prompting strategy, our approach improves accuracy by \(23\%\), \(24\%\), and \(15\%\) for the respective tasks. These findings underscore the significant advantage of our technique in assisting large models to accomplish complex multi-step logical reasoning tasks. Figure 5: An example of GoT in deriving the recurrence formula. Graph 1: The first graph traversal; Graph 2: The second graph traversal; Graph 3: The third graph traversal; Black oval: Intermediate node; Red oval: Final target; Blue oval: Condition nodes; Black dot: AND-Crossroad; Green arrow: The conversions from intermediate node to condition node. \begin{table} \begin{tabular}{c c} \hline **Method** & **Accuracy** \\ \hline IO & 1.0\% \\ CoT & 3.0\% \\ ToT (b = 5) & 17\% \\ ToT (with Mathematical tools) & 42\% \\ \hline **GoT (n = 0)** & **20\%** \\ **GoT (n = 1)** & **31\%** \\ **GoT (n = 5)** & **55\%** \\ **GoT (with Mathematical tools)** & **57\%** \\ \hline \end{tabular} \end{table} Table 3: GoT vs. Other Methods in Solving Recursive Sequences
2308.07527
FeatGeNN: Improving Model Performance for Tabular Data with Correlation-based Feature Extraction
Automated Feature Engineering (AutoFE) has become an important task for any machine learning project, as it can help improve model performance and gain more information for statistical analysis. However, most current approaches for AutoFE rely on manual feature creation or use methods that can generate a large number of features, which can be computationally intensive and lead to overfitting. To address these challenges, we propose a novel convolutional method called FeatGeNN that extracts and creates new features using correlation as a pooling function. Unlike traditional pooling functions like max-pooling, correlation-based pooling considers the linear relationship between the features in the data matrix, making it more suitable for tabular data. We evaluate our method on various benchmark datasets and demonstrate that FeatGeNN outperforms existing AutoFE approaches regarding model performance. Our results suggest that correlation-based pooling can be a promising alternative to max-pooling for AutoFE in tabular data applications.
Sammuel Ramos Silva, Rodrigo Silva
2023-08-15T01:48:11Z
http://arxiv.org/abs/2308.07527v1
# FeatGeNN: Improving Model Performance for Tabular Data with Correlation-based Feature Extraction ###### Abstract Automated Feature Engineering (AutoFE) has become an important task for any machine learning project, as it can help improve model performance and gain more information for statistical analysis. However, most current approaches for AutoFE rely on manual feature creation or use methods that can generate a large number of features, which can be computationally intensive and lead to overfitting. To address these challenges, we propose a novel convolutional method called FeatGeNN that extracts and creates new features using correlation as a pooling function. Unlike traditional pooling functions like max-pooling, correlation-based pooling considers the linear relationship between the features in the data matrix, making it more suitable for tabular data. We evaluate our method on various benchmark datasets and demonstrate that FeatGeNN outperforms existing AutoFE approaches regarding model performance. Our results suggest that correlation-based pooling can be a promising alternative to max-pooling for AutoFE in tabular data applications. Keywords: Automated Feature Engineering, feature creation, correlation-based pooling, tabular data, machine learning. ## 1 Introduction Creating effective features is a crucial aspect of machine-learning projects. Essentially, it involves deriving new features from existing data to train a model or extract more information for statistical analysis. Discovering novel features from raw datasets is often the key to improving model performance [1]. Traditionally, feature creation is a manual process that heavily relies on an analyst's domain knowledge and programming skills. However, this approach can be limiting, as an analyst's intuition and expertise often influence the features created. To overcome these limitations, researchers have been exploring the field of Automated Feature Engineering (AutoFE). AutoFE aims to automate the feature creation process, enabling the discovery of more complex and effective features without relying solely on human input. Automated feature engineering methods involve applying transformations to raw data to create new features. One commonly used technique is the expansion-reduction method [4], which generates a large number of features and then applies a feature selection algorithm to reduce their dimensionality. During the expansion phase, various transformations, such as logarithmic, max/min, or sum, are applied to the raw data. In the reduction phase, a feature selection method is utilized to identify the most effective set of features, which can significantly enhance a model's performance. The possible number of transformation operations that can be performed on already-transformed features is practically infinite, which leads to an exponential increase in the feature space. This makes it difficult to reduce the number of feature evaluations required. To address this issue, researchers have proposed adaptive methods for AutoFE. For instance, Khurana et al. [5] introduced a Q-learning agent capable of performing feature transformation search, achieving higher performance but still generating a large number of features. In another study [6], a Multi-Layer Perceptron (MLP) was trained to suggest the best transformations for each raw feature, resolving the problem of excessive feature generation.
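Before turning to more recent methods, the sketch below illustrates the expansion-reduction idea described above; the specific transformations (logarithm, square, pairwise sums) and the simple correlation-with-target selection criterion are our own illustrative choices, not the procedure of [4]:

```python
# Toy expansion-reduction loop: expand the feature set with a few simple
# transformations, then keep only the k expanded features that correlate
# most strongly with the target (a stand-in for a real selection step).
import numpy as np

def expand_features(X: np.ndarray) -> np.ndarray:
    logs = np.log1p(np.abs(X))                    # logarithmic transform
    squares = X ** 2                              # polynomial transform
    pair_sums = X[:, :, None] + X[:, None, :]     # pairwise sums
    iu = np.triu_indices(X.shape[1], k=1)
    return np.hstack([X, logs, squares, pair_sums[:, iu[0], iu[1]]])

def select_features(X_exp: np.ndarray, y: np.ndarray, k: int = 8) -> np.ndarray:
    corr = np.array([abs(np.corrcoef(X_exp[:, j], y)[0, 1])
                     for j in range(X_exp.shape[1])])
    return X_exp[:, np.argsort(-corr)[:k]]

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 5))
y = (X[:, 0] * X[:, 1] + X[:, 2] ** 2 > 0).astype(float)
X_new = select_features(expand_features(X), y, k=8)
print(X_new.shape)   # (200, 8)
```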
More recently, DIFER [7], a gradient-based method for differentiable AutoFE, has demonstrated superior performance and computational efficiency compared to other approaches, although it still requires significant computation. In recent years, the use of deep neural networks (DNNs) has become increasingly widespread across a range of fields, such as computer vision and natural language processing [19, 18]. Typically, these models extract new features by feeding input features into the hidden layers of a DNN. While this approach is effective in capturing complex interactions between implicit and explicit features, it may not always generate useful new features due to a lack of relevant interactions in the dataset [9]. Moreover, most existing works use max-pooling in the pooling layer, which may not be optimal for tabular data because it does not preserve the order and context of features in the data matrix. Additionally, max-pooling is intended to identify the most significant features within an image, which may not always be relevant or effective for tabular data. To address the limitations of existing AutoFE methods, we propose FeatGeNN, a convolutional approach that leverages correlation as a pooling function to extract and generate new features. FeatGeNN first applies convolutional filters to the raw data to extract high-level representations. Then, instead of using traditional pooling functions like max or average pooling, it computes the correlation between the extracted features, which helps to identify the most informative features. The selected features are then passed through a multi-layer perceptron (MLP) to create the final set of new features. Preliminary results indicate that FeatGeNN outperforms existing AutoFE methods in both the number of generated features and model performance, demonstrating its potential as a potent tool for creating features in machine learning. ## 2 Related work The main goal of feature engineering is to transform raw data into new features that can better express the problem to be solved. Training a model with the generated features can increase the performance of the model. However, the process of feature engineering can be limited by the expertise, programming skills, and intuition of the person working with the data. For this reason, AutoFE approaches have recently gained attention. The authors of [7] propose a differentiable AutoML model that efficiently extracts low and high-order features. The model includes three steps: Initialization, Optimizer Training, and Feature Evaluation. In initialization, features are constructed randomly and evaluated using a machine-learning model on the validation set. In optimizer training, a tree-like structure is created with an encoder, a predictor, and a decoder, called a parse tree. The encoder maps the post-order traversal string to a continuous space, the predictor is a 5-layer MLP that maps the representation to the score computed by a machine learning model, and the decoder maps the embedding to the discrete feature space. In the final step of feature evaluation, the best \(n\) features are selected and optimized using a gradient-based approach. In [4], the authors present an algorithm that uses mathematical functions to generate new features for relational databases. The algorithm begins by identifying the entities that make up the database and defines a set of mathematical functions that are applied at both the entity level and the relational level.
The proposed approach first enumerates all possible transformations on all features and then directly selects features based on their impact on model performance. However, due to the potentially large number of features generated, it is necessary to perform feature selection and dimensionality reduction to avoid overfitting and improve the interpretability of the model. In [6], the authors propose a novel model for feature engineering in classification tasks that can generalize the effects of different feature transformations across multiple datasets. The model uses an MLP for each transformation to predict whether it can produce more useful features than the original set. The Quantile Sketch Array (QSA) achieves a fixed-size representation of feature values to handle features and data of different lengths. The QSA uses Quantile Data Sketch to represent feature values associated with a class label. The authors of [12] have proposed an RNN-based approach to address the feature explosion problem in feature engineering and support higher-order transformations. Their architecture uses an RNN to generate transformation rules with a maximum order for each raw feature within a fixed time limit. For datasets with multiple raw features, the authors use multiple RNNs as controllers to generate transformation rules for each feature. The transformed features are evaluated using a machine learning algorithm and the controller is trained using policy gradients. The model includes two special unary transformations: "delete" and "terminate", which remove a feature and terminate the current transformation, respectively, to determine the most appropriate transformation order. In [5], the authors propose a heuristic model for automating feature engineering in supervised learning problems. Their model is based on a tree structure, where the raw dataset is the root, each node is a transformed dataset, and the edges represent the transformation functions. The goal is to find the node with the highest score, reducing the feature construction problem to a search problem. The authors present three exploration strategies to traverse the tree. The first is "Depth-First Traversal", in which a random transformation is applied to the root and the algorithm then explores a branch until there is no further improvement. Then it chooses another node with the highest score and starts the process again. The second is "Global Traversal", where a global search is performed to find the most promising node out of all the nodes explored so far. The third is "Balanced Traversal", in which the algorithm chooses either an exploration or exploitation strategy at each step based on a time or node budget. To handle the explosive growth of columns, feature selection is required as they grow. Cognito allows the selection of features after each transformation to clean up the dataset and ensure a manageable size. In addition, at the end of the model execution, the algorithm performs another feature selection over all columns in the dataset, including the newly created columns. AutoFeat is a method presented in [11] that generates and selects non-linear input features from raw inputs. The method applies a series of transformations to the raw input and combines pairs of features in an alternating multi-step process to generate new features. However, this leads to an exponential increase in the size of the feature space, so a subsampling procedure is performed before computing new features.
The authors have shown that two or three steps of the feature construction process are usually sufficient to generate new features. After feature engineering, the new dataset has a higher number of features than the original dataset. To reduce the dimensionality, the authors developed a feature selection procedure. First, they remove new features that are highly correlated with the original or simpler features. Then they apply a wrapper method with L1-regularized linear models to select the most informative and non-redundant features from the dataset. In the end, only a few dozen features are retained and used after the feature creation and selection process. In [27], AutoLearn is proposed, a learning model based on regression between pairs of features and aimed at discovering patterns and their variations in the data. The method selects a small number of new features to achieve the desired performance. The proposed method consists of four phases: Pre-processing to reduce dimensionality, where the authors perform feature selection based on information gain (IG); Mining of correlated features to define and search for pairwise correlated features, where the distance correlation [8] is calculated to determine if there is an interesting predictive relationship between a pair of features; Feature generation, where regularized regression algorithms are used to search for associations between features and generate new features; and Feature selection, where features that do not add new information to the dataset are discarded. The authors of [3] have proposed a novel model that achieves both memorization and generalization by simultaneously training a linear model component and a neural network component. The model consists of two components: the Wide component, which is a generalized linear model of the form \(y=w^{T}x+b\), where \(y\) denotes the prediction, \(x\) denotes the features, \(w\) denotes the model parameters, and \(b\) denotes the bias. The input features can be either raw or transformed, the most important transformation being the cross-product transformation; and the Deep component, which is a feed-forward neural network. For categorical features, an embedding is created, which is then added to the dataset and fed into the network. The authors of [2] have proposed a model for predicting CTR that can handle interactions between low and high-order features by introducing a factorization-machine (FM) based neural network. The model consists of two parts: the FM component, which generates low-order features and can generate interactions between 1st and 2nd-order features with low computational cost, and the deep component, a feed-forward neural network that learns interactions between higher-order features. The input to the network is a high-dimensional vector of sparse data containing categorical and continuous variables as well as grouped fields. FGCNN is another approach proposed for CTR prediction [9]. This model consists of two components, namely the Feature Generation and the Deep Classifier. The Feature Generation component uses the mechanisms inherent in the Convolutional Neural Network (CNN) and the Multilayer Perceptron (MLP) to identify relevant local and global patterns in the data and generate new features. The Deep Classifier component then uses the extended feature space to learn and make predictions. Our work introduces a CNN-based model with correlation-pooling for extracting high-order features and improving model performance.
Unlike traditional pooling functions such as max-pooling, which focus on selecting the maximum value within a pooling region, correlation-pooling considers the linear relationships between features in the data matrix. It measures the correlation coefficient between the features and aggregates them based on their correlation values to capture the interdependencies and patterns in the data. By incorporating correlation-based pooling into the feature extraction process, FeatGeNN can effectively extract high-order features that reflect the underlying relationships among input variables. Our proposed method achieves competitive results on a range of problems, suggesting that correlation-based pooling is a promising technique for working with tabular data in neural networks. ## 3 Proposed Approach In this section, we describe the proposed Feature Generation with Evolutionary Convolutional Neural Networks (FeatGeNN) model in detail. ### Problem Formulation Given a dataset \(D=\langle F,tg\rangle\), where \(F=\{f_{1},f_{2},...,f_{n}\}\) are the raw features and \(tg\) is the target vector, we denote as \(L_{E}^{M}(D,tg)\) the performance of the machine learning model \(M\) that is learned from \(D\) and measured by the evaluation metric \(E\) (e.g., accuracy). In addition, we transform a raw set of features \(D\) into \(D_{new}\) by applying a set of transformation functions \(T=\{t_{1},t_{2},...,t_{n}\}\). Formally, the goal of AutoFE is to search for the optimal transformed feature set \(D^{*}\) for which \(L_{E}^{M}(D^{*},tg)\) is maximized. ### FeatGeNN Model In this study, we use a convolutional neural network to extract features that can improve the performance of a machine learning model (i.e., Random Forest). As explained earlier, using an MLP alone to generate new features would not result in a good set of new features. The reason for this is the relationship between the number of informative interactions between features and the total number of features in the feature space. Also, using a CNN alone might not lead to good performance because a CNN only considers local interactions and does not consider many important global interactions between features [9]. To overcome this problem, we use an architecture that combines the MLP with the CNN. The FeatGeNN model includes two main blocks, namely local feature extraction and global feature generation (Figure 1). The first block attempts to identify the most informative interactions between local features, while the second block generates new features from the features extracted by the local feature extraction block and combines them globally. Figure 1: The FeatGeNN process. The Local Feature Extraction block includes two main operations, namely Pooling and Convolution. Among these operations, the pooling operation plays a crucial role in reducing dimensionality and preserving the most informative features for subsequent layers. In previous work on feature generation for tabular data with CNNs, max-pooling was mainly used. However, we found that using max-pooling for tabular data may not give the desired result because the model may not compare closely related features, thus affecting the features generated by the model. Therefore, we propose the use of correlation-pooling to address this issue. Correlation-pooling, the variant of pooling used in our Local Feature Extraction block, uses the Pearson correlation [29] to group features that are highly correlated.
By grouping these features, correlation-pooling can preserve the relationship between closely related features and thus improve the quality of the features extracted by the CNN model. This is in contrast to max-pooling, which preserves only the most dominant feature in a group and may ignore other relevant features that are closely related. Therefore, by incorporating the Pearson correlation in the pooling operation, correlation-pooling can effectively circumvent the limitation of max-pooling and help generate more informative features for subsequent layers in the CNN model. The Pearson correlation coefficient can be formulated as follows for our problem: \[r=\frac{n\sum xy-(\sum x)(\sum y)}{\sqrt{[n\sum x^{2}-(\sum x)^{2}][n\sum y^{2}-(\sum y)^{2}]}} \tag{1}\] where \(x\) and \(y\) represent the values of the features \(X\) and \(Y\) respectively, and \(X,Y\in F\), where \(F\) is the set of all features. The variable \(n\) denotes the number of samples in the dataset \(D\). To avoid having to run the Pearson algorithm twice, we have introduced an iterative calculation of the Pearson coefficient. This means that at the current stage of model development, we compute the Pearson coefficient \(r\) to perform the pooling operation for the subsequent evolutionary generation of the model. To reduce the computations required, we also added a threshold to limit the amount of data sent to the correlation calculation, i.e., a model can only use 70% of the data to calculate the correlation value for the features. While the Pearson correlation is a statistical measure that describes the linear relationship between two variables, it is not suitable for analyzing relationships between more than two features. To overcome this limitation, we use the multivariate correlation matrix, which consists of pairwise Pearson correlation coefficients between all pairs of variables. This matrix allows us to analyze relationships between multiple variables and identify the most highly correlated variables. The overall correlation value for the feature \(f\) can be formulated as follows: \[CS_{f}=\frac{\sum_{k=1}^{N}r_{fk}}{N} \tag{2}\] where \(CS_{f}\) is the correlation score for the feature \(f\), \(r_{fk}\) represents the Pearson correlation coefficient for the feature pair \((f,k)\), and \(N\) is the total number of features in the dataset. In the Global Feature Generation block, an MLP is utilized to merge the features extracted from the Local Feature Extraction block and generate novel features. These novel features are then appended to the original dataset and used in the machine-learning model. Figure 2: Correlation-Pooling process.
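To make the pooling step concrete, the sketch below computes the pairwise Pearson matrix of Equation (1) and the per-feature score \(CS_{f}\) of Equation (2), and then reduces each window of features to its highest-scoring member. This is an illustrative NumPy reading of the description above, not the authors' implementation; the contiguous-window pooling rule and the helper names are our own assumptions, while the 70% data threshold mirrors the text:

```python
# Illustrative correlation-scoring and pooling step (Eqs. 1-2).
# Assumption: features are pooled in contiguous windows, and each window
# is reduced to the feature with the largest correlation score CS_f.
import numpy as np

def correlation_scores(X: np.ndarray, data_fraction: float = 0.7) -> np.ndarray:
    """CS_f for every feature (column) of X, using only a fraction of the
    rows, mirroring the 70% data threshold described above."""
    n_rows = max(2, int(len(X) * data_fraction))
    R = np.corrcoef(X[:n_rows].T)          # pairwise Pearson matrix, Eq. (1)
    return R.sum(axis=1) / R.shape[0]      # CS_f = (sum_k r_fk) / N, Eq. (2)

def correlation_pool(X: np.ndarray, window: int = 2) -> np.ndarray:
    """Keep, from each window of `window` consecutive features, the single
    feature with the largest correlation score."""
    cs = correlation_scores(X)
    kept = [np.arange(i, min(i + window, X.shape[1]))[np.argmax(cs[i:i + window])]
            for i in range(0, X.shape[1], window)]
    return X[:, kept]

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 6))
X[:, 1] = X[:, 0] + 0.1 * rng.normal(size=100)   # a strongly correlated pair
print(correlation_scores(X).round(3))
print(correlation_pool(X, window=2).shape)        # (100, 3)
```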
### Evolution Process In this work, we adopt an evolution process for conducting AutoFE, as depicted in Figure 3. This process involves three distinct steps: (1) Feature Selection, (2) Population Initialization, and (3) Feature Evolution. The first step of our proposed approach is to reduce the combination of uncorrelated and redundant features using a feature selection method. We used the Maximum Relevance-Minimum Redundancy (MRMR) [16] method for this purpose. By minimizing the combination of such features, we aim to reduce the introduction of noise into the model and improve the quality of the features generated by the CNN model. During the population initialization step, we generate a population \(POP\) of CNN models that is evolved in the Feature Evolution step. To evaluate this initial population, we use a machine learning model on the dataset resulting from step (1). Specifically, we take the set of features \(F^{*}\) from the Feature Selection step and input them into the CNN model \(p\) (where \(p\in POP\)) to generate \(n\) new features \(f\). These newly created features are concatenated with the original dataset \(D\) to create a new dataset \(D^{*}=\{F\cup f\}\), which is then evaluated by the machine learning model \(L^{m}\) to obtain a score \(S_{p}\). In the Feature Evolution step, a genetic algorithm [17] is used to evolve the population and identify the most effective features to improve the performance score obtained by \(L^{m}\). During each epoch of the genetic algorithm, for each model \(p\) that is not part of the elite group \(E\) (where \(E\subset POP\)), a crossover is performed between its weights and those of a second model \(p^{\prime}\), which is selected using a round-robin tournament [28]. Following the crossover process, the offspring generated by this operation can be subjected to mutation. The features produced by the offspring are then evaluated, as described in the Population Initialization step. If the score obtained by \(L^{m}\) is better than the current score for \(p\), or if a score decrease is allowed, the offspring replaces the current model \(p\) and the score is updated. Figure 3: Feature Evolution process. ## 4 Results In this section, we aim to answer the following research questions: * **RQ1:** How effective is correlation-pooling compared to max-pooling? * **RQ2:** What is the impact of the amount of data on the correlation-pooling computation? * **RQ3:** How effective is the proposed FeatGeNN approach? (Comparison with the literature) ### Experimental Setup To evaluate the performance of the FeatGeNN model on classification problems, 6 classification datasets from the UCI repository, which were used in the state-of-the-art methods [7][12], were selected. The description of each dataset in terms of the number of features and number of samples is presented in Table 1. \begin{table} \begin{tabular}{l|r r} \hline Datasets & Samples & Features \\ \hline SpamBase & 4601 & 57 \\ Megawatt1 & 253 & 37 \\ Ionosphere & 351 & 34 \\ SpectF & 267 & 44 \\ Credit\_Default & 30000 & 25 \\ German Credit & 1001 & 24 \\ \hline \end{tabular} \end{table} Table 1: Statistics of the benchmarks used to perform the evaluation of the FeatGeNN features. In our experiments, we use the _f1-score_ as the evaluation measure, which is also commonly used in the related works [12] and [7]. The threshold for questions RQ1 and RQ2 was set at 80% of the available data in the dataset. To ensure robustness and reliability, we use 5-fold cross-validation, in which the dataset is divided into five subsets or folds and the evaluation is performed five times, with each fold serving once as a test set. This approach helps mitigate the effects of data variability and provides a more comprehensive assessment of the model's performance. As for the chosen algorithm, we use Random Forest as the base method in all our experiments. Random Forest is a popular and widely used ensemble learning method known for its robustness and ability to handle different types of data. ### Effectiveness of Correlation-Pooling vs. Max-Pooling (RQ1) In this subsection, this experiment aims to answer: _Can our FeatGeNN with Correlation-Pooling achieve competitive results compared to the version with Max-Pooling?_ Table 2 shows the comparison results in terms of F1 score.
The results show that FeatGeNN with correlation-pooling outperforms the version with max-pooling in most datasets. The only exceptions are the Megawatt1 and Credit_Default datasets, where the results are very similar. This result can be attributed to the fact that correlation-pooling takes into account the relationships between features when generating new features, which contributes to its relatively better performance. \begin{table} \begin{tabular}{l|c c c} \hline Dataset & Base & FeatGeNN & FeatGeNN* \\ \hline SpamBase & 0.9102 & 0.9422 (0.011) & **0.9530** (0.016) \\ Megawatt1 & 0.8890 & 0.9148 (0.002) & 0.9151 (0.002) \\ Ionosphere & 0.9233 & 0.9587 (0.012) & **0.9667** (0.004) \\ SpectF & 0.7750 & 0.8682 (0.018) & **0.8776** (0.013) \\ Credit\_Default & 0.8037 & 0.8092 (0.003) & 0.8095 (0.003) \\ German Credit & 0.7401 & 0.7775 (0.006) & **0.7814** (0.002) \\ \hline \end{tabular} \end{table} Table 2: Comparing FeatGeNN performance with Correlation-Pooling and Max-Pooling. The * denotes the version of FeatGeNN that was executed with Correlation-Pooling. The results are the average score and standard deviation over 30 runs. ### Impact of the Amount of Data on the Correlation-Pooling Computation (RQ2) In this subsection, our experiment aims to answer the question: _What is the influence of the amount of available data on the Correlation-Pooling computation?_ Figure 4 shows the performance of three versions of FeatGeNN: FeatGeNN (using all available data), FeatGeNN (using 60% of the data), and FeatGeNN* (using 30% of the data). The results show that, as expected, the performance of the model varies with the amount of data used to compute the correlation-pooling. On average, the version with access to the entire dataset achieves a performance improvement of 0.76% and 1.38% compared to the FeatGeNN and FeatGeNN* versions, respectively. Compared to the version that used 80% of the available data, the result after 30 epochs is very similar, although the version with more data performs better in fewer epochs. These results indicate that the performance of FeatGeNN is still competitive with the original version, even though the performance decreases slightly with less available data. Figure 4: The performance of the different versions of FeatGeNN is compared in terms of the amount of data used for computation. In the image, the symbol represents the version that used 60% of the available data, the * symbol represents the version that used 30% of the data, and the symbol \({}^{*}\) represents the version that used 100% of the data. The FeatGeNN without symbol stands for the version that used 80% of the available data. ### Effectiveness of FeatGeNN (RQ3) In this subsection, this experiment aims to answer: _Can our FeatGeNN with Correlation-Pooling achieve competitive results when compared to the state-of-the-art models?_ We compare FeatGeNN on 6 datasets with state-of-the-art methods, including (a) Base: Raw dataset without any transformation; (b) Random: randomly apply a transformation to each raw feature; (c) DFS [4]; (d) AutoFeat [11]; (e) LFE [6]; (f) NFS [12]; and (g) DIFER [7]. Table 3 shows the comparative results of FeatGeNN relative to existing methods (results reported in [7]). From Table 3 we can observe that, in the classification tasks, FeatGeNN performs the best for the SpamBase, Credit_Default, German Credit, and SpectF benchmarks, the second
best for the Ionosphere benchmark, and achieves the same result as the DIFER method for the Megawatt1 benchmark. Although DIFER achieves the best performance in the Ionosphere benchmark, it only achieves 0.58% more than the best result obtained by our proposed method. Regarding the number of features, Table 4 shows that FeatGeNN excels in producing fewer features for the Megawatt1, SpectF, and Credit_Default datasets compared to other methods. For the remaining datasets, FeatGeNN achieves comparable results with the same number of features. Compared to the performances of Base and Random, FeatGeNN achieved an average improvement of 5.89% considering all datasets, which demonstrates the potential of the features generated by our proposed model. \begin{table} \begin{tabular}{l c c c c c} \hline Dataset & Random & AutoFeat* & NFS* & DIFER* & FeatGeNN \\ \hline SpamBase & 1 & 46 & 57 & 1 & 1 \\ Megawatt1 & 8 & 48 & 37 & 29 & 8 \\ Ionosphere & 1 & 52 & 34 & 1 & 1 \\ SpectF & 8 & 37 & 44 & 9 & 8 \\ Credit\_Default & 4 & 30 & 25 & 5 & 4 \\ German Credit & 1 & 22 & 24 & 1 & 1 \\ \hline \end{tabular} \end{table} Table 4: Comparison between FeatGeNN, DIFER, AutoFeat, and Random (\(*\) results reported in [7]). \begin{table} \begin{tabular}{l|c c c c c c c c} \hline Dataset & Base & Random & DFS & AutoFeat & NFS & DIFER & FeatGeNN* & FeatGeNN \\ \hline SpamBase & 0.9102 & 0.9237 & 0.9102 & 0.9237 & 0.9296 & 0.9339 & 0.9530 (0.016) & **0.9644** \\ Megawatt1 & 0.8890 & 0.8973 & 0.8773 & 0.8893 & 0.9130 & **0.9171** & 0.9151 (0.002) & **0.9171** \\ Ionosphere & 0.9233 & 0.9344 & 0.9175 & 0.9117 & 0.9516 & **0.9770** & 0.9644 (0.012) & 0.9713 \\ SpectF & 0.7750 & 0.8277 & 0.7906 & 0.8161 & 0.8501 & 0.8612 & 0.8776 (0.013) & **0.8802** \\ Credit\_Default & 0.8037 & 0.8060 & 0.8059 & 0.8060 & 0.8049 & 0.8096 & 0.8095 (0.003) & **0.8102** \\ German Credit & 0.7410 & 0.7550 & 0.7490 & 0.7600 & 0.7818 & 0.7770 & 0.7814 (0.002) & **0.7827** \\ \hline \end{tabular} \end{table} Table 3: Comparison of FeatGeNN with other methods from the literature, reported in [7]. * reports the average and standard deviation across 30 runs, while the FeatGeNN column reports the maximum value across the same runs. ## 5 Conclusion In this study, we presented a novel approach for generating new features in tabular data that combines feature selection and feature generation to improve the performance of predictive models. Our proposed method uses a CNN architecture to effectively capture local features during convolution operations (Local Feature Extraction), thereby reducing the number of combinations required in the MLP phase (Global Feature Generation). In addition, we integrated a correlation-pooling operation as a dimensionality reduction step. Our approach demonstrates efficient feature learning and achieves competitive results compared to the max-pooling architecture and to state-of-the-art methods. As a direction for future research, we intend to explore information-theoretic methods as possible alternatives for pooling operations. This could further increase the effectiveness of our approach to learning new features.
2303.05042
Aspects of Quantum Gravity Phenomenology and Astrophysics
With the discovery of gravitational waves, the search for the quantum of gravity, the graviton, is imminent. We discuss the current status of the bounds on graviton mass from experiments as well as the theoretical understanding of these particles. We provide an overview of current experiments in astrophysics such as the search for Hawking radiation in gamma-ray observations and neutrino detectors, which will also shed light on the existence of primordial black holes. Finally, the semiclassical corrections to the image of the event horizon are discussed.
Arundhati Dasgupta, José Fajardo-Montenegro
2023-03-09T05:31:42Z
http://arxiv.org/abs/2303.05042v1
# Aspects of Quantum Gravity Phenomenology and Astrophysics ###### Abstract With the discovery of gravitational waves, the search for the quantum of gravity, the graviton, is imminent. We discuss the current status of the bounds on graviton mass from experiments as well as the theoretical understanding of these particles. We provide an overview of current experiments in astrophysics such as the search for Hawking radiation in gamma-ray observations and neutrino detectors, which will also shed light on the existence of primordial black holes. Finally, the semiclassical corrections to the image of the event horizon are discussed. ## 1 Introduction The gravitational quantum is still elusive experimentally and somewhat "elusive" theoretically [1, 2, 3]. In electrodynamics, the quantum of the electromagnetic wave is known as the photon, and we work with the interactions of photons to derive quantum electrodynamics (QED) phenomena. In the case of gravity, gravitational waves have been discovered 100 years after their prediction. The question is, are there "gravitons" or quanta of these waves? Like QED, one can define the "Fock" space quantization for the linearized Einstein equations and study free gravitons. However, introducing interactions with gravitons to study scattering amplitudes leads to uncontrollable infinities [3]. This is known as the "non-renormalizability" of perturbative quantum gravity. General relativity might be nonperturbative in the quantum regime, and the story of the quanta could be present in the geometry measurements of area and volume [4]. These "nonperturbative" theoretical explorations cannot be verified, as they are still in the realm of the microscopic Planck length regime of \(10^{-35}\) m. We investigate the semiclassical fluctuations of the flat geometry using loop quantum gravity (LQG) coherent states and discuss whether that can be interpreted as a graviton quantum. Further, in the 1970s, black hole thermodynamics and Hawking radiation were studied as "semiclassical phenomena", where gravity remained classical and other fields were quantum. The isolated black hole was found to have a temperature proportional to its surface gravity and entropy equal to its horizon surface area. For a solar-mass black hole, which might have formed through stellar collapse, this temperature is of the order of \(10^{-8}\) K. If we observe the current-day black holes, then they are immersed in the background cosmic radiation, which has a temperature of 2.783 K. As the heat flows from higher to lower temperatures, the black holes would not radiate into the surroundings, and as of now, there is no experimental evidence of Hawking radiation. The study of black hole mergers using gravitational waves has provided evidence for the area increase theorem [5]. How would one obtain a verification of the temperature and radiative properties of black holes? The existence of _primordial black holes_ (PBH) of small mass, originating in density fluctuations of the early universe, would allow for high-temperature black holes and Hawking decays in the form of gamma-ray bursts. The search for PBH has been a subject of experimental study [6]. We discuss this in some detail, and the approximations which describe the theoretical derivation of Hawking radiation are also discussed. The current experiments provide stringent restrictions on the PBH contributions to photon and neutrino fluxes observed on earth, as well as fractions of dark matter [7, 8, 9, 10, 11].
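As a rough numerical illustration of the temperatures quoted above (a back-of-the-envelope check using the standard Hawking formula \(T_{H}=\hbar c^{3}/8\pi GMk_{B}\), not a calculation taken from this paper), a solar-mass black hole indeed sits near \(10^{-8}\) K, far below the cosmic background temperature, while a sufficiently light primordial black hole would be much hotter than its surroundings:

```python
# Hawking temperature T_H = hbar c^3 / (8 pi G M k_B); order-of-magnitude
# check of the statements above, with the 1e12 kg PBH mass chosen purely
# for illustration.
import math

hbar, c, G, k_B = 1.055e-34, 2.998e8, 6.674e-11, 1.381e-23  # SI units
M_sun = 1.989e30  # kg

def hawking_temperature(mass_kg: float) -> float:
    return hbar * c**3 / (8 * math.pi * G * mass_kg * k_B)

print(f"solar-mass black hole: {hawking_temperature(M_sun):.2e} K")  # ~6e-8 K
print(f"1e12 kg primordial BH: {hawking_temperature(1e12):.2e} K")   # ~1e11 K
```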
Strangely, new observations from gravitational wave data suggest that there are subsolar mass black holes. Recent work tries to find the origins of these, either as PBH or from other processes without the Chandrasekhar limit in the collapse process [12]. Whereas this is very interesting, this is not exactly the realm of quantum gravity, though the research in this area might shed light on semiclassical aspects. However, astrophysical phenomena, such as the black hole merger event, the collapse of a supernova to form a black hole, and neutron star mergers, are strong gravitational events. The energies at which the events happen have strongly coupled gravitational interactions. The quantum dynamics near these events is interesting, and even though the effect is weak, one can try and find indirect evidence in the observational data. Using LQG coherent states, some of these can be studied semiclassically. We discuss these and also comment on other observational results from the semiclassical gravity program for astrophysical observations, including that for the image of the event horizon [13, 14]. There are several collaborations in quantum gravity phenomenology which, in particular, discuss Lorentz violations and quantum anomalies. The appropriate discussions on these topics can be found in [15]. For a previous comprehensive review on quantum gravity phenomenology, see [16]. One of the aims of this current review is to also provide a pedagogical introduction to some aspects such as the search for primordial black holes, which is a very active field currently. This review has discussions on the (i) graviton, (ii) Hawking radiation, and (iii) semiclassical corrections to strong gravity systems such as the event horizon. The following section discusses the theory of the graviton and the experimental bounds. Section 3 describes the phenomena of Hawking radiation, as well as the experimental efforts to detect the emitted particles from PBH. Section 4 describes the physics of the event horizon and quantum correction predictions to the same. The final section concludes with the present status of the field of research in the above and future avenues of quantum gravity phenomenology. ## 2 Graviton The electromagnetic (EM) wave is a solution to Maxwell's equation and is observed in nature. The visible spectrum is known as light, the infrared, which we interpret as heat, and radio waves. The ultraviolet radiation is also detectable and useful as are X-rays in many practical day-to-day events. These, when quantized, give us the photon description of the EM wave, and represent the source-free "free" EM fields. The actual production of EM radiation is from accelerated charges, but as the waves propagate out in space, they can be studied as "free" EM fields. In the case of gravity, Einstein's action is nonlinear, and the gravitational field has self-interactions. To find the "free" plane wave which propagates on its own, we take a linearized gravity, "weak fluctuations" over a flat background. Nonperturbative waves, produced using strong gravitational interactions, have been studied in [17]. As the linearized gravitational waves represent classically "free" fields, one would expect that the Fock space quantization of these would be obtained similarly to the photon quantum electrodynamics description. However, herein lies the problem: the graviton theory is a nonrenormalizable theory [3]. Is it because the graviton vacuum, which represents the Minkowski spacetime is not a vacuum? 
Is flat space really a vacuum state in a true theory of quantum gravity? Can we have a perturbation over the flat-space system and describe a graviton as a quantum state in the flat-space background? In the case of the EM theory, the EM field propagates in a flat background that, however, serves as a noninteractive arena for the EM fields to propagate. The photon is created and annihilated out of the QED vacuum, which is a state with the photon quantum number as zero. In the following, we discuss whether seeking a similar quantum field vacuum for the graviton is relevant. We also discuss the question of which physics of the systems we should experiment for the observation of the graviton. ### The Linearized Theory of the Graviton In the following, we discuss Einstein's theory of the linearized metric. The field equations for the Einstein action is "free" in its gauge-fixed form; however, if we try to write the full Einstein Lagrangian for the gravitational field, then there are interaction vertices to all orders for the graviton. The quantum amplitudes including these interactions do not converge, and neither can the theory be renormalized using standard techniques. To begin with, we write the metric of spacetime \(g_{\mu\nu}\) as a flat space \(\eta_{\mu\nu}\) and a weak fluctuation \(h_{\mu\nu}\). \[g_{\mu\nu}=\eta_{\mu\nu}+h_{\mu\nu}. \tag{1}\] It is assumed that \(|h_{\mu\nu}|_{\rm max}\ll 1\) (\(\mu,\nu,\alpha,\beta\) etc\(=0,\ldots,3\)). Note that using standard convention, the metric is dimensionless and the amplitude of the fluctuations are defined using the absolute maximum value. From experiments [1], we are aware now that the amplitude of the "gravitational wave" is of the order of \(10^{-22}\) as received on earth. One can write the Einstein Lagrangian density as a function of this metric, its determinant \(g\), and scalar curvature \(R\), \[{\cal L}=\sqrt{g}\ R=-\frac{1}{2}\sqrt{-1+h}\left[(h^{\mu\nu})(\eta^{\alpha \beta}\partial_{\alpha}\partial_{\mu}h_{\nu\beta}-\Box\ h_{\mu\nu})\right]. \tag{2}\] In the above, we have kept the terms in the Lagrangian which are quadratic in \(h_{\mu\nu}\). The linear terms of the form \(\eta^{\mu\nu}\eta^{\lambda\rho}\partial_{\rho}\partial_{\mu}h_{\lambda\nu}\) are total derivatives and contribute only at the boundaries, which we ignore. Further, \(\Box\equiv\eta^{\alpha\beta}\partial_{\alpha}\partial_{\beta}\), and \(h\) is the trace of \(h_{\mu\nu}\). The equation of motion from the above to a linear order in "\(h_{\mu\nu}\)" is \[\eta^{\alpha\beta}\partial_{\alpha}\partial_{\mu}h_{\nu\beta}-\Box\ h_{\mu\nu }=0. \tag{3}\] This still has a gauge degree of freedom due to diffeomorphism invariance, which can be fixed by putting the \(\partial^{\alpha}h_{\alpha\beta}=0\) restriction on the linearized metric. The equation of motion reduces to a "wave equation" \[\Box\ h_{\mu\nu}=0. \tag{4}\] The solution for this is a transverse wave (due to Lorentz's condition) and has two polarizations as additional restrictions to fix the residual gauge freedom keeping only two [18]. The two polarizations are taken as \(h_{+}=A_{+}\cos(\omega z-\omega t)\) and \(h_{\times}=A_{\times}\cos(\omega z-\omega t)\), if it is propagating in the z-direction [18], with angular frequency \(\omega\) and amplitude \(A_{+},A_{\times}\). The question is: can these waves, when quantized, give us "quanta" as it is possible for photon quantization? In other words, can one define a Fock space representation for the perturbative Hilbert space of Einstein's gravity? 
The answer is surprisingly difficult, as the Einstein action introduces self-interactions of the gravitons to all orders, which cannot be renormalized using standard field theory techniques. The gravitational propagator can be calculated, but the quantum corrections cannot be made finite using regularization and renormalization techniques. One can see the origin of self-interactions even at this order in the Lagrangian in Equation (2) as the nonpolynomial "measure" \(\sqrt{-1+h}\) can give rise to the interaction terms upon expanding the square root. A simple "degree of superficial divergence" counting of the gravitational perturbative Feynman diagram gives the number as \(D=2(k+1)\), where \(k\) is the number of independent momentum interactions [19]. This number therefore increases with the number of loops in the scattering calculations and cannot be absorbed by redefining the bare Lagrangian. For Yang-Mill's (YM) theory the same degree is given as \(D=4-L_{e}\), where \(L_{e}\) is the number of external legs of the Feynman diagram. The YM theory is therefore renormalizable, as the number of terms in the Lagrangian which need to be renormalized is finite (\(0<L_{e}<4\)). One can use asymptotic techniques to obtain a renormalizable effective Lagrangian for gravity, but we do not discuss this in this review [20]. However, can there be a "free" graviton theory where we can ignore all the interactions? Up to a certain length scale, a "free graviton" quantization can be formulated, but the entire theory is also complicated by the definition of the "gravitational vacuum". In the theory of gravitational physics, the metric is the basic degree of freedom, and the graviton is a "perturbation" over the flat-space geometry. In a true quantization of the theory, the flat spacetime geometry is also an emergent "metric". If the metric is an operator, then causality and therefore quantization is not defined. The vacuum likely is the state with no metric or the state that is such that \[\hat{g}_{\mu\nu}\ |0\rangle=0. \tag{5}\] There have been several attempts to obtain the perturbative quantum state using a polymer state in the nonperturbative quantization framework of loop quantum gravity. We report on those works briefly and then describe a semiclassical description of a "gravitational wave" using LQG. It remains though that the most complicated aspect of Einstein's gravity is the fact that the field which has to be quantized is the metric of the spacetime, the causality of the system is complicated by the quantization, and macroscopic configurations have to be emergent. ### Gravitons in Loop Quantum Gravity It was shown in [21] that the SU(2) generators of the loop quantum gravity (LQG) variables decouple into three independent gauge generators in the linearized approximation. In LQG, the basic variables are obtained from the ADM formulation of the canonical gravity. The spacetime is foliated by spatial slices \(\Sigma\) with a timelike normal vector along the fourth direction, specified using the coordinate \(t\). The induced three-metric on \(\Sigma_{t}\) is given as \(q_{ab}\), \((a,b=1,2,3)\); the metric in the ADM formulation is given as \[ds^{2}=-(N^{2}+N^{a}N_{a})dt^{2}+N^{a}dx_{a}dt+q_{ab}dx^{a}dx^{b}, \tag{6}\] where \(N^{2}\) is the lapse, \(N^{a}\) is the shift, and \(q_{ab}\) is the induced metric of the time slices \(\Sigma_{t}\). 
The second fundamental form of this metric is \(K_{ab}={\cal L}_{t}q_{ab}\) and is the extrinsic curvature tensor which characterizes the embedding of the slice. The LQG variables are defined using the soldering forms \(e^{I}_{a}\) which connect the tangent space (\(I=1,2,3\)) of the three slices to the world volume. The canonical variables are defined as \[e^{I}_{a}e_{bI}=q_{ab},\ \ E^{a}_{I}E^{bI}=q\ q^{ab},\ \ \ A^{I}_{a}=\Gamma^{I}_{a}-K_{ ab}E^{bI}, \tag{7}\] where \(e^{I}_{a}\) is the triad, \(E^{a}_{I}\) are densitized triads, and \(A^{I}_{a}\) have the properties of a connection due to their definition in terms of the spin connection \(\Gamma^{I}_{a}\) and the extrinsic curvature tensor \(K_{ab}\). The details of the variables can be found in [22]. There is usually an Immirzi parameter in the definition of the gauge connection, and this reflects an ambiguity in the system. We chose to set it to one, for the purpose of this paper. The internal indices \(I\) transform in the SU(2) group, which is isomorphic to the group of rotations in the three-dimensional tangent space [22]. The generators of the transformations in the internal directions are the Gauss constraints \[{\cal G}^{I}=\partial_{a}e^{aI}+\epsilon^{IJK}e^{a}_{J}A_{aK}. \tag{8}\] In the linearized approximation, \(q=1\), \(q^{ab}=\delta^{ab}+h^{ab}\) and \(A^{I}_{a}=0\), if one keeps the constraint up to a linear order in the fields, the constraint algebra commutes, i.e., \[{\cal G}^{I}_{\rm Lin}=\partial_{a}(\delta e^{aI})+\epsilon^{IJK}\delta^{a}_{ J}\delta A_{aK}, \tag{9}\] where due to the linearized metric, one has \[e^{aI}=\delta^{aI}+\delta e^{aI},\ \ \ A_{aK}=0+\delta A_{aK}, \tag{10}\] and \[h^{ab}=\delta e^{aI}\delta^{b}_{I}, \tag{11}\] \[\{\delta e^{I}_{a}(x),\delta A_{Kb}(y)\}=\kappa\delta^{3}(x-y)\delta^{I}_{K}\delta_ {ab}, \tag{12}\] where \(\kappa\) is related to Newton's constant \(G\)[22, 23]. The \(\delta e^{I}_{a}\) and the \(\delta A_{Kb}\) are the linearized dynamical fields, which are quantized. In the limit \(\kappa\to 0\), \[\left\{{\cal G}^{I}_{\rm Lin},{\cal G}^{J}_{\rm Lin}\right\}=0. \tag{13}\] Interestingly, if one keeps the next order in the constraint definition, the algebra is not zero to a linear order as the Poisson bracket gives a linear result in the fields. \[{\cal G}^{I}_{\rm Lin}=\partial_{a}(\delta e^{aI})+\epsilon^{IJK}\left(\delta^ {a}_{J}+\delta e^{aJ}\right)\delta A_{aK}, \tag{14}\] and \[\left\{{\cal G}^{I}_{\rm Lin},{\cal G}^{J}_{\rm Lin}\right\}=\kappa\left( \delta A^{IJ}-\delta^{IJ}\delta A^{b}_{b}\right)\delta^{3}(x-y). \tag{15}\] This term would go to zero in the limit \(\kappa\to 0\). To avoid these confusions about the algebra and also questions about the Minkowski "quantum state" about which perturbation is being performed, we use the full SU(2) degrees of freedom and imposed the linear metric only in the semiclassical approximation. The details of the calculations appear in [24]. For the polymer quantization of linearized gravity using the \(U(1)\times U(1)\times U(1)\) Hilbert space, one can use the work of [26]. This approach is based on the linearized algebra of LQG variables, as given in Equation (13). The LQG phase space thus has a \(U(1)\times U(1)\times U(1)\) symmetry in the linearized approximation, instead of the full \(SU(2)\). 
The Hilbert space quantum states are of the form \[|\vec{\alpha},\{q\}\rangle=|\alpha_{1},q_{1}\rangle|\alpha_{2},q_{2}\rangle| \alpha_{3},q_{3}\rangle, \tag{16}\] where \(|\alpha_{i},q_{i}\rangle\) are elements of a \(U(1)\) Hilbert space. \(q_{i}\) label integers and \(\alpha\) labels the discrete network. The flux operator defined in terms of the triads is given as [26] \[X^{a}_{\vec{\alpha},\{q\}(r)}(\vec{x})=\sum_{I}q_{I}\int ds_{I}(\vec{e}_{I}(s ^{I}),\vec{x})\hat{e}^{a}_{I}, \tag{17}\] where \(s_{I}\) is a surface in three dimensions, which the discrete edge \(e_{I}\) of the graph \(\alpha\) intersects once. The Fock space quantum vacuum for the graviton is a transform of the state in Equation (16). Whether this facilitates further study of the perturbation theory of the graviton is yet to be investigated. The transform is given as \[\Phi_{0}:=\sum_{\alpha,q}c_{0\vec{\alpha},\{q\}}\langle\vec{\alpha},q|, \tag{18}\] where \[c_{0\vec{\alpha},\{q\}}=\exp\left(-\frac{\imath}{4}\int\ d^{3}x\ G^{\vec{ \alpha},\{q\}(r)}_{ab}(\vec{x})*X^{ab}_{\vec{\alpha},\{q\}(r)}(\vec{x})\right), \tag{19}\] where these are "smeared" operators in the LQG polymer space, and \(r\) is a measure of the Gaussian smearing (\(X_{r}(\vec{x})=\int d^{3}yX(\vec{y})\exp(-|\vec{x}-\vec{y}|^{2}/2r^{2})/((2\pi r ^{2})^{3/2})\)). \[X^{ab}_{\vec{\alpha},\{q\}(r)}=\sum_{i}X^{a}_{\alpha_{i},q_{i}}\delta^{b}_{i}. \tag{20}\] The \(G^{\vec{\alpha},\{q\}(r)}_{ab}(\vec{x})\) is related to the flux of the two "graviton" polarizations in the light cone. We refrain from getting into the details of the above, but the reader is urged to follow the details of the derivation in [26] and [27]. Whereas this approach to obtaining a "quantum" of linearized gravity is technically rather involved and involves an additional scale "\(r\)" apart from the usual discretization of quantum variables, it is believed to give a polymer representation of the "graviton". The expectation values of the operators are preserved in the transform and therefore, one loop corrections to the graviton propagator can be tested. A derivation of a one-loop correction using a perturbation of reduced loop quantum cosmology states exists in [28]. Another reference for the reduced phase-space quantization of linearized gravitational waves is [29]. Moreover, a more recent work uses the free graviton Lagrangian and obtains a "polymer state" for the same. This approach obtains some corrections to the gravitational wave propagator [30]. However, in none of the above papers the emergence of the background Minkowski metric is discussed. The self-interaction of gravitons is also not obtained to all orders, as predicted by the Einstein Lagrangian. In the next section, we try to find some phenomenological implications of the graviton's existence in observational data. ### Gravitons in Semiclassical Gravity In this subsection, we derive the semiclassical phase space of the gravitational wave metric and obtain a coherent state for the system using the techniques of [22, 24]. To begin with, we find the triads for the metric and the LQG gauge connection, which are the classical variables for the system. The details can be found in [24]. The spatial metric for a standard gravitational wave metric in the tt-gauge is (the lapse is one and shift is zero in the ADM form of the four-metric) \[q_{ab}=\left(\begin{array}{ccc}1+h_{+}&h_{\times}&0\\ h_{\times}&1-h_{+}&0\\ 0&0&1\end{array}\right). 
\tag{21}\] In the process of obtaining the coherent state for the above metric, we identify the classical phase space in terms of the LQG variables [25]. The triads \(e^{I}_{a}e_{bI}=q_{ab}\) are obtained as \[e^{I}_{a}=\left(\begin{array}{ccc}\sqrt{\frac{1-(h_{+}^{2}+h_{\times}^{2})} {2(1-h_{\times})}}&\sqrt{\frac{1-(h_{+}^{2}+h_{\times}^{2})}{2(1-h_{\times})} }&0\\ \frac{1-(h_{\times}-h_{+})}{\sqrt{2(1-h_{\times})}}&\frac{-1+(h_{\times}+h_{+ })}{\sqrt{2(1-h_{\times})}}&0\\ 0&0&1\end{array}\right)=\left(\begin{array}{ccc}\frac{1}{\sqrt{2}}+\frac{h_ {\times}}{2\sqrt{2}}&\frac{1}{\sqrt{2}}+\frac{h_{\times}}{2\sqrt{2}}&0\\ \frac{1}{\sqrt{2}}+\frac{1}{\sqrt{2}}(h_{+}-\frac{h_{\times}}{2})&-\frac{1}{ \sqrt{2}}+\frac{1}{\sqrt{2}}(h_{+}+\frac{h_{\times}}{2})&0\\ 0&0&1\end{array}\right). \tag{22}\] Obviously, in our gauge choice, the triad is not diagonal at the zeroth order. The extrinsic curvature of the metric is obtained using the definition \(K_{ab}=-\partial_{t}q_{ab}\), and the SU(2)-valued gauge connections defined in Equation (7) are: \[A^{1}_{x} = -\frac{1}{2\sqrt{2}}(\partial_{z}h_{\times}+\partial_{z}h_{+})= A^{2}_{y}\] \[A^{1}_{y} = -\frac{1}{2\sqrt{2}}(\partial_{z}h_{\times}-\partial_{z}h_{+})= -A^{2}_{x}\] \[A^{1}_{z} = A^{2}_{z}=A^{3}_{x}=A^{3}_{y}=0\] \[A^{3}_{z} = \frac{1}{2}\partial_{z}h_{+}.\] We also computed the nonzero spin connections for this metric [25]. Next, we take a discretization of the background geometry. This smearing of variables is required to obtain smooth commutators of the quantum theory, instead of distributional delta functions. For details, see [22], and the smearing of the gauge connection on one-dimensional curves gives holonomies which involve path-ordering. \[h_{e}(A)={\cal P}\exp\left(\int A\right). \tag{23}\] The discretization is not dictated by the theory but is motivated from the flat geometry of the classical three-metric. We take a planar graph, which form a cubic 3-d polyhedronal decomposition of the three-geometry, as shown in Figure 1. Therefore, there are six links and/or six faces meeting at a given vertex. The holonomies and the momentum are calculated as smeared along the one-dimensional edges of the graph, and the two-dimensional faces of the cube which the links intersect precisely at one point. These calculations are done using the techniques of [24]. The holonomies of the three independent links in the \(x\), \(y\), and \(z\) directions and the corresponding momenta are given up to a linear order in the amplitudes \(A_{+}\), \(A_{\times}\), \[h_{e_{x}} = 1-i\frac{\epsilon}{2}A_{x}^{I}\sigma_{I} \tag{24}\] \[h_{e_{y}} = 1-i\frac{\epsilon}{2}A_{y}^{I}\sigma^{I}\] (25) \[h_{e_{z}} = 1+i\frac{A_{+}}{2}\sin\left(\omega\left(z_{0}-t_{0}+\frac{ \epsilon}{2}\right)\right)\sin\left(\frac{\epsilon}{2}\right)\sigma_{3}, \tag{26}\] where one has taken a vertex at \((x_{0},y_{0},z_{0})\) and the links are of width \(\epsilon\). \(\sigma_{I}\) are the Pauli matrices. Next, one takes the faces centred at the middle of the links, i.e., at \(x_{0}+\epsilon/2\), \(y_{0}+\epsilon/2\), and \(z_{0}+\epsilon/2\), and of area \(\epsilon^{2}\). The momenta are labelled by the edges which intersect the faces. The momenta are defined as \(P_{e}^{I}=\frac{1}{\kappa}\int_{S_{e}}*E^{I}\). 
\[P_{e_{x}}^{1} = \frac{1}{\sqrt{2}\kappa}\left(\epsilon^{2}+\frac{\epsilon^{2}(A_ {\times})}{2}\cos(\omega(z_{0}-t_{0}))\right) \tag{27}\] \[P_{e_{x}}^{2} = \frac{1}{\sqrt{2}\kappa}\left(\epsilon^{2}+\frac{\epsilon^{2}(2A_ {+}-A_{\times})}{2}\cos(\omega(z_{0}-t_{0}))\right)\] (28) \[P_{e_{y}}^{2} = \frac{1}{\sqrt{2}\kappa}\left(-\epsilon^{2}+\frac{\epsilon^{2}(2 A_{+}+A_{\times})}{2}\cos(\omega(z_{0}-t_{0}))\right)\] (29) \[P_{e_{y}}^{1} = \frac{1}{\sqrt{2}\kappa}\left(\epsilon^{2}+\frac{\epsilon^{2}(A_ {\times})}{2}\cos(\omega(z_{0}-t_{0}))\right)\] (30) \[P_{e_{z}}^{3} = \frac{1}{\kappa}\epsilon^{2}. \tag{31}\] As the densitized triads are smeared over two-dimensional areas and acquire dimensions, the momenta are defined with the dimensional constant \(1/\kappa\), \(\kappa=8\pi G/c^{3}\) to make the variables dimensionless. In the quantum version, this acquires the role of \(1/\hbar\kappa=1/l_{p}^{2}\), where \(l_{p}\) is the Planck length. The coherent states Figure 1: (**a**) Building block for the decomposition of the 3-geometry. (**b**) Example of one of the smearing surfaces to calculate the momenta. are defined as peaked at the classical values of a complexified SL(2,C) element as specified by Hall [31], \[g_{e}=\exp(iT^{I}P_{e}^{I})h_{e},\] and a detailed coherent state can be written for the above classical phase space, now described only using the discrete one-dimensional smeared holonomies and corresponding momenta. Note these "coherent states", as defined in [22] for LQG, are representative semiclassical states and are not exactly identifiable as "coherent states" as in completely solvable Hamiltonian systems. However, these states have minimal uncertainty in the time slice they are defined in. Next, we calculate the semiclassical corrections to the geometry by using the results of [13]. The coherent states are given for one such discrete element \(e\) and the LQG smeared variables as, \[\psi^{t}(g_{e},h_{e})=\sum_{j}(2j+1)\exp(-\tilde{t}j(j+1)/2)\chi_{j}(g_{e}h_{e }^{-1}), \tag{32}\] where \(\chi_{j}(h_{e})\) is the character of the \(j\)th irreducible representation of SU(2). One can find the expectation value of the momentum operator \(\hat{P}_{e}^{I}\) in this state, and one obtains it to the first order in the semiclassical parameter \(\tilde{t}\)[13] \[\langle\psi^{t}|\hat{P}_{e}^{I}|\psi^{t}\rangle=P_{e}^{I}\left(1+\frac{\tilde {t}}{P_{e}}\left(\frac{1}{P_{e}}-\coth(P_{e})\right)\right)=P_{e}^{I}\left(1+ \tilde{t}f(P_{e})\right), \tag{33}\] where \(P_{e}=\sqrt{P_{e}^{I}P_{e}^{I}}\) and \(f(p)=(1/p)(1/p-\coth(p))\). Therefore, one can calculate the semiclassical corrections to the metric of the classical gravitational wave, if one writes a coherent state for each discrete element \(e\) which comprises the entire Minkowski three-volume divided into cubic cells as in the figure. The vertices of the cube which are shared by three+three coherent states and these can have SU(2) intertwiners [32], but the nature of the corrections remain the same. Note these coherent states are not exactly similar to the coherent states for photons, which are Abelian. These coherent states are non-Abelian in nature. In fact, if we take the pure Minkowski space and use the coherent state as a measure of the quantum fluctuation, what would we generate as the corrected metric? All the \(P_{e}^{I}\)'s for the Minkowski metric can be obtained as given above and, in the limit, \(A_{+,\times}=0\) would represent the Minkowski metric. 
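To get a feel for the size and sign of the correction in Equation (33), the factor \(f(p)\) can be evaluated numerically. The short sketch below (plain Python written for this formula; it is not code from the cited works) shows that \(f(p)\to-1/3\) for small dimensionless momenta and falls off as \(-1/p\) for large ones, so the leading semiclassical correction to \(\langle\hat{P}_{e}^{I}\rangle\) is always a small suppression of order \(\tilde{t}\).

```python
import numpy as np

def f(p):
    """Correction factor f(p) = (1/p) * (1/p - coth(p)) from Equation (33)."""
    return (1.0 / p) * (1.0 / p - 1.0 / np.tanh(p))

# relative shift of <P> per unit of the semiclassical parameter t_tilde
for p in [1e-3, 1e-1, 1.0, 10.0, 100.0]:
    print(f"p = {p:8.3f}   f(p) = {f(p):+.6f}")
# small p: f(p) -> -1/3;  large p: f(p) -> -1/p -> 0
```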
In this particular gauge, the corrections generate semiclassical fluctuations in the \(\eta_{xx}\), \(\eta_{yy}\), and \(\eta_{zz}\) components but not in the \(\eta_{xy}\) directions. Next, we discuss the fluctuations to the gravitational wave metric as generated from the coherent state which peaks at the gravitational wave metric. Obviously, the metric fluctuates and generates semiclassical corrections to the geometry at order \(\tilde{t}\). We set the semiclassical parameter (which has to be dimensionless) as the squared ratio of the Planck length to the gravitational wavelength, \(\tilde{t}=l_{p}^{2}/\lambda^{2}\). We take the wavelength because it is the length scale which characterizes the wave system. A gravitational wave that might generate detectable semiclassical fluctuations has to be of very high frequency. For example, a \(10^{35}\) Hz gravitational wave has a semiclassical parameter of \(\tilde{t}\approx 10^{-16}\). In the above, have we predicted a "quantum origin" of the gravitational wave that would comprise the "graviton"? Obviously, the story here is not about particles or matter quanta, but about the quantum of geometry. The tiny area measurements in each basis state of the operator \(\hat{P}_{e}^{I}\) represent the "graviton", the condensate of which is represented by the coherent-state wave packet. From our perspective, the Minkowski geometry is thus not the gravitational vacuum, but is itself emergent from a semiclassical state. Therefore, one should not confuse the quantum gravity vacuum state with the "matter vacua". We suggest two ways to search for quantum gravity bounds/origins in a gravitational wave experiment: (i) as the coherent states are non-Abelian in nature, the expectation values of operators have semiclassical corrections which originate from self-interactions, and these can be detected for high-frequency gravitational waves; (ii) the search for individual "gravitons" or quanta of geometry would require much more precise instruments, able to resolve the coarse-graining of geometry itself. The latter (ii) will require further investigations, in particular about what the fundamental dynamical "quantum" of LQG is. One also has to determine whether there is a gauge-invariant observable which is measurable in experiments. Our questions seem to seek answers by quantizing matter and the gravitational degrees of freedom simultaneously. _However, due to the hierarchy problem, it is preferred that matter is quantized and the gravitational degrees of freedom are semiclassical in the current epoch._ In the combined Hilbert space of the matter and gravitational degrees of freedom \(H_{\rm matter}\otimes H_{\rm grav}\), the combined matter-gravity state should be taken as \[|\Psi\rangle=|\psi_{\rm matter}\rangle\otimes|\psi^{\rm grav}_{\rm semiclassical}\rangle. \tag{34}\] For previous work on adding matter interactions in LQG, refer to [23]. Using criterion (i) and the idea that matter quanta interact with gravitational degrees of freedom at semiclassical length scales, one finds that the semiclassical fluctuations of the metric are relevant. We therefore calculate the metric corrections as predicted from the coherent states for LQG constructed by Thiemann and Winkler [22] and as observed in [13].
They emerge as \[g_{xx} = (1+h_{+})(1+2\tilde{t}\ f(P_{e_{x}})) \tag{35}\] \[g_{yy} = (1-h_{+})(1+2\tilde{t}\ f(P_{e_{y}}))\] (36) \[g_{xy} = h_{\times}(1+\tilde{t}\ f(P_{e_{x}})+\tilde{t}\ f(P_{e_{y}}))\] (37) \[g_{zz} = 1+\tilde{t}\ f(P_{e_{z}}). \tag{38}\] The gauge invariant momenta are found to be: \[P_{e_{x}} = \frac{\epsilon^{2}}{\kappa}\left(1+\frac{1}{2}h_{+}\right) \tag{39}\] \[P_{e_{y}} = \frac{\epsilon^{2}}{\kappa}\left(1-\frac{1}{2}h_{+}\right)\] (40) \[P_{e_{z}} = \frac{\epsilon^{2}}{\kappa}. \tag{41}\] The continuum limit is obtained using \(\lim_{\epsilon\to 0}P_{e}/\epsilon^{2}\). This gives the metric fluctuations at a location \((x_{0},y_{0},z_{0})\), and one can solve for the propagation of matter in this corrected metric. As is evident in the continuum limit, the corrections are functions of the classical triads, and thus depend only on the \(z\) coordinate. Moreover, the corrections are relevant only at one instant \(t=t_{0}\) of the spacetime. For a 100 Hz frequency, the gravitational wave will have a semiclassical correction of the order of \(10^{-84}\), which is far smaller than the gravitational wave amplitude. If one probes higher-frequency gravitational waves, and therefore shorter wavelengths, the Planck-scale coarse-graining will start manifesting itself and the effects might become evident in a gravitational wave detector. The Minkowski metric is also corrected semiclassically, and one can probe these corrections using quantum fields in these geometries.

### Summary

In this section, we gave a "semiclassical" state which could describe a gravitational wave at one instant. It predicted fluctuations which could be measurable for high-frequency waves \(\geq 10^{30}\) Hz. These frequencies are far above the ones observed in the LIGO detectors. From the current observation of gravitational waves, there are bounds on the "graviton mass". From LIGO, the bound is \(1.2\times 10^{-22}\ {\rm eV}\). This bound does not, by itself, shed light on the origin of such a mass. Theoretically, the graviton mass can originate from quantum corrections to the Einstein theory, as well as from matter interactions which preserve diffeomorphism invariance. In this review, we do not discuss massive gravitons.

## 3 Search for Hawking Radiation and Primordial Black Holes

The discovery that quantum mechanics near black hole horizons results in particle creation originates in the paper by S. W. Hawking [33]. In that paper, a quantum field vacuum was time-evolved in the collapsing geometry of a star. The quantum state evolved into a thermal state, with a temperature inversely proportional to the mass of the black hole. In [33], it was shown that the temperature of a solar-mass black hole is of order \(10^{-8}\) K. However, such a black hole would not radiate into its surroundings, which are at 2.78 K. This led to the search for black holes with mass \(\sim 10^{14}\,\)g, which could have formed in the early universe. Due to the Chandrasekhar limit, astrophysical black holes have a bounded mass if formed from stellar collapse. On the other hand, early universe density fluctuations can lead to the formation of tiny black holes, with horizon sizes of fractions of a millimetre. These black holes have intrinsic temperatures higher than the current CMB temperature of 2.78 K. Even if the early universe had been hot, as the primordial universe cooled down, these black holes would start radiating and eventually evaporate or form Planck-size remnants.
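As a rough numerical check of the temperatures quoted above, one can evaluate the standard Hawking formula \(T_{H}=\hbar c^{3}/(8\pi G M k_{B})\). The short script below is only an illustration added for orientation (the formula and constants are standard; the script is not part of the cited analyses).

```python
import numpy as np

hbar = 1.054571817e-34   # J s
c    = 2.99792458e8      # m / s
G    = 6.67430e-11       # m^3 kg^-1 s^-2
kB   = 1.380649e-23      # J / K
Msun = 1.989e30          # kg

def hawking_temperature(M_kg):
    """Hawking temperature of a nonrotating (Schwarzschild) black hole of mass M in kg."""
    return hbar * c**3 / (8.0 * np.pi * G * M_kg * kB)

print(f"T_H(1 solar mass) = {hawking_temperature(Msun):.2e} K")   # ~6e-8 K, i.e. of order 1e-8 K
print(f"T_H(1e14 g)       = {hawking_temperature(1e11):.2e} K")   # ~1e12 K, i.e. kT of order 100 MeV
```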
### Formation of Primordial Black Holes (PBH) The story of the collapse of matter to form black holes is well-studied in the work of Choptuik [34]. Scalar data in an initial slice undergo collapse, and the mass of the black hole formed has a scaling equation. This physics is true for early universe cosmology. It is noted that the matter undergoing collapse is taken as dust in most calculations and the Fermion/quark composition (required for the Chandrasekhar limit) of the cosmic soup is mostly ignored. For a comprehensive review of primordial black hole formation, one is referred to [6]. Here, we briefly outline the methods used to study matter collapse in the early universe. One of the main ingredient in the study of collapse in the early universe is Jean's instability. This instability characterizes density fluctuations in a fluid. The formula for Jean's instability is obtained by equating the time for free fall (or the time taken for an object of radius \(R\) to collapse under its own gravity) to the time taken by a sound wave to cross the radius. It is therefore a critical radius for which a pressure wave in the fluid gets trapped. Jean's critical length can also be obtained by solving for perturbations flowing in a fluid and the self-gravitational force generated by the perturbation. In the following, we discuss Jean's instability. ### Jean's Instability In this section, we discuss the collapse in a fluid of density \(\rho\). This process also gives a rough description of the physics of a "density" collapsing under "perturbations" or under its own weight. The time for "free fall" of a mass in an elliptic orbit of eccentricity one, according to Kepler's laws (of planetary motion) is \[\tau^{2}=\frac{\pi^{2}}{2}\frac{R^{3}}{GM}, \tag{42}\] where \(M\) is the mass causing the orbit, and \(R\) is the distance from the focus of the ellipse. We use this to model self-collapse of a mass under its own gravity. If the mass collapses, then only half of this time is taken. Given that the total mass in a radius \(R\) of a spherical distribution of constant density \(\rho\) is \[M=\frac{4\pi}{3}R^{3}\rho, \tag{43}\] approximating the mass using this formula, the time for free fall is given as a function of density as \[\tau=\sqrt{\frac{3\pi}{32G\rho}}. \tag{44}\] If the speed of sound in the fluid is \(c_{s}\), then the time for sound to flow through a distance \(R\) is \[\frac{R}{c_{s}}. \tag{45}\] This time would be the same as that a pressure wave flowing through the medium would take. If the gravitational collapse time is greater than the pressure wave time, the mass is unstable, and the critical length scale of the fluid region is given as \[R_{\rm JL}=\left(\frac{3\pi}{32}\right)^{1/2}\frac{c_{s}}{\sqrt{G\rho}}. \tag{46}\] The same "collapse formula" can be derived using a spherical homogeneous mass \(M\), whose radius increases by a perturbation \(\Delta R=-\alpha R\), where \(\alpha\) is a small perturbation. The change in pressure using the formula \(\delta p/\delta\rho=c_{s}^{2}\) can be related to the change in density due to the compression, and this gives rise to a force and "acceleration" obtained as \[a_{p}=\frac{\delta p}{\rho_{0}R}=\frac{3\alpha c_{s}^{2}}{R}. \tag{47}\] In the above, we took \(\delta\rho=3\alpha\rho_{0}\), where \(\rho_{0}\) is the original density. Simultaneously the shrinking of the radius gives rise to an increase of the Newtonian acceleration \[a_{\rm g}=\frac{2GM\alpha}{R^{2}}. 
\tag{48}\] If the gravitational acceleration exceeds the "pressure acceleration", the mass is expected to collapse, which gives a critical length \[\frac{3\alpha c_{s}^{2}}{R_{C}}=\frac{2GM\alpha}{R_{C}^{2}}=\frac{4\pi}{3}\rho_{0}R_{C}^{3}\frac{2G\alpha}{R_{C}^{2}}\to R_{c}\propto\frac{c_{s}}{\sqrt{\rho_{0}G}}. \tag{49}\] Thus, the critical radius for the collapse in a fluid of density \(\rho\) is proportional to the speed of pressure waves \(c_{s}\) in the medium. Here, one of the important assumptions in the calculation of the speed of sound is that the entropy of the early universe fluid is conserved. We next discuss whether a change in the description of the early universe fluid might change this Jean's length. The above discussion on Jean's instability can be found in many references, including [35, 36].

### A Quantum Entropy Production Fluid and Jean's Instability

In the above Newtonian derivation of gravitational collapse, the requirement that the fluid be isentropic may not hold in the early universe. In fact, entropy production makes the universe evolve as an "open system", in which the big bang singularity is resolved [37]. We take a slight detour and discuss the situation where there is entropy production in the fluid, as anticipated in [37]. In [37], it is conjectured that spacetime can generate particles which add to the fluid, i.e., to the energy-momentum tensor of the Einstein equation. This particle creation is a quantum process and might add insight into the origins of today's cosmological observations. In [37], it is shown that in such open systems the cosmological singularity is not formed. In this review, we briefly discuss whether the open system allows for PBH formation. The conservation law for open thermodynamic systems is given as \[d(\rho V)+pdV-\frac{h}{n}d(nV)=0, \tag{50}\] where \(n\) is the particle number and \(h=\rho+p\) is the "enthalpy" of the system. In most irreversible systems, as in systems with chemical reactions, enthalpy is a measure of the energy of the system, and is a path-independent quantity. The thermodynamics of these systems is controlled by the chemical potential \(\mu\), and the entropy per unit volume "\(s\)" is defined through \[\mu n=h-Ts, \tag{51}\] with \(T\) being the temperature of the system. The pressure for this fluid is given as \[p=\frac{n\dot{\rho}}{\dot{n}}-\rho. \tag{52}\] If one assumes a fluid in the form of "radiation", i.e., \(\rho=aT^{4}\) and \(n=bT^{3}\), where \(a\) and \(b\) are dimensional constants [37], then from Equation (52) the equation of state is \(p=\rho/3\). In such an open system, if one derives the propagation equation of a "pressure wave", the conservation of mass and momentum equations are different. In previous work, the speed of sound in such a fluid was taken as \(c_{s}=\sqrt{1/3}\), evaluated at constant entropy, for the calculation of the Jean's instability. However, the speed of sound changes in a fluid with entropy production. We now derive the speed of a pressure wave in a gravitating, nonisentropic fluid whose dynamics are given by the equations above. To describe the propagation of pressure waves in such a system, one uses the following equations. For the conservation of mass in the fluid, one has \[\frac{\partial\rho}{\partial t}+\vec{\nabla}\cdot(\rho\vec{v})=\dot{n}_{i}, \tag{53}\] where we have the "convective" derivative of the density and any particle production on the other side of the equation.
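As a quick consistency check, one can verify symbolically that the radiation-like scalings \(\rho=aT^{4}\) and \(n=bT^{3}\), inserted into Equation (52), indeed return \(p=\rho/3\). A minimal sympy sketch (an illustration added here, not code from [37]):

```python
import sympy as sp

t = sp.symbols('t')
a, b = sp.symbols('a b', positive=True)
T = sp.Function('T')(t)

rho = a * T**4   # radiation-like energy density
n   = b * T**3   # particle number density

# Equation (52): p = n * (drho/dt) / (dn/dt) - rho
p = sp.simplify(n * sp.diff(rho, t) / sp.diff(n, t) - rho)
print(p)                         # -> a*T(t)**4/3
print(sp.simplify(p - rho / 3))  # -> 0, i.e. the equation of state is p = rho/3
```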
The conservation of momentum equation, or Euler's equation, gives (we assume that the fluid is not viscous) \[\frac{\partial(\rho\vec{v})}{\partial t}+\vec{v}\cdot\vec{\nabla}(\rho\vec{v})=-\vec{\nabla}p+\rho\vec{g}. \tag{54}\] In the above, the Navier-Stokes equations have been reduced by setting the viscosity to zero. On the right-hand side, there is a potential term which can be a gravitational potential term. In all discussions of the speed of sound, or the speed of pressure waves in the system, the velocity is taken to be small, and the density and pressure undergo perturbations. We assume no gravitational potential at this stage. If there is a linear perturbation in the velocity, density, and pressure of the fluid, with \(\dot{n}\) remaining the same, the perturbations lead to the following equations \[\frac{\partial\delta\rho}{\partial t}+\rho_{0}\vec{\nabla}\cdot\vec{\delta v}=0, \tag{55}\] and \[\rho_{0}\frac{\partial\vec{\delta v}}{\partial t}=-\vec{\nabla}\delta p. \tag{56}\] If the system is isentropic (and the background homogeneous), one can take a partial time derivative of Equation (55) and obtain \[\frac{\partial^{2}\delta\rho}{\partial t^{2}}+\rho_{0}\vec{\nabla}\cdot\frac{\partial\vec{\delta v}}{\partial t}=0. \tag{57}\] In the above, using Equation (56), one obtains \[\frac{\partial^{2}\delta\rho}{\partial t^{2}}-\nabla^{2}\delta p=0. \tag{58}\] In the isentropic approximation \[\delta\rho=\left(\frac{\partial\rho_{0}}{\partial p_{0}}\right)_{s}\delta p, \tag{59}\] one plugs this into the above and obtains \[\frac{\partial^{2}\delta\rho}{\partial t^{2}}-c_{s}^{2}\nabla^{2}\delta\rho=0, \tag{60}\] and one obtains the speed of propagation of the density perturbations as \[\frac{1}{c_{s}}=\sqrt{\left(\frac{\partial\rho_{0}}{\partial p_{0}}\right)_{s}}. \tag{61}\] If the fluid undergoes entropy changes, these induce additional density changes. One therefore obtains for nonisentropic fluids \[\delta\rho=\left(\frac{\partial\rho_{0}}{\partial p_{0}}\right)_{s}\delta p+\left(\frac{\partial\rho_{0}}{\partial s_{0}}\right)_{p}\delta s. \tag{62}\] If we use the thermodynamic relation for entropy production \[\delta s=\left(\frac{\partial s_{0}}{\partial p_{0}}\right)_{T}\delta p, \tag{63}\] then, in the formula for the "density perturbation" velocity, we have \[c=\sqrt{\frac{c_{s}^{2}c_{p}^{2}}{c_{s}^{2}+c_{p}^{2}}}, \tag{64}\] where \[\frac{1}{c_{p}^{2}}=\left(\frac{\partial\rho_{0}}{\partial s_{0}}\right)_{p}\left(\frac{\partial s_{0}}{\partial p_{0}}\right)_{T}. \tag{65}\] If we add the gravitational potential to Euler's equation, then the wave equation has an inhomogeneous term, a "driving force" obtained from the gradient of the gravitational potential. If we take the potential perturbation to be sourced by the density perturbation, \(\nabla^{2}\phi_{1}=4\pi G\,\delta\rho\), then \[\frac{\partial^{2}\delta\rho}{\partial t^{2}}-c^{2}\nabla^{2}\delta\rho=4\pi G\rho_{0}\delta\rho. \tag{66}\] We assume a plane wave solution for the density wave \(\delta\rho\sim e^{i(\omega t+\vec{k}\cdot\vec{x})}\), and we find \[\omega^{2}-c^{2}k^{2}=-4\pi G\rho_{0}, \tag{67}\] so a critical wave number is identified. For waves with wave numbers below this critical value, the system is unstable. The critical wave number is given as \[k^{2}=\frac{4\pi G\rho_{0}}{c^{2}}. \tag{68}\] Jean's instability thus sets in for perturbations having a wavelength greater than the critical value \[\lambda>\lambda_{J}=\sqrt{\frac{\pi}{G\rho_{0}}}\,c.
\tag{69}\] Unlike the previous estimate of the length scale where the gravitational instability sets in, here, the speed of sound is not a mere \(\sqrt{1/3}\) as given in the formula for an isentropic radiation fluid but is obtained using Equation (64). In a turbulent early universe, therefore, it is expected that the fluid would be nonisentropic. In addition, the open universe will ensure entropy production as spacetime generates particle species to add to the fluid. As the speed differs, so will the threshold for the formation of PBH. Note the origin of this change from an underlying quantum theory is implicit in the velocity change of the pressure wave. Note our results for a nonisentropic fluid is just one way to see how some of the formulas used for PBH might change; for other origins of change in Jean's instability formula in cosmic fluids, see [38]. ### PBH Formation How does one obtain the dynamics of formation of PBH in the early universe? It is postulated that the FLRW universe metric could have perturbations induced by the density fluctuations of the fluid. These can be modelled using a spherical symmetry, and the conditions for the formation of "trapped surfaces" or apparent horizons derived using the "Misner-Sharp" equations. These PBH can then accrete and grow in size, and there can be PBH formed of masses which are bigger than the solar masses of \(10M_{\odot}\)-\(30M_{\odot}\). A great deal of the current work on PBH discusses these and the fraction of PBH contributing to dark matter halos \(f_{PBH}\). For further reading on the PBH production and the interest in them as contributors to dark matter and physical processes such as microlensing, etc., refer to [8]. As the black hole formation follows the same numerical flow as in the spherical collapse obtained by Choptuik, the PBH's mass has the following "scaling" formula \[M_{\rm PBH}=K\ M_{H}(t_{H})(\delta_{m}-\delta_{c})^{\gamma}, \tag{70}\] where \(\delta_{m}=(\rho-\rho_{b})/\rho_{b}\) is the fluctuation in the fluid density over the Hubble density, at the radius where a compaction function is maximum. \(\delta_{c}\) is the fluctuation at the critical radius related to the Jean's instability in the fluid found earlier. \(\delta_{c}\) represents the threshold of black hole formation. This equation can only be trusted in the regime \(\delta_{m}-\delta_{c}\sim 10^{-2}\). \(M_{H}(t_{H})\) is the Misner-Sharp mass of the horizon, \(K\) is a numerical constant. \(\gamma\) is a universal scaling exponent and varies depending on the fluctuation profile and the equation of state of the fluid. This equation provides the basis for PBH formation, though using classical equations. The compaction function \(C(r,t)\) is defined as the excess of mass over the FLRW mass \(M_{b}\) defined as \(M_{b}=4\pi\rho_{b}R^{3}/3\), \[C(r,t)=2\frac{M(r,t)-M_{b}(r,t)}{R(r,t)}. \tag{71}\] If one takes the perturbation of the FLRW metric to be modelled by a function \(\zeta(r,t)\), in the FLRW metric three-slice as \(a^{2}(t)e^{2\zeta(r,t)}r^{2}d\Omega\), one gets a formula for the compaction function in terms of this parameterized fluctuation as \[C(r)=\frac{2}{3}\left(1-(1-r\zeta^{\prime}(r))^{2}\right). \tag{72}\] This facilitates the study of this function in terms of the curvature fluctuations of the metric. The various calculations of the "peak" values of this compaction function use different ensembles for the fluctuations and accordingly, obtain different values. 
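To make the compaction-function machinery concrete, the sketch below evaluates Equation (72) for an assumed Gaussian curvature profile, reads off the peak value \(\delta_{m}=C(r_{m})\), and feeds it into the critical scaling of Equation (70). The profile parameters and the constants \(K\), \(\gamma\), \(\delta_{c}\) are illustrative values of the kind quoted in the PBH literature; they are assumptions for this sketch, not numbers taken from this review.

```python
import numpy as np

def zeta(r, A=0.72, Delta=1.0):
    """Assumed Gaussian curvature-perturbation profile (illustrative only)."""
    return -A * np.exp(-r**2 / (2.0 * Delta**2))

def compaction(r, dr=1e-6):
    """Compaction function of Equation (72): C(r) = 2/3 * (1 - (1 - r zeta'(r))^2)."""
    dzeta = (zeta(r + dr) - zeta(r - dr)) / (2.0 * dr)   # numerical derivative
    return 2.0 / 3.0 * (1.0 - (1.0 - r * dzeta)**2)

r = np.linspace(1e-3, 5.0, 5000)
C = compaction(r)
r_m, delta_m = r[np.argmax(C)], C.max()                  # peak value delta_m = C(r_m)

# critical-collapse scaling of Equation (70), with assumed illustrative constants
K, gamma, delta_c, M_H = 4.0, 0.36, 0.5, 1.0             # M_H in units of the horizon mass
if delta_m > delta_c:
    M_PBH = K * M_H * (delta_m - delta_c)**gamma
    print(f"r_m = {r_m:.2f},  delta_m = {delta_m:.3f},  M_PBH = {M_PBH:.2f} M_H")
else:
    print(f"delta_m = {delta_m:.3f} is below threshold: the fluctuation disperses")
```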
It is postulated that when the compaction function exceeds a critical value, a collapse occurs; otherwise, the fluctuation dissipates away. The density contrast parameter is related to the peak value of the compaction function as \[\delta_{m}=C(r_{m}). \tag{73}\] In this article, we refrain from discussing the various ways of computing the PBH compaction function and only show one way in which a change in the threshold value \(\delta_{c}\) of PBH formation influences the collapse process. This critical value is related to Jean's instability in the cosmic fluid and, as shown previously, varies according to the approximations used. The dependence of the PBH formation formula on the nature of the fluid is discussed in [8]. As shown in Equations (69) and (64), the threshold for the onset of the instability of a fluid changes if quantum "particle creation" is allowed. In [37], the fluid exchanges particles with the gravitational "quantum field". In this open universe, there is no initial singularity [37], and, as we anticipate, the formation of PBH would also differ. The masses would be different, and the nature of the cosmological fluctuations of the gravitational metric would also differ according to the "entropy production" of this open universe. A more detailed calculation using quantum cosmology is required to determine the exact changes in the _theoretical_ predictions of the PBH mass and of PBH formation from the cosmic soup. PBH can form with masses ranging from the order of \(10^{5}\) g to \(10^{50}\) g, and therefore they can range from small black holes to larger-than-solar-mass black holes. The lower limit is based on the Planck mass and the upper limit is based on the cosmological mass. How can we verify the existence of PBH? The existence of PBH can be verified using the observation of particles received on Earth, which might have originated from PBH through the Hawking radiation process. It is this process which we describe next. We discuss PBH whose evaporation time \(\propto M^{3}\) is about the age of the universe. These PBH might have radiated away their mass in the form of photons and neutrinos and would provide evidence for the phenomenon of Hawking radiation. The mass of these black holes is estimated as \(<10^{14}\)g. Curiously, there was an attempt to find quantum gravity effects on PBH production using loop quantum cosmology (LQC) corrections to the scale factor and the density [39]. The authors found that, using the LQC-corrected early universe cosmology, the production of PBH was theoretically increased compared to estimates from other theoretical models such as Brans-Dicke gravity.

### Evaporation of PBH

The mechanism of radiation from black holes can be studied using the power law for the emission of particles. In the 1970s [33, 40, 41], one typically calculated the power law using Hawking's formula for the particle flux from black holes. The total energy radiated per unit time from a PBH of Hawking temperature \(T_{H}\) is given as \[\frac{dE}{dt}=\int d\omega\int d\Omega\sum_{slm}\frac{\omega\,\Gamma_{\omega slm}}{\exp(\omega/T_{H})\pm 1}, \tag{74}\] where \(\Gamma_{\omega slm}\) is the grey-body factor for the black hole geometry and represents matter waves scattering off the gravitational potential outside the black hole. \(s,l,m\) represent the spin and angular momentum quantum numbers of particles with frequency \(\omega\). The sign in the denominator is positive for fermions and negative for bosons. The Hawking temperature for a nonrotating black hole is inversely proportional to the mass.
The grey-body factor is calculated using the solutions of the classical equation of motion of the particles in the black hole background and is a function of the spin, angular momentum, mass, and frequency of the emission. The fraction of power radiated in different species can be calculated. The total power radiated can be calculated numerically as \[P=2.011\times 10^{-4}\,\hbar c^{6}G^{-2}M^{-2}, \tag{75}\] where \(M\) is the mass of the black hole. Most of this is radiated out in the form of neutrinos (81.4%), with 16.7% as photons and 1.9% as gravitons, as long as the black holes have mass \(M>10^{17}\) g [40]. After the black hole has shrunk further, the temperature being higher, it also radiates massive particles such as muons, electrons, and positrons. For this range of black holes, \(10^{14}\) g \(<M<10^{17}\) g, the power radiated was found to be \[P=3.6\times 10^{-4}\,\hbar c^{6}G^{-2}M^{-2}, \tag{76}\] of which 90% is equally divided among electrons, positrons, and neutrinos, 9% goes into photons, and 1% into gravitons [40]. In this work, when computing the power of Hawking particles, numerical calculations of the grey-body factors were used, and the above division into fractions was based on the spin of the particles. The emission of massive particles would require a different calculation, but for detection on Earth, the massless particles are the relevant ones. In a follow-up work [41], the emission of gamma rays with energy of about 120 MeV was discussed, and a study of "gamma ray bursts" from evaporating PBH was introduced. There, a mass distribution was assumed for the PBH, and this is an ingredient in the current analysis of the data received on Earth. The search for Hawking radiation phenomena in the universe is thus a search for primordial black holes and the particles emitted from them. There are several searches for primordial black holes using gamma-ray bursts which might be evidence of these black holes evaporating. Next, we describe some of these searches in detail and provide a bibliography.

### Archived Data

The Imaging Compton Telescope (COMPTEL) [42] was decommissioned in 2007, but its archived gamma-ray data remain available for analysis. Searches in these data have placed bounds on primordial black holes (PBH) of mass \(<10^{17}\) g [43].

### Gamma-Ray Bursts

There are several satellite-based experiments which are operational or at the planning stage, such as AMEGO and e-ASTROGAM. AMEGO is an abbreviation for the All-sky Medium Energy Gamma-ray Observatory experiment and comprises a silicon tracker, a cesium iodide calorimeter, and a scintillator anti-coincidence detector. All of these will form the payload of a satellite. The detector will operate in the MeV range and provide a wider field of view than the Fermi-LAT detector. This detector is planned by NASA. e-ASTROGAM is a European gamma-ray mission proposal based on similar instrumentation as AMEGO [44]. The e-ASTROGAM project aims to observe the energy range of 0.3 MeV to 3 GeV. It also aims to be more sensitive at a given energy than previous instruments. These instruments will send data about gamma-ray bursts and other sources which will give clues about the existence of primordial black holes in the early universe.

### HESS

HESS is a gamma-ray observation experiment using an array of imaging atmospheric Cherenkov telescopes sensitive in the TeV range. The telescopes are in Namibia.
We report on the techniques of the HESS experiment in detail here as an example, but it is one of several developments for PBH observations [45]. As PBH smaller than \(10^{17}\) g might have evaporated by now, one searches for gamma-ray burst signals. The PBHs are expected to have evaporated with an explosion of gamma rays, which have high energies and last only a few seconds. Using the statistical methods of Feldman and Cousins [46], one can estimate the rate density of evaporating PBH, \(\dot{\rho}_{PBH}\), with 95% and 99% confidence levels. Further, we discuss this experiment's data analysis [45] in detail to illustrate the methodology of the search for PBH. Let us say an unknown parameter \(\mu\) is being assessed using the measurements of a variable \(x\). Usually, one uses Bayesian statistics to estimate the "belief" in a system's parameter being \(\mu_{t}\). This is given using the formula \[P(\mu_{t}|x_{0})={\cal L}(x_{0}|\mu_{t})\frac{P(\mu_{t})}{P(x_{0})}, \tag{77}\] where \({\cal L}(x_{0}|\mu_{t})\) is the "likelihood" of obtaining \(x_{0}\) given \(\mu_{t}\). However, it is assumed that there is prior knowledge of the probability \(P(\mu_{t})\) of finding \(\mu_{t}\) independent of what \(x_{0}\) is, which might not be the case. The probability \(P(x_{0})\) can be absorbed in the normalization of the conditional probability. In Bayesian methods, the belief in finding \(\mu_{t}\) given the measured values of \(x\) is expressed as a "confidence". This is mathematically \[\int_{\mu_{1}}^{\mu_{2}}P(\mu_{t}|x_{0})d\mu_{t}=\alpha, \tag{78}\] where \(\alpha\) is the degree of confidence for \(\mu_{t}\) to be in the confidence interval \([\mu_{1},\mu_{2}]\). In [46], a variation of this is given for estimating the value of a parameter \(\mu\) given the measurements of the variable \(x\). If one takes the ratio of two likelihoods, then the "prior knowledge" required in Bayesian statistics is no longer needed: \[R=\frac{{\cal L}(x|\mu)}{{\cal L}(x|\mu_{\rm best})}, \tag{79}\] where \(\mu_{\rm best}\) is the value of the parameter which maximizes the conditional probability. This ratio determines the acceptance region in the \(x\) variable for a given value of \(\mu\). A sum of the observation probabilities in decreasing order of \(R\), until the required confidence limit is reached, provides a good estimate of the confidence intervals or upper limits for a parameter. In the HESS observations, gamma rays were detected using the Cherenkov telescopes on the ground. The number of photons detected could vary from one to infinity in a given time interval \(\Delta t\). A time interval of \(\Delta t=10\) s was taken for this purpose. It is assumed that the detection of photon "clusters" of size \(k\) follows a Poisson distribution \[P(k,N)=e^{-N}\frac{N^{k}}{k!}, \tag{80}\] where \(N(r,\alpha,\delta,\Delta t)\) is the number of \(\gamma\) rays emitted from a PBH at a distance \(r\), in the angular interval in the sky specified by \(\alpha,\delta\), in unit time \(\Delta t\). Integrating this over all space, i.e., \(r,\alpha,\delta\), and over all runs of the experiment, the number of significant clusters of photons detected was estimated to be \[n_{\rm sig}(k,\Delta t)=\dot{\rho}_{PBH}V_{\rm eff}(k,\Delta t), \tag{81}\] where \[V_{\rm eff}(k,\Delta t)=\sum_{i}T_{i}\int d\Omega_{i}\int dr\ r^{2}\ P(k,N)=\sum_{i}T_{i}\Omega_{i}\frac{(r_{0}\sqrt{N_{0}})^{3}}{2}\frac{\Gamma(k-3/2)}{\Gamma(k+1)}, \tag{82}\] where \(N_{0}\) is the number of photons emitted from a PBH at a distance of \(r_{0}\).
\(T_{i}\) is the live time of run \(i\) of the experiment, and \(\Omega_{i}\) is the solid angle of the observations. Based on the observed data, the statistical analysis using the techniques of Feldman and Cousins was implemented. The parameter being sought was \(n_{\rm sig}\), given \(n\) as the observed variable. Note that these photon clusters, which might be from evaporating PBH, were received along with the background photons, whose number was taken as \(\bar{n}\), or "off" photons. \[R=\prod_{n}\frac{{\cal L}(n|\bar{n}+n_{\rm sig})}{{\cal L}(n|\bar{n})}. \tag{83}\] Here, the maximal value of the likelihood function was taken as that of the background \(\bar{n}\). The \(\chi^{2}\) estimate of the above can be found as [46]: \[{\rm LNR}=-2\ln(R)=2\sum_{n}\left[n_{\rm sig}+n\left(\ln(\bar{n})-\ln(\bar{n}+n_{\rm sig})\right)\right], \tag{84}\] where \(n\) is the number of observed photon signals in the on position of the telescopes and \(\bar{n}\) is the mean number of observed signals in the off data. The latter is an estimate of the background photons, obtained by averaging over "scrambled" time intervals. In deriving the above, we used the Poisson distribution. This LNR had a maximum of 0.006 in the preliminary data for \(\Delta t=10\) s and 6240 runs of four of the five telescopes [45]. This showed that there was no significant PBH excess in the data. However, if one sets \(\mathrm{LNR}=4,9\), one can obtain upper-limit estimates for \(\dot{\rho}_{PBH}\) at the 95% and 99% confidence levels. The upper limits were found to be \[\dot{\rho}_{\mathrm{PBH}} < 2.5\times 10^{4}\ {\rm pc^{-3}\,yr^{-1}}\ \ (95\%), \tag{85}\] \[\dot{\rho}_{\mathrm{PBH}} < 5\times 10^{4}\ {\rm pc^{-3}\,yr^{-1}}\ \ (99\%). \tag{86}\] These data points were further updated with other experiments such as VERITAS, MILAGRO, FERMI-LAT, and SWGO [47]. A comparative plot of the experimental predictions of evaporating PBH or final bursts at the 99% confidence limit is given in Figure 2. The data for this are quoted from [48] (2021).

Figure 2: The upper estimates of the number of final bursts at the 99% confidence limit from some experiments [48].

For some recent updates in the field of constraints on PBH, see [49]. For recent data on HESS, one can refer to the experiment's website [50].

### Neutrino Experiments

The Hawking radiation from PBH releases neutrinos. The flux of these neutrinos as a function of PBH production, together with a further analysis of "secondary" neutrino production, has been analyzed. Data from several experiments were examined and showed no, or only a very small, allowed PBH abundance. Using a recent work [51], we comment on the results. A neutrino spectrum rate was defined using the Hawking emission spectrum as in Equation (74). Further, there can be secondary neutrino production due to the decay of hadrons produced initially: \[\frac{d^{2}N_{\nu}}{d\omega_{\nu}dt}=\int_{0}^{\infty}dM\,\frac{d\mathcal{N}}{dM}\left(\frac{d^{2}N_{\nu}}{d\omega_{\nu}dt_{\mathrm{prim}}}+\frac{d^{2}N_{\nu}}{d\omega_{\nu}dt_{\mathrm{sec}}}\right), \tag{87}\] where the black hole's mass distribution can be taken as a Gaussian log-normal profile, \[\frac{d\mathcal{N}}{dM}=\frac{1}{\sqrt{2\pi}\sigma M}\exp\left(-\frac{\ln^{2}(M/M_{\mathrm{PBH}})}{2\sigma^{2}}\right), \tag{88}\] or simply a delta function profile centred at \(M=M_{\rm PBH}\). In the above, \(M_{\rm PBH}\) is an average mass, and \(\sigma\) is the standard deviation, as the mass of the black hole is allowed to vary.
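To illustrate how the thresholds \(\mathrm{LNR}=4,9\) translate into upper limits, the toy script below evaluates the likelihood ratio of Equation (84) for a handful of invented on/off counts and scans \(n_{\rm sig}\) until each threshold is crossed. It is a schematic re-implementation for illustration only; it is not the HESS analysis code, and the counts are made up.

```python
import numpy as np

def lnr(n_sig, n_on, n_off_mean):
    """Log-likelihood ratio of Equation (84), summed over time intervals."""
    n_on = np.asarray(n_on, dtype=float)
    return 2.0 * np.sum(n_sig + n_on * (np.log(n_off_mean) - np.log(n_off_mean + n_sig)))

# invented example: 5 time intervals of observed "on" counts and a mean "off" (background) count
n_on = [3, 5, 4, 2, 6]
n_off_mean = 4.0

def upper_limit(threshold):
    grid = np.linspace(0.0, 20.0, 20001)
    vals = np.array([lnr(s, n_on, n_off_mean) for s in grid])
    return grid[np.argmax(vals > threshold)]   # first n_sig whose LNR exceeds the threshold

print("95% upper limit on n_sig (LNR = 4):", upper_limit(4.0))
print("99% upper limit on n_sig (LNR = 9):", upper_limit(9.0))
```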
The differential neutrino flux from extragalactic sources and from the Milky Way can be calculated and plotted using publicly available software [51]. The differential neutrino flux, plotted as a function of the energy \(\omega_{\nu}\), varied between \(10^{2}\) and \(10^{-5}\) as the energy varied from 1 to 100 MeV for PBH of mass \(10^{13}\) g. To obtain this result, the evaporated PBH were taken to constitute a fraction \(10^{-18}\) of the cosmic background. The experimental bounds obtained from the Super-Kamiokande data showed that, for PBH which have already evaporated, the abundance ratio is about \(10^{-17}\) for \(10^{13}\) g black holes at a confidence limit of \(90\%\). The question, of course, is what the above bounds imply for quantum gravity phenomenology. Whereas PBH production cannot be ruled out completely using the above estimation methods, the mechanism of PBH formation could be different, and the emission flux calculations could be greatly modified by intervening cosmic flows and quantum effects. In this respect, one has to wait for future experiments such as JUNO, DARWIN, ARGO, and DUNE, and perhaps for quantum cosmology predictions of PBH formation from a more fundamental theory such as loop quantum gravity. It is obvious from the above discussions that the detection of bursts of photons and neutrinos on Earth leaves only a very small window for PBH which would be evaporating now, i.e., those having masses of \(10^{5}\) g-\(10^{14}\) g. However, there remains the option that some PBH have not evaporated away but have formed remnants. These will still be candidate dark matter contributors. The fraction of PBH which contribute to dark matter and have not yet evaporated has also been estimated as \(\sim 10^{-3}\) for masses of the order of \(10^{16}\) g, as in [52]. There are other papers investigating this using various data sources such as microlensing, accretion disk luminosity, radio signals, anisotropies of the CMB, etc. We refer the reader to reviews in this field [8]; there are also discussions of PBH formation and evaporation using LQG-corrected metrics, though in reduced phase-space formulations [53]. In our opinion, although the search for what a gamma-ray burst or a neutrino flux from PBH may look like is now much more focused than earlier, the research is still nascent.

## 4 Event Horizon

In the initial days of the discovery of the black hole metric solution to Einstein's equation, the existence of the horizon was one of the most bizarre predictions. The existence of trapped surfaces in general relativity was later firmly established with the Raychaudhuri equation and the Hawking-Penrose singularity theorems. However, the debate continued on whether the event horizon existed, as it was unobservable. With the discovery of compact objects and the observation of X-rays from them, various models were tested for the existence of the event horizon. As the conclusions were model-dependent, the search continued, until the Event Horizon Telescope project produced an "assembled image" of the photon sphere surrounding a black hole [54, 14]. This confirmed some of the predictions about the behaviour of geodesics near a black hole's horizon, but did it confirm the presence of an event horizon? Perhaps not, but this is as "good as it gets". The snapshot of the photon sphere, assembled from eight radio telescopes, captured the electromagnetic waves circling a compact object.
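For scale, the critical impact parameter of the photon sphere, \(b_{c}=3\sqrt{3}\,GM/c^{2}\), fixes the angular diameter of the ring that the EHT imaged. Plugging in commonly quoted values for the M87* mass and distance (assumed here for illustration; they are not taken from this review) reproduces the \(\sim 40\) micro-arcsecond scale of the published image.

```python
import numpy as np

G    = 6.67430e-11        # m^3 kg^-1 s^-2
c    = 2.99792458e8       # m / s
Msun = 1.989e30           # kg
pc   = 3.0857e16          # m

M_m87 = 6.5e9 * Msun      # assumed M87* mass
D_m87 = 16.8e6 * pc       # assumed M87* distance

b_c   = 3.0 * np.sqrt(3.0) * G * M_m87 / c**2   # critical impact parameter of the photon sphere
theta = 2.0 * b_c / D_m87                       # angular diameter (small-angle approximation)
print(f"ring diameter ~ {np.degrees(theta) * 3.6e9:.1f} micro-arcsec")   # ~40 muas
```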
The question we are asking in this article is: can we use the observations of geodesics around a black hole to measure semiclassical physics? In a work using semiclassical states in loop quantum gravity [13], it was shown that quantum fluctuations could cause instabilities in black holes, and these could produce tangible detectable effects for astrophysical black holes [13]. The main results of the paper were the calculation of a nonpolynomial correction to the metric of the Schwarzschild black hole. The semiclassically corrected metric was shown to be of the following form \[ds^{2} = -\left(1-\frac{r_{g}}{r}-\tilde{t}\ h_{tt}\right)dt^{2}+\tilde{t }\ h_{rt}\ dtdr+\left\{\frac{1}{(1-r_{g}/r)}+\tilde{t}\ h_{rr}\right\}dr^{2}+ \tag{89}\] \[+\left(r^{2}+\tilde{t}\ h_{\theta\theta}\right)d\theta^{2}+\left( r^{2}\sin^{2}\theta+\tilde{t}\ h_{\phi\phi}\right)d\phi^{2}.\] where \(r_{g}\) is the Schwarzschild radius, and the location of the horizon is at \(r_{g}=2GM\), where \(M\) is the mass of the black hole. \(h_{tt},h_{rt},h_{rr},h_{\theta\theta}\), and \(h_{\phi\phi}\) are the perturbations motivated from the corrections to the metric [13]. The perturbations of the metric could be attributed to other quantum models of gravity, but we used the one motivated from [13], and a shift was generated, \(h_{rt}\), breaking the "static" nature of the metric. The \(\tilde{t}\) which appears in this coherent state was obtained using the length scales of the system and was thus a ratio of Planck's area to the area of the horizon \(\tilde{t}=l_{p}^{2}/r_{g}^{2}\). Using this, we solved for the geodesics of the black hole. The geodesics were taken as circular orbits and the radial coordinate \(r\) was solved as a function of the coordinate \(\phi\). These orbits described the trajectory of light rays which were incident on the black hole geometry from a distance, and the impact parameter measured the perpendicular distance of the light ray from the horizon. Using the invariant distance on the Schwarzschild geometry, one can write the equation of motion for the geodesic of a photon as a differential equation in the azimuth \(\phi\), which was taken as the affine parameter along the geodesic. The deviations in geodesic computations for the rotating black hole from the nonrotating black holes were small [55] but detectable. For rotating black holes, the cross section of the photon scattering might not be circular [55], but the difference was about 4%. However, quantum corrections might be different, and one needs to formulate coherent states for rotating black holes separately. The effect of the presence of "echoes" might still be true. The results stated in this paper thus apply to nonrotating black holes strictly but pave the way for realistic ones. If we arrange the terms in a way they can be grouped into terms which are zeroth order in \(\tilde{t}\) and then first order in \(\tilde{t}\) (in the equatorial plane), one gets [14]: \[\frac{1}{r^{4}}\left(\frac{dr}{d\phi}\right)^{2}+\frac{1}{r^{2}}\left(1-\frac {r_{g}}{r}\right)\left(1+\tilde{t}\ \frac{h_{\phi\phi}}{r^{2}}-\tilde{t}\ h_{rr}\right)=\frac{1}{b^{2}}\left(1+2 \ \tilde{t}\ \frac{h_{\phi\phi}}{r^{2}}-\tilde{t}\ h_{rr}+\tilde{t}\ \frac{h_{tt}}{1-r_{g}/r}\right). \tag{90}\] As one traces the trajectory through the entire path, the asymptotic angle of "scattering" from the black hole geometry emerges as a function of the impact parameter of the photon. 
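Before quoting the closed-form semiclassical relation, it is instructive to integrate the classical (\(\tilde{t}=0\)) limit of Equation (90) numerically and watch the winding angle grow without bound as the impact parameter approaches \(b_{c}=3\sqrt{3}M\). The sketch below works in geometrized units \(G=c=1\); it is a toy numerical integration added for illustration, not the elliptic-integral solution of the cited work.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

M   = 1.0                       # geometrized units G = c = 1
rg  = 2.0 * M                   # Schwarzschild radius
b_c = 3.0 * np.sqrt(3.0) * M    # critical impact parameter

def bending_angle(b):
    """Total bending angle of a photon with impact parameter b > b_c (classical limit)."""
    f = lambda u: 1.0 / b**2 - u**2 * (1.0 - rg * u)            # u = 1/r
    u_turn = brentq(f, 1e-12, 1.0 / (3.0 * M)) * (1.0 - 1e-9)   # outer turning point
    # substitute u = u_turn * (1 - v^2) to tame the square-root singularity at the turning point
    g = lambda v: 2.0 * u_turn * v / np.sqrt(f(u_turn * (1.0 - v * v)))
    phi, _ = quad(g, 0.0, 1.0)
    return 2.0 * phi - np.pi

for delta in [1.0, 0.1, 0.01, 0.001]:
    print(f"delta = b - b_c = {delta:7.3f}   bending angle = {bending_angle(b_c + delta):7.3f} rad")
```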
The solution is obtained using a set of elliptic integrals and one finds \[\exp(-\phi_{\infty})=\delta^{1+0.0203\ \tilde{t}}\ \exp\left(+\frac{0.47\ \tilde{t}^{1/2}}{(0.67\delta+0.225\ \tilde{t})^{1/2}}+0.23\ \tilde{t}+1.712\ \frac{\tilde{t}}{\delta}\right), \tag{91}\] where \(\delta=b-b_{c}\), and \(\phi_{\infty}\) is the asymptotic angle the geodesic makes as it re-emerges to the asymptotic region. The difference between the photon geodesic impact parameter and the impact parameter of the critical orbit, \(b_{c}=3\sqrt{3}M\), is expected to go to zero as the photon can orbit an infinite number of times around the horizon. One can see that Equation (91), in the limit \(\tilde{t}\to 0\), reduces to a linear term in \(\delta\). Most importantly, \(\delta\to 0\) as \(\phi_{\infty}=\mu+2n\pi\rightarrow\infty\). \(n\) counts the number of times the geodesic encircles the black hole, and this goes to infinity for the critical geodesic with the critical impact parameter. The photon circles the black hole an infinite number of times when the critical impact parameter is reached. If we include the semiclassical corrections, then the plot of \(w(\delta)\) (the RHS of Equation (91)) as a function of \(\delta\) shows that the function does not reach zero but bounces back (see Figures 3 and 4), and this we can associate with the presence of a quantization. This observation is commensurate with the work on fuzzballs and echoes [56]. In these models, the horizon is replaced by a "wall" at a particular distance from the black hole. In our calculations with the LQG coherent states [13], we found the explicit location of the "wall" as a function of the semiclassical parameter \(\tilde{t}\). We expect that our results can eventually be verified from observational data from astrophysical black holes [56].

Figure 3: Plot of the semiclassically corrected photon geodesic impact parameter relation. The plot shows a bounce as the distance from the critical radius approaches the semiclassical length scale of \(\tilde{t}\sim 10^{-8}\) units.

Figure 4: Plot of the semiclassically corrected photon geodesic impact parameter relation. The plot shows a bounce as the distance from the critical radius approaches the semiclassical length scale of \(\tilde{t}\sim 10^{-66}\) units.

## 5 Conclusions

As it happens, the search for quantum gravity in experiments is still nascent. However, we expect that in the early universe the length scales were quantum, and therefore the search for relics of quantum gravity is ongoing. There are a number of papers in this Universe special issue on quantum gravity phenomenology which discuss cosmology and the effect of quantum cosmology on observational physics. In this review, the experiments we discussed only provided bounds on the mass of the graviton and on PBH production. We discussed the quantum effects which could be "directly" observable in recent experiments, including gravitational wave detectors and Event Horizon Telescope images. We also reported on the numerous experiments which observe particles from distant celestial events on Earth. We showed theoretical calculations and reported on experimental bounds on Hawking emission from PBH. The experimental bounds did not violate any theoretical predictions. The observations provide directions for the experimental community to seek more precise measurements. The polarization maps from the EHT [57] and the planned launch of LISA [58] are ongoing efforts in that direction.
The study of fast radio bursts (FRB) has provided a further effort towards finding the quantum origins of astrophysical phenomena. The most promising experiments on Earth for probing the quantum effects of gravity remain the GW detectors, with the possibility that one would detect a "graviton", or its semiclassical version, in the near future. **Acknowledgments**: This article is written for the Universe special issue on Quantum Gravity Phenomenology. AD would like to thank the Universe editorial team, particularly Cici Xia, for making these two volumes possible. AD is also thankful to co-editor Alfredo Iorio for the collaboration.
2304.09807
VMA: Divide-and-Conquer Vectorized Map Annotation System for Large-Scale Driving Scene
High-definition (HD) map serves as the essential infrastructure of autonomous driving. In this work, we build up a systematic vectorized map annotation framework (termed VMA) for efficiently generating HD map of large-scale driving scene. We design a divide-and-conquer annotation scheme to solve the spatial extensibility problem of HD map generation, and abstract map elements with a variety of geometric patterns as unified point sequence representation, which can be extended to most map elements in the driving scene. VMA is highly efficient and extensible, requiring negligible human effort, and flexible in terms of spatial scale and element type. We quantitatively and qualitatively validate the annotation performance on real-world urban and highway scenes, as well as NYC Planimetric Database. VMA can significantly improve map generation efficiency and require little human effort. On average VMA takes 160min for annotating a scene with a range of hundreds of meters, and reduces 52.3% of the human cost, showing great application value. Code: https://github.com/hustvl/VMA.
Shaoyu Chen, Yunchi Zhang, Bencheng Liao, Jiafeng Xie, Tianheng Cheng, Wei Sui, Qian Zhang, Chang Huang, Wenyu Liu, Xinggang Wang
2023-04-19T16:47:20Z
http://arxiv.org/abs/2304.09807v2
# VMA: Divide-and-Conquer Vectorized Map Annotation System for Large-Scale Driving Scene

###### Abstract

High-definition (HD) map serves as the essential infrastructure of autonomous driving. In this work, we build up a systematic vectorized map annotation framework (termed VMA) for efficiently generating HD map of large-scale driving scene. We design a divide-and-conquer annotation scheme to solve the spatial extensibility problem of HD map generation, and abstract map elements with a variety of geometric patterns as unified point sequence representation, which can be extended to most map elements in the driving scene. VMA is highly efficient and extensible, requiring negligible human effort, and flexible in terms of spatial scale and element type. We quantitatively validate the annotation performance on real-world urban and highway scenes, as well as NYC Planimetric Database. VMA can significantly improve map generation efficiency and requires little human effort. On average, VMA takes 160 min for annotating a scene with a range of hundreds of meters, and reduces 52.3% of the human cost, showing great application value.

HD Map Generation, Divide-and-Conquer Scheme, Auto Labeling, Scene Reconstruction, Vectorized Representation, Autopilot, Scene Understanding.

## I Introduction

High-definition (HD) map contains a wide range of static map elements and encodes rich prior information of the driving scene, serving as the essential infrastructure of autonomous driving. However, map generation requires laborious human effort and high expenses, restricting the application and deployment of autonomous driving systems. In this work, we propose a systematic Vectorized Map Annotation (VMA) framework to improve the map generation efficiency of large-scale driving scenes. VMA is highly extensible and flexible in terms of map element types. HD map is composed of diverse map elements with a variety of geometric patterns: line-shaped elements (road curb, lane divider, stop line, _etc._), regular-shaped elements (arrow, speed bump, diamond marking, _etc._), and closed-shaped regions (crosswalk, road intersection, diversion, _etc._). Though some image processing methods [1, 2] and segmentation-based methods [3, 4, 5] can be integrated into the map generation pipeline for efficiency improvement, these existing methods can only tackle a small fraction of element types, and fail to model elements with rather complex geometric shapes. Differently, VMA models various map elements in a unified manner, _i.e._, map elements with different geometric patterns (line-shaped, regular-shaped and closed-shaped) are all generally abstracted as point sequence for automatic annotation. VMA unifies and simplifies the element representation. And this unified representation can be extended to most map elements in the driving scene, and is compatible with various map standards (like OpenDRIVE [6], Lanelet2 [7], and Apollo [8]). Spatial extensibility is another key problem of map generation. On one hand, scaling up the spatial range of map is troublesome, due to memory and computation limitations. On the other hand, some map elements (_e.g._, road curb) have a long spatial coverage. Keeping the continuity and completeness of these elements is difficult. To solve the spatial extensibility problem, we design a divide-and-conquer annotation scheme and a vectorized merging algorithm, so that VMA has no restriction on the spatial range and is widely applicable. An overview of the proposed framework is shown in Fig. 1.
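To make the unified point sequence representation concrete, one possible way to encode every map element as a typed point sequence is sketched below. This is purely illustrative Python (the field names and structure are assumptions for this sketch, not the released VMA code); line-shaped, regular-shaped, and closed-shaped elements differ only in their semantic label and a closure flag.

```python
from dataclasses import dataclass
from typing import List, Tuple

Point = Tuple[float, float, float]   # (x, y, z) in the global map frame

@dataclass
class MapElement:
    """A single vectorized map element stored as an ordered point sequence."""
    label: str            # e.g. 'road_curb', 'lane_divider', 'arrow', 'crosswalk'
    points: List[Point]   # ordered vertices of the polyline / polygon
    closed: bool = False  # True for closed-shaped regions (crosswalk, intersection, ...)

    def as_polyline(self) -> List[Point]:
        # closed elements repeat the first vertex so that downstream consumers
        # can treat every element as a plain polyline
        return self.points + [self.points[0]] if self.closed else self.points

# toy example: a stop line (line-shaped) and a crosswalk (closed-shaped)
stop_line = MapElement('stop_line', [(0.0, 0.0, 0.0), (3.5, 0.0, 0.0)])
crosswalk = MapElement('crosswalk',
                       [(0.0, 2.0, 0.0), (3.5, 2.0, 0.0), (3.5, 6.0, 0.0), (0.0, 6.0, 0.0)],
                       closed=True)
print(len(stop_line.as_polyline()), len(crosswalk.as_polyline()))   # 2 5
```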
Specifically, VMA reconstructs the static scene through crowdsourced multi-trip aggregation. Based on the reconstructed point cloud map (PCL map) and odometry information, VMA splits the scene into annotation units. We build up a MapTR-based [9] Unit Annotator model, which takes the unit PCL map as input and directly outputs the unit vectorized map. Then vectorized map merging is performed to merge all unit vectorized maps into a global vectorized map. Finally, point sparsification and human verification are performed to improve the annotation results. VMA is highly automatic and significantly improves map generation efficiency. To validate the effectiveness, we apply VMA in both urban scene and highway scene. On average, VMA takes 160 min for annotating a scene with a range of hundreds of meters and reduces 52.3% of the human cost. We also leverage VMA to perform lane boundary detection on NYC Planimetric Database for benchmark evaluation. VMA is of great application value in autonomous driving and can also be applied to other robotic scenarios. This paper's contributions are summarized as follows:

* We design a divide-and-conquer annotation scheme to solve the spatial extensibility problem of HD map generation, and abstract map elements with a variety of geometric patterns as unified point sequence representation.
* Based on the unified map representation and divide-and-conquer annotation scheme, we build up a systematic vectorized map annotation framework (termed VMA) for automatically generating HD map of large-scale driving scenes. VMA is highly automatic and extensible, requiring negligible human effort, and flexible in terms of spatial scale and element type.
* We quantitatively and qualitatively validate the annotation performance on real-world urban and highway scenes, as well as NYC Planimetric Database. VMA can significantly improve map generation efficiency and requires little human effort, showing great industrial application value.

The remaining part of the paper is organized as follows. A review of the related works on road marking extraction, line-shaped element extraction, online mapping, and lane detection is given in Section II. Section III describes the overall design of VMA, including the unified point sequence representation, the static scene reconstruction procedure, and the divide-and-conquer scene annotation scheme. Experiments are presented in Section IV. Finally, conclusions and discussions are presented in Section V.

## II Related Work

### _Road Marking Extraction_

Road markings are signs on road surfaces usually painted with highly retro-reflective materials, which make them noticeable to human vision and sensors of autonomous vehicles. Road markings are essential features of HD map because they provide rich traffic information (lane type, lane direction, _etc._) for navigation. Traditionally, road marking extraction is achieved through image processing methods [1, 2], _e.g._, image denoising, image enhancement, edge-based detection, k-means clustering, and regional growth. With the rapid development of deep learning, CNN-based methods have been widely employed in detecting and recognizing road markings. [10] predicts a semantic segmentation binary mask to distinguish lane markings from the background. [11] designs a wavelet-enhanced FCNN to segment high-resolution aerial imagery, and creates a 3D reconstruction of road markings based on least-squares line-fitting. [12] proposes a self-attention-guided capsule network to extract road markings from aerial images.
Different from these works, we model road marking as corner point sequence to achieve unified map element representation. ### _Line-shaped Element Extraction_ Existing methods for line-shaped element extraction can be divided into two kinds: segmentation-based methods [3, 4, 5] and iterative methods [13, 14, 15]. Segmentation-based methods Fig. 1: **The VMA framework. VMA reconstructs the static scene through crowd-sourced multi-trip aggregation, and adopts a divide-and-conquer pipeline for scene annotation. VMA abstracts map elements with a variety of geometric patterns as unified point sequence representation for vectorized map annotation.** predict segmentation probabilistic map from aerial images and then perform heuristic post-processing algorithms on the segmentation map to get the line-shaped map representation. DeepRoadMapper [3] segments the aerial images into interest categories, and connects the endpoint of the discontinued roads based on a specific distance threshold to alleviate the discontinuity issue of the road segmentation. [4] fixes the road network disconnectivity issue by predicting the orientation and segmentation of the road network and correcting the segmentation results with n-stacked multi-branch CNN. [5] adopts the encoder-decoder architecture as well as the attention mechanism, and introduces an edge detection algorithm to refine the segmented results. Iterative graph growing methods predict the map element in an auto-regressive manner (vertex by vertex). [13] designs a convolutional recurrent network to predict the road boundary. [14, 15] adopt better training strategy and graph growing policy to improve performance from the perspective of imitation learning. VMA models both line-shaped elements and road markings, as well as area elements, in a unified vectorized manner. ### _Online Mapping_ Online mapping aims to create vehicle-centric local map on the fly and demands high efficiency. With the development of PV-to-BEV transformation [16], previous methods [17, 18, 19, 20, 21, 22, 23] perform BEV semantic segmentation based on surround-view image data captured by vehicle-mounted cameras. To build vectorized semantic map, HDMapNet [24] adopts a segmentation-then-vectorization paradigm. To achieve end-to-end learning [25, 26, 27], VectorMapNet [28] adopts a coarse-to-fine two-stage pipeline and utilizes an auto-regressive decoder to predict points sequentially. MapTR [9] proposes permutation-equivalent modeling to exploit the undirected nature of semantic map and designs a parallel end-to-end framework. Online methods create vehicle-centric local map and focus on the balance of efficiency and accuracy. VMA aims at creating large-scale global map and focuses on map generation quality. ### _Lane Detection_ Lane detection [29, 30, 31, 32, 33, 34, 35, 36] is targeted at detecting lane elements in the scene. LaneATT [29] utilizes anchor-based deep lane detection model. LSTR [32] uses the Transformer architecture to output parameters of a lane shape model. GANet [30] formulates lane detection as a keypoint estimation and association problem and takes a bottom-up design. [31] performs 3D lane detection in BEV. BezierLaneNet [35] uses a fully convolutional network to predict 4-order Bezier lanes. PersFormer [36] proposes a Transformer-based architecture for spatial transformation and unifies 2D and 3D lane detection. VMA considers a wide range of map elements which include lane. ## III Framework The framework of VMA is depicted in Fig. 1. 
VMA first performs static scene reconstruction (Sec. III-A) and adopts a divide-and-conquer pipeline for scene annotation (Sec. III-B). The framework is highly automatic and requires little human effort. ### _Static Scene Reconstruction_ The static scene reconstruction procedure is mainly composed of multi-trip data collection, point cloud pre-processing (dynamic object filtering and motion distortion compensation), single-trip localization and mapping, and multi-trip point cloud aggregation (Fig. 2). **Multi-Trip Data Collection.** The data collection vehicles are equipped with multi-modal sensors (GPS, IMU, wheel, LiDAR and camera). GPS, IMU, and wheel collect position and motion information. LiDAR generates point cloud map. Camera collects color and texture information of the scene and is equipped for map verification. The data collection procedure for a target scene is crowd-sourced and composed of several trips. On each trip, one vehicle takes a specific route travelling over the target scene. Different vehicles take different routes to fully cover the target scene. **Dynamic Object Filtering.** Since the scene reconstruction of VMA includes temporal aggregation, the motion of traffic participants (vehicles, pedestrians, cyclists, _etc._) may result in noisy points in the point cloud map. Thus, before aggregation, VMA adopts 3D detection algorithm [37] to mark out objects with 3D bounding box and filter out points inside the box for each LiDAR frame. Fig. 2: **Static scene reconstruction**. VMA generates dense point cloud map through crowd-sourced multi-trip aggregation. **Motion Distortion Compensation.** LiDAR sensor scans the environment in a rotational manner. A frame (sweep) of point cloud is composed of a set of scans at different timestamps of a rotation period. When the ego vehicle is moving, the pose of LiDAR sensor is continuously changing within each period. The scans of a frame are not aligned, resulting in motion distortion. Based on IMU pre-integration, the ego motion is estimated and the motion distortion is calculated to de-skew LiDAR point clouds. **Single-Trip Localization.** In consideration of system efficiency, firstly the localization [38, 39, 40, 41, 42] process is independently and parallelly performed inside each trip. We adopt LIO-SAM [40] for single-trip offline localization. Specifically, initial odometry prior is generated through IMU pre-integration. Scan-matching [43] among LiDAR frames is used for odometry optimization. And GPS information is introduced to eliminate pose drift, which offers absolute pose measurements. Loop closure detection is performed to correct accumulated errors over time. **Multi-Trip Point Cloud Aggregation.** The localization results of different trips are further jointly optimized through graph optimization [44, 45]. The pose at each timestamp of each trip is defined as node of graph. We adopt Scan-Context [46] as frame descriptor for ICP, and obtain the relative pose between nodes of different trips. The relative pose of adjacent frames of the same trip is defined as the intra-trip edge, while the relative pose of two frames from two different trips is defined as the inter-trip edge. With the inter-trip and intra-trip edge constraints, we complete the overall pose graph optimization and get the optimized localization results. Based on the optimized localization results, all LiDAR frames are aligned for aggregation. 
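As a minimal illustration of the pre-processing and alignment steps above, the sketch below compensates motion distortion by interpolating the sensor pose over one sweep and then places a de-skewed frame into the global map frame with its optimized pose. It is our own planar (x, y, yaw), constant-velocity simplification, not VMA's implementation.

```python
import numpy as np

def yaw_to_rot(yaw: float) -> np.ndarray:
    """2D rotation matrix for a yaw angle (planar simplification)."""
    c, s = np.cos(yaw), np.sin(yaw)
    return np.array([[c, -s], [s, c]])

def deskew_sweep(points_xy, timestamps, pose_start, pose_end):
    """De-skew one LiDAR sweep under a constant-velocity assumption.

    points_xy : (N, 2) points in the sensor frame at their capture time
    timestamps: (N,) per-point times normalized to [0, 1] over the sweep
    pose_start, pose_end: (x, y, yaw) of the sensor at sweep start / end
    Returns the points expressed in the sensor frame at the sweep end time.
    """
    x0, y0, yaw0 = pose_start
    x1, y1, yaw1 = pose_end
    R_end_inv = yaw_to_rot(yaw1).T
    out = np.empty_like(points_xy, dtype=float)
    for i, (p, t) in enumerate(zip(points_xy, timestamps)):
        # Linearly interpolate the sensor pose at this point's capture time.
        x = (1 - t) * x0 + t * x1
        y = (1 - t) * y0 + t * y1
        yaw = (1 - t) * yaw0 + t * yaw1
        p_world = yaw_to_rot(yaw) @ p + np.array([x, y])     # point in the world frame
        out[i] = R_end_inv @ (p_world - np.array([x1, y1]))  # back into the end-of-sweep frame
    return out

def to_global(points_xy, pose):
    """Place a de-skewed frame into the global map frame using its optimized pose."""
    x, y, yaw = pose
    return points_xy @ yaw_to_rot(yaw).T + np.array([x, y])
```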
The aggregated dense point cloud map covers the whole target scene, and contains rich semantic information for scene annotation. ### _Divide-and-Conquer Scene Annotation_ VMA is targeted at annotating spatially unlimited large-scale scene, for which one-shot annotation is infeasible due to memory and computation limitations. Thus, a divide-and-conquer annotation scheme is designed to solve the spatial extensibility problem. Specifically, VMA splits the scene into overlapped annotation units, uses MapTR-based Unit Annotator to output the vectorized representation of map in each unit, and incrementally merges unit vectorized map to generate the global vectorized map. **Unified Point Sequence Representation.** VMA abstracts a wide range of map element as unified point sequence representation. Based on the geometric characteristics, map element is categorized into three main types, _i.e._, line element, discrete element and area element (see Table I and Fig. 1). Line elements mainly include road curb, lane divider, and stop line. Through sampling points at a fixed interval along the line, we get the corresponding point sequence representation \(\{p_{i}\}^{N}\) (\(N\) denotes the point number). By sequentially connecting these points, we get N-point polylines to represent line elements. Discrete elements include all regular-shaped elements (like arrow, speed bump, and diamond marking), and is represented by the four corner points \(\{p_{i}\}^{4}\) of the bounding box. The four points are ordered (front-left, front-right, back-right, and back left). The orientation of the element can be inferred from the order. Area elements include closed-shaped regions, _e.g._, cross-walk, road intersection, diversion, _etc_. We frame the annotation of these regions as boundary regression, _i.e._, through sampling points at a fixed interval along the region's boundary, we convert the boundary into an ordered point sequence \(\{p_{i}\}^{N}\), which forms N-point polygons to represent the regions. We annotate the regions by detecting these points. **Odometry-based Scene Splitting.** VMA splits the scene into fixed-sized annotation units based on the odometry information (Sec. III-A). Specifically, as depicted in Fig. 3, along the vehicle's trajectory, VMA samples ego positions at fixed time or distance intervals, denoted as \(\mathbb{P}=\{\text{pos}_{1},\text{pos}_{2},...,\text{pos}_{n}\}\). Centered at these sampled positions, VMA crops unit PCL map from the global PCL map. And the scene is split into a group of annotation units along the ego trajectory. **Unit Annotation.** We build up MapTR-based Unit Annotator model (see Fig. 1) which takes unit PCL map as input and directly outputs unit vectorized map. The unit PCL map is first sent into an encoder network, which flattens the unit PCL map into 2D map, extracts features with CNN, and outputs 2D scene feature map. Then a DETR-like [25, 9] decoder predicts map elements in a set-to-set manner. Each element corresponds to one sequence of learnable embeddings \(\{e_{i}\}^{N}\) (one embedding for one point). Each embedding is sent into a MLP layer to predict point's coordinate \((x,y)\) in the scene feature map. And then we sample features at \((x,y)\) of 2D scene feature map for predicting offsets \((\Delta x,\Delta y)\) to update \((x,y)\). Feature sampling and coordinate update are iteratively performed. Finally, the decoder outputs the point sequence representation of map element, as well as corresponding semantic Fig. 3: **Scene splitting. 
Ego trajectory and sampled ego position are respectively marked with red line and dot.** type and attribution (see Table I). The MapTR-based Unit Annotator is initially optimized with a small set of human-annotated scene data, and continuously optimized in a closed-loop manner (see Sec. III-B). The human-annotated labels are converted to the vectorized point sequence representation for supervision. In the training phase, we perform one-to-one hierarchical assignment [9] between human-annotated elements and predicted elements. After one-to-one hierarchical assignment, the predicted point sequence is paired with the GT point sequence point by point (as depicted in Fig. 4, \(P\) matches _Q_). Two kinds of supervision are adopted to constraint the geometry, p2I (point-to-line) and p2p (point-to-point) constraint. P2p constraint is the \(L_{2}\) distance between paired points, while the P2I constraint is the point-to-line distance between predicted point \(P\) and the two adjacent edges of \(Q\). P2p constraint is applied to key points which require exact localization, _i.e._, the start and end points of line element, and corner points of discrete elements, while p2I constraint is applied to other points for adaptively keeping the geometry. When \(L_{left}\) and \(L_{right}\) are non-collinear, p2I constraint is equivalent to p2p constraint and makes \(P\) converge to \(Q\); when \(L_{left}\) and \(L_{right}\) are collinear, under the p2I constraint \(P\) is not necessary to converge to \(Q\) but is on the line. In this case, the convergence constraint is relaxed and also keeps the same geometry of element. Semantic type and attribution of each element are predicted by a MLP layer and supervised with cross-entropy loss. **Incremental Vectorized Map Merging.** MapTR-based Unit Annotator outputs unit vectorized map, consisting of fragmented map elements because of the unit border truncation. Incremental vectorized merging is performed to merge all unit vectorized maps \(\{M_{local}^{i}\}\) into global vectorized map \(M_{global}\). Concretely, based on the ego trajectory, unit vectorized maps are incrementally and sequentially merged: \[M_{global}=(...(((M_{local}^{1}\oplus M_{local}^{2})\oplus M_{local}^{3}) \oplus M_{local}^{4})...\oplus M_{local}^{n}), \tag{1}\] where \(\oplus\) denotes the merging process. The merging strategies vary according to the geometric patterns of map element: * Line element (polyline): line elements are associated if the following conditions are met (refer to Fig. 5 (top)): two polylines overlap with each other; the overlapped length exceeds a threshold \(\theta_{line}\); the endpoints of two polylines are close to the other polyline. We merge the associated elements (red one and yellow one) into one polyline (green one) by simply removing the overlapped part. * Discrete element (corner point sequence). Chamfer distance \(D_{charfer}\) between two corner point sequences are calculated. If the distance is smaller than a threshold \(\delta_{discrete}\), the two elements are associated. Through non-maximum suppression, associated elements are merged into one. * Area element (polygon). If the IoU between two polygons exceeds a threshold \(\delta_{area}\), the two polygons are associated. Associated polygons are merged into one polygon with the union operation (see Fig. 5 (bottom)). * Attribution. The merging strategy of attribution is majority vote. The attribution of merged element is the majority of all associated elements. 
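The association and attribute-voting rules above can be sketched compactly; the thresholds and element layout below are illustrative only, not VMA's actual values or data structures.

```python
import numpy as np
from collections import Counter

def chamfer_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Symmetric Chamfer distance between two corner-point sequences of shape (N, 2)."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)  # pairwise point distances
    return d.min(axis=1).mean() + d.min(axis=0).mean()

def discrete_elements_associated(el_a, el_b, thresh=0.5):
    """Discrete elements are associated if their Chamfer distance falls below a threshold."""
    return chamfer_distance(el_a["corners"], el_b["corners"]) < thresh

def merge_attribute(associated_elements):
    """Majority vote over the attribution of all associated elements."""
    votes = Counter(e["attribute"] for e in associated_elements)
    return votes.most_common(1)[0][0]

# Illustrative usage with two overlapping arrow detections from adjacent units.
a = {"corners": np.array([[0, 0], [1, 0], [1, 2], [0, 2]], dtype=float), "attribute": "straight"}
b = {"corners": np.array([[0.1, 0], [1.1, 0], [1.1, 2], [0.1, 2]], dtype=float), "attribute": "straight"}
if discrete_elements_associated(a, b):
    merged_attribute = merge_attribute([a, b])  # -> "straight"
```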
**Point Sparsification.** After incremental vectorized merging, fragmented map elements are transformed into complete map elements which are represented by dense and redundant point sequence. For the convenience of application and storage efficiency, VMA adopts the Douglas-Peucker algorithm to simplify the representation of map element, which iteratively removes points from the polyline (or polygon) while keeping the geometry. **Human-in-the-Loop Verification.** Through the proposed divide-and-conquer scene annotation procedure, global vectorized map is obtained in a highly automatic manner. VMA introduces human verification for guaranteeing the annotation quality. After human verification, qualified vectorized map \begin{table} \begin{tabular}{l c c c} \hline \hline Geometric Type & Vectorized Representation & Semantic Type & Attribution \\ \hline \multirow{4}{*}{Line Element} & \multirow{4}{*}{\(N\)-Point Sequence (Polyline)} & Lane Divider & Direction: Unidirectional / Bidirectional; Line Type: Solid / Dotted / Fishbone;... \\ & & Curb & Curb Type: Ground Side / Road Side / Guardrail \\ & & Stop Line & - \\ & &... &... \\ \hline \multirow{4}{*}{Discrete Element} & \multirow{4}{*}{Corner Point Sequence} & Arrow & Arrow Type: Straight / Turn Off / More Right / No Turn Left /...;... \\ & & Speed Bump & - \\ & & Lane Sign & Lane Sign Type: Bike Lane / Bus Lane \\ & & Marking & Marking Type: Diamond Marking / Inverted Triangle Marking \\ \hline \multirow{2}{*}{Area Element} & \multirow{2}{*}{\(N\)-Point Sequence (Polygon)} & Crosswalk &... \\ & & Diversion & - \\ \hline \hline \end{tabular} \end{table} TABLE I: **Geometric type, vectorized representation, semantic type, and attribution of map element in VMA. Map elements with different geometric patterns (line elements, discrete elements and area elements) are all abstracted as unified point sequence representation for vectorized map annotation. VMA also outputs attributions of map element.** Fig. 4: **Geometric constraint of map element.** data are archived and regarded as extra training data for finetuning the MapTR-based Unit Annotator. With VMA running for a longer time, the annotation quality is continuously improved and less human effort is needed. ## IV Experiments ### _Datasets_ **Real-World Urban Scene and Highway Scene.** To validate the effectiveness of VMA, we apply VMA to map automatic annotation in both urban scene and highway scene. We collect a large amount of real-world scene data in a crowd-sourced way with the data collection vehicles equipped with multi-modal sensors. And the reported metrics are based on a validation set consisting of 850 urban scenes and 2082 highway scenes. 10231 urban scenes and 8334 highway scenes with map annotations are used to optimize the MapTR-based Unit Annotator. **NYC Planimetric Database.** For fair comparisions with methods based on aerial images, following the Topo-boundary benchmark [15], we perform lane boundary detection on NYC Planimetric Database. NYC Planimetric Database contains 2147 high-resolution aerial images (\(5000\times 5000\)) with road-boundary annotations. Each pixel corresponds to \(15.2cm\) in 3D space. According to the Topo-boundary benchmark [15], the high-resolution images are split into \(1000\times 1000\) images for evaluation. More details about the benchmark setting are available in [15]. ### _Experimental Settings_ For line and area element, every instance corresponds to a 50-point sequence (_i.e._, \(N=50\)). 
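As a concrete illustration of this fixed-length representation, a labeled polyline can be resampled to \(N\) points spaced evenly by arc length, for example as in the short sketch below (our own illustration, not the project's code).

```python
import numpy as np

def resample_polyline(points: np.ndarray, n: int = 50) -> np.ndarray:
    """Resample a polyline of shape (M, 2) to n points spaced evenly by arc length."""
    seg = np.linalg.norm(np.diff(points, axis=0), axis=1)  # segment lengths
    s = np.concatenate([[0.0], np.cumsum(seg)])            # cumulative arc length
    targets = np.linspace(0.0, s[-1], n)                    # evenly spaced arc-length positions
    x = np.interp(targets, s, points[:, 0])
    y = np.interp(targets, s, points[:, 1])
    return np.stack([x, y], axis=1)
```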
We train the MapTR-based Unit Annotator of VMA on eight RTX 3090 GPUs with batch size 8. By default, the spatial range of annotation unit is \(50m\times 50m\) for urban and highway scenes and 1000 pixels \(\times\) 1000 pixels for NYC Planimetric Database. We use AdamW [47] optimizer and Cosine Annealing [48] scheduler with weight decay 0.01 and initial learning rate \(2\times 10^{-4}\). ### _System Runtime_ VMA is a highly automatic map annotation system. We present the detailed runtime of the system in Table II. For static scene reconstruction, the data collection is performed crowd-sourced way and approximately takes 30min per scene. Dynamic object filtering, single-trip localization, and multi-trip point cloud aggregation respectively takes 15min, 40min, and 60min per scene on average. Thanks to the divide-and-conquer scheme, the scene annotation procedure is quite efficient. Unit annotation is parallelly performed on GPU and costs little time. VMA requires negligible human effort (12min/scene averagely for map quality verification). And the overall runtime for annotating a scene (with a range of hundreds of meters) is 160min on average. ### _Human Cost Comparison_ As shown in Table IV, on average, manual annotation requires 25min per scene. VMA requires 12min per scene and reduces 52.3% of the human cost. ### _Qualitative Evaluation_ Qualitative results of urban scene, highway scene, and NYC Planimetric Database are presented in Fig. 6, Fig. 7, and Fig. 8. VMA directly outputs high-quality global vectorized map for the whole scene, through a highly automatic divide-and-conquer scene annotation procedure. Human annotators only need to verify the automatic annotation results and adjust a small fraction of map elements. The map generation efficiency is significantly improved. We can also observe some failed cases in Fig. 6 and Fig. 7. Specifically, if the point cloud map is not dense enough and the scene features are not rich enough, the map annotation quality is not satisfactory. The solution to this problem is increasing the trip number of data collection for better covering the target scene. \begin{table} \begin{tabular}{c c c c} \hline \hline Procedure & Sub-procedure & Time & Hardware \\ \hline \multirow{6}{*}{Scene Reconstruction} & Data Collection & \(\approx\) 30min/scene & - \\ & Dynamic Object Filtering & \(\approx\) 15min/scene & GPU \& CPU \\ & Motion Distortion Compensation & \(\leq\) 1min/scene & CPU \\ & Single-Trip Localization & \(\approx\) 40min/scene & CPU \\ & Multi-Trip Point Cloud Aggregation & \(\approx\) 60min/scene & CPU \\ \hline \multirow{6}{*}{Scene Annotation} & Scene Splitting & \(\leq\) 1min/scene & CPU \\ & Unit Annotation & \(\leq\) 1min/scene & GPU \\ \cline{1-1} & Vectorized Merging & \(\leq\) 1min/scene & CPU \\ \cline{1-1} & Point Sparsification & \(\leq\) 1min/scene & CPU \\ \cline{1-1} & Human Verification & \(\approx\) 12min/scene & - \\ \hline Overall & - & \(\approx\)160min/scene & - \\ \hline \hline \end{tabular} \end{table} TABLE II: **Detailed runtime of VMA.** Fig. 6: **Automatic annotation results of urban scene.** Zoon in for better view. 
\begin{table} \begin{tabular}{l c c c c c c c c c c c c} \hline \hline \multirow{2}{*}{Methods} & \multicolumn{3}{c}{Precision \(\uparrow\)} & \multicolumn{3}{c}{Recall \(\uparrow\)} & \multicolumn{3}{c}{F1-score \(\uparrow\)} & \multirow{2}{*}{Naive \(\uparrow\)} & \multirow{2}{*}{APLS \(\uparrow\)} & \multirow{2}{*}{ECM \(\uparrow\)} \\ \cline{2-2} \cline{5-12} & 2 pixels & 5 pixels & 10 pixels & 2 pixels & 5 pixels & 10 pixels & 2 pixels & 5 pixels & 10 pixels & \\ \hline Seg.-then-skeleton.[14] & 0.607 & **0.890** & 0.928 & 0.505 & 0.736 & 0.768 & 0.533 & 0.778 & 0.811 & 0.698 & 0.577 & 0.550 \\ Deeproadmapper[3] & 0.578 & 0.854 & 0.898 & 0.475 & 0.694 & 0.725 & 0.505 & 0.740 & 0.775 & 0.719 & 0.615 & 0.959 \\ OrientationRefine[4] & **0.620** & 0.878 & 0.913 & **0.602** & 0.850 & 0.884 & **0.605** & 0.855 & 0.888 & 0.797 & 0.750 & 0.756 \\ RoadTracer[49] & 0.391 & 0.707 & 0.791 & 0.416 & 0.743 & 0.821 & 0.399 & 0.718 & 0.798 & 0.869 & 0.739 & 0.824 \\ VeeRoad[50] & 0.461 & 0.769 & 0.854 & 0.459 & 0.752 & 0.830 & 0.458 & 0.756 & 0.837 & 0.883 & 0.756 & 0.846 \\ ConvBoundary[13] & 0.510 & 0.845 & 0.934 & 0.455 & 0.692 & 0.752 & 0.465 & 0.737 & 0.805 & **0.958** & 0.671 & 0.786 \\ DAGMapper[51] & 0.407 & 0.751 & 0.868 & 0.353 & 0.649 & 0.747 & 0.371 & 0.684 & 0.787 & 0.896 & 0.679 & 0.758 \\ Ciuru[14] & 0.550 & 0.833 & 0.890 & 0.538 & 0.815 & 0.873 & 0.542 & 0.820 & 0.877 & 0.910 & 0.826 & 0.889 \\ Enhanced-Ciurb[15] & 0.560 & 0.839 & 0.894 & 0.542 & 0.811 & 0.864 & 0.549 & 0.821 & 0.874 & 0.925 & 0.822 & 0.893 \\ VMA & 0.533 & 0.872 & **0.935** & 0.530 & **0.856** & **0.913** & 0.528 & **0.858** & **0.916** & 0.900 & **0.865** & **0.901** \\ \hline \hline \end{tabular} \end{table} TABLE III: **Quantitative results on NYC Planimetric Database for comparison.** The best results are highlighted in bold font. For all the metrics, larger values indicate better performance. Fig. 7: **Automatic annotation results of highway scene.** Zoon in for better view. \(P_{\text{pre}}\). Precision is the ratio of predicted pixels whose \(cd_{\text{pre}}\) are smaller than preset threshold \(\tau\) in all predicted pixels. Recall is the ratio of ground truth pixels whose \(cd_{\text{gt}}\) are smaller than preset threshold \(\tau\) in all ground truth pixels. The metrics are formulated as: \[\text{Precision}=\frac{|\{p|d(p,P_{\text{gt}})<\tau,\forall p\in P_{ \text{pre}}\}|}{|P_{\text{pre}}|},\] \[\text{Recall}=\frac{|\{p|d(p,S_{\text{pre}})<\tau,\forall p\in P_ {\text{gt}}\}|}{|P_{\text{gt}}|}, \tag{2}\] \[\text{F1-score}=\frac{2\cdot\text{Precision}\cdot\text{Recall}}{ \text{Precision}+\text{Recall}}.\] **Naive Connectivity.** This metric measures the connectivity of predicted instance. Naive Connectivity uses the Hausdorff distance to match every predicted instance and ground-truth \begin{table} \begin{tabular}{l c} \hline \hline Annotation Manner & Human Cost \\ \hline Manual Annotation & \(\approx\)25min/scene \\ Auto Annotation w/ Verification (VMA) & \(\approx\)12min/scene \\ \hline \hline \end{tabular} \end{table} TABLE IV: **Human Cost Comparison.** Fig. 8: **Automatic annotation results of NYC Planimetric Database.** Zoon in for better view. instance. Every predicted instance will be assigned to one ground-truth instance, according to the smallest Hausdorff distance. Multiple predicted instances could be assigned to one ground-truth instance. \(M_{i}\) represents the number of predicted instances to which each true value is assigned. 
\(C_{i}=\frac{1(M_{i}>0)}{M_{i}}\) is the connectivity of ground-truth, which means the short prediction of long ground-truth instance will be punished. The final result is the average sum of connectivity of each ground-truth instance. **APLS.** APLS (Average Path Length Similarity) is proposed by [52] and has been widely used to evaluate topology correctness. It's based on Dijkstra's algorithm to measure the similarity of path. **ECM.** Naive Connectivity ignores the length of predicted instances. ECM (Entropy-based Connectivity) metric is used in [15], which assigns predicted longer instances with higher weight in the metric. ECM is formulated as: \[\begin{split} ECM&=\sum_{i=1}^{N}\alpha_{i}e^{-C_{i} },\\ C_{i}&=\sum_{j=1}^{M_{i}}-p_{j}log(p_{j}),\\ p_{j}&=\frac{\text{length}(G_{\text{pre}}^{j})}{ \sum_{I\in S_{i}}\text{length}(I)}.\end{split} \tag{3}\] In ECM, Hausdorff distance matching is replaced with pixel-level voting mechanism. Each pixel of predicted instance votes ground-truth instance with the closest Euclidean distance, and the predicted instance will be assigned to ground-truth instance with the most votes. And the connectivity value \(C_{i}\) is calculated from the entropy of dominance value. \(S_{i}\) is the set of all predicted instances assigned to one ground truth \(G_{\text{gt}}^{i}\). Dominance value \(p_{j}\) is the ratio of the length of predicted instance \(G_{\text{pre}}^{j}\) to the sum of the length of all the instance in \(S_{i}\). \(M_{i}\) is the number of \(S_{i}\). \(\alpha_{i}\) is the completion of \(G_{gt}^{i}\), which is equal to the sum of the length of assigned instances in \(G_{pre}\) projected onto \(G_{gt}^{i}\) divided by the length of \(G_{gt}^{i}\). Quantitative comparisons on NYC Planimetric Database are presented in Table III. VMA achieves the best results in terms of most metrics. Besides, VMA is efficient in both training and inference phase (on average 0.35s/scene for training and 0.11s/scene for inference), as shown in Table III. Quantitative evaluations of urban scene are presented in Table VI. For all types of map elements, even under the strict distance threshold 0.30\(m\), VMA achieves high recall, precision, and F1-score. It shows the unified point sequence representation well models map elements with various geometric patterns. Especially for lane divider and curb, which are the most important map elements of the driving scene, [email protected]\(m\), [email protected]\(m\), and [email protected]\(m\) all exceed 0.90. Some infrequent elements (like speed bump) correspond to limited training samples. With the training samples accumulated and the closed-loop annotator optimization, the annotation quality can be continuously improved. Quantitative evaluations of highway scene are presented in Table VII. The metrics of the highway scene are much higher than those of the urban scene. The highway scene is standardized and easy to annotate, while the urban scene is more complex and includes a large number of intersection scenarios. This difference accounts for the gap of metrics. Annotation accuracy of attributions is shown in Table VIII. Attribution predictions of VMA are quite accurate, requiring little human effort for correction. ## V Conclusion and Discussion We present VMA, a vectorized map annotation framework based on the unified map representation and the divide-and-conquer annotation scheme. VMA is highly automatic and extensible, requiring negligible human effort and flexible in terms of spatial scale and element type. 
VMA significantly improves map generation efficiency and requires little human effort, showing great industrial application value. We find that the quality of the scene reconstruction strongly affects the annotation quality. By introducing more sensor information (such as camera and radar) into the scene reconstruction, VMA can be further enhanced. The divide-and-conquer scheme of VMA can also be extended to large-scale lane graph construction based on LaneGAP [53]. We leave these directions as future work.
2310.13718
A Workflow Approach to Visualization-Based Storytelling with Cultural Heritage Data
Stories are as old as human history - and a powerful means for the engaging communication of information, especially in combination with visualizations. The InTaVia project is built on this intersection and has developed a platform which supports the workflow of cultural heritage experts to create compelling visualization-based stories: From the search for relevant cultural objects and actors in a cultural knowledge graph, to the curation and visual analysis of the selected information, and to the creation of stories based on these data and visualizations, which can be shared with the interested public.
Johannes Liem, Jakob Kusnick, Samuel Beck, Florian Windhager, Eva Mayr
2023-10-17T09:28:43Z
http://arxiv.org/abs/2310.13718v1
# A Workflow Approach to Visualization-Based Storytelling with Cultural Heritage Data ###### Abstract Stories are as old as human history--and a powerful means for the engaging communication of information, especially in combination with visualizations. The InTaVia project is built on this intersection and has developed a platform which supports the workflow of cultural heritage experts to create compelling visualization-based stories: From the search for relevant cultural objects and actors in a cultural knowledge graph, to the curation and visual analysis of the selected information, and to the creation of stories based on these data and visualizations, which can be shared with the interested public. Human-centered computing--Visualization--Visualization application domains; Applied computing--Arts and humanities
(iv) can be communicated in a data story to recipients. Even clearer, Lee et al. [11] describe the generation of stories as a three-phased visual data storytelling process: (i) explore the data, (ii) make a story, and (iii) tell the story. Recent approaches in narrative digital humanities also automate parts of these processes to generate structured story points from large knowledge bases, or to extract them from texts for their subsequent visualization [2]. In other fields (e.g., data journalism) there are visualization-based storytelling systems with similar layouts and workflows, such as for the analysis and content extraction of social media data [18]. However, aside from early, experimental work on fully AI-driven story creation, human minds remain the main drivers of the outlined multi-stage workflows. While the corresponding process models seem to follow a linear order at first sight, the actual workflows require various iterations and cycles of re-exploring and re-collecting further data for identifying and generating a story [11]. This is why a close interconnection of modules for querying, curating, and analyzing data, together with a module for the creation of visualization-based stories, might suit these workflows better and has been chosen in the InTaVia project. ## 3 The InTaVia Platform: Knowledge Graph, Curation & Visualization The H2020 project InTaVia ("In/Tangible European Heritage - Visual Analysis, Curation & Communication", [https://intavia.eu](https://intavia.eu)) develops a platform for the visual analysis, curation and communication of CH information. As a main source, it has assembled a transnational and multimodal knowledge graph for cultural heritage data to counteract some of the structural problems resulting from siloed and separated data collections in digital cultural heritage realms [10, 14]. Among these problems, it primarily works to overcome the separation of databases for a) "tangible" cultural objects (such as paintings, sculptures, buildings or literary texts) and b) "intangible", contextual information, such as the biographical information on cultural actors and artists contained in biographical and prosopographical lexica.
The InTaVia knowledge graph draws together data from both types of knowledge collections and currently includes 22,347,784 triples on 111,551 actors from four different European prosopographical data sources (Austria, Finland, the Netherlands, and Slovenia) with data on 172,370 related cultural heritage objects from Wikidata and Europeana [13]. Next to information about persons and cultural heritage objects, the knowledge graph includes entities for institutions (e.g., academies or universities), historical events (e.g., wars), and places. Whether person, assembly, or thing - InTaVia treats each of these entities as a potential protagonist of a story, so that it can have a history of "biographical" events (e.g., birth, creation, travel), including time stamps and relations to other entities. The system's architecture (Figure 2) was designed to support our guiding workflow model (Figure 1) with cultural heritage data for users of the InTaVia frontend1. For the first step in the workflow, querying the data, users can constrain query parameters (e.g., names, occupations, date ranges) either by form fields or by interactive visualizations (a visual query builder with "scented widgets" [4, 21]). For the creation (second step) and curation (third step) of data, users can create new data, or edit and enrich existing data sets locally, in InTaVia's "data curation lab" module. Footnote 1: [https://intavia.acdh-dev.oeanv.ac.at/](https://intavia.acdh-dev.oeanv.ac.at/) For the fourth step in the workflow, data can be visually analyzed either for individual entities from a geo-perspective in a detail view (Figure 3) or for multiple entities in linked coordinated views [22]. Different types of interactive visualizations are available based on the specific data and with regard to CH experts' related research questions: timelines, maps, and network visualizations. After they have explored the data and gained insights, users can move to the fifth step of the workflow model and assemble the results of their visually supported query, curation and analysis activities into visualization-based stories. Figure 3: Dürer's journey to The Netherlands (1520-1521) in space and time including travel directions, stops and events, and produced art works along the way. Visualized in the Visual Analytics Studio. Based on manually curated data [9]. Figure 2: Architecture of and information flow within the InTaVia platform, supporting a variety of cultural heritage data practices with visualization-based interfaces, including activities of searching, creating, curating, analyzing, and communicating for a large variety of user groups. ## 4 The InTaVia Storytelling Suite The Storytelling Suite implements a two-stage storytelling process by means of two functional sub-modules: the **Story Creator** allows users to create dynamic slideshow-based stories [17] and the **Story Viewer** displays the resulting interactive stories in desktop and mobile browsers. Both modules are developed as responsive JavaScript web applications using contemporary frameworks, including React [15] and Vue.js [20] for the interface, and MapLibre [12] and D3.js [5] for the interactive visualizations. ### Story Creator The Story Creator is the authoring component of the Visual Storytelling Suite, providing a user-friendly interface for creating and editing slide-based stories and enabling the seamless integration of visualizations and other multimedia elements into the story.
The data basis for stories is either originating from the InTAVia knowledge graph or from manually curated and imported data, which is then utilized in the following workflow. Users can initiate new stories or re-import previously exported stories. Throughout the project, we developed several multimodal, representative showcases that are available on the overview page. These stories provide a summary of functionalities and aim to inspire the authoring of new stories. By clicking on a story name, users can access the Story Creator and make changes to the content chunks and story flow including their embedded visualizations. **Content Creation and Editing.** At the core of the Story Creator is the slide editor (Figure 4 (2)), where users define visualizations and content chunks within a slide. Users can either incorporate visualizations created specifically for the story or reuse visualizations created during the prior analysis in the visual analytics studio. Each story slide has a predefined, yet user-selected layout dividing a slide into areas for visualizations and content chunks. We predefined a set of layouts which also work well on mobile devices. To ensure this, we limited the number of possible visualizations to one per slide, whereby a maximum number of two content panels (able to hold multiple elements) is allowed to enable the detailed discourse on multimedia content chunks. Users can customize the layout of content elements through drag and drop interactions within a slide's grid. **Slide Management and Flow Control.** In the story flow panel each slide is represented by a thumbnail card, which can be duplicated, deleted by buttons or rearranged using drag and drop interactions. The Story Creator incorporates a further feature called _nested slides_ that allows users to create drill-down stories [19] to optionally provide detailed marration steps between the current and next slide. This feature enhances the storytelling experience by enabling users to present additional information or delve into specific details on demand without interrupting the flow of the main narrative. Nested slides are useful for providing context, explanations, or supplementary content within a specific segment of the story. The inclusion of the nested slides feature empowers users to create multi-layered narratives, offering flexibility and depth in presenting information, and providing a more flexible, immersive and interactive storytelling experience. Figure 4: Overview of the Story Creator interface: (1) data panel containing collections, entities, and events, (2) main panel for adding visualizations and media content to slides, (3) story flow panel with slide overview and content toolbar. The selected and highlighted event dots on the map are zoomed and panned into the center of the screen during the presentation of slides in the Story Viewer to set the focus on these specific events. **Visualizations and Interactions.** To facilitate entity subsets, the Story Creator incorporates the data panel (Figure 4 (1)) as list of entities and events. From there users can enrich or create new visualizations by adding entities, such as persons or objects, and their related events into them by the press of a button or drag and drop interactions. In the current state of the prototype it is possible to utilize timelines and geo-spatial maps as visualizations within the stories. 
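To make the slide model described so far concrete, the following minimal Python sketch shows one possible shape of the underlying story data; the class and field names are purely illustrative and do not reflect the actual InTaVia (React/Vue.js) implementation or its schema.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class ContentChunk:
    kind: str      # "text", "image", "video", "quiz", or "html"
    payload: str   # text body, media URL, quiz definition, or raw HTML

@dataclass
class Slide:
    layout: str                                     # one of the predefined slide layouts
    visualization: Optional[str] = None             # at most one per slide: "map" or "timeline"
    highlighted_events: List[str] = field(default_factory=list)  # events focused during playback
    content: List[ContentChunk] = field(default_factory=list)    # up to two content panels
    nested: List["Slide"] = field(default_factory=list)          # drill-down slides shown on demand

@dataclass
class Story:
    title: str
    slides: List[Slide] = field(default_factory=list)

# Illustrative usage: one slide with a map visualization and a text annotation.
story = Story(
    title="Travels of an artist",
    slides=[Slide(layout="split", visualization="map",
                  content=[ContentChunk(kind="text", payload="First journey ...")])],
)
```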
Since only one visualization per slide is allowed, the choice of the used visualization type depends mainly on the specific focus of the slide either on temporal or geo-spatial contextualization. Both of them are similarly designed to display entities and their related events, supported by various coloring modes to differentiate visually between entity-identities, event-kinds and a temporal color scale [1] ranging from begin to end of the entity's or event set's time period. To minimize visual clutter both visualizations contain the option to cluster events in donut or dot cluster glyphs as shown in Figure 3. The various interactive elements of the visualizations and the interface are linked together to react on common interactions such as mouse-overs. Selecting events within the visualizations allows to focus on them during the story viewing to enable seamless transitions with animations throughout the story slides during the presentation in the InTaVia Story Viewer. **Annotations and Content Chunks.** By providing additional context and information through annotating slides with multimedia contents such as text, images, and videos users add the narration and increase tangibility. More advanced content types such as multiple-choice quizzes and the HTML-container hold the potential for gamification and further interactivity. Because of the flexible layout options, the various content types can be combined with the visualizations and arranged together. For example multimedia quizzes are possible through the alignment of images/videos and quizzes. The HTML ("Hypertext Markup Language") content type acts as container to include further applications such as three-dimensional object renderings, other web-applications or further visualizations, but also any other content by rendering of HTML. Each of the content chunks is adjustable through a settings dialog to personalize the story's visual elements and create visually compelling and tailored narratives. ### _Story Viewer_ The Story Viewer is an integral part of the Visual Storytelling Suite, designed to enable users to preview and experience the stories created in the Story Creator. The Story Viewer brings the created stories with their configured visualizations and content chunks to life, providing users with an interactive and engaging narrative experience. It enables seamless transitions between slides, smooth visualization rendering, and includes interactive elements, fostering exploration, user engagement and immersion in the storytelling process (Figure 5). ## 5 Case Study: Traveling with Albrecht Durer We illustrate the different functionalities of the Storytelling Suite with an exemplary story on the influences of Albrecht Durer's travel activities on his couvre, which was generated by the Durer expert Anja Grebe [7, 9]. Albrecht Durer (1471-1528) counts among the central figures of Western art history. Thanks to his extensive travel activities and his widely sold prints, his works quickly spread all over the globe and now form the pride of museums and collections worldwide [8]. Durer is arguably also one of the best biographically documented artists from the early modern times. One of the best documented parts of his life is the so-called Journey to the Netherlands, thanks to a related travel diary and to other contemporary sources [9]. 
For Albrecht Durer's life, three major journeys (two to Italy and one to the Netherlands) play an essential role, as they have been deemed an undeniable factor and driver of both the development of Durer's style and the development of his transnational reputation. Based on a geographical analysis of Durer's travel activities and related cultural objects, two stories have been generated: a macro story giving an overview of his life and work (see [https://youtu.be/yR2NtX7Dnow](https://youtu.be/yR2NtX7Dnow)) and a more fine-grained story focusing on his journey to the Netherlands. While these two stories have been developed separately, they could also be presented in a nested fashion, where the user starts from the biographical macro-story first, to explore parts of his life (such as the journey to the Netherlands) in greater detail on demand in nested slides. Figure 5: Overview of the Story Viewer interface on desktop (left) and mobile devices (right), which organize map (1) and media content (2) either next to each other or in an expandable panel. The four visited and selected stations on Albrecht Dürer's travel to Italy are in focus and annotated by texts and an image for the tangible narration. ## 6 Discussion Storytelling guidelines often assume that an intention or message stands above or behind every story that should be conveyed. However, in order to get there, a large number of practices to process and analyze different sources of information have to be conducted and orchestrated. In the case of cultural heritage topics, the information has to be found, collected and unified before it can be processed elaborately. InTaVia demonstrates how this workflow can be supported in one integrated platform--enabled by a modular architecture which provides tools to support these different but interrelated steps of the workflow: (i) searching for data on related cultural objects and actors in an integrated knowledge graph; (ii) (re-)creating missing data aspects; (iii) inspecting and curating relevant data; (iv) visually analyzing and representing the data; and (v) creating stories with these data and visualizations--as the presented case study shows in an illustrative manner. While many of the resulting stories created by cultural heritage experts are compelling, we realized that there are several limitations associated with our workflow: (1) A decisive factor for the productive exploratory analysis of CH data is a certain richness and interconnection of the knowledge graph--which is not given across the whole reach of the existing InTaVia graph.2 To overcome this problem, we take several routes: First, we currently aim to increase the interconnectedness of entities by means of different NLP and AI-based enrichment procedures. Secondly, we allow users to import their own datasets from different sources (manually generated or mapped to the InTaVia data model) and use our modules for visual analysis and storytelling on them. Footnote 2: For some well-connected clusters within the data, we can demonstrate the feasibility of the approach, e.g. by searching for actors related to Tuusula (an artist community at a lake in Finland) or the Wiener Künstlerhaus (an art association in Vienna). (2) Even though everyone tells stories, not everyone is a skilled storyteller--or knows how to select data and design visualizations.
We currently follow several threads and strategies to increase guidance for users: a) We conduct a survey on visualization-based storytelling in the digital humanities to explore and sketch out the corresponding design space. b) We study how specific design features of visualization-based stories influence the attention, interest, and engagement of recipients. c) We design guiding UI elements, which will be included in the interface to support users unfamiliar with visualization-based storytelling. The outlined workflow approach to visualization-based storytelling offers a chance to reach out to domain experts in the cultural heritage field, to catalyze intra- and inter-disciplinary collaboration when creating such stories, and to inform and provide the interested public with compelling stories on cultural topics in the context of museums, in cultural tourism, but also in classrooms. By creating compelling visualization-based stories for various users and application domains, we can catch the attention of casual users and raise awareness of important cultural topics in a wide range of arts and humanities fields. ###### Acknowledgements. We would like to thank the whole InTaVia team for the excellent and constructive collaboration, which enabled the presented work. The InTaVia project has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No. 101004825.
2303.03433
Fixed-domain curve counts for blow-ups of projective space
We study the problem of counting pointed curves of fixed complex structure in blow-ups of projective space at general points. The geometric and virtual (Gromov-Witten) counts are found to agree asymptotically in the Fano (and some $(-K)$-nef) examples, but not in general. For toric blow-ups, geometric counts are expressed in terms of integrals on products of Jacobians and symmetric products of the domain curves, and evaluated explicitly in genus 0 and in the case of $\text{Bl}_q(\mathbb{P}^r)$. Virtual counts for $\text{Bl}_q(\mathbb{P}^r)$ are also computed via the quantum cohomology ring.
Alessio Cela, Carl Lian
2023-03-06T19:01:02Z
http://arxiv.org/abs/2303.03433v3
# Fixed-domain curve counts for blow-ups of projective space ###### Abstract We study the problem of counting pointed curves of fixed complex structure in blow-ups of projective space at general points. The geometric and virtual (Gromov-Witten) counts are found to agree asymptotically in the Fano (and some \((-K)\)-nef) examples, but not in general. For toric blow-ups, geometric counts are expressed in terms of integrals on products of Jacobians and symmetric products of the domain curves, and evaluated explicitly in genus \(0\) and in the case of \(\mathsf{Bl}_{q}(\mathbb{P}^{r})\). Virtual counts for \(\mathsf{Bl}_{q}(\mathbb{P}^{r})\) are also computed via the quantum cohomology ring. ###### Contents * 1 Introduction * 1.1 Curve-counting with fixed domain * 1.2 Curve counts on \(\mathbb{P}^{r}\) * 1.3 New results * 1.3.1 Enumerativity * 1.3.2 Geometric calculations * 1.3.3 Virtual calculations * 1.4 Acknowledgments * 2 Enumerativity * 2.1 Failure of SAE * 2.2 Generalities for Fano varieties * 2.3 Blow-ups of \(\mathbb{P}^{r}\) * 2.4 SAE for \(\mathsf{Bl}_{q}(\mathbb{P}^{r})\) * 2.5 SAE for del Pezzo surfaces * 2.6 SAE in dimension \(3\) * 2.6.1 Low degree effective curves in \(\mathbb{P}^{3}\) * 2.6.2 Proof of SAE * 3 Geometric counts * 3.1 Integral formula on \(\mathbb{P}\) * 3.1.1 Interlude on the permutohedral variety * 3.1.2 Transversality * 3.2 Integral formula on \(S\) 3.2.1 Cohomology of \(\operatorname{Jac}^{d}(C)\times\operatorname{Sym}^{k_{1}}(C)\times\cdots\times \operatorname{Sym}^{k_{r+1}}(C)\) * 3.2.2 Computation of \(c(\widetilde{\mathcal{E}})\) * 3.3 Specialization to genus \(0\) * 3.4 Specialization to \(X=\operatorname{Bl}_{q}(\mathbb{P}^{r})\) * 4 Virtual counts for \(X=\operatorname{Bl}_{q}(\mathbb{P}^{r})\) * 4.1 Preliminaries on \(QH^{*}(X)\) * 4.2 The quantum Euler class of \(X\) * 4.3 Proof of Theorem 14 ## 1 Introduction ### Curve-counting with fixed domain Let \(X\) be a nonsingular, irreducible, projective variety over \(\mathbb{C}\) of dimension \(r\). Curve-counting problems on \(X\) are traditionally formulated in Gromov-Witten theory as intersection numbers on the space of stable maps \(\overline{\mathcal{M}}_{g,n}(X,\beta)\) against the virtual fundamental class \[[\overline{\mathcal{M}}_{g,n}(X,\beta)]^{\operatorname{vir}}\in A_{ \operatorname{vdim}(\overline{\mathcal{M}}_{g,n}(X,\beta))}(\overline{ \mathcal{M}}_{g,n}(X,\beta)).\] For example, fixing \(n\) subvarieties \(X_{i}\subset X\), pulling back the classes of the \(X_{i}\) under the evaluation maps \(\operatorname{ev}_{i}:\overline{\mathcal{M}}_{g,n}(X,\beta)\to X\) and integrating the product against the virtual class gives a virtual count of the number of genus \(g\) curves of class \(\beta\) passing through the \(X_{i}\). We will be concerned with such problems where, in addition, the complex structure of the domain curve \((C,p_{1},\ldots,p_{n})\) is fixed (and general). Gromov-Witten counts are therefore obtained by additionally restricting to the pullback of a point under the forgetful map \(\overline{\mathcal{M}}_{g,n}(X,\beta)\to\overline{\mathcal{M}}_{g,n}\). There are two salient features of the fixed-domain version of the problem: first, degenerating \((C,p_{1},\ldots,p_{n})\) to a union of rational curves, the answers may be expressed purely in terms of the small quantum cohomology ring of \(X\)[4, Theorem 1.3], and are therefore tend to be simpler than arbitrary Gromov-Witten invariants. 
Second, the restriction to a general curve often avoids the most pathological subvarieties of \(\overline{\mathcal{M}}_{g,n}(X,\beta)\), and therefore, the fixed-domain Gromov-Witten invariants are much more often enumerative [16]. Furthermore, even when enumerativity fails, the corresponding geometric count, which is different, is usually well-defined and may still be accessible via more subtle methods. In this paper, we will restrict to the case in which the subvarieties \(X_{i}\subset X\) are points \(x_{i}\), but many of our methods work more generally. We now set up the problem precisely. Fix integers \(g,n\geq 0\) satisfying the stability condition \(2g-2+n>0\) so that the moduli space \(\overline{\mathcal{M}}_{g,n}\) of \(n\)-pointed, genus \(g\) stable curves is well-defined. Fix \(\beta\in H_{2}(X,\mathbb{Z})\) an effective curve class satisfying the condition \[\beta\cdot K_{X}^{\vee}>0.\] Let \(\overline{\mathcal{M}}_{g,n}(X,\beta)\) be the moduli space of genus \(g\), \(n\)-pointed stable maps to \(X\) in class \(\beta\) and assume that the dimensional constraint \[\operatorname{vdim}(\overline{\mathcal{M}}_{g,n}(X,\beta))=\dim(\overline{\mathcal{M}}_{g,n}\times X^{n})\] holds. This is equivalent to \[\beta\cdot K_{X}^{\vee}=r(n+g-1). \tag{1}\] If the dimension constraint (1) holds, we expect a finite number of maps from a fixed \(n\)-marked curve \((C,p_{1},\ldots,p_{n})\) of genus \(g\) to \(X\) in class \(\beta\) where the \(p_{i}\) are incident to general points of \(X\). Unless otherwise specified, we will always assume that constraint (1) holds throughout. The **virtual Tevelev degree** \(\mathsf{vTev}^{X}_{g,n,\beta}\) of \(X\) is defined in [4, Definition 1.1] to be the corresponding virtual count. Formally, let \[\tau:\overline{\mathcal{M}}_{g,n}(X,\beta)\to\overline{\mathcal{M}}_{g,n}\times X^{n} \tag{2}\] be the canonical morphism obtained from the domain curve and the evaluation maps. Then, \(\mathsf{vTev}^{X}_{g,n,\beta}\in\mathbb{Q}\) is defined by the equality \[\tau_{*}[\overline{\mathcal{M}}_{g,n}(X,\beta)]^{\mathrm{vir}}=\mathsf{vTev}^{X}_{g,n,\beta}\cdot[\overline{\mathcal{M}}_{g,n}\times X^{n}]\in A^{0}(\overline{\mathcal{M}}_{g,n}\times X^{n})_{\mathbb{Q}}.\] The **geometric Tevelev degree** \(\mathsf{Tev}^{X}_{g,n,\beta}\in\mathbb{Z}\) of \(X\) is defined in [16, Definition 2] under the further assumption that the restriction of \(\tau\) to maps with smooth domain \[\tau:\mathcal{M}_{g,n}(X,\beta)\to\mathcal{M}_{g,n}\times X^{n}\] has reduced and \(0\)-dimensional general fiber, in which case its cardinality is by definition \(\mathsf{Tev}^{X}_{g,n,\beta}\). We record here the following crucial lemma, see [16, Proposition 14, proof of Proposition 22]. It does not require the assumption (1). **Lemma 1**.: _Let \(Z\subset\mathcal{M}_{g,n}(X,\beta)\) be an irreducible component. Suppose that \(Z\) dominates \(X^{n}\) and that \(n\geq g+1\). Then, \(Z\) is generically smooth of the expected dimension._ _More generally, let \(Z\subset\overline{\mathcal{M}}_{g,n}(X,\beta)\) be an irreducible component whose general point \(f:C\to X\) has the property that \(C\) is a union of a smooth genus \(g\) curve \(C_{sp}\) containing \(m\) of the marked points, and \(n-m\) rational tails attached to \(C_{sp}\), each containing a marked point. Suppose that \(Z\) dominates \(X^{n}\) and that \(m\geq g+1\). Then, \(Z\) is generically smooth of the expected dimension._ In particular, \(\mathsf{Tev}^{X}_{g,n,\beta}\) is well-defined whenever \(n\geq g+1\).
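For orientation, it is worth unpacking the constraint (1) in the basic example \(X=\mathbb{P}^{r}\): writing \(\beta\) for \(d\) times the class of a line, one has \(\beta\cdot K_{X}^{\vee}=(r+1)d\), so that (1) reads \[(r+1)d=r(n+g-1),\] and for \(r=1\) this becomes \(2d=n+g-1\), i.e. \(n=2d+1-g\).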
This bound is not sharp: for example, when \(X=\mathbb{P}^{r}\), the geometric Tevelev degrees are also well-defined whenever \(n\geq r+1\) (in which case \(f:C\to\mathbb{P}^{r}\) is necessarily non-degenerate) by Brill-Noether theory. It is an interesting question to determine for other \(X\) whether the bound \(n\geq g+1\) can be improved. Three questions emerge: 1. What is \(\mathsf{vTev}^{X}_{g,n,\beta}\)? 2. What is \(\mathsf{Tev}^{X}_{g,n,\beta}\)? 3. Is \(\mathsf{vTev}^{X}_{g,n,\beta}=\mathsf{Tev}^{X}_{g,n,\beta}\)? (That is, is \(\mathsf{vTev}^{X}_{g,n,\beta}\) enumerative?) The three questions are clearly not independent, but in practice involve somewhat different ideas interacting in subtle ways. This paper is concerned with aspects of all three questions when \(X\) is a blow-up of \(\mathbb{P}^{r}\) at a collection of points. Other examples of interest include complete intersections and homogeneous spaces [16, 4, 5, 14]. We next review the preliminary case \(X=\mathbb{P}^{r}\). ### Curve counts on \(\mathbb{P}^{r}\) The answers to all three questions above on fixed-domain curve-counts are fully understood for \(X=\mathbb{P}^{r}\). Virtual counts for \(\mathbb{P}^{r}\) are easy to obtain. We write \(\beta=d\) for the class of \(d\) times a line throughout this section. **Theorem 2**.: _[_4_, (3)]_ _Assume (1). Independently of \(d\), we have_ \[\mathsf{vTev}_{g,n,d}^{\mathbb{P}^{r}}=(r+1)^{g}.\] In fact, this formula had been obtained much earlier [3] before the modern theory of the Gromov-Witten virtual fundamental class was available. The virtual and geometric counts agree when \(d\) is large compared to \(r\) and \(g\): **Theorem 3**.: _[_9_]_ _Assume (1) and that \(d\geq rg+r\) (equivalently, \(n\geq d+2\)). Then,_ \[\mathsf{Tev}_{g,n,d}^{\mathbb{P}^{r}}=(r+1)^{g}.\] Obtaining geometric counts in _low_ degree for \(\mathbb{P}^{r}\) is much more difficult; the virtual and geometric counts no longer agree. For \(\mathbb{P}^{1}\), we have: **Theorem 4**.: _[_7, 9, 6_]_ _Assume (1) for \(r=1\). Then,_ \[\mathsf{Tev}_{g,n,d}^{\mathbb{P}^{1}} =2^{g}-\sum_{j=0}^{g-d-1}\binom{g}{j}+(g-d-1)\binom{g}{g-d}+(d-g-1)\binom{g}{g-d+1}\] \[=\int_{\mathrm{Gr}(2,d+1)}\sigma_{1}^{g}\cdot\sum_{a_{0}+a_{1}=n-3}\sigma_{a_{0}}\sigma_{a_{1}}.\] Binomial coefficients \(\binom{g}{j}\) with \(j<0\) are interpreted to vanish, recovering Theorem 3 when \(d\geq g+1\). Finally, the computation of geometric counts \(\mathsf{Tev}_{g,n,d}^{\mathbb{P}^{r}}\) is completed in [15], given in terms of Schubert calculus and a formula of Klyachko [12]. We do not restate the formula here. ### New results In this paper, we study Tevelev degrees of blow-ups of projective spaces at (general) points. We obtain new results on all three aspects of the question: enumerativity, geometric calculations, and virtual calculations. #### 1.3.1 Enumerativity Fixed-domain virtual curve counts are much more likely to be enumerative than arbitrary Gromov-Witten invariants. For example, we have already seen that virtual and geometric counts for \(\mathbb{P}^{r}\) agree in large degree, whereas higher-genus Gromov-Witten invariants of \(\mathbb{P}^{r}\) typically fail to be enumerative. This phenomenon was first studied systematically in [16]. We assume throughout that the dimensional constraint (1) holds. **Definition 5**.: _We say that \(X\) satisfies **strong asymptotic enumerativity (SAE)** if:_ 1. 
_For all_ \(g\geq 0\)_, there exists a constant_ \(C(X,g)\) _(depending only on_ \(X\) _and_ \(g\)_) for which, if_ \(n\geq C(X,g)\)_, then the general fiber of the map_ (2) \[\tau:\overline{\mathcal{M}}_{g,n}(X,\beta)\to\overline{\mathcal{M}}_{g,n}\times X^{n}\] _is contained in_ \(\mathcal{M}_{g,n}(X,\beta)\)_, and_ 2. _one can take_ \(C(X,0)=3\)_._ If \(\tau\) has general fiber contained in \(\mathcal{M}_{g,n}(X,\beta)\) and furthermore \(n\geq g+1\), then by Lemma 1, the general fiber of \(\tau\) is reduced and \(0\)-dimensional, so \(\mathsf{Tev}^{X}_{g,n,\beta}=\mathsf{vTev}^{X}_{g,n,\beta}\), i.e. \(\mathsf{vTev}^{X}_{g,n,\beta}\) is enumerative. Thus, we have: **Proposition 6**.: _Suppose that \(X\) satisfies SAE. Then, virtual Tevelev degrees are enumerative:_ * _in arbitrary genus whenever_ \(n\) _(equivalently,_ \(\beta\cdot K^{\vee}_{X}\)_) is sufficiently large (depending on_ \(g\)_), and_ * _always when_ \(g=0\)_._ In [16], SAE is proven for homogeneous spaces for linear algebraic groups (see [16, Theorem 10]) and non-singular hypersurfaces of very low degree (see [16, Theorem 11, Corollary 34]), implying the conclusion of Proposition 6 in these examples. As it is difficult to imagine how one could obtain this conclusion for \(X\) without SAE, the following question is natural. **Question 7**.: _What geometric conditions on (non-singular, projective) \(X\) guarantee SAE?_ It was originally expected that all _Fano_ \(X\) satisfy SAE [16, Speculation 12 and SS4]. However, we have been informed that examples of special Fano hypersurfaces of large degree failing to satisfy SAE have been constructed in forthcoming work of Beheshti-Lehmann-Riedl-Starr-Tanimoto. The SAE property therefore seems somewhat more subtle. Our first main set of results concerns SAE for \(X\) given by the blow-up of \(\mathbb{P}^{r}\) at distinct points \(q_{1},\ldots,q_{\ell}\). We have the following negative result: **Theorem 8**.: _Suppose that \(r\geq 4\) and \(\ell\geq 2\). Then, \(X\) fails to satisfy SAE._ Such \(X\) are not Fano, and indeed, the existence of negative rational curves is used in an essential way. In fact, we prove a more general criterion for failure of SAE which also applies to various special configurations of points \(q_{j}\) (e.g., where some three of the \(q_{j}\) are collinear), see Proposition 15. On the other hand, we have the following positive result. **Theorem 9**.: \(X=\mathsf{Bl}_{q_{1},\ldots,q_{\ell}}(\mathbb{P}^{r})\) _satisfies SAE in the following cases:_ 1. _(Theorem_ 24_)_ \(X\) _is del Pezzo, i.e._ \(r=2\)_, and the_ \(\ell\leq 8\) _points satisfy the property that no three lie on a line, no six lie on a conic, and, if_ \(\ell=8\)_, the points do not all lie on a cubic singular at one of the_ \(q_{j}\)_._ 2. _(Theorem_ 32_)_ \(r=3,\ell\leq 4\)_, and the_ \(q_{j}\) _are (linearly) general._ 3. _(Theorem_ 23_)_ \(r\) _is arbitrary and_ \(\ell=1\)_._ In Theorem 9, examples 1 and 3 are Fano; in fact, these are all examples of Fano varieties obtained by blowing up \(\mathbb{P}^{r}\) at general points. In example 2, \(X\) is merely \((-K_{X})\)-nef when \(\ell\geq 2\); these are the first examples of non-Fano varieties proven to satisfy SAE. When the points \(q_{j}\) are general, the only cases left open are when \(r=2,\ell\geq 9\) and \(r=3,\ell\geq 5\). We believe SAE should hold in these examples, but we do not see a way to obtain unconditional proofs. 
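For orientation, note that \(K_{X}^{\vee}\) is the pullback of \(\mathcal{O}_{\mathbb{P}^{r}}(r+1)\) twisted down by \((r-1)\) times each exceptional divisor, so the strict transform \(D\cong\mathbb{P}^{1}\) of the line through two of the blown-up points satisfies \(D\cdot K_{X}^{\vee}=(r+1)-2(r-1)=3-r\). This is negative exactly when \(r\geq 4\) (the curve used to prove Theorem 8 via Proposition 15 below), zero when \(r=3\), and positive when \(r=2\), which indicates why dimensions \(2\) and \(3\) behave differently in Theorem 9.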
In dimension 2, one would at least need a bounded negativity statement for general blowups of \(\mathbb{P}^{2}\), if not the full power of the SHGH conjecture (see [10] for a survey), and the situation in dimension 3 seems even more hopeless. #### 1.3.2 Geometric calculations In SS3.1, we prove an integral formula for the geometric Tevelev degrees of blow-ups \(\pi:X\to\mathbb{P}^{r}\) at \(\ell\leq r+1\) general points \(q_{1},\ldots,q_{\ell}\). Without loss of generality, we will take the \(q_{j}\) to be fixed points under the standard action of a \((r+1)\)-dimensional torus on \(\mathbb{P}^{r}\). Let \(\mathsf{H}\in H^{2}(X)\) denote the pullback of the hyperplane class from \(\mathbb{P}^{r}\), and let \(\mathsf{E}_{1},\ldots,\mathsf{E}_{\ell}\in H^{2}(X)\) be the classes of the exceptional divisors. Let \(\mathsf{H}^{\vee},\mathsf{E}_{1}^{\vee},\ldots,E_{\ell}^{\vee}\in H_{2}(X)\) be the corresponding dual basis, and let \[\beta=d\mathsf{H}^{\vee}+\sum_{i=1}^{\ell}k_{i}\mathsf{E}_{i}^{\vee}\in H_{2} (X,\mathbb{Z})\] be an effective curve class. We are interested in counting maps from a fixed general curve \((C,p_{1},\ldots,p_{n})\) to \(X\) in class \(\beta\). Equivalently, we count maps to \(\mathbb{P}^{r}\) of degree \(d\) and with multiplicities \(k_{1},\ldots,k_{\ell}\) at the points \(q_{1},\ldots,q_{\ell}\), respectively. Such a map is defined by a line bundle \(\mathcal{L}\) of degree \(d\) and \(\ell\) divisors \(D_{1},\ldots,D_{\ell}\) mapping to \(q_{\ell}\). When \(\ell\leq r+1\), this is equivalent to the data of \(r+1\) sections \[f_{i}\in H^{0}\left(C,\mathcal{L}\left(-\sum_{j\neq i}D_{j}\right)\right)\] which have no common vanishing point _viewed as sections of \(H^{0}(C,\mathcal{L})\)_. We will parametrize such data in what follows. **Setup** Write \(S:=\mathsf{Jac}^{\mathsf{d}}(C)\times\mathsf{Sym}^{k_{1}}(C)\times\ldots \times\mathsf{Sym}^{k_{\ell}}(C)\). Over \(S\times C\), we have the following line bundles: * the pullback \(\mathcal{P}\) from \(\mathsf{Jac}^{\mathsf{d}}(C)\times C\) of the Poincare line bundle; * for all \(i=1,\ldots,\ell\), the pullback \(\mathcal{D}_{i}\) from \(\mathsf{Sym}^{k_{i}}(C)\times C\) of the universal divisor. Let \(\nu:S\times C\to S\) be the projection and define \[\mathcal{E}:=\bigoplus_{i=1}^{r+1}\nu_{*}\left(\mathcal{P}\left(-\sum_{j\neq i }\mathcal{D}_{j}\right)\right).\] We will assume that \[d-\sum_{i\in I}k_{i}>2g-1\text{ for all }I\subseteq\{1,\ldots,\ell\}\text{ such that }|I|\leq r, \tag{3}\] in order to ensure that \(\mathcal{E}\) is a vector bundle. We will see later in SS3.1 that we will also want this to remain true upon replacing \(\mathcal{L}\) with \(\mathcal{L}(-p_{i})\), hence the appearance of \(2g-1\) rather than \(2g-2\) on the right hand side. Let \(\eta:\mathbb{P}:=\mathbb{P}(\mathcal{E})\to S\) be the corresponding projective bundle corresponding to _lines_ (not \(1\)-dimensional quotients) in the fibers of \(\mathcal{E}\). Therefore, a point of \(\mathbb{P}\) consists of the data of a point of \(S\) and sections \(f_{j}\) as above, taken up to simultaneous scaling. Our first result expresses Tevelev degrees of \(X\) as integrals on \(\mathbb{P}\). Before stating the result, we require some additional notation. We set \[\widetilde{\mathsf{H}}=c_{1}(\mathcal{O}_{\mathbb{P}}(1))-\eta_{1}-\cdots- \eta_{\ell}\in H^{*}(\mathbb{P},\mathbb{Z}) \tag{4}\] where \(\eta_{i}\) is the pullback from \(\mathsf{Sym}^{k_{i}}(C)\) of the class of the divisor \(N_{i}=\{D:D-p\geq 0\}\) (here \(p\) is any fixed point in \(C\)). 
Then, we have: **Proposition 10** (Integral formula).: _Assume conditions (1), (3) hold, and in addition that_ \[n-d\geq g+1. \tag{5}\] _Then, \(\mathsf{Tev}_{g,n,\beta}\) is well-defined and the following integral formula holds:_ \[\mathsf{Tev}^{X}_{g,n,\beta}=\int_{\mathbb{P}}(\widetilde{\mathsf{H}}^{r}+ \sigma_{1}\widetilde{\mathsf{H}}^{r-1}+\cdots+\sigma_{r})^{n}, \tag{6}\] _where \(\sigma_{i}\) is the \(i\)-th elementary symmetric function in \(\eta_{1},\ldots,\eta_{\ell}\) (so \(\sigma_{i}=0\) for \(i>\ell\))._ The condition \(n-d\geq g+1\) is needed to rule out extraneous contributions when the sections \(f_{j}\) are identically zero (see Lemma 40 and Remark 41), and also ensure dimensional transversality later (Lemma 50). The factor \(\widetilde{\mathsf{H}}^{r}+\sigma_{1}\widetilde{\mathsf{H}}^{r-1}+\cdots+ \sigma_{r}\) is the class of a subscheme \(V(x_{i})\) of \(\mathbb{P}\) corresponding to the condition that \(f(p_{i})=x_{i}\). However, care is needed to deal with the possibility that the sections \(f_{j}\) vanish simultaneously at \(p_{i}\); a precise definition of \(V(x_{i})\) is given in SS3.1. The computation of the class \([V(x_{i})]\) and the fact that the \(V(x_{i})\) intersect transversely at points enumerated by \(\mathsf{Tev}^{X}_{g,n,\beta}\) are both subtle. Pushing forward to \(S=\mathsf{Jac}^{d}(C)\times\mathsf{Sym}^{k_{1}}(C)\times\cdots\times\mathsf{ Sym}^{k_{\ell}}(C)\) using Grothendieck-Riemann-Roch, we obtain (see SS3.2.1 for notation): **Theorem 11**.: _Assume conditions (1), 3 and 5 hold. Then,_ \[\mathsf{Tev}^{X}_{g,n,\beta}=\sum_{m=0}^{\min(n,k_{1},\ldots,k_{r+1})}{n\choose m }(-1)^{m}\int_{S}\prod_{i=1}^{r+1}(1+\eta_{i})^{n-m-1+g-d+\overline{k}_{i}} \eta_{i}^{m}\cdot\exp\left(\frac{\overline{\tau}_{i}+\Theta-\overline{x}_{i}} {1+\eta_{i}}\right).\] We extract from Theorem 11 relatively simple formulas in two special cases. **Theorem 12** (\(g=0\) specialization).: _Let \(X\) be the blow-up of \(\mathbb{P}^{r}\) at \(\ell\leq r+1\) points and let_ \[\beta=d\mathsf{H}^{\vee}+\sum_{i=1}^{\ell}k_{i}\mathsf{E}_{i}^{\vee}\in H_{2 }(X,\mathbb{Z})\] _be an effective curve class. Assume conditions (1), (3), and (5). Then,_ \[\mathsf{Tev}^{X}_{0,n,\beta}=\sum_{m=0}^{\min(k_{1},\ldots,k_{r+1},n)}(-1)^{m }{n\choose m}\prod_{i=1}^{r+1}{n-d+\sum_{j\neq i}k_{j}-1-m\choose k_{i}-m}\] _where we set \(k_{\ell+1}=\cdots=k_{r+1}=0\) when \(\ell<r+1\)._ In particular, when \(\ell<r+1\), the summation goes away, and \(\mathsf{Tev}^{X}_{0,n,\beta}\) is a product of binomial coefficients. It is often the case that the formula above gives zero. For example, take \(\ell=1\) and \(\beta=d\mathsf{H}^{\vee}+k\mathsf{E}^{\vee}\) (where we write simply \(\mathsf{E}^{\vee}_{1}=\mathsf{E}^{\vee}\)), where \((r-1)k\leq d\). Assuming (1), the inequalities (3), and (5) are satisfied. Then, Theorem 12 reads simply \[\mathsf{Tev}^{X}_{0,n,\beta}=\binom{n-d-1}{k}.\] If we assume further that \((r-1)k\leq d<(2r-1)k\), then we have \(0\leq n-d-1<k\), so we conclude that \(\mathsf{Tev}^{X}_{0,n,\beta}=0\). In other words, the map \[\tau:\mathcal{M}_{g,n}(X,\beta)\to\mathcal{M}_{g,n}\times X^{n}\] fails to be dominant. We do not have a geometric explanation for this phenomenon. Specializing Theorem 11 to \(\ell=1\) in arbitrary genus yields: **Theorem 13** (\(\ell=1\) specialization).: _Let \(X=\mathsf{Bl}_{q}(\mathbb{P}^{r})\) and let_ \[\beta=d\mathsf{H}^{\vee}+k\mathsf{E}^{\vee}\in H_{2}(X,\mathbb{Z})\] _be an effective curve class. Assume that conditions (1), (3), and (5) hold. 
Then,_ \[\mathsf{Tev}^{X}_{g,n,\beta}=\sum_{m=0}^{g}(2r)^{g-m}(1-r)^{m}\binom{g}{m}\binom{n-d+g-m-1}{k}.\] Obtaining explicit formulas beyond these two specializations is cumbersome in general. For example, we have checked that, if \(X\) is the blow-up of \(\mathbb{P}^{2}\) at two points and \(\beta\cdot K^{\vee}_{X}\) is sufficiently large, then \(\mathsf{Tev}^{X}_{g,n,\beta}\) is equal to \[\sum_{\begin{subarray}{c}a_{1}+b_{1}+b_{2}+b_{3}+a_{3}+a_{4}=g\\ b_{1}+b_{2}+b_{3}=a_{2}\end{subarray}}\binom{g}{a_{1}\ a_{2}\ a_{3}\ a_{4}}\binom{a_{2}}{b_{1}\ b_{2}\ b_{3}}5^{a_{1}}(-1)^{b_{1}+a_{3}+a_{4}}\sum_{\ell=0}^{\min(k_{1}+a_{3}-a_{2},k_{2}+a_{4}-a_{2},b_{1})}(-1)^{\ell}\binom{b_{1}}{\ell}.\] Note that all of our geometric results require the inequalities (3) and (5). If \(g\) and \(k_{j}\) are viewed as fixed, then both inequalities hold for sufficiently large \(d\), so our results may be viewed as "large degree" counts in analogy with Theorem 3. The low degree counts seem, unsurprisingly, more difficult, and are left open. We remark that when \(d-(k_{1}+\cdots+k_{r})<0\) and \(n\geq 1\), a map \(f:C\to X\) as above would need to have image contained in the strict transform of the torus-invariant hyperplane where the last coordinate is zero. On the other hand, as the \(x_{i}\) are general, they may be chosen not to lie on this hyperplane, so it follows immediately that \(\mathsf{Tev}^{X}_{g,n,\beta}=0\) in this case. #### 1.3.3 Virtual calculations Via a computation in the quantum cohomology ring \(QH^{*}(X)\), we also obtain: **Theorem 14**.: _Let \(X=\mathsf{Bl}_{q}(\mathbb{P}^{r})\) and let_ \[\beta=d\mathsf{H}^{\vee}+k\mathsf{E}^{\vee}\in H_{2}(X,\mathbb{Z})\] _be an effective curve class. Assume that (1) holds and that \(n-d\geq 1\). Then,_ \[\mathsf{vTev}^{X}_{g,n,\beta}=\sum_{m=0}^{g}(2r)^{g-m}(1-r)^{m}\binom{g}{m}\binom{n-d+g-m-1}{k}.\] Thus, the formula matches that of Theorem 13, but holds in a wider range. ### Acknowledgments Portions of this work were completed during the first author's visits to HU Berlin in November 2022 (during the workshop "Resonance varieties, topological invariants of groups, moduli") and February 2023 (during the Northern German Algebraic Geometry Seminar) and the second author's visits to ETH Zurich in July 2022 (during the ICM satellite meeting, Algebraic Geometry and Number Theory) and November 2022. We thank both institutions for their continued support. We also thank Gavril Farkas, Zhuang He, Woonam Lim, Rahul Pandharipande, Eric Riedl, and Johannes Schmitt for helpful discussions. A.C. was supported by SNF-200020-182181. C.L. was supported by an NSF postdoctoral fellowship, grant DMS-2001976, and the MATH+ incubator grant "Tevelev degrees." ## 2 Enumerativity ### Failure of SAE We give a general construction of \(X\) failing SAE. **Proposition 15**.: _Let \(\pi:X\to\mathbb{P}^{r}\) be the blow-up of \(\mathbb{P}^{r}\) at any set of points \(q_{1},\ldots,q_{\ell}\in\mathbb{P}^{r}\). Suppose that there exists a curve \(\mathbb{P}^{1}\cong D\subset X\), such that either:_ 1. \(D\cdot K_{X}^{\vee}<0\)_, or_ 2. \(D\cdot K_{X}^{\vee}=0\)_,_ \(r=2\)_, and_ \(\pi(D)\) _is not a point._ _Then, \(X\) fails to satisfy SAE._ Proof.: We will prove that for every \(g\), there is a divergent sequence \(\{n_{k}=n_{k}[g]\}_{k}\) such that the general fiber of the map \(\tau\) contains maps with singular domain for all \(n_{k}\). We start by assuming that \(D\) has negative degree against \(K_{X}^{\vee}\). 
Fix \(g\geq 0\), and for \(d\geq 0\), set \[\beta[d]=d\mathsf{H}^{\vee}+rD\in H_{2}(X,\mathbb{Z}),\] where \(\mathsf{H}^{\vee}\) is the pullback of the class of a line on \(\mathbb{P}^{r}\). Notice that \(\beta[d]\cdot K_{X}^{\vee}\) diverges for large \(d\). Let \(n\geq 0\) be such that equation (1) is satisfied with \(\beta=\beta[d]\), i.e. \[d(r+1)+r(D\cdot K_{X}^{\vee})=r(n+g-1);\] in order for this to be possible, we assume that \(d\) is divisible by \(r\). Let \[\mathcal{M}_{\Gamma}\subseteq\partial\overline{\mathcal{M}}_{g,n}(X,\beta)\] be the locally closed locus consisting of stable maps where the domain curve consists of a genus \(g\) smooth curve mapping to \(X\) with class \(d\mathsf{H}^{\vee}\) and containing all \(n\) of the markings, attached to a smooth rational tail mapping to \(X\) with class \(rD\). We claim that \(\mathcal{M}_{\Gamma}\) dominates \(\overline{\mathcal{M}}_{g,n}\times X^{n}\), so that SAE fails. To see this, notice that \(m=n-D\cdot K_{X}^{\vee}>n\) satisfies the dimensional constraint (1) for \(\mathbb{P}^{r}\) with curve class \(d\mathsf{H}^{\vee}\). Therefore, since \(\mathsf{Tev}_{g,m,d\mathsf{H}^{\vee}}^{\mathbb{P}^{r}}>0\) (for example, we have \(m\geq d+2\) for large \(d\), so we can apply [9, Theorem 1.1]), for general \((C,p_{1},\ldots,p_{m})\in\mathcal{M}_{g,m}\) and \(x_{1},\ldots,x_{m}\in\mathbb{P}^{r}\) there exists \[f:(C,p_{1},\ldots,p_{m})\to\mathbb{P}^{r}\] in class \(d\mathsf{H}^{\vee}\) mapping \(p_{i}\) to \(x_{i}\) for all \(i=1,\ldots,m\). We are also free to assume that \(x_{n+1}\in\pi(D)\). From \(f\), we construct a new map \[\tilde{f}:(\widetilde{C},p_{1},\ldots,p_{n})\to X\] where \(\widetilde{C}\) is obtained from \(C\) by attaching a \(\mathbb{P}^{1}\) at \(p_{n+1}\) and forgetting \(p_{i}\) for \(i>n+1\). The restriction of \(\tilde{f}\) to \(C\) is the unique morphism such that \(\pi\circ\tilde{f}|_{C}=f\), and the restriction to \(\mathbb{P}^{1}\) is a fixed degree \(r\) cover of \(D\). The map \([\tilde{f}]\) lies in \(\mathcal{M}_{\Gamma}\), and maps to a general point of \(\overline{\mathcal{M}}_{g,n}\times X^{n}\), as needed. The proof in the second case is analogous: we have \(m=n\) above, so we cannot add an \((n+1)\)-st point on \(C\) constrained to map to \(D\), but note that \(f(C)\cap\pi(D)\neq\emptyset\) automatically, so we can construct a map \(\tilde{f}\) as before. Proof of Theorem 8.: Apply Proposition 15 with \(D\) equal to the strict transform of the line between any two of the blown-up points. Similarly, any blow-up of \(\mathbb{P}^{r}\) at a set containing three collinear points fails to satisfy SAE. ### Generalities for Fano varieties We now collect general statements used to prove SAE for Fano varieties; in this subsection, we will always assume that \(X\) is Fano. **Definition 16**.: _Let \(\beta\in H_{2}(X,\mathbb{Z})\) be an effective curve class (possibly 0). We say that \(\beta\) is **ordinary** if the evaluation map \(\mathcal{M}_{0,1}(X,\beta)\to X\) is dominant._ Fix a genus \(g\geq 0\) and a non-zero effective curve class \(\beta\) on \(X\). Assume that \[\operatorname{vdim}(\overline{\mathcal{M}}_{g,n}(X,\beta))\leq\dim(\overline{\mathcal{M}}_{g,n}\times X^{n}). \tag{7}\] (Note: this is weaker than (1).) For integers \(a\geq 0\) and \(m\in[0,n]\), define \[\mathcal{M}_{\Gamma}^{(m,a)}\subseteq\overline{\mathcal{M}}_{g,n}(X,\beta)\] to be the locally closed locus parametrizing maps whose domain \(C\) has the following topological type. 
\(C\) contains a smooth _spine_ component \(C_{\mathrm{sp}}\) of genus \(g\), containing the marked points \(p_{n-m+1},\ldots,p_{n}\). Attached to \(C_{\mathrm{sp}}\), we have trees of rational curves \(T_{1}^{\prime},\ldots,T_{a}^{\prime}\), as well as trees of rational curves \(T_{1},\ldots,T_{n-m}\), such that the tree \(T_{i}\) contains the marked point \(p_{i}\) for \(i=1,\ldots,n-m\). See also Figure 1. Here \((n-m)+a\geq 0\); we assume further that if \((n-m)+a=0\), that is, if \(C\) is irreducible, then (7) is a _strict_ inequality. Up to permuting indices of the marked points, the \(\mathcal{M}_{\Gamma}^{(m,a)}\) contain all of the boundary strata that dominate \(\overline{\mathcal{M}}_{g,n}\). Therefore, \(X\) satisfies SAE if and only if, given \(g,n\), where either \(g=0\) and \(n\geq 1\), or \(n\) is large (depending on \(X\) and \(g\)), the space \(\mathcal{M}_{\Gamma}^{(m,a)}\) fails to dominate \(\overline{\mathcal{M}}_{g,n}\times X^{n}\) for all \(m,a\). Let \(Z\) be an irreducible component of \(\mathcal{M}_{\Gamma}^{(m,a)}\) and \([f]\in Z\) be a general point. For \(n\) large (depending on \(X\) and \(g\)) and for \(g=0\) and \(n\geq 1\), consider the following two conditions: * (*) if \(a=0\) and the pushforward under \(f\) of each component of \(T_{i}\) is an ordinary class for \(i=1,\ldots,n-m\), then we have the inequality \[\dim_{[f|_{C_{\mathrm{sp}}}]}(\overline{\mathcal{M}}_{g,n}(X,f_{*}[C_{\mathrm{sp}}]))-\operatorname{vdim}(\overline{\mathcal{M}}_{g,n}(X,f_{*}[C_{\mathrm{sp}}]))-n\leq-(g+1);\] * (**) for all \(i=1,\ldots,n-m\), either \(f_{*}[T_{i}]\cdot K_{X}^{\vee}\geq r+1\) or the pushforward under \(f\) of each component of \(T_{i}\) is an ordinary class. We will prove that if conditions (*) and (**) hold for all \(Z\), then \(X\) satisfies SAE. This will follow from the next two propositions. **Proposition 17**.: _Suppose that \(a=0\), that, for \(i=1,\ldots,n-m\), the pushforward under \(f\) of each component of \(T_{i}\) is an ordinary class, and that condition (*) holds. Then, \(Z\) fails to dominate \(\overline{\mathcal{M}}_{g,n}\times X^{n}\)._ Proof.: Arguing as in [16, Proof of Proposition 22], we have \[\dim_{[f]}(Z)\leq\dim_{[f|_{C_{\mathrm{sp}}}]}(\overline{\mathcal{M}}_{g,n}(X,f_{*}[C_{\mathrm{sp}}]))-\operatorname{vdim}(\overline{\mathcal{M}}_{g,n}(X,f_{*}[C_{\mathrm{sp}}]))-n+\dim(\overline{\mathcal{M}}_{g,n}\times X^{n})+m\] where the inequality is strict when \(n=m\). If \(m\geq g+1\) and \(Z\) dominates \(\overline{\mathcal{M}}_{g,n}\times X^{n}\) then, by [16, Proposition 13], we have \[\dim_{[f|_{C_{\mathrm{sp}}}]}(\overline{\mathcal{M}}_{g,n}(X,f_{*}[C_{\mathrm{sp}}]))=\operatorname{vdim}(\overline{\mathcal{M}}_{g,n}(X,f_{*}[C_{\mathrm{sp}}]))\] and this is a contradiction (\(Z\) cannot dominate \(\overline{\mathcal{M}}_{g,n}\times X^{n}\) for dimensional reasons). When instead \(m<g+1\), then we conclude using Condition (*). **Proposition 18**.: _Suppose that both conditions (*) and (**) hold. Then, \(Z\) fails to dominate \(\overline{\mathcal{M}}_{g,n}\times X^{n}\)._ Proof.: Because \(X\) is Fano, we have \[f_{*}([T_{i}^{\prime}])\cdot K_{X}^{\vee}>0\] for all \(i=1,\ldots,a\). Therefore, arguing as in [16, proof of Proposition 23], we may assume without loss of generality that \(a=0\). If the pushforward under \(f\) of each irreducible component of each \(T_{i}\) for \(i=1,\ldots,n-m\) is ordinary, then we are done by Proposition 17. 
Suppose now without loss of generality that \(T_{1},\ldots,T_{s}\) with \(0<s\leq n-m\) are trees each containing some component whose pushforward via \(f\) is not ordinary. Then, by condition (**), we have \[\deg(f_{*}[T_{i}])\geq r+1\] for all \(i=1,\ldots,s\). Let \(\hat{f}:\hat{C}\to X\) be the stable map obtained from \(f\) by deleting \(T_{1},\ldots,T_{s}\) and \(x_{1},\ldots,x_{s}\) and let \(\widehat{\beta}=\widehat{f}_{*}[\widehat{C}]\). Notice that \([\widehat{f}]\) belongs to an irreducible component \(\widehat{Z}\) of \[\mathcal{M}_{\Gamma}^{(m,0)}\subset\ \overline{\mathcal{M}}_{g,n-s}(X,\widehat{ \beta})\] It is enough to show that \(\widehat{Z}\) does not dominate \(\overline{\mathcal{M}}_{g,n-s}\times X^{n-s}\). This will follow from Proposition 17, once we notice the following two facts. First, from (7) and condition (**) we have \[\widehat{\beta}\cdot K_{X}^{\vee}=\beta\cdot K_{X}^{\vee}-\sum_{i=1}^{s}f_{* }[T_{i}]\cdot K_{X}^{\vee}\leq r(n-s+g-1)-s\] and so \[\operatorname{vdim}(\overline{\mathcal{M}}_{g,n-s}(X,\widehat{\beta}))< \dim(\overline{\mathcal{M}}_{g,n-s}\times X^{n-s}).\] Second, by Equation 7 and Condition (**), we have \[s\leq\frac{\beta\cdot K_{X}^{\vee}}{r+1}=\frac{r}{r+1}\cdot n+\frac{r}{r+1} \cdot(g-1)\] and so for large \(n\) also \(n-s\) is large and for \(g=0\) and \(n\geq 1\), also \(n-s\geq 1\). To summarize, we have shown that: **Proposition 19**.: _Suppose that every irreducible component \(Z\subset\mathcal{M}_{\Gamma}^{(m,a)}\) as above satisfies Conditions (*) and (**) for large \(n\) (depending on \(X\) and \(g\)), and for all \(n\geq 1\) when \(g=0\). Then, \(X\) satisfies SAE._ ### Blow-ups of \(\mathbb{P}^{r}\) Let \(X\) be the blow-up of \(\mathbb{P}^{r}\) at \(\ell\) general points. The next results will later be useful to deal with conditions (*) and (**). **Lemma 20**.: _Let_ \[\beta=d\mathsf{H}^{\vee}+\sum_{i=1}^{\ell}k_{i}\mathsf{E}_{i}^{\vee}\] _be a non-zero effective curve in \(H_{2}(X,\mathbb{Z})\). If \(\beta\) is ordinary, then_ \[d\geq\sum_{i\in I}k_{i}\text{ for all }I\subseteq\{1,\ldots,\ell\}\text{ with }|I|\leq r.\] Proof.: Fix \(I\subseteq\{1,\ldots,r+1\}\) such that \(|I|=r\). Let \(f:\mathbb{P}^{1}\to X\) be a curve in class \(\beta\); because \(\beta\) is ordinary, we may assume that the image of \(f\) is not contained the strict transform \(\Lambda_{I}\) of the hyperplane in \(\mathbb{P}^{r}\) generated by the points \(\pi(\mathsf{E}_{i})\) for \(i\in I\). In particular, we have \[0\leq\beta\cdot\Lambda_{I}=d-\sum_{i\in I}k_{i}.\] as desired. **Lemma 21**.: _Let \(0\neq\beta\in H_{2}(X,\mathbb{Z})\) be an effective ordinary curve class. Then, \(\beta\cdot K_{X}^{\vee}\geq 2\)._ Proof.: By [16, Proposition 13], every irreducible component of \(\overline{\mathcal{M}}_{0,1}(X,\beta)\) dominating \(X\) has dimension equal to \(\beta\cdot K_{X}^{\vee}+r-2\), which must be greater than or equal to the dimension of \(X\). **Lemma 22**.: _Assume that \(\ell\leq r+1\). Let \(f:C\to X\) be a map from a smooth genus \(g\) curve in class \(d\mathsf{H}^{\vee}+\sum_{i=1}^{\ell}k_{i}\mathsf{E}_{i}^{\vee}\) with either \(d>0\) or \(d=k_{1}=\ldots=k_{\ell}=0\). Then, \(h^{1}(C,f^{*}T_{X})\leq rg\)._ Proof.: When \(d=k_{1}=\ldots=k_{\ell}=0\), we have an equality. Assume that \(d>0\). 
If \(\ell\leq r+1\), then \[H^{0}(X,T_{X})\otimes\mathcal{O}_{X}\to T_{X}\] is surjective on \(X\smallsetminus\bigcup_{i=1}^{\ell}\mathsf{E}_{i}\), and so also \[H^{0}(C,f^{*}T_{X})\otimes\mathcal{O}_{C}\to f^{*}T_{X}\] is surjective on \(C\smallsetminus\bigcup_{i=1}^{\ell}f^{-1}(\mathsf{E}_{i})\), which is non-empty. Choosing \(r\) global sections of \(H^{0}(C,f^{*}T_{X})\), linearly independent upon restriction to some point of \(C\), yields a morphism \[\mathcal{O}_{C}^{\oplus r}\to f^{*}T_{X}\] which is surjective outside finitely many points on \(C\). In particular, the induced map \[H^{1}(C,\mathcal{O}_{C})^{\oplus r}\twoheadrightarrow H^{1}(C,f^{*}T_{X})\] is surjective, and this gives the stated conclusion. ### SAE for \(\mathsf{Bl}_{q}(\mathbb{P}^{r})\) Let \(X=\mathsf{Bl}_{q}(\mathbb{P}^{r})\) be the blow-up of \(\mathbb{P}^{r}\) at one point. Write \(\mathsf{H},\mathsf{E}\) for the divisor classes of the pulled-back hyperplane class and the exceptional divisor, respectively, and write \(\mathsf{H}^{\vee},\mathsf{E}^{\vee}\) for the corresponding dual basis of \(H_{2}(X)\). **Theorem 23**.: \(X\) _satisfies SAE._ Proof.: We use Proposition 19. Let \(Z\subseteq\mathcal{M}_{\Gamma}^{(m,a)}\) be an irreducible component dominating \(\overline{\mathcal{M}}_{g,n}\times X^{n}\) and let \([f]\in Z\) be a general point. 1. We verify Condition (*). Assume that \(a=0\) and that the pushforward under \(f\) of each component of \(T_{i}\) is an ordinary class for \(i=1,\ldots,n-m\). Write \(\beta=d\mathsf{H}^{\vee}+k\mathsf{E}^{\vee}\). We distinguish two cases. (a) Suppose that \(f_{*}[C_{\mathrm{sp}}]=-k^{\prime}\mathsf{E}^{\vee}\), with \(k^{\prime}>0\). Notice that, since the pushforward under \(f\) of each component of \(T_{i}\) is an ordinary class for \(i=1,\ldots,n-m\), by Lemma 20, we must have \[d-(k+k^{\prime})\geq 0. \tag{8}\] On the other hand, we have \[\overline{\mathcal{M}}_{g,n}(X,-k^{\prime}\mathsf{E}^{\vee})=\overline{\mathcal{M}}_{g,n}(\mathbb{P}^{r-1},k^{\prime}),\] and so by Lemma 22, \[\dim_{[f|_{C_{\mathrm{sp}}}]}\overline{\mathcal{M}}_{g,n}(X,-k^{\prime}\mathsf{E}^{\vee})\leq\operatorname{vdim}(\overline{\mathcal{M}}_{g,n}(\mathbb{P}^{r-1},k^{\prime}))+rg=(r-4)(1-g)+k^{\prime}r+n+rg.\] Therefore, \[\dim_{[f|_{C_{\mathrm{sp}}}]}(\overline{\mathcal{M}}_{g,n}(X,f_{*}[C_{\mathrm{sp}}]))-\operatorname{vdim}(\overline{\mathcal{M}}_{g,n}(X,f_{*}[C_{\mathrm{sp}}]))+(r-1)g=g-1+k^{\prime}+(r-1)g\leq g-1+d-k+(r-1)g<n-(g+1),\] where the last inequality holds for every \(n\geq 1\) when \(g=0\) and for \(n\) large when \(g>0\). Here, in the last two inequalities, we have used (8) and the dimensional constraint (7). (b) If instead \(f_{*}[C_{\mathrm{sp}}]=d^{\prime}\mathsf{H}^{\vee}+k^{\prime}\mathsf{E}^{\vee}\) with \(d^{\prime}\geq k^{\prime}\geq 0\) and \(d^{\prime}>0\) or \(d^{\prime}=k^{\prime}=0\), then Condition (*) follows from Lemma 22. 2. We verify Condition (**). If some tree \(T_{i}\) for \(i\in\{1,\ldots,n-m\}\) contains a component \(T_{i}^{0}\) whose pushforward via \(f\) is not ordinary, then it must be \[f_{*}[T_{i}^{0}]=-k^{\prime}\mathsf{E}^{\vee}\text{ for some }k^{\prime}>0\] which has degree \(k^{\prime}(r-1)\geq r-1\). Also, \(T_{i}\) has to contain a component \(\overline{T_{i}}\) whose pushforward via \(f\) is ordinary and which, by Lemma 21, has degree at least \(2\). Therefore, \(f_{*}[T_{i}]\) has degree at least \(r+1\). ### SAE for del Pezzo surfaces **Theorem 24**.: _Let \(X\) be a del Pezzo surface. 
Then, \(X\) satisfies SAE._ Proof.: We will use Proposition 19 again. Let \(Z\subseteq\mathcal{M}_{\Gamma}^{(m,a)}\) be an irreducible component and \([f]\in Z\) a general point. 1. First, we verify Condition (*). If \(m\geq g+1\), then by [16, Proposition 13], we have \(h^{1}(C_{\mathrm{sp}},f^{*}T_{X})=0\), and so \[\dim_{[f]_{C_{\mathrm{sp}}}}(\overline{\mathcal{M}}_{g,n}(X,f_{*}[C_{\mathrm{ sp}}]))=\mathrm{vdim}(\overline{\mathcal{M}}_{g,n}(X,f_{*}[C_{\mathrm{sp}}]))\] and Condition (*) holds. If instead \(m<g+1\), then \(n-m\geq n-g-1\). Observe that, for all \(i=1,\ldots,n-m\), the pushforward of some component \(\overline{T_{i}}\) of \(T_{i}\) must be an ordinary class different from \(0\). Therefore, by Lemma 21, we have \[f_{*}([T_{i}])\cdot K_{X}^{\vee}\geq 2\text{ for }i=1,\ldots,n-m,\] and so \[f_{*}[C_{\mathrm{sp}}]\cdot K_{X}^{\vee}\leq\beta\cdot K_{X}^{\vee}-2(n-m)<2( (g+1)+g-1),\] (9) where in the second inequality we used (7). In particular, when \(g=0\), the right hand side is \(0\), yielding a contradiction. Therefore, for \(g=0\), it must be that \(m\geq 1=g+1\) and \(h^{1}(C_{\mathrm{sp}},f^{*}T_{X})=0\). For \(g\) arbitrary, (9) implies that \(f_{*}[C_{\mathrm{sp}}]\) has uniformly bounded degree against \(K_{X}^{\vee}\), so there are only finitely many possibilities for \(f_{*}[C_{\mathrm{sp}}]\) by the polyhedrality of the effective cone of \(X\). Therefore, \(h^{1}(C_{\mathrm{sp}},f^{*}T_{X})\) is bounded by a constant \(b[X,g]\in\mathbb{Z}_{\geq 0}\) (depending only on \(X\) and \(g\)), and we have \[\dim_{[f]_{C_{\mathrm{sp}}}}(\overline{\mathcal{M}}_{g,n}(X,f_{*}[C_{\mathrm{ sp}}]))-\mathrm{vdim}(\overline{\mathcal{M}}_{g,n}(X,f_{*}[C_{\mathrm{sp}}]))\leq b [X,g].\] This implies that Condition (*) holds. 2. We now deal with Condition (**). Let \(T_{i}\) be a tree containing a component whose pushforward under \(f\) is not ordinary. It still must contain a component \(\overline{T_{i}}\) such that \(f_{*}([\overline{T_{i}}])\) is an ordinary curve (and so of degree at least \(2\) by Lemma 21), and at least one more irreducible component, of degree at least \(1\) because \(X\) is Fano. Therefore, \(\deg(f_{*}([T_{i}])\geq 3=r+1\) and we are done. ### SAE in dimension \(3\) Let \(\pi:X\to\mathbb{P}^{3}\) be the blow-up of \(\mathbb{P}^{3}\) at \(\ell\leq 4\) general points. Write \(\mathsf{H},\mathsf{E}_{1},\ldots,\mathsf{E}_{\ell}\) for the divisor classes of the pulled-back hyperplane class and the exceptional divisor, respectively, and write \(\mathsf{H}^{\vee},\mathsf{E}_{1}^{\vee},\ldots,\mathsf{E}_{\ell}^{\vee}\) for the corresponding dual basis of \(H_{2}(X)\). In this section, we prove that \(X\) satisfies SAE (Theorem 32). Note that such \(X\) are not Fano; this is, to our knowledge, the first example of a non-Fano variety whose Tevelev degrees are asymptotically enumerative. #### 2.6.1 Low degree effective curves in \(\mathbb{P}^{3}\) We begin by recalling a result about the effective cone of \(X\). 
**Lemma 25**.: _[_8_, Proposition 4.1]_ _The effective cone of \(X\) is linearly generated by \(1\)-dimensional linear spaces in the exceptional divisors and the strict transforms of \(1\)-dimensional linear subspaces of \(\mathbb{P}^{3}\), possibly passing through the \(\ell\) blown up points._ **Corollary 26**.: _The anticanonical class \(K_{X}^{\vee}=4\mathsf{H}-2(\mathsf{E}_{1}+\cdots+\mathsf{E}_{\ell})\) is nef._ **Remark 27**.: _For any curve class \(\beta\), the intersection number \(\beta\cdot K_{X}^{\vee}\) is always even; we will use this fact later._ **Lemma 28**.: _Let \(C\subset X\) be an irreducible curve of geometric genus 0 and with non-positive degree against \(K_{X}^{\vee}\). Then, \([C]\in H_{2}(X,\mathbb{Z})\) is the class of a the strict transform of a line through two of the \(q_{j}\). (In particular, \(C\) has degree 0 against \(K_{X}^{\vee}\).)_ Proof.: Write \[[C]=d\mathsf{H}^{\vee}+\sum_{j=1}^{\ell}k_{j}\mathsf{E}_{j}^{\vee};\] we may assume \(d,k_{1},\ldots,k_{\ell}\geq 0\). Let \(\mathbb{T}\subset\mathbb{P}^{3}\) be a torus acting on \(\mathbb{P}^{3}\) for which the \(q_{j}\) are torus-fixed points. We also denote by \(\mathbb{T}\subset X\) the pullback to \(X\), so that \(\mathbb{T}\) also acts on \(X\). We claim that every curve in class \([C]\) lies in the strict transform \(\Lambda\) of some torus-fixed hyperplane of \(\mathbb{P}^{3}\). If not, then \([C]\) would be ordinary, and so by Lemma 21, would have degree at least 2. To see that \([C]\) would be ordinary, let \(\eta:\widetilde{C}\to C\) the normalization map and let \(p\in\widetilde{C}\) be such that \(\eta(p)\) does not belong to the strict transform of any torus fixed hyperplane of \(\mathbb{P}^{3}\). Then, the morphism \[\mathbb{T}\times\widetilde{C} \to X\] \[(t,x) \mapsto t\cdot\eta(x)\] would yield a dominant map \[\mathbb{T}\to\mathcal{M}_{0,1}(X,[C])\xrightarrow{\mathrm{ev}_{p}}X\] implying that \([C]\) is ordinary. Notice that \(\Lambda\) is isomorphic to the blow-up of \(\mathbb{P}^{2}\) of at most 3 points, and \([C]\) may be regarded as pushed forward from a class \([C]\in H_{2}(\Lambda,\mathbb{Z})\). Then, if \([C]\) is not the class of the line through two of the \(q_{j}\), then \[d\geq k_{i}+k_{j}\text{ for all }1\leq i<j\leq 3, \tag{10}\] (we take \(k_{j}=0\) for \(j>\ell\) if \(\ell\leq 2\)), whence \[1 \geq K_{X}^{\vee}\cdot\beta\] \[=4d-2(k_{1}+k_{2}+k_{3})\] \[=4d-(k_{1}+k_{2})-(k_{1}+k_{3})-(k_{2}+k_{3})\] \[\geq d.\] On the other hand, the degree against \(K_{X}^{\vee}\) of the classes \(\mathsf{H}^{\vee}+\mathsf{E}_{i}^{\vee}\) and \(-\mathsf{E}_{j}^{\vee}\) is 2, so we find no other possibilities for \(\beta\) **Lemma 29**.: _Let \(\beta\in H_{2}(X,\mathbb{Z})\) be an effective ordinary curve class such that \(K_{X}^{\vee}\cdot C=2\). Then, \(\beta=\mathsf{H}^{\vee}+E_{j}^{\vee}\) for some \(j\in\{1,\ldots,\ell\}\)._ Proof.: Write \[\beta=d\mathsf{H}^{\vee}+\sum_{j=1}^{\ell}k_{j}\mathsf{E}_{j}^{\vee}.\] Then, by hypothesis, \[2=\beta\cdot K_{X}^{\vee}=4d-2(k_{1}+\cdots+k_{\ell})\] and, since \(\beta\) is ordinary, by Lemma 20 we have \[d\geq\sum_{j\in J}k_{j}\text{ for all }J\subseteq\{1,\ldots,\ell\}\text{ with }|J|=3.\] Therefore, we have \(4d\geq 3(k_{1}+\cdots+k_{\ell})\) and so \[2-(k_{1}+\cdots+k_{\ell})=4d-3(k_{1}+\cdots+k_{\ell})\geq 0\] and we obtain \(k_{1}+\cdots+k_{\ell}\in\{0,1,2\}\), from which the lemma follows easily. The previous lemmas motivate the following definition. 
**Definition 30**.: _Let \(0\neq\beta\in H_{2}(X,\mathbb{Z})\) be an effective curve class. We say that:_ * \(\beta\) _is_ **exceptional** _if_ \(\beta=-k\mathsf{E}_{j}^{\vee}\) _for some_ \(k>0\) _and_ \(j\in\{1,\ldots,\ell\}\)_,_ * \(\beta\) _is a_ **fixed line** _if it is a positive multiple of the strict transform of a line in_ \(\mathbb{P}^{3}\) _through two of the blown-up points, and_ * \(\beta\) _is a_ **special line** _if it is not a fixed line and is the strict transform of a line in_ \(\mathbb{P}^{3}\) _through one of the blown-up points._ We remark that exceptional classes and fixed lines are not ordinary, but special lines are. #### 2.6.2 Proof of SAE Let \(\pi:X\to\mathbb{P}^{3}\) be the blow-up at \(\ell\leq 4\) points \(q_{1},\ldots,q_{\ell}\). Fix a genus \(g\geq 0\) and a non-zero effective curve class \(\beta\). We will follow the setting of SS2.2, but need to make suitable modifications now that \(X\) is not Fano. Assume inequality (7) holds. For integers \(a\geq 0\) and \(s,m\geq 0\) such that \(s+m\leq n\), define \[\mathcal{M}_{\Gamma}^{(s,m,a)}\subseteq\overline{\mathcal{M}}_{g,n}(X,\beta)\] to be the locally closed locus parametrizing maps whose domain \(C\) has the following topological type. \(C\) contains a smooth _spine_ component \(C_{\mathrm{sp}}\) of genus \(g\), containing the marked points \(p_{n-m+1},\ldots,p_{n}\). Attached to \(C_{\mathrm{sp}}\), we have trees of rational curves \(T_{1}^{\prime},\ldots,T_{a}^{\prime}\), as well as trees of rational curves \(T_{1},\ldots,T_{n-m}\), such that the tree \(T_{i}\) contains the marked point \(p_{i}\) for \(i=1,\ldots,n-m\). Furthermore, the integer \(s\) is such that \(T_{i}\) contains a non-ordinary component if and only if \(i\leq s\). See also Figure 2. Again, we assume that if \((n-m)+a=0\), that is, if \(C\) is irreducible, then (7) is a _strict_ inequality. Let \(Z\) be an irreducible component of \(\mathcal{M}_{\Gamma}^{(s,m,a)}\). We will prove: **Proposition 31**.: _If either \(n\) is large (depending on \(X\) and \(g\)) or \(g=0\) and \(n\geq 1\), then the component \(Z\) fails to dominate \(X^{n}\times\overline{\mathcal{M}}_{g,n}\)._ As an immediate corollary, we obtain: **Theorem 32**.: \(X\) _satisfies SAE._ We prove Proposition 31 by first making several reductions. Let \([f]\in Z\) be a general point. **Reduction 1:**_It is enough to prove Proposition 31 for \(a=0\)._ Proof.: Because \(f_{*}[T^{\prime}_{i}]\cdot K^{\vee}_{X}\geq 0\), this is clear when \(m<n\), as we may simply delete the components \(T^{\prime}_{i}\), so assume that \(m=n\). For the same reason, we can also assume that \(a=1\), that \(T^{\prime}_{1}\) is irreducible, and that \(f_{*}[T^{\prime}_{1}]\cdot K^{\vee}_{X}=0\). It follows from Lemma 28 and stability that \[f_{*}[T^{\prime}_{1}]=c(\mathsf{H}^{\vee}+\mathsf{E}^{\vee}_{j}+\mathsf{E}^{\vee}_{j^{\prime}})\text{ for some }c\in\mathbb{Z}_{>0}\text{ and }1\leq j<j^{\prime}\leq\ell.\] Let \(\mathsf{L}\) be the line in \(\mathbb{P}^{3}\) through \(\pi(\mathsf{E}_{j})\) and \(\pi(\mathsf{E}_{j^{\prime}})\), and let \(\rho:\widetilde{X}\to X\) be the blow-up of \(X\) along the strict transform of \(\mathsf{L}\). Call \(\overline{\beta}=\beta-f_{*}[T^{\prime}_{1}]\) and let \(\overline{Z}\subset\mathcal{M}_{g,n}(X,\overline{\beta})\) be the irreducible component to which \([f|_{C_{\mathrm{sp}}}]\) belongs. From \(\overline{Z}\) we get an irreducible substack \[\widetilde{Z}\subseteq\mathcal{M}_{g,n}(\widetilde{X},\widetilde{\beta})\] dominating \(\overline{Z}\). 
Here, \(\widetilde{\beta}\in H_{2}(\widetilde{X},\mathbb{Z})\) is an effective curve class with the property that \[\widetilde{\beta}\cdot\mathsf{E}>0\] where \(\mathsf{E}\) is the class of the exceptional divisor in \(\widetilde{X}\). If \(Z\) dominates \(X^{n}\times\mathcal{M}_{g,n}\), then \(\widetilde{Z}\) dominates \(\widetilde{X}^{n}\times\mathcal{M}_{g,n}\), and applying [16, Proposition 13] twice, we obtain \[\dim(Z)=n+\beta\cdot K^{\vee}_{X}>n+\widetilde{\beta}\cdot K^{\vee}_{\widetilde{X}}\geq\dim(\widetilde{Z}),\] contradicting the fact that \(\widetilde{Z}\) dominates \(Z\).

Figure 2: Topological type of the domain curves of points in \(\mathcal{M}^{(s,m,a)}_{\Gamma}\)

Assume henceforth that \(a=0\). **Reduction 2:**_It is enough to prove Proposition 31 for \(s=0\)._ Proof.: If, for some \(i\in\{1,\ldots,s\}\), we have \(f_{*}[T_{i}]\cdot K_{X}^{\vee}\geq 4=\dim(X)+1\), then arguing as in the proof of Proposition 18 above, we can delete \(T_{i}\) from the domain. Thus, we may assume that, for every \(i=1,\ldots,s\), we have \(f_{*}[T_{i}]\cdot K_{X}^{\vee}\leq 2\) (recall that curves on \(X\) always have even anti-canonical degree). Let \[T_{i}=\bigcup_{j=0}^{k_{i}}T_{i}^{j} \tag{11}\] be the decomposition in irreducible components of \(T_{i}\). Then, by Lemmas 28 and 29, \(f_{*}[T_{i}^{j}]\) is a fixed line for all but one \(j\in\{0,\ldots,k_{i}\}\), say except for \(j=0\), and \(f_{*}[T_{i}^{0}]\) is a special line \(\mathsf{H}^{\vee}+\mathsf{E}_{h}^{\vee}\) for some \(h\in\{1,\ldots,\ell\}\). Now, for \([f]\in Z\) such that \(f(p_{i})=x_{i}\), we need \(x_{i}\in f(T_{i}^{0})\). Thus, the image \(f(T_{i}^{0})\) is the strict transform of the unique line between \(q_{h}\) and \(x_{i}\), while for \(j\geq 1\), the reduced image \(f(T_{i}^{j})^{\mathrm{red}}\) is either a point or the strict transform of a line in \(\mathbb{P}^{3}\) through two of the points \(q_{1},\ldots,q_{\ell}\). By assumption, there is at least one \(j\) for which \([f(T_{i}^{j})^{\mathrm{red}}]\) is a fixed line. However, for general \(x_{i}\), two such curves do not meet in \(X\). Thus, \(Z\) cannot dominate \(\overline{\mathcal{M}}_{g,n}\times X^{n}\) unless \(s=0\). Assume henceforth that \(a=s=0\). **Reduction 3:**_We can assume that each \(T_{i}\) is irreducible and that \(f_{*}[T_{i}]\) is a special line for \(i=1,\ldots,n-m\)._ Proof.: If \(f_{*}[T_{i}]\cdot K_{X}^{\vee}\geq 4\), then arguing as in the proof of Proposition 18, we can delete \(T_{i}\). By Lemma 21, we can assume that if (11) is the decomposition in irreducible components of \(T_{i}\), then \(f(T_{i}^{j})\) is a point unless \(j=0\), and \(f(T_{i}^{0})\) is the strict transform of a line through one of the points \(q_{1},\ldots,q_{\ell}\). In particular, replacing \(T_{i}\) with \(T_{i}^{0}\), it is enough to prove Proposition 31 when \(T_{i}=T_{i}^{0}\). Assume henceforth that each \(T_{i}\) is irreducible and that \(f_{*}[T_{i}]\) is a special line for \(i=1,\ldots,n-m\). **Reduction 4:**_It is enough to prove the following analogue of condition (*):_ * (*') _The following inequality holds:_ \[\dim_{[f|_{C_{\mathrm{sp}}}]}(\overline{\mathcal{M}}_{g,n}(X,f_{*}[C_{\mathrm{sp}}]))-\operatorname{vdim}(\overline{\mathcal{M}}_{g,n}(X,f_{*}[C_{\mathrm{sp}}]))-n\leq-(g+1);\] Proof.: The proof is the same as that of Proposition 17. Proof of Proposition 31.: It is enough to prove that condition (*') holds whenever \(n\) is large (depending on \(X\) and \(g\)) and whenever \(g=0\) and \(n\geq 1\). We distinguish two cases. 
* Suppose that \(f_{*}[C_{\rm sp}]=-k^{\prime}{\sf E}_{h}^{\vee}\) with \(k^{\prime}>0\), for some \(h\in\{1,\ldots,\ell\}\). Then, \(f(T_{i})\) must be the strict transform of some line in \(\mathbb{P}^{3}\) through \(q_{h}\), and \(C_{\rm sp}\) cannot contain any marked point (i.e., \(m=0\)). In particular, we have \[\beta=f_{*}[C]=n{\sf H}^{\vee}+(n-k^{\prime}){\sf E}_{h}^{\vee},\] and so by Equation (1), \[3(n+g-1)=\beta\cdot K_{X}^{\vee}=4n-2(n-k^{\prime})=2n+2k^{\prime},\] from which we deduce \[k^{\prime}=\frac{n}{2}+\frac{3}{2}(g-1).\] Proceeding as in Theorem 23, we find \[\dim_{[f|_{C_{\rm sp}}]}(\overline{\mathcal{M}}_{g,n}(X,f_{*}[C_{\rm sp}]))-{\rm vdim}(\overline{\mathcal{M}}_{g,n}(X,f_{*}[C_{\rm sp}]))\leq 2g-1+k^{\prime}\leq\frac{7}{2}g-\frac{5}{2}+\frac{n}{2}\] which implies the required inequality (*'). * If \(f_{*}[C_{\rm sp}]=d^{\prime}{\sf H}^{\vee}+k^{\prime}_{1}{\sf E}_{1}^{\vee}+\cdots+k^{\prime}_{\ell}{\sf E}_{\ell}^{\vee}\) with \(d^{\prime}>0\) or it is the \(0\) class, then condition (*') follows from Lemma 22. This concludes the proof. ## 3 Geometric counts ### Integral formula on \(\mathbb{P}\) In this section, we prove Proposition 10. We first recall the setup of SS1.3.2. Let \(\pi:X\to\mathbb{P}^{r}\) be the blow-up of \(\mathbb{P}^{r}\) at \(\ell\leq r+1\) torus-fixed points, and let \[\beta=d{\sf H}^{\vee}+\sum_{i=1}^{\ell}k_{i}{\sf E}_{i}^{\vee}\in H_{2}(X,\mathbb{Z})\] be an effective curve class. Let \(S:=\mathsf{Jac}^{d}(C)\times\mathsf{Sym}^{k_{1}}(C)\times\ldots\times\mathsf{Sym}^{k_{\ell}}(C)\), and let \(\eta:\mathbb{P}\to S\) be the projective bundle parametrizing \((r+1)\)-tuples of sections \[f_{i}\in H^{0}\left(C,\mathcal{L}\left(-\sum_{j\neq i}D_{j}\right)\right)\] up to simultaneous scaling. We will write \[\mathcal{L}^{\prime}:=\mathcal{L}\left(-\sum_{j=1}^{\ell}D_{j}\right).\] Recall also the notation \[\widetilde{\mathsf{H}}=c_{1}(\mathcal{O}_{\mathbb{P}}(1))-\eta_{1}-\cdots-\eta_{\ell}\in H^{*}(\mathbb{P},\mathbb{Z})\] where \(\eta_{i}\) is the pullback from \(\mathsf{Sym}^{k_{i}}(C)\) of the class of the divisor \(N_{i}=\{D:D-p\geq 0\}\) (here \(p\) is any fixed point in \(C\)). Fix general points \(x_{1},\ldots,x_{n}\in X\). We can assume that \(x_{i}\) lies in the locus where \(\pi\) is an isomorphism and identify \(x_{i}\) with a point of \(\mathbb{P}^{r}\), so that we can write \[x_{i}=[x_{i,1}:\cdots:x_{i,r+1}].\] We wish to cut out subschemes \(V(x_{i})\subset\mathbb{P}\) corresponding to the conditions \(f(p_{i})=x_{i}\), but care must be taken when some of the \(f_{j}\) vanish at \(p_{i}\). For all \(i=1,\ldots,n\) and \(a\leq r\), let \(V(x_{i})_{a}\subseteq\mathbb{P}\) denote the closed subscheme of points \[(\mathcal{L},D_{1},\ldots,D_{\ell},\{f=[f_{1}:\cdots:f_{r+1}]\})\] satisfying the equations \[x_{i,j}\cdot f_{h}(p_{i})-x_{i,h}\cdot f_{j}(p_{i})=0 \tag{12}\] for all \(h,j\) with \(1\leq h<j\leq a+1\), where we interpret \(f_{h}\) and \(f_{j}\) as sections of \(\mathcal{L}^{\prime}(D_{h}+D_{j})\) (upon twisting by \(D_{j}\) and \(D_{h}\), respectively), and then restrict to \(p_{i}\). Write \(V(x_{i})=V(x_{i})_{r}\) (note more generally that the locus \(V(x_{i})_{a}\) corresponds to the condition that \(f(p_{i})\) lies in a general linear space of codimension \(a\)), and define \[V(x_{1},\ldots,x_{n})=\bigcap_{i=1}^{n}V(x_{i}).\] Proposition 10 is implied by the following two statements. 
**Proposition 33**.: _For all \(i=1,\ldots,n\), we have_ \[[V(x_{i})_{a}]=\widetilde{\mathsf{H}}^{a}+\sigma_{1}\widetilde{\mathsf{H}}^{a -1}+\cdots+\sigma_{a}\text{ in }H^{2a}(\mathbb{P}).\] _where \(\sigma_{i}=\sigma_{i}(\eta_{1},\ldots,\eta_{a+1})\) for \(i=1,\ldots,a\)._ **Proposition 34** (Transversality).: _The scheme \(V(x_{1},\ldots,x_{n})\) is reduced and \(0\)-dimensional. Moreover, its \(\mathbb{C}\)-points are in bijection with the set of maps enumerated in \(\mathsf{Tev}^{X}_{g,n,\beta}\). (In particular, the \(f_{j}\), when regarded as sections of \(\mathcal{L}\), do not simultaneously vanish at any \(p\in C\).)_ The proof of transversality is deferred to the next section. Here, we prove Proposition 33. Throughout the rest of this section, we work with a fixed \(i\), and assume that \(x_{i}=[1:\cdots:1]\). Also, by setting \(\eta_{i}=0\) for \(i>\ell\), we assume that \(\ell=r+1\). **Lemma 35**.: _For all \(a\leq r\), the subscheme \(V(x_{i})_{a}\subset\mathbb{P}\) is generically reduced and irreducible of codimension \(a\)._ Proof.: We first prove the lemma upon restriction to the open set \(\mathbb{P}_{0}\subset\mathbb{P}\) where \(p_{i}\notin D_{v}\) for all \(v=1,2,\ldots,r+1\). In this case, the proposition is already true upon restriction to the fibers of the projective bundle \(\mathbb{P}\). Fix \(\mathcal{L}\in\mathsf{Jac}^{d}C\) and \(D_{v}\in\mathsf{Sym}^{k_{v}}C\), and write \[\mathbb{P}^{\prime}:=\mathbb{P}\left(\bigoplus_{t=1}^{r+1}H^{0}(C,\mathcal{L }^{\prime}(D_{t}))\right).\] Indeed, the restriction of \(V(x_{i})_{a}\) to \(\mathbb{P}^{\prime}\) is the linear space cut out by the \(a\) independent linear equations \(f_{1}(p_{i})=f_{v}(p_{i})\) for \(v=2,3,\ldots,a+1\), where we may as well consider \(f_{1},\ldots,f_{a+1}\) all as sections of the same line bundle \(\mathcal{L}\). We now claim that \(\mathbb{P}_{0}\cap V(x_{i})_{a}\) is dense in \(V(x_{i})_{a}\), which will prove the lemma. We show that any point \[(\mathcal{L},D_{1},\ldots,D_{r+1},\{f=[f_{1}:\cdots:f_{r+1}]\})\in V(x_{i})_{a}\] is a limit of points in \(\mathbb{P}_{0}\cap V(x_{i})_{a}\). Let \(B\) be the spectrum of a discrete valuation ring, and let \(\pi:C\times B\to B\) be the projection. Let \(\widetilde{D}_{1},\ldots,\widetilde{D}_{r+1}\) be divisors on \(C\times B\) restricting to \(D_{1},\ldots,D_{r+1}\) on the special fiber, and such that \(p_{i}\notin\widetilde{D}_{1},\ldots,\widetilde{D}_{r+1}\) upon restriction to the generic fiber. For each \(t\), we construct a section \[\widetilde{f}_{t}\in H^{0}(C\times B,\mathcal{L}^{\prime}(\widetilde{D}_{t}) )\cong H^{0}(B,\pi_{*}(\mathcal{L}^{\prime}(\widetilde{D}_{t})))\] restricting to \(f_{t}\) on the special fiber, and for which the restrictions of \(\widetilde{f}_{t}\) to the \(\{p_{i}\}\times B\) are all equal to each other, when regarded as sections of \(H^{0}(B,\mathcal{L}|_{p_{i}})\). By the assumption that \(p_{i}\notin\widetilde{D}_{1},\ldots,\widetilde{D}_{r+1}\) on the generic fiber, the \(\widetilde{f}_{t}\) and \(\widetilde{D}_{t}\) define a \(K(B)\)-point of \(\mathbb{P}_{0}\cap V(x_{i})_{a}\), as needed. The construction is as follows. First, consider the restriction map \[\gamma:H^{0}(C\times B,\mathcal{L}^{\prime}(\widetilde{D}_{1}))\to H^{0} \left(C\times B,\,\mathcal{L}^{\prime}(\widetilde{D}_{1})\Big{|}_{0}\right) \cong H^{0}\left(C,\mathcal{L}^{\prime}(D_{1})\right))\,,\] where \(0\in B\) is the closed point. 
We have \(H^{1}(C\times B,\mathcal{L}^{\prime}(\widetilde{D}_{1})(-C\times 0))=0\) by Cohomology and Base Change and the Leray spectral sequence, so \(\gamma\) is surjective, and we may define \(\widetilde{f}_{1}\) to be any section extending \(f_{1}\). Similarly, for each \(t\), the restriction map \[\gamma^{\prime}:H^{0}(C\times B,\mathcal{L}^{\prime}(\widetilde{D}_{t}))\to H^{0}\left(C\times B,\,\mathcal{L}^{\prime}(\widetilde{D}_{t})\big{|}_{(C\times 0)\cup(p_{i}\times B)}\right)\] is surjective, because \(H^{1}(C\times B,\mathcal{L}^{\prime}(\widetilde{D}_{t})(-(C\times 0)-(p_{i}\times B)))=0\). Note here that in order to ensure that \(R^{1}\pi_{*}(\mathcal{L}^{\prime}(\widetilde{D}_{t})(-(C\times 0)-(p_{i}\times B)))=0\), we need the right hand side of (3) to be \(2g-1\), rather than \(2g-2\). Therefore, we can define all other \(\widetilde{f}_{t}\) extending \(f_{t}\) _and_ with \(\widetilde{f}_{1}(p_{i})=\widetilde{f}_{t}(p_{i})\) in \(\mathcal{L}\). For all \(t=1,2,\ldots,r+1\), let \(W_{t}\subset\mathbb{P}\) denote the divisor cut out by the equation \(f_{t}(p_{i})=0\), where \(f_{t}\) is viewed as a section of \(\mathcal{L}^{\prime}(D_{t})\). Similarly, for all \(t=1,2,\ldots,r\), let \(W_{t,t+1}\) denote the locus on \(\mathbb{P}\) of points satisfying \(f_{t}(p_{i})-f_{t+1}(p_{i})=0\), where \(f_{t}-f_{t+1}\) is viewed as a section of \(\mathcal{L}^{\prime}(D_{t}+D_{t+1})\). Condition (3) implies that \(W_{t}\) and \(W_{t,t+1}\) are indeed divisors. **Lemma 36**.: _We have_ \[[W_{t}]=\widetilde{\mathsf{H}}+\eta_{t}\] _and_ \[[W_{t,t+1}]=\widetilde{\mathsf{H}}+\eta_{t}+\eta_{t+1}\] _in \(H^{2}(\mathbb{P})\)._ Proof.: We prove the second statement; the first is similar. The locus \(W_{t,t+1}\) is cut out by a tautological section of the line bundle \[\mathcal{O}_{\mathbb{P}}(1)\otimes\nu_{*}\left(\left.\mathcal{P}\left(-\sum_{v\neq t,t+1}\mathcal{D}_{v}\right)\right|_{p_{i}}\right)\cong\mathcal{O}_{\mathbb{P}}(1)\otimes\bigotimes_{v\neq t,t+1}\mathcal{O}_{\mathsf{Sym}^{k_{v}}C}(-N_{v}),\] where \(\nu:C\times S\to S\) is the projection. Here, we have used the isomorphisms \[\mathcal{P}|_{\mathsf{Jac}^{d}(C)\times\{p_{i}\}}\cong\mathcal{O}_{\mathsf{Jac}^{d}(C)}\] and \[\mathcal{O}(-\mathcal{D}_{v})|_{\mathsf{Sym}^{k_{v}}(C)\times\{p_{i}\}}\cong\mathcal{O}_{\mathsf{Sym}^{k_{v}}C}(-N_{v})\text{ for }v=1,...,r+1.\] Therefore, the desired class is \[c_{1}(\mathcal{O}_{\mathbb{P}}(1))-\sum_{v\neq t,t+1}\eta_{v}=\widetilde{\mathsf{H}}+\eta_{t}+\eta_{t+1}\in H^{2}(\mathbb{P}).\] **Lemma 37**.: _Let \([V]_{a}\in H^{2a}(\mathbb{P})\) be the class of \(V(x_{i})_{a}\subset\mathbb{P}\). Then, for all \(a\leq r\), we have_ \[[V]_{a}=[V]_{a-1}(\widetilde{\mathsf{H}}+\eta_{a}+\eta_{a+1})-[V]_{a-2}(\widetilde{\mathsf{H}}+\eta_{a})\eta_{a}, \tag{13}\] _where we set \([V]_{-1}=0\) by convention._ Proof.: The claim follows from the following three statements, after applying Lemma 36. 1. We have a set-theoretic equality \[(V_{a-1}\cap W_{a,a+1})=V_{a}\cup(V_{a-2}\cap W_{a}\cap N_{a})\] in \(\mathbb{P}\). 2. The subschemes \(V_{a-1}\cap W_{a,a+1}\) and \(V_{a-2}\cap W_{a}\cap N_{a}\) of \(\mathbb{P}\) (in addition to \(V_{a}\)) are generically reduced and irreducible of codimension \(a\). 3. \(V_{a-1}\cap W_{a,a+1}\) and \(V_{a-2}\cap W_{a}\cap N_{a}\) are not equal. Consider (1). The inclusion \(\supset\) is straightforward. A point \(f\in V_{a-1}\cap W_{a,a+1}\) satisfies (12) whenever \(1\leq h<j\leq a\), and in addition for \((h,j)=(a,a+1)\). 
If it is not the case that \(f\in V_{a}\), then (12) must fail for \(j=a+1\) and some \(h<a\); without loss of generality, take \(h=1\). The sections \(f_{1}\in\mathcal{L}^{\prime}(D_{1})\) and \(f_{a+1}\in\mathcal{L}^{\prime}(D_{a+1})\) are not equal at \(p_{i}\) when regarded as sections of \(\mathcal{L}^{\prime}(D_{1}+D_{a+1})\), but _are_ equal at \(p_{i}\) when regarded as sections of \(\mathcal{L}^{\prime}(D_{1}+D_{a}+D_{a+1})\), by applying (12) for \((h,j)=(1,a),(a,a+1)\). Therefore, we must have \(p_{i}\in D_{a}\), that is, \(f\in N_{a}\). Because we furthermore have \(f\in W_{a,a+1}\), we conclude that \(f_{a}\) is zero at \(p_{i}\) as a section of \(\mathcal{L}^{\prime}(D_{a}+D_{a+1})\). Similarly, applying (12) for \((h,j)=(1,a)\), we have that \(f_{a}\) is zero at \(p_{i}\) as a section of \(\mathcal{L}^{\prime}(D_{1}+D_{a})\). Now, either \(f_{a}\) is zero at \(p_{i}\) as a section of \(\mathcal{L}^{\prime}(D_{a})\), in which case we are done, or \(f\in N_{1}\cap N_{a+1}\), in which case (12) holds for \((h,j)=(1,a+1)\), contradicting the assumption at the beginning. (2) is proven exactly as in Lemma 35, namely, \(V_{a-1}\cap W_{a,a+1}\) is generically a sub-projective bundle of \(\mathbb{P}\) of codimension \(a\), and \(V_{a-2}\cap W_{a}\cap N_{a}\) is generically a sub-projective bundle of the pullback of \(\mathbb{P}\) over \(N_{a}\) of codimension \(a-1\). The details are omitted. (3) is immediate, for example, from the fact that \(N_{a}\) does not contain \(V_{a-1}\cap W_{a,a+1}\). Proof of Proposition 33.: We proceed by induction on \(a\). When \(a=1\), this is Lemma 36. Suppose \(a>1\). By the inductive hypothesis and Lemma 37, we have \[[V]_{a}= [V]_{a-1}(\widetilde{\mathsf{H}}+\eta_{a}+\eta_{a+1})-[V]_{a-2}(\widetilde{\mathsf{H}}+\eta_{a})\eta_{a}\] \[= \left(\widetilde{\mathsf{H}}^{a-1}+\sigma_{1}(\eta_{1},\ldots,\eta_{a})\widetilde{\mathsf{H}}^{a-2}+\cdots+\sigma_{a-1}(\eta_{1},\ldots,\eta_{a})\right)(\widetilde{\mathsf{H}}+\eta_{a}+\eta_{a+1})\] \[-\left(\widetilde{\mathsf{H}}^{a-2}+\sigma_{1}(\eta_{1},\ldots,\eta_{a-1})\widetilde{\mathsf{H}}^{a-3}+\cdots+\sigma_{a-2}(\eta_{1},\ldots,\eta_{a-1})\right)(\widetilde{\mathsf{H}}+\eta_{a})\eta_{a}\] \[= \widetilde{\mathsf{H}}^{a}+\sigma_{1}(\eta_{1},\ldots,\eta_{a+1})\widetilde{\mathsf{H}}^{a-1}+\cdots+\sigma_{a}(\eta_{1},\ldots,\eta_{a+1})\] where in the last equality we used the identities \[(\eta_{a}+\eta_{a+1})\sigma_{i-1}(\eta_{1},\ldots,\eta_{a})+\sigma_{i}(\eta_{1},\ldots,\eta_{a})-\eta_{a}\sigma_{i-1}(\eta_{1},\ldots,\eta_{a-1})-\eta_{a}^{2}\sigma_{i-2}(\eta_{1},\ldots,\eta_{a-1})=\sigma_{i}(\eta_{1},\ldots,\eta_{a+1})\] for \(i=1,\ldots,a\). #### 3.1.1 Interlude on the permutohedral variety For the proof of transversality, it will be convenient for formal reasons to pass from \(X\) to a further blow-up \(Y\), the _permutohedral variety_, along higher-dimensional linear subspaces. We now describe this blow-up and its relevant properties. Fix a dimension \(r\geq 2\). Write \([r+1]=\{1,2,\ldots,r+1\}\). For a non-empty subset \(S\subsetneq[r+1]\), let \(\Lambda_{S}\subset\mathbb{P}^{r}\) denote the torus-invariant linear space of dimension \((\#S-1)\) given by the vanishing of the coordinates indexed by the complement \([r+1]\backslash S\). 
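For example, for \(r=3\) and \(S=\{1,2\}\), \(\Lambda_{S}\) is the torus-invariant line \(\{x_{3}=x_{4}=0\}\subset\mathbb{P}^{3}\), while for \(S=[r+1]\smallsetminus\{j\}\) it is the coordinate hyperplane \(\{x_{j}=0\}\).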
**Definition 38**.: _Let \(\rho:Y\to X\) be the blow-up of \(X\) (which is itself obtained by blowing up the torus-fixed points of \(\mathbb{P}^{r}\)) at the strict transforms of the \(\binom{r+1}{2}\) torus-invariant lines of \(\mathbb{P}^{r}\), followed by the strict transforms of the \(\binom{r+1}{3}\) torus-invariant planes of \(\mathbb{P}^{r}\), and so on, through the torus-invariant codimension 2 subspaces._ **Definition 39**.: _For \(S\subset[r+1]\) of cardinality at most \(r-1\), let \(\mathsf{E}_{S}\) be the class of the strict transform of \(\Lambda_{S}\). Let \(\mathsf{H}^{\vee},\mathsf{E}_{S}^{\vee}\in H_{2}(Y)\) be the basis of 1-cycles dual to the basis \(\mathsf{H},\mathsf{E}_{S}\in H^{2}(Y)\) of divisors._ Note that \[K_{Y}=(r+1)\mathsf{H}-\sum_{\begin{subarray}{c}S\subset[r+1]\\ 1\leq\#S\leq r-1\end{subarray}}(r-\#S)\mathsf{E}_{S}.\] A point \(y\in Y\) can be expressed in coordinates as follows. First, take a point \(y_{0}\in\mathbb{P}^{r}=\mathbb{P}(\mathbb{C}[[r+1]])\), where \(\mathbb{C}[[r+1]]\) denotes the vector space with basis given by the set \([r+1]=\{1,2,\ldots,r+1\}\). The point \(y_{0}\) is the image of \(y\) under the composite blowup \(Y\to\mathbb{P}^{r}\). Then, let \(S^{1}\subsetneq[r+1]=:S^{0}\) be the subset of coordinates of \(y_{0}\) which are equal to zero, corresponding to the minimal \(T\)-invariant subvariety of \(\mathbb{P}^{r}\) in which \(y_{0}\) lies. Then, let \(y_{1}\in\mathbb{P}(\mathbb{C}[S^{1}])\cong\mathbb{P}^{\#S^{1}-1}\) be a point, representing a projectivized normal vector to \(\Lambda_{(S^{1})^{c}}\) at \(y_{0}\). Define \(S^{2}\subsetneq S^{1}\) analogously, as the set of coordinates of \(y_{1}\) equal to zero, and continue until \(y_{k}\) has all coordinates non-zero. Then, \(y\) consists of the data of the points \((y_{0},y_{1},\ldots,y_{k})\). Let \(S\subset[r+1]\) be any non-empty subset. Then, we have a projection map \(\rho_{S}:Y\to\mathbb{P}(\mathbb{C}[S])\) given by remembering the coordinates of \(y=(y_{0},y_{1},\ldots,y_{k})\) corresponding to \(S\), at the unique point \(y_{k^{\prime}}\) for which \(S\subseteq S^{k^{\prime}}\) and the corresponding coordinates are not all zero. Let \(C\) be a smooth curve. A map \(f:C\to Y\), which, upon post-composition with the blow-up \(Y\to\mathbb{P}^{r}\), has image not contained in any torus-invariant subvariety, may be is given by the following data: * a line bundle \(\mathcal{L}\) on \(C\), * for each \(S\subset[r+1]\) with \(1\leq\#S\leq r-1\), an effective divisor \(D_{S}\subset C\), and * for each \(j\in[r+1]\), a non-zero section \[f_{j}\in H^{0}\left(C,\mathcal{L}\left(-\sum_{j\not\in S}D_{S}\right)\right),\] such that, when regarded as sections in \(H^{0}(C,\mathcal{L})\), the \(f_{j}\) have no common vanishing locus. The point \(f(p)=(y_{0},\ldots,y_{k})\) may be computed as follows. First \(y_{0}\) is the point \([f_{0}(p):\cdots:f_{r+1}(p)]\), where the \(f_{j}\) are regarded as sections of \(\mathcal{L}\). Then, the subset \(S^{1}\subset[r+1]\) is the set of indices \(j\) for which \(f_{j}(p)=0\). The point \(y_{1}\) has coordinates given by the values of \(f_{j}(p)\) after twisting \(\mathcal{L}\) down by the unique positive multiple of \(p\) for which the \(f_{j}(p)\) are well-defined and not all zero for \(j\in S^{1}\). The rest of the \(y_{2},\ldots,y_{k}\) are then determined by further twists of \(\mathcal{L}\). #### 3.1.2 Transversality In this section, we prove Proposition 34. We continue to assume \(\ell=r+1\). 
The main difficulty is to show that a point of \(V(x_{1},\ldots,x_{n})\) represents an "honest" map in class \(\beta\). More precisely, let \(\mathbb{P}^{\circ}\subset\mathbb{P}\) be the open locus of \(f=[f_{1}:\cdots:f_{r+1}]\) for which the \(f_{j}\) share no common zeroes, when simultaneously viewed as sections of \(\mathcal{L}\). Then, we wish to show that \(V(x_{1},\ldots,x_{n})\subset\mathbb{P}^{\circ}\). In order for our argument to work, we will first need the following lemma. **Lemma 40**.: _Suppose \(f=[f_{1}:\cdots:f_{r+1}]\) is a point of \(V(x_{1},\ldots,x_{n})\). Then, for each \(j\), we have \(f_{j}\neq 0\) as a section of \(\mathcal{L}^{\prime}(D_{j})\)_ Proof.: Suppose, without loss of generality, that \(f_{1}=0\) and that \(f_{2}\neq 0\) as sections of \(\mathcal{L}^{\prime}(D_{1})\) and \(\mathcal{L}^{\prime}(D_{2})\), respectively (note that we cannot have all \(f_{j}=0\)). We may assume that none of the \(x_{i}\) lie in the hyperplane in which \(x_{i,1}=0\), so we must have that \(f_{2}\) vanishes at _all_\(p_{i}\) as a section of \(\mathcal{L}^{\prime}(D_{1}+D_{2})\). In particular, we need \(\deg(\mathcal{L}^{\prime}(D_{1}+D_{2}))\geq n\), that is, \[d-(k_{3}+\cdots+k_{r+1})\geq n,\] which immediately contradicts the assumption that \(n-d\geq g+1\). **Remark 41**.: _We will later use the assumption \(n-d\geq g+1\) again (see Lemma 50), but we point out that it is crucial for Lemma 40 to hold, which in turn is necessary for the transversality._ _For example, when \(g=0\) and \(r\geq 3\), suppose that the class \(\beta=d\mathsf{H}^{\vee}+k\mathsf{E}_{1}^{\vee}\) satisfies \(0\leq k\leq d\leq(r-1)k-r\). Then, we have \(n\leq d\), so one can take \(f_{2}=\cdots=f_{r+1}=0\) and \(f_{1}\in H^{0}(\mathbb{P}^{1},\mathcal{O}(d))\) to vanish at all of the \(p_{1},\ldots,p_{n}\). This construction produces points of \(V(x_{1},\ldots,x_{n})\) not lying in \(\mathbb{P}^{\circ}\), and infinitely many such if \(d>n\)._ _On the other hand, when \(g=0\) and \(r=2\), one can check that the assumption \(n-d\geq 1\) can be dropped, also later in Lemma 50._ We now turn to the main argument. The strategy is as follows: for any initial \(f_{\mathrm{init}}\in V(x_{1},\ldots,x_{n})\), we describe an algorithm that essentially amounts to twisting down at base-points of \(f_{\mathrm{init}}\) until we are able to define a map \(f:C\to\mathbb{P}^{r}\). The map \(f\) will also have various incidence conditions with respect to the torus-invariant loci of \(\mathbb{P}^{r}\) and the \(p_{i},x_{i}\); a naive dimension count would immediately predict the inexistence of such a map, unless \(f\in\mathbb{P}^{\circ}\) to begin with. We show that this expectation holds by passing from \(f\) to a map from \(f:C\to Y\), where \(Y\) is the permutohedral variety of the previous section, moving in a family of the expected dimension by Lemma 1. 
Our procedure consists of a sequence of modifications to the following data: * A line bundle \(\mathcal{L}\), * For every non-empty subset \(S\subsetneq\{1,2,\ldots,r+1\}\), an effective divisor \(D_{S}\subset C\), and * For all \(j=1,2,\ldots,r+1\), a _non-zero_ (see Lemma 40) section \[f_{j}\in H^{0}\left(C,\mathcal{L}\left(-\sum_{j\notin S}D_{S}\right)\right).\] The initial data is given by the data underlying the point \(f_{\mathrm{init}}\in\mathbb{P}\): namely, the underlying line bundle denoted \(\mathcal{L}_{\mathrm{init}}\), the divisors \(D_{\{j\}}\coloneqq D_{j}\) and \(D_{S}=0\) if \(\#S>1\), and the sections \((f_{j})_{\mathrm{init}}\) of \(\mathcal{L}_{\mathrm{init}}(-\sum_{v\neq j}D_{v})\). We say that \(p\) is a _base-point_ for the above data if \(f_{j}(p)=0\) as a section of \(\mathcal{L}\) for all \(j=1,2,\ldots,r+1\). Throughout, we distinguish the initial line bundle and sections \(\mathcal{L}_{\mathrm{init}},(f_{j})_{\mathrm{init}}\), which do not change, from the \(\mathcal{L},f_{j}\), which do. The letter \(f\) is only used at the end when we obtain a map \(f:C\to\mathbb{P}^{r}\) after no base-points remain. Similarly, we use \(D_{j}\) to denote the divisor associated to the point \(f_{\mathrm{init}}\in V(x_{1},\ldots,x_{n})\), which does not change, to distinguish it from \(D_{\{j\}}\), which does. We will repeatedly consider the following two conditions on \(f_{j}\) given \(p\in C\): * (i) \(p\in D_{\{j\}}\); * (ii) \(f_{j}(p)=0\) as a section of \(H^{0}\left(C,\mathcal{L}\left(-\sum_{v\neq j}D_{\{v\}}\right)\right)\). We now describe the algorithm. Write \(C^{\circ}=C-\{p_{1},\ldots,p_{n}\}\). The first three steps will be carried out independently for all points \(p\in C^{\circ}\); we fix such a point \(p\) throughout. 1. \((p\in C^{\circ}\) satisfies (ii) for _all_\(j\)) Let \(\alpha>0\) be the largest order to which every \(f_{j}\) (as a section of \(\mathcal{L}(-\sum_{v\neq j}D_{\{v\}})\)) vanishes at \(p\). Then, make the following modifications: * \(\mathcal{L}\) is replaced by \(\mathcal{L}(-\alpha p)\), * each \(D_{S}\) stays the same (in particular, all \(D_{S}\) with \(\#S>1\) remain empty), and * the new section \(f_{j}\in H^{0}\left(C,\mathcal{L}\left(-\sum_{v\neq j}D_{\{v\}}\right)\right)\) is taken to be the pre-image of the original section \(f_{j}\) under the inclusion \[H^{0}\left(C,\mathcal{L}(-\alpha p)\left(-\sum_{v\neq j}D_{\{v\}}\right)\right)\to H^{0}\left(C,\mathcal{L}\left(-\sum_{v\neq j}D_{\{v\}}\right)\right).\] (In other words, \(f_{j}\) is "twisted down by \(\alpha\) at \(p\).") After step 1, (ii) may still hold for some \(j\), but will no longer hold for all \(j\). 2. \((p\in C^{\circ}\) satisfies (i) and (ii) for _some_\(j\)) For any \(p,j\) as above, let \(\alpha_{p,j}>0\) be the maximal order to which both properties (i), (ii) hold, that is, \(D_{\{j\}}\) contains \(p\) with multiplicity \(\alpha_{p,j}\) and \(f_{j}\) vanishes at \(p\) to order \(\alpha_{p,j}\). Then, make the following modifications (independently for all \(j\)): * \({\cal L}\) is replaced by \({\cal L}(-\alpha_{p,j}p)\), * \(D_{\{j\}}\) is replaced by \(D_{\{j\}}-\alpha_{p,j}p\), while all \(D_{S}\) with \(\#S>1\) remain empty, and * \(f_{j}\) is twisted down by \(\alpha_{p,j}\) at \(p\). After step 2, we have \({\cal L},D_{S},f_{j}\) (still \(D_{S}=0\) if \(\#S>1\)) with the property that, for any \(p\in C^{\circ}\) and any \(j\in[r+1]\), at most one of the properties (i), (ii) holds. 
3. \((p\in C^{\circ}\) satisfies (i) for _more than one_\(j\)) Let \(S_{1}\) be the set of \(j\in[r+1]\) for which (i) holds. Let \(\alpha_{j}\) be the order to which \(D_{\{j\}}\) contains \(p\); in particular, \(\alpha_{j}=0\) if \(j\notin S_{1}\). Write \(\alpha_{\max}\) for the largest of the \(\alpha_{j}\), and write \(\alpha_{\rm tot}\) for the sum of all of the \(\alpha_{j}\). As a section of \({\cal L}\), the order of vanishing of \(f_{j}\) at \(p\) is at least \(\alpha_{\rm tot}-\alpha_{j}\), and, after step 2, we have equality whenever \(j\in S_{1}\). In particular, the common order of vanishing of the \(f_{j}\) is equal to \(\alpha_{\rm tot}-\alpha_{\max}\), which is strictly positive exactly when \(\#S_{1}\geq 2\). To remove the base-point at \(p\), we wish therefore to twist down our sections \(f_{j}\in H^{0}({\cal L})\) by \(\alpha_{\rm tot}-\alpha_{\max}\) at \(p\). After the twist, the new vanishing order of \(f_{j}\) at \(p\) (as a section of \({\cal L}\)) will be \(\alpha^{\prime}_{j}:=\alpha_{\max}-\alpha_{j}\). We will keep track of this new vanishing condition via the divisors \(D_{S}\). This is achieved more precisely as follows. First, write \[0=\alpha^{\prime}(1)<\cdots<\alpha^{\prime}(t)\] for the _distinct_ integers appearing among the \(\alpha^{\prime}_{j}\). (Note that \(\alpha^{\prime}(t)\) is equal to \(\alpha_{\max}\) unless \(S_{1}=[r+1]\).) Then, define the filtration \[[r+1]=S^{1}\supsetneq S^{2}\supsetneq\cdots\supsetneq S^{t}\] by \[S^{m}=\{j\in[r+1]|\alpha^{\prime}_{j}\geq\alpha^{\prime}(m)\}.\] Note that \(S^{t}=[r+1]-S_{1}\) if \(S_{1}\subsetneq[r+1]\). Finally, we make the following modifications to our data. * \({\cal L}\) is replaced by \({\cal L}(-(\alpha_{\rm tot}-\alpha_{\max})p)\), * each \(D_{\{j\}}\) is (temporarily) replaced by \(D_{\{j\}}-\alpha_{j}p\), * for \(m=2,3,\ldots,t\), the divisor \(D_{[r+1]\setminus S^{m}}\) is replaced by the divisor \(D_{[r+1]\setminus S^{m}}+(\alpha^{\prime}(m)-\alpha^{\prime}(m-1))p\), and * \(f_{j}\) stays the same. (Note that it is a section of the same line bundle as before; indeed, the multiplicity of \(p\) in \[\sum_{S:j\notin S}D_{S}\] decreases by exactly \(\alpha_{\rm tot}-\alpha_{\max}\) in the previous two modifications.) After step 3, no \(p\in C^{\circ}\) is a base-point. Therefore, the \(f_{j}\) (viewed as sections of \(\mathcal{L}\)) define a map \(f^{\circ}:C^{\circ}\to\mathbb{P}^{r}\), with the property that the divisor \[D^{\prime}_{S}=\sum_{S^{\prime}\subseteq S}D_{S^{\prime}}\] (when restricted to \(C^{\circ}\)) is constrained to map to the torus-invariant locus \(\Lambda_{S}\subset\mathbb{P}^{r}\) defined earlier. There may still be additional vanishing; for example, the \(f_{j}\) may have unexpected vanishing as sections of \(\mathcal{L}\left(-\sum_{j\notin S}D_{S}\right)\), but we will not need to take this into account. We now repeat the previous steps (with small modifications) on the points \(p_{i}\). The steps will be carried out for each \(p_{i}\) independently; we fix for the rest of this discussion an index \(i\). We will furthermore assume for simplicity as in the previous section that \(x_{i}=[1:\cdots:1]\). Care must be taken now to keep track of the effect of our twisting operations on the conditions \(V(x_{i})\). 4. (\(p_{i}\) satisfies (ii) for _all_\(j\)) This step is identical to step 1, with \(p_{i}\) in place of \(p\), and the \(D_{S}\) with \(\#S>1\) playing no role. After step 4, it will no longer be true that \(p_{i}\) satisfies (ii) for all \(j\). 
5. (\(p_{i}\) satisfies (i) for _all_\(j\)) This step is new. For a fixed \(p_{i}\), let \(\alpha>0\) be the largest order to which \(p_{i}\) is contained in all of the \(D_{j}\) simultaneously. Then, make the following modifications: * \(\mathcal{L}\) is replaced by \(\mathcal{L}(-\alpha r\,p_{i})\), * each \(D_{\{j\}}\) is replaced by \(D_{\{j\}}-\alpha p_{i}\), while all \(D_{S}\) with \(\#S>1\) remain unchanged, and * \(f_{j}\) stays the same (note that it is a section of the same line bundle as before). After step 5, it will no longer be true that \(p_{i}\) satisfies (i) for all \(j\). **Definition 42**.: _We say that \(p_{i}\) is inactive if either step 4 or 5 is run, that is, if \(p_{i}\) initially satisfies (i) for all \(j\) or (ii) for all \(j\), and that \(p_{i}\) is active otherwise._ If \(p_{i}\) is inactive, then the equations \(V(x_{i})\) are automatically satisfied. Therefore, after the twisting of either step 4 or 5, \(V(x_{i})\) imposes no additional conditions on \(f\). **Lemma 43**.: _Suppose that \(p_{i}\) is active. Then, condition (i) holds for \(j\) if and only if (ii) does._ Proof.: For convenience of notation, we argue here in terms of the initial divisors \(D_{j}\), which have not yet been modified in a neighborhood of \(p_{i}\), and write \(\mathcal{L}^{\prime}=\mathcal{L}_{\mathrm{init}}(-D_{1}-\cdots-D_{r+1})\). Suppose that \(p_{i}\in D_{j}\). We may also assume that there exists a \(j^{\prime}\) such that \(p_{i}\notin D_{j^{\prime}}\). Then, \(f_{j}(p_{i})=f_{j^{\prime}}(p_{i})=0\) in \(\mathcal{L}^{\prime}(D_{j}+D_{j^{\prime}})\), and because \(p_{i}\notin D_{j^{\prime}}\), we in fact have \(f_{j}(p_{i})=0\) in \(\mathcal{L}^{\prime}(D_{j})\). Conversely, if \(f_{j}(p_{i})=0\) in \(\mathcal{L}^{\prime}(D_{j})\), then, for all \(j^{\prime}\neq j\), we have \(f_{j^{\prime}}(p_{i})=0\) in \(\mathcal{L}^{\prime}(D_{j}+D_{j^{\prime}})\). If \(p_{i}\notin D_{j}\), then in fact \(f_{j^{\prime}}(p_{i})=0\) in \(\mathcal{L}^{\prime}(D_{j^{\prime}})\), for all \(j^{\prime}\) (including \(j\)), contradicting the assumption that \(p_{i}\) is active. If \(p_{i}\) is active, write \(S^{i}_{\circ}:=\{j\ |\ f_{j}\text{ satisfies both (i) and (ii) at }p_{i}\}\). Then, for any two \(j,j^{\prime}\notin S^{i}_{\circ}\), we have \(f_{j}(p_{i})=f_{j^{\prime}}(p_{i})\neq 0\) as sections of \(\mathcal{L}^{\prime}(D_{j}+D_{j^{\prime}})\). **Definition 44**.: _If \(p_{i}\) is active, we say that \(p_{i}\) is regular if \(S^{i}_{\circ}=\emptyset\), that is, \(p_{i}\) is not a base-point of \(f\). We say that \(p_{i}\) is wild otherwise._ The final two steps are only needed for wild and inactive \(p_{i}\). 6. (\(p_{i}\) satisfies (i) and (ii) for _some_\(j\)) Repeat step 2 with \(p_{i}\) in place of \(p\). The modifications are made exactly for \(j\in S^{i}_{\circ}\). Let \(S^{i}_{\circ,1},S^{i}_{\circ,2}\subset S^{i}_{\circ}\) denote the sets of \(j\) still satisfying conditions (i), (ii), respectively, after step 6. 7. (\(p_{i}\) satisfies (i) for _more than one_\(j\)) Repeat step 3 with \(p_{i}\) in place of \(p\). Here, the subset \(S^{i}_{\circ,1}\) plays the role of \(S_{1}\) in step 3 above. Note that, by Lemma 43, step 7 is run only if step 6 is. After step 7, \(f\) is finally base-point free everywhere, and therefore defines a map \(f:C\to\mathbb{P}^{r}\). We have already analyzed the incidence conditions imposed on \(C^{\circ}\) (after step 3); we now do so at the \(p_{i}\). First, suppose that \(p_{i}\) is inactive. 
As we have noted, the condition \(V(x_{i})\) no longer imposes constraints on the \(f_{j},D_{S}\) post-twisting. On the other hand, after steps 6 and 7, the divisors \(D_{S}\) containing \(p_{i}\) impose conditions on the intersection of \(\operatorname{im}(f)\) with the torus-invariant boundary of \(\mathbb{P}^{r}\) at \(p_{i}\) in exactly the same way as at the \(p\in C^{\circ}\). Then, we summarize the situation: For all non-empty \(S\subsetneq[r+1]\), we have a divisor \(D_{S}\subseteq C\) for which \[f(D_{S})\subseteq\Lambda_{S}. \tag{14}\] (Note that \(D_{S}\subseteq D^{\prime}_{S}\).) As we have already observed, we may be forgetting various other constraints. For example, the \(f_{j}\) may have non-generic vanishing at various points of \(C^{\circ}\) (including the \(D_{S}\)), the \(D_{S}\) may contain copies of the same points with various multiplicites, and the \(D_{S}\) may include the fixed inactive points \(p_{i}\) in its support. The constraint (14) will turn out to be sufficient. **Lemma 45**.: _Suppose that \(p_{i}\) is active. Then,_ \[f(p_{i})\in\langle x_{i},\Lambda_{S^{i}_{\circ}}\rangle=:\Lambda_{i}, \tag{15}\] _where \(\langle-\rangle\) denotes linear span (so that \(\Lambda_{i}\) has dimension \(\#S^{i}_{\circ}\))._ For regular \(p_{i}\), note that (15) is simply the condition \(f(p_{i})=x_{i}\). Proof.: For any two \(j,j^{\prime}\notin S^{i}_{\circ}\), then \(f_{j}(p_{i})=f_{j^{\prime}}(p_{i})\neq 0\) as sections of \[\mathcal{L}\left(-\sum_{S\notin j,j^{\prime}}D_{S}\right),\] and this property remains unchanged by steps 6 and 7 (and upon injection into \(\mathcal{L}\)). We now come to the crux of the argument: combining the constraints (14) and (15) on \(f\) enumerated above, we will pass from \(f:C\to\mathbb{P}^{r}\) to a map \(\bar{f}:C\to Y\), where \(Y\) is the permutohedral variety of dimension \(r\) defined in the previous section. Let \(\bar{f}:C\to Y\) be unique lift of \(f:C\to\mathbb{P}^{r}\) to \(Y\); this \(\bar{f}\) exists because the sections \(f_{j}\) remain non-zero throughout the entire twisting process, so the image of \(f\) is not contained in any torus-invariant subvariety. Write \(\overline{\beta}:=\bar{f}_{*}[C]\), as well as \(\overline{d}=\deg(\mathcal{L})\) (after twisting down all base-points) and \(k_{S}=\deg(D_{S})\) for the degree of the divisor \(D_{S}\). The following lemma may be regarded as a numerical incarnation of (14). **Lemma 46**.: _We have_ \[\deg(\overline{\beta})\leq(r+1)\overline{d}\;-\sum_{\begin{subarray}{c}S \subset[r+1]\\ 1\leq\#S\leq r-1\end{subarray}}(r-\#S)k_{S},\] _where the degree is measured against \(K_{Y}^{\vee}\)._ Proof.: Let \(Y=Y_{r-1}\to Y_{r-2}\to\cdots\to Y_{0}=\mathbb{P}^{r}\) be the sequence of blow-ups to obtain \(Y\), where the blow-up \(\rho_{s}:Y_{s+1}\to Y_{s}\) is at the strict transforms of the \(s\)-dimensional torus-invariant subvarieties of \(\mathbb{P}^{r}\). Note that the map \(f:C\to\mathbb{P}^{r}\) has degree \(\overline{d}\). In the unique lift \(f_{s}:C\to Y_{s}\) of \(f\) to \(Y_{s}\), for each \(S\) with \(\#S=s+1\), the divisor \(D_{S}\) is constrained to map to strict transform of \(\Lambda_{S}\) in \(Y_{s}\). Therefore, the blow-up \(\rho_{s}\) decreases the degree of \(C\) by at least \((r-\#S)k_{S}=(r-s-1)k_{S}\) for each such \(S\), that is, \[(f_{s})_{*}[C]\cdot K_{Y_{s}}^{\vee}-(f_{s+1})_{*}[C]\cdot K_{Y_{s+1}}^{\vee} \geq(r-s-1)\sum_{\#S=s+1}k_{S},\] from which the lemma follows. **Lemma 47**.: _At active \(p_{i}\), we have_ \[f(p_{i})\in\widetilde{\Lambda_{i}}. 
\tag{16}\] _where \(\widetilde{\Lambda_{i}}\) is the strict transform of \(\Lambda_{i}\) in \(Y\)._ Note that if \(p_{i}\) is regular, then we may identify \(\widetilde{\Lambda_{i}}\) with \(x_{i}\). Proof.: As \(x_{i}\in X\) is general, we may assume it does not lie in any torus-invariant subvariety, and identify it both with a point of \(Y\) and of \(\mathbb{P}^{r}\). Then, the subvariety \(\widetilde{\Lambda_{i}}\subset Y\) may be described in the language of SS3.1.1 as the fiber containing \(x_{i}\) of the projection \(\rho_{(S_{\circ}^{i})^{c}}:Y\to\mathbb{P}(\mathbb{C}[(S_{\circ}^{i})^{c}])\). That is, \(\widetilde{\Lambda_{i}}\) is the set of points of \(Y\) whose coordinates corresponding to the complement of \(S_{\circ}^{i}\) are equal to those of \(x_{i}\). When we take \(x_{i}=[1:\cdots:1]\), this is to say that these coordinates are all non-zero and equal to each other. The claim now follows from the fact that, by construction, the sections \(f_{j}\in H^{0}(C,\mathcal{L})\) comprising \(f:C\to\mathbb{P}^{r}\) have the same order of vanishing for \(j\in(S_{\circ}^{i})^{c}\). Therefore, twisting \(\mathcal{L}\) down by this order of vanishing yields a point \(y_{k^{\prime}}\) in the sequence \(y=(y_{0},\ldots,y_{k})\) for which the corresponding coordinates are all non-zero and equal to each other, for the same reason as in Lemma 45. If one or both of \(S_{\circ,1}^{i},S_{\circ,2}^{i}\) are empty after step 6, then in fact, \(f(p_{i})\) is further constrained to lie in a proper subvariety of \(\widetilde{\Lambda_{i}}\); we will, however, not need this. Note in particular that the expected number of conditions on \(\bar{f}\) imposed by (16) is \(r-\#S_{\circ}^{i}\). **Definition 48**.: _Let_ \[\tau^{\prime}:\mathcal{M}_{g,n}(Y,\bar{\beta})\to\mathcal{M}_{g,n}\times\prod_{p_{ i}\text{ active}}\mathbb{P}(\mathbb{C}[(S^{i}_{\circ})^{c}])\] _be the map obtained in the first factor by remembering the domain curve, and in the second by evaluating at an active marked point \(p_{i}\), and then projecting via \(\rho_{(S^{i}_{\circ})^{c}}\)._ Note that the projection \(\rho_{(S^{i}_{\circ})^{c}}\) is simply the blow-up map \(Y\to\mathbb{P}^{r}\) when \(p_{i}\) is regular. **Lemma 49**.: _The expected relative dimension of \(\tau^{\prime}\),_ \[\operatorname{vdim}(\mathcal{M}_{g,n}(Y,\bar{\beta}))-(3g-3+n)-\sum_{p_{i} \text{ active}}(r-\#S^{i}_{0}),\] _is non-positive, and is zero only if our starting point \(f_{init}\in\mathbb{P}\) was base-point free, that is, in \(\mathbb{P}^{\circ}\)._ Proof.: If \(f_{\text{init}}\in\mathbb{P}\) to begin with, then \[\bar{\beta}\cdot K^{\vee}_{Y}\leq\beta\cdot K^{\vee}_{X},\] where \(\beta\) is the original class \(d\mathsf{H}^{\vee}+\sum_{j}k_{j}\mathsf{E}^{\vee}_{j}\in H_{2}(X)\), with equality whenever no two of the original sections \(f_{j}\in H^{0}(C,\mathcal{L}^{\prime}(D_{j}))\) share a common vanishing point. Thus, the virtual dimension of \(\mathcal{M}_{g,n}(Y,\bar{\beta})\) is at most the dimension of \(\mathcal{M}_{g,n}\times X^{n}\) by (1). In particular, the quantity in question is non-positive. In general, we will show that each of the steps of our twisting algorithm has the effect of decreasing the quantity \[\Omega:=\left[(r+1)\deg(\mathcal{L})\ -\sum_{\begin{subarray}{c}S\subset[r+1]\\ 1\leq\#S\leq r-1\end{subarray}}(r-\#S)\deg(D_{S})\right]-r(g-1)-\sum_{p_{i} \text{ active}}(r-\#S^{i}_{\circ})\] The quantities \(\deg(\mathcal{L}),\deg(D_{S})\) are taken to be those that are changing throughout the twisting algorithm. 
As for the last term, we somewhat abusively set the values of \((r-\#S^{i}_{\circ})\) to be equal to \(r\) initially (as if all marked points \(p_{i}\) are initially regular), so that the last summation is initially equal to \(-rn\), and, by (1), \(\Omega\) is initially equal to zero. Then, if \(p_{i}\) is inactive, the summand \((r-\#S^{i}_{\circ})\) is removed after either step 4 or 5 (whichever is applied first). If \(p_{i}\) is wild, then the value of \(\#S^{i}_{\circ}\) is set to the correct one after step 6. If \(p_{i}\) is regular, the summand \((r-\#S^{i}_{\circ})\) remains equal to \(r\) throughout. The quantities \(\deg(\mathcal{L}),\deg(D_{S})\) are taken to be those that are changing throughout the twisting algorithm. By Lemma 46, the final value of \(\Omega\) is an upper bound for the expected relative dimension of \(\tau^{\prime}\). Therefore, if \(f_{\text{init}}\notin\mathbb{P}^{\circ}\), that is, at least one step of the algorithm is run, then the expected relative dimension of \(\tau^{\prime}\) will be strictly negative, as needed. 1. We decrease \(\deg(\mathcal{L})\) by \(\alpha>0\) and make no other changes, which decreases \(\Omega\) by \((r+1)\alpha\). 2. For each \(j\), we decrease \(\deg(\mathcal{L})\) and \(\deg(D_{\{j\}})\) each by \(\alpha_{p,j}>0\), which decreases \(\Omega\) by \(2\alpha_{p,j}\). * First, decreasing \(\deg(\mathcal{L})\) by \(\alpha_{\mathrm{tot}}-\alpha_{\mathrm{max}}\) decreases \(\Omega\) by \((r+1)(\alpha_{\mathrm{tot}}-\alpha_{\mathrm{max}})\), and replacing each \(D_{\{j\}}\) by \(D_{\{j\}}-\alpha_{j}p\) increases it by \((r-1)\alpha_{\mathrm{tot}}\). Finally, the modification of \(D_{[r+1]\setminus S^{m}}\) decreases \(\Omega\) by \((\#S^{m}-1)(\alpha^{\prime}(m)-\alpha^{\prime}(m-1))\). We thus need to show that \[-2\alpha_{\mathrm{tot}}+(r+1)\alpha_{\mathrm{max}}-\sum_{m=2}^{t}(\#S^{m}-1)( \alpha^{\prime}(m)-\alpha^{\prime}(m-1))<0.\] Let \[\alpha(1)>\alpha(2)>\cdots>\alpha(t)\] denote the distinct integers among the \(\alpha_{j}\); note that \(\alpha(t)=0\) if and only if \(S_{1}\subsetneq[r+1]\). Let \(r_{1},\ldots,r_{t}\) denote the number of times \(\alpha(1),\ldots,\alpha(t)\) appear among the \(\alpha_{j}\), so that \(r+1=r_{1}+\cdots+r_{t}\). We now have \[-2\alpha_{\mathrm{tot}}+(r+1)\alpha_{\mathrm{max}}-\sum_{m=2}^{t}( \#S^{m}-1)(\alpha^{\prime}(m)-\alpha^{\prime}(m-1))\] \[= -2\sum_{m=1}^{t}\alpha(m)r_{m}+(r+1)\alpha(1)-\sum_{m=2}^{t}(r_{ m}+\cdots+r_{t}-1)(\alpha(m-1)-\alpha(m))\] \[= -\sum_{m=1}^{t}r_{m}\alpha(m)+\alpha(1)-\alpha(t)<0.\] * As in step 1, we decrease \(\deg(\mathcal{L})\) by \(\alpha>0\) and make no other changes, which decreases \(\Omega\) by \((r+1)\alpha\). On the other hand, the term \((r-\#S^{i}_{\circ})\) goes away. Thus, the expected relative dimension of \(\tau^{\prime}\) goes down by \((r+1)\alpha-r>0\). * We decrease all of the \(\deg(D_{\{j\}})\) by \(\alpha>0\) and \(\deg(\mathcal{L})\) by \(r\alpha\), decreasing \(\Omega\) by \((r+1)\alpha\), and possibly remove the term \((r-\#S^{i}_{\circ})\) (if this was not done in the previous step), so \(\Omega\) goes down by at least \((r+1)\alpha-r>0\). We examine the effect of the last two steps depending on the type of \(p_{i}\). * if \(p_{i}\) is inactive, then applications of steps 6 and 7 only decrease \(\Omega\) further, by steps 2 and 3. * if \(p_{i}\) is regular, then no changes are made. 
* if \(p_{i}\) is wild, then step 6 decreases \(\deg(\mathcal{L})\) and \(\deg(D_{\{j\}})\) together by least 1 for each \(j\in S^{i}_{\circ}\), decreasing \(\Omega\) by at least \(2\cdot\#S^{i}_{\circ}\). Furthermore, the \(-(r-\#S^{i}_{\circ})\) is updated to the correct value (from \(-r\) originally) in this step. In total, the value of \(\Omega\) goes down at least by \[2\cdot\#S^{i}_{\circ}+(r-\#S^{i}_{\circ})-r>0\] in step 6. In step 7, \(\Omega\) can again only decrease, by step 3. (Note that step 7 does not affect the contribution of \(-(r-\#S^{i}_{\circ})\).) **Lemma 50**.: _Every irreducible component of \(\mathcal{M}_{g,n}(Y,\overline{\beta})\) dominating \(\mathcal{M}_{g,n}\times\prod_{p_{i}\text{ regular}}X\) (where we may equivalently replace \(X\) in the last factor with \(Y\) or \(\mathbb{P}^{r}\)) has expected dimension._ Proof.: We claim that there are at least \(g+1\) regular points among the \(p_{i}\). Indeed, every point \(p_{i}\) which is a base-point for our original \(f\in\mathbb{P}\) contributes at least \(1\) to the degree \(d\) of our original line bundle \(\mathcal{L}\), but by assumption, we have \(d\leq n-(g+1)\), so at least \(g+1\) regular points remain after our twisting algorithm. Therefore, the conclusion follows from Lemma 1. Proof of Proposition 34.: Lemmas 49 and 50 at long last imply that \(V(x_{1},\ldots,x_{n})\) is contained in \(\mathbb{P}^{\circ}\). Indeed, if this is not the case for some \(f\in V(x_{1},\ldots,x_{n})\), then applying the twisting algorithm to \(f\) yields a point on \(\mathcal{M}_{g,n}(Y,\bar{\beta})\) moving in a family dominating the target of \(\tau^{\prime}\), but this is a contradiction due to dimension reasons. Similarly, for dimension reasons, we see that \(f\in V(x_{1},\ldots,x_{n})\) must define a (base-point free) map \(f:C\to X\) of curve class _exactly_\(\beta\), as unexpected vanishing of the \(f_{j}\) can only decrease the degree of \(\beta\). It remains to check that any \(f\in V(x_{1},\ldots,x_{n})\) has no non-trivial tangent vectors. It is elementary to check that such a tangent vector would give rise to a non-trivial relative tangent vector of the map \(\tau:\mathcal{M}_{g,n}(X,\beta)\to\mathcal{M}_{g,n}\times X^{n}\) over a general point, which contradicts Lemma 1. ### Integral formula on \(S\) In this section, we push forward the formula of Theorem 10 to obtain Theorem 11. Up to setting \(k_{i}=0\) for \(i>\ell\), we can assume that \(\ell=r+1\). We start with a change of variables. Define \[\widetilde{\mathcal{E}}:=\mathcal{E}\otimes\mathcal{O}(N)=\bigoplus_{i=1}^{r+1 }\pi_{*}(\mathcal{P}(-\mathcal{D}+N+\mathcal{D}_{i}))=\bigoplus_{i=1}^{r+1} \pi_{*}(\mathcal{P}(\Delta_{i})),\] where we have written \[N=\sum_{i=1}^{r+1}N_{i},\ \mathcal{D}=\sum_{i=1}^{r+1}\mathcal{D}_{i}\text{ and } \Delta_{i}=-\mathcal{D}+N+\mathcal{D}_{i}\text{ for }i=1,\ldots,r+1.\] Using \(\widetilde{\mathcal{E}}\), we can rewrite the formula (6) as \[\mathsf{Tev}^{X}_{g,n,\beta}=\int_{\mathbb{P}(\widetilde{\mathcal{E}})}( \widetilde{\mathsf{H}}^{r}+\sigma_{1}\widetilde{\mathsf{H}}^{r-1}+\cdots+ \sigma_{r})^{n}\] where, by an abuse of notation, \(\widetilde{\mathsf{H}}=c_{1}(\mathcal{O}_{\mathbb{P}(\widetilde{\mathcal{E}}) }(1))\). 
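Here it is convenient to recall the elementary identity \[\prod_{i=1}^{r+1}(\widetilde{\mathsf{H}}+\eta_{i})=\sum_{j=0}^{r+1}\sigma_{j}(\eta_{1},\ldots,\eta_{r+1})\,\widetilde{\mathsf{H}}^{r+1-j},\qquad\sigma_{0}=1,\] so that \(\widetilde{\mathsf{H}}^{r}+\sigma_{1}\widetilde{\mathsf{H}}^{r-1}+\cdots+\sigma_{r}=\bigl(\prod_{i=1}^{r+1}(\widetilde{\mathsf{H}}+\eta_{i})-\sigma_{r+1}\bigr)/\widetilde{\mathsf{H}}\); this gives the first equality in the expansion below.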
Writing \[(\widetilde{\mathsf{H}}^{r}+\sigma_{1}\widetilde{\mathsf{H}}^{r- 1}+\cdots+\sigma_{r})^{n} =\left(\frac{\prod_{i=1}^{r+1}(\widetilde{\mathsf{H}}+\eta_{i})- \sigma_{r+1}}{\widetilde{\mathsf{H}}}\right)^{n}\] \[=\sum_{m=0}^{n}\binom{n}{m}(-1)^{m}\frac{\prod_{i=1}^{r+1}( \widetilde{\mathsf{H}}+\eta_{i})^{n-m}\sigma_{r+1}^{m}}{\widetilde{\mathsf{H }}^{n}}\] \[=\sum_{m=0}^{n}\binom{n}{m}(-1)^{m}\frac{\prod_{i=1}^{r+1}(\sum_{ a_{i}=0}^{n-m}\binom{n-m}{a_{i}}\widetilde{\mathsf{H}}^{n-m-a_{i}}\eta_{i}^{a_{i} +m})}{\widetilde{\mathsf{H}}^{n}},\] we obtain \[\begin{split}\mathsf{Tev}^{X}_{g,n,\beta}&=\sum_{m=0}^{n }\binom{n}{m}(-1)^{m}\int_{\mathbb{P}(\widetilde{\mathcal{E}})}\frac{\prod_{i=1} ^{r+1}(\sum_{a_{i}=0}^{n-m}\binom{n-m}{a_{i}}\widetilde{\mathsf{H}}^{n-m-a_{i} }\eta_{i}^{a_{i}+m})}{\widetilde{\mathsf{H}}^{n}}\\ &=\sum_{m=0}^{\min(n,k_{1},\ldots,k_{r+1})}\binom{n}{m}(-1)^{m} \int_{S}\left(\prod_{i=1}^{r+1}(1+\eta_{i})^{n-m}\eta_{i}^{m}\right)c( \widetilde{\mathcal{E}})^{-1},\end{split} \tag{17}\] where in the last equality we have pushed forward to \(S\). It remains to compute \(c(\widetilde{\mathcal{E}})\); this will be an application of Grothendieck-Riemann-Roch. 2.1 Cohomology of \(\mathsf{Jac}^{\mathsf{d}}(C)\times\mathsf{Sym}^{k_{1}}(C)\times\cdots\times \mathsf{Sym}^{k_{r+1}}(C)\) We now fix the notation necessary to compute the integral 17. Let \(e_{1},\ldots,e_{2g}\) be a symplectic basis of \(H^{1}(C,\mathbb{Z})\), and denote by \(e_{1}^{\prime},\ldots,e_{2g}^{\prime}\) the corresponding classes in \(H^{1}(\mathsf{Jac}^{\mathsf{d}}(C),\mathbb{Z})\) under the natural isomorphism \[H^{1}(\mathsf{Jac}^{\mathsf{d}}(C),\mathbb{Z})\to H^{1}(C,\mathbb{Z}).\] Let also \[\Theta=\sum_{\alpha=1}^{g}e_{\alpha}^{\prime}e_{\alpha+g}^{\prime}\] be the class of the theta divisor on \(\mathsf{Jac}^{\mathsf{d}}(C)\) and set \[\gamma=-\left(\sum_{\alpha=1}^{g}e_{\alpha}^{\prime}e_{\alpha+g}-e_{\alpha+g} ^{\prime}e_{\alpha}\right)\in H^{1}(\mathsf{Jac}^{\mathsf{d}}(C),\mathbb{Z}) \otimes H^{1}(C,\mathbb{Z}).\] By [2, Chapter VIII], we have \[\mathrm{ch}(\mathcal{P})=1+d\mathsf{p}+\gamma-\Theta\mathsf{p} \tag{18}\] where \(\mathsf{p}\in H^{2}(C,\mathbb{Z})\) is the point class. Next, we recall MacDonald's description [17] of the cohomology ring of \(\mathsf{Sym}^{k}(C)\). For \(i=1,\ldots,r+1\), define classes \(\zeta_{i,1},\ldots,\zeta_{i,2g}\in H^{1}(\mathsf{Sym}^{k_{i}}(C),\mathbb{Z})\) for \(\eta_{i}\in H^{2}(\mathsf{Sym}^{k_{i}}(C),\mathbb{Z})\) as the Kunneth components of the universal divisor \(\mathcal{D}_{i}\): \[\mathcal{D}_{i}=\eta_{i}+\sum_{\alpha=1}^{g}(\zeta_{i,\alpha}e_{\alpha+g}- \zeta_{i,\alpha+g}e_{\alpha})+k_{i}\mathsf{p}.\] Note that \(\eta_{i}=[N_{i}]\) as before. These generate the ring \(H^{*}(\mathsf{Sym}^{k_{i}}(C),\mathbb{Z})\), and setting \(\tau_{i,\alpha}=\zeta_{i,\alpha}\zeta_{i,\alpha+g}\), for every multiindex \(I\) without repetitions, we have \[\int_{\mathsf{Sym}^{k_{i}}(C)}\eta_{i}^{k_{i}-|I|}\tau_{i,I}=1\] where \(\tau_{i,I}=\prod_{\alpha\in I}\tau_{i,\alpha}\). 
Set also \[\overline{y}_{i} =\sum_{\alpha=1}^{g}\sum_{\begin{subarray}{c}1\leq j\leq r+1\\ j\neq i\end{subarray}}(\zeta_{j,\alpha}e_{\alpha+g}-\zeta_{j,\alpha+g}e_{ \alpha}),\] \[\overline{\tau}_{i} =\sum_{\alpha=1}^{g}\sum_{\begin{subarray}{c}1\leq j_{1},j_{2} \leq r+1\\ j_{1},j_{2}\neq i\end{subarray}}\zeta_{j_{1},\alpha}\zeta_{j_{2},\alpha+g},\] \[\overline{x}_{i} =\sum_{\alpha=1}^{g}\sum_{\begin{subarray}{c}1\leq j\leq r+1\\ j\neq i\end{subarray}}e^{\prime}_{\alpha}\zeta_{j,\alpha+g}-e^{\prime}_{ \alpha+g}\zeta_{j,\alpha},\] \[\text{and }\overline{k}_{i} =\sum_{\begin{subarray}{c}1\leq j\leq r+1\\ j\neq i\end{subarray}}k_{j},\] and write \[\Delta_{i}=\eta_{i}-\overline{y}_{i}-\overline{k}_{i}.\] The following lemma will be used in Section SS3.2.2. **Lemma 51**.: _The following identities hold._ 1. \(\Delta_{i}^{h}=\eta_{i}^{h}-\eta_{i}^{h-1}(h\overline{y}_{i}+h\overline{k}_{ i}\mathsf{p})+\eta_{i}^{h-2}(-h(h-1)\overline{\tau}_{i}\mathsf{p})\)__ 2. \(\gamma\overline{y}_{i}=\overline{x}_{i}\mathsf{p}\)__ Proof.: Identity (i) is proved by induction on \(h\) using \(\overline{y}_{i}^{2}=-2\overline{\tau}_{i}\mathsf{p}\). Identity (ii) is straightforward. #### 3.2.2 Computation of \(c(\widetilde{\mathcal{E}})\) We now express \(c(\widetilde{\mathcal{E}})\) in terms of the classes defined above. Fix an index \(i\) with \(1\leq i\leq r+1\). **Proposition 52**.: _We have_ \[\operatorname{ch}(\pi_{*}(\mathcal{P}(\Delta_{i}))=\exp(\eta_{i})(1-g+d- \overline{k}_{i}-\overline{\tau}_{i}-\Theta+\overline{x}_{i}).\] Proof.: By Grothendieck-Riemann-Roch and (18), we have \[\operatorname{ch}(\pi_{*}(\mathcal{P}(\Delta_{i})) =\pi_{*}(\operatorname{ch}(\mathcal{P}(\Delta_{i}))\cdot\mathsf{ Td}_{C})\] \[=\pi_{*}((1+d\mathsf{p}+\gamma-\mathsf{p}\Theta)\cdot\exp(\Delta _{i})\cdot(1+(1-g)\mathsf{p}).\] We compute \[\pi_{*}(\exp(\Delta_{i})) =\pi_{*}\left(\sum_{h=0}^{\infty}\frac{\Delta_{i}^{h}}{h!}\right)\] \[=\pi_{*}\left(\sum_{h=0}^{\infty}\frac{-h\overline{k}_{i}}{h!} \eta_{i}^{h-1}\mathsf{p}-\frac{h(h-1)}{h!}\eta_{i}^{h-2}\overline{\tau}_{i} \mathsf{p}\right)\] \[=\exp(\eta_{i})(-\overline{k}_{i}-\overline{\tau}_{i})\] where in the second equality we used identity (i) in Lemma 51. Similarly, we have \[\pi_{*}((d-\Theta)\mathsf{p}\,\exp(\Delta_{i}))=\exp(\eta_{i})(d- \Theta),\] \[\pi_{*}(-\gamma\,\exp(\Delta_{i}))=\exp(\eta_{i})\overline{x}_{i},\] \[\pi_{*}((1-g)\mathsf{p}\,\exp(\Delta_{i})\mathrm{ch}(\mathcal{P}) )=(1-g)\mathrm{exp}(\eta_{i}).\] Here, in the second equality we have used identity (ii) in Lemma 51. After summing all the obtained contributions, we obtain the claim. **Corollary 53**.: \[c(\widetilde{\mathcal{E}})=\prod_{i=1}^{r+1}(1+\eta_{i})^{1-g+d-\overline{k}_{ i}}\cdot\exp\left(\frac{-\overline{\tau}_{i}-\Theta+\overline{x}_{i}}{1+ \eta_{i}}\right)\] Proof.: This follows immediately from the formula \[c(V)=\sum_{h=1}^{\infty}(-1)^{h-1}(h-1)!\cdot\mathrm{ch}_{h}(V)\] valid for any vector bundle \(V\) on any scheme. Substituting 53 into (17) completes the proof of Theorem 11, namely that \[\mathsf{Tev}^{X}_{g,n,\beta}=\sum_{m=0}^{\min(n,k_{1},\ldots,k_{r+1})}{n\choose m }(-1)^{m}\int_{S}\prod_{i=1}^{r+1}(1+\eta_{i})^{n-m-1+g-d+\overline{k}_{i}} \eta_{i}^{m}\cdot\exp\left(\frac{\overline{\tau}_{i}+\Theta-\overline{x}_{i}}{ 1+\eta_{i}}\right).\] ### Specialization to genus 0 We now specialize Theorem 11 to \(g=0\) to prove Theorem 12. We continue to assume that \(\ell=r+1\). 
Identify \[\mathsf{Sym}^{k_{i}}(\mathbb{P}^{1})=\mathbb{P}^{k_{i}}\text{ for }i=1,\ldots,r+1.\] Then, the class \(\eta_{i}\) is the first Chern class of \(\mathcal{O}_{\mathbb{P}^{k_{i}}}(1)\) and Theorem 11 gives: \[\mathsf{Tev}^{X}_{0,n,\beta} =\sum_{m=0}^{\min(n,k_{1},\ldots,k_{r+1})}{n\choose m}(-1)^{m} \int_{\prod_{i=1}^{r}\mathbb{P}^{k_{i}}}\prod_{i=1}^{r+1}(1+\eta_{i})^{n-m-1-d +\overline{k}_{i}}\eta_{i}^{m}\] \[=\sum_{m=0}^{\min(n,k_{1},\ldots,k_{r+1})}{n\choose m}(-1)^{m} \prod_{i=1}^{r+1}\mathrm{Coeff}\left((1+\eta_{i})^{n-m-1-d+\overline{k}_{i}}; \eta_{i}^{k_{i}-m}\right)\] \[=\sum_{m=0}^{\min(n,k_{1},\ldots,k_{r+1})}{n\choose m}(-1)^{m} \prod_{i=1}^{r+1}{n-m-d-1+\overline{k}_{i}\choose k_{i}-m}.\] This concludes the proof of Theorem 12. ### Specialization to \(X=\mathsf{Bl}_{q}(\mathbb{P}^{r})\) Theorem 11 also admits a reasonably elegant specialization to \(X=\mathsf{Bl}_{q}(\mathbb{P}^{r})\). We take now \(k_{2}=\ldots=k_{r+1}=0\) and write \(k=k_{1}\), \(\eta=\eta_{1}\), and \(\mathsf{E}=\mathsf{E}_{1}\), so that \(\tau=\overline{\tau}_{2}=\cdots=\overline{\tau}_{r+1}\) and \(x=\overline{x}_{2}=\cdots=\overline{x}_{r+1}\). We have \[\mathsf{Tev}^{X}_{g,n,d\mathsf{H}^{\vee}+k\mathsf{E}^{\vee}} =\int_{\mathsf{Jac}^{d}(C)\times\mathsf{Sym}^{k}(C)}(1+\eta)^{n-1 +g-d}\cdot\exp\left(\frac{\Theta}{1+\eta}\right)\cdot\exp(\tau+\Theta-x)\] \[=\int_{\mathsf{Jac}^{d}(C)\times\mathsf{Sym}^{k}(C)}A(\eta)\cdot \exp(\tau B(\eta)+\Theta C(\eta)+xD(\eta)),\] where \[\begin{cases}A(\eta)&=(1+\eta)^{n-1+g-d},\\ B(\eta)&=-r,\\ C(\eta)&=\frac{r\eta+(r+1)}{1+\eta},\\ D(\eta)&=r.\end{cases}\] **Lemma 54**.: _Let \(A(\eta),B(\eta),C(\eta)\) and \(D(\eta)\) be polynomials in \(\eta\). Then,_ \[\int_{\mathsf{Jac}^{d}(C)\times\mathsf{Sym}^{k}(C)}A(\eta)\cdot \exp(\tau B(\eta)+ \Theta C(\eta)+xD(\eta))\] \[=\mathrm{Coeff}\left(A(\eta)(C(\eta)(1+\eta B(\eta)-\eta D(\eta)^ {2})^{g};\eta^{k}\right)\] Proof.: Notice that for \(i+j=g\), we have \[\Theta^{i}x^{2j}=\sum_{\begin{subarray}{c}I=(i_{1},\ldots,i_{j})\\ 1\leq i_{1}<\ldots<i_{j}\leq g\end{subarray}}i!(-1)^{j}(2j)!e_{1}^{\prime} \cdots e_{2g}^{\prime}\tau_{I},\] where \(\tau_{I}=\prod_{i\in I}\tau_{i}\) and \(\tau_{\alpha}=\zeta_{1,\alpha}\zeta_{1,\alpha+g}\) for \(\alpha=1,\ldots,g\). Therefore, we can expand \[\int_{\mathsf{Jac}^{d}(C)\times\mathsf{Sym}^{k}(C)}A(\eta)\cdot \exp(\tau B(\eta)+\Theta C(\eta)+xD(\eta))\] \[=\sum_{m=0}^{g}(-1)^{g-m}\sum_{\begin{subarray}{c}I=(i_{1}, \ldots,i_{g-m})\\ 1\leq i_{1}<\cdots<i_{g-m}\leq g\end{subarray}}\int_{\mathsf{Sym}^{k}(C)}A( \eta)C(\eta)^{m}D(\eta)^{2(g-m)}\cdot\exp(\tau B(\eta))\tau_{I}.\] Call \[\widetilde{A}(\eta)=A(\eta)C(\eta)^{m}D(\eta)^{2(g-m)}. 
\tag{19}\] We have \[\int_{\mathsf{Sym}^{k}(C)}\widetilde{A}(\eta)\cdot\exp(\tau B(\eta)) \tau_{I} =\sum_{h=0}^{\infty}\int_{\mathsf{Sym}^{k}(C)}\widetilde{A}(\eta) \frac{\tau_{I}\tau^{h}B(\eta)^{h}}{h!}\] \[=\sum_{h=0}^{\infty}\int_{\mathsf{Sym}^{k}(C)}\widetilde{A}(\eta) \left(\sum_{\begin{subarray}{c}J:|J|=h\\ J\cap I=\emptyset\end{subarray}}\tau_{I}\tau_{J}\right)B(\eta)^{h}\] \[=\sum_{h=0}^{g-|I|}\binom{g-|I|}{|J|}\cdot\mathrm{Res}_{\eta=0} \left\{\frac{\widetilde{A}(\eta)B(\eta)^{h}}{\eta^{k-h-|I|+1}}d\eta\right\}\] \[=\mathrm{Res}_{\eta=0}\left\{\frac{\widetilde{A}(\eta)}{\eta^{k- |I|+1}}(1+\eta B(\eta))^{g-|I|}d\eta\right\}.\] The claim follows then by substituting (19) and using the combinatorial identity \[\sum_{m=0}^{g}\binom{g}{m}(C(\eta)(1+\eta B(\eta))^{m}(-D(\eta)^{2}\eta)^{g-m }=\big{(}C(\eta)(1+\eta B(\eta))-\eta D(\eta)^{2}\big{)}^{g}\] From Lemma 54, we obtain the residue formula \[\mathsf{Tev}^{X}_{g,n,d\mathsf{H}^{\vee}+k\mathsf{E}^{\vee}}=\mathrm{Coeff} \left((1+\eta)^{n-1-d}(2r\eta+(r+1))^{g};\eta^{k}\right),\] and the right hand side can be expanded as \[\sum_{m=0}^{m}\binom{g}{m}(2r)^{m}(r+1)^{g-m}\binom{n-1-d}{k-m}\] \[=\sum_{m=0}^{m}\binom{g}{m}(2r)^{m}\sum_{h=0}^{g-m}\binom{g-m}{h} (1-r)^{g-m-h}(2r)^{h}\binom{n-1-d}{k-m}\] \[=\] \[= \sum_{a=0}^{g}\binom{g}{a}(2r)^{a}(1-r)^{g-a}\binom{g}{a}\binom{n -1-d+a}{k},\] where in the second equality we have set \(a=m+h\) and in the third equality we have used applied Vandermonde identity \[\binom{m+n}{r}=\sum_{k=0}^{r}\binom{m}{k}\binom{n}{r-k}\text{ for all }r,n,m\in\mathbb{Z}_{\geq 0}.\] Finally, replacing \(a\) with \(g-m\), we obtain Theorem 13. ## 4 Virtual counts for \(X=\mathsf{Bl}_{q}(\mathbb{P}^{r})\) In this section, we establish the virtual count of Theorem 14. ### Preliminaries on \(QH^{*}(x)\) The computation of the invariants \(\mathsf{vTev}^{X}_{g,n,\beta}\) is reduced to a calculation in the quantum cohomology ring \(QH^{*}(X,\mathbb{Q})\) in [4]. See [11] for an introduction. We now take \(X=\mathsf{Bl}_{q}(\mathbb{P}^{r})\). Because \(X\) is Fano, by [18, Proposition 2.2], we have \[QH^{*}(X,\mathbb{Q})\cong\mathbb{Q}[\mathsf{C}]\otimes_{\mathbb{Q}}H^{*}(X, \mathbb{Q})\] as \(\mathbb{Q}[\mathsf{C}]\)-modules. Here, \(\mathsf{C}\subseteq H_{2}(X,\mathbb{Z})\) is the cone of effective curves in \(X\), and by [8, Proposition 4.1], we have \[\mathsf{C}=\mathbb{Z}_{\geq 0}(\mathsf{H}^{\vee}+\mathsf{E}^{\vee})\oplus \mathbb{Z}_{\geq 0}(-\mathsf{E}^{\vee}).\] We will denote by \(\star\) the product in \(QH^{*}(X,\mathbb{Q})\). Let \(\Sigma\subseteq\mathbb{R}^{r}\) be toric fan of \(X\). Its generators are \(v_{j}=e_{j+1}\) for \(j=0,1,\ldots,r-1\), along with \(v_{r}=-(e_{1}+\cdots+e_{r})\) and \(v=-v_{r}\). 
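For instance, when \(r=2\) the generators are \[v_{0}=e_{1},\qquad v_{1}=e_{2},\qquad v_{2}=-e_{1}-e_{2},\qquad v=e_{1}+e_{2};\] the added ray \(v\) subdivides the cone spanned by \(e_{1}\) and \(e_{2}\), corresponding to the blow-up of the torus-fixed point \(q\) of \(\mathbb{P}^{2}\) associated to that maximal cone.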
The sets \[S=\{v_{0},\ldots,v_{r-1}\}\text{ and }T=\{v,v_{r-1}\}\] are the only two primitive sets in the sense of [13, Definition 1.1] and, using [13, Theorem 1.2], from these two sets we respectively obtain the following two relations in \(QH^{*}(X,\mathbb{Q})\): \[(\mathsf{H}-\mathsf{E})^{\star r} =q^{-\mathsf{E}^{\vee}}\mathsf{E}; \tag{20}\] \[\mathsf{H}\star\mathsf{E} =q^{\mathsf{H}^{\vee}+\mathsf{E}^{\vee}}.\] **Lemma 55**.: _The set_ \[1,(\mathsf{H}-\mathsf{E}),\ldots,(\mathsf{H}-\mathsf{E})^{\star r-1}, \mathsf{E}\star(\mathsf{H}-\mathsf{E}),\ldots.,\mathsf{E}\star(\mathsf{H}- \mathsf{E})^{r-1}\] _is a basis of \(QH^{*}(X,\mathbb{Q})\) as \(\mathbb{Q}[\mathsf{C}]\)-module._ Proof.: From the second relation of (20), we have \[\mathsf{E}^{\star 2}=-(\mathsf{H}-\mathsf{E})\star\mathsf{E}+q^{\mathsf{H}^{ \vee}+\mathsf{E}^{\vee}},\] so the above collection spans \(QH^{*}(X,\mathbb{Q})\). The fact that they are linearly independent follows from [13, Theorem 1.2]. Consider the homogeneous basis \[\mathsf{H}^{r},\mathsf{H}^{r-1},\ldots,1,\mathsf{E}^{r-1},\ldots,\mathsf{E} \tag{21}\] of \(H^{*}(X,\mathbb{Q})\). We now express each of these classes in \(QH^{*}(X,\mathbb{Q})\) as linear combinations of the basis elements of Lemma 55 using [13, Theorem 1.6]. In order for [13, Theorem 1.6] to apply, we need the following result: **Lemma 56**.: _Every toric subvariety of \(X\) is Fano. Furthermore, for every map \(\pi:X\to X^{\prime}\), where \(X^{\prime}\) is toric and \(\pi\) is the blowup of an irreducible toric subvariety, we have that \(X^{\prime}\) is Fano._ Proof.: Immediate consequence of [13, Theorem 3.9]. **Lemma 57**.: _We have_ \[\mathsf{H}^{i} =\mathsf{H}\star(\mathsf{H}-\mathsf{E})^{\star(i-1)}\text{ for }i=1,\ldots,r,\text{ and}\] \[\mathsf{E}^{i} =(-1)^{i-1}\mathsf{E}\star(\mathsf{H}-\mathsf{E})^{\star(i-1)} \text{ for }i=1,\ldots,r-1\] _in \(QH^{*}(X,\mathbb{Q})\)._ Proof.: We start with \(\mathsf{H}^{i}\). Let \(\alpha_{i}\) be the cone generated by \(v_{r},\ldots,v_{i+1}\) for \(i=1,\ldots,r\). We have that \(\alpha_{i}\) corresponds to a toric subvariety of class \(\mathsf{H}^{r-i}\). Then, [13, Theorem 1.6] and the fact the only exceptional set special for \(\alpha_{i}\) is the empty set yields the desired equality. The proof for \(\mathsf{E}^{i}\) is similar. This time, \(\alpha_{i}\) is the cone generated by \(v\) and \(v_{r-2},\ldots,v_{r-i}\) for \(i=1,\ldots,r-1\) (when \(i=1\), we just have \(\alpha_{1}=\langle v\rangle\)), so \(\alpha_{i}\) corresponds to \((-1)^{i+1}\mathsf{E}^{i}\). ### The quantum Euler class of \(X\) By definition, the **quantum Euler class** of \(X\) is the image of the Kunneth decomposition of the diagonal in \(X\times X\) under the natural product map \[H^{*}(X,\mathbb{Q})\otimes H^{*}(X,\mathbb{Q})\xrightarrow{\star}QH^{*}(X, \mathbb{Q}).\] This is a canonically defined element of \(QH^{*}(X,\mathbb{Q})\), first introduced by Abrams in [1] (see also [4, 5]). The computation of the virtual invariants \(\mathsf{v}\mathsf{Tev}^{X}_{g,n,d\mathsf{H}^{\vee}+k\mathsf{E}^{\vee}}\) is related to \(\Delta\) in [4, Theorem 1.3] by the formula: \[\mathsf{v}\mathsf{Tev}^{X}_{g,n,d\mathsf{H}^{\vee}+k\mathsf{E}^{\vee}}= \operatorname{Coeff}(\mathsf{P}^{\star n}\star\Delta^{\star g},\mathsf{P}q^{d \mathsf{H}^{\vee}+k\mathsf{E}^{\vee}}).\] where \(\mathsf{P}\) is the point class in \(X\). 
**Theorem 58**.: _We have_ \[\Delta=(2r)\mathsf{P}-(r-1)q^{-\mathsf{E}^{\vee}}\mathsf{E}.\] _in \(QH^{*}(X,\mathbb{Q})\)._ Proof.: Let \[1,\mathsf{H},\ldots,\mathsf{H}^{r},(-1)^{r-1}\mathsf{E},\ldots.,(-1)^{r-1} \mathsf{E}^{r-1}.\] be the dual basis to \(21\) w.r.t. the Poincare pairing. Then \[\Delta= \mathsf{P}\star 1+\cdots+\mathsf{H}^{r-i}\star\mathsf{H}^{i}+ \cdots+1\star\mathsf{P}+(-1)^{r-1}(\mathsf{E}\star\mathsf{E}^{r-1}+\cdots+ \mathsf{E}^{r-1}\star\mathsf{E})\] \[= 2\mathsf{P}+(r-1)\mathsf{H}^{\star 2}\star(\mathsf{H}- \mathsf{E})^{\star(r-2)}-(r-1)\mathsf{E}^{\star 2}\star(\mathsf{H}- \mathsf{E})^{\star(r-2)}\] \[= 2\mathsf{P}+(r-1)\left((\mathsf{H}-\mathsf{E})^{r}+\mathsf{E}^{2 }\star(\mathsf{H}-\mathsf{E})^{r-2}+2\mathsf{E}\star(\mathsf{H}-\mathsf{E}) ^{\star(r-1)}\right)\] \[-(r-1)\mathsf{E}^{\star 2}\star(\mathsf{H}-\mathsf{E})^{\star(r-2)}\] \[= (2r)\mathsf{P}-(r-1)q^{-\mathsf{E}^{\vee}}\mathsf{E},\] where in the second equality we have used Lemma 57, in the third equality we have written \(\mathsf{H}=\mathsf{H}-\mathsf{E}+\mathsf{E}\), and in the forth equality we have used the first relation in \(20\) and the fact that, by Lemma 57, we have \[\mathsf{P}=\mathsf{H}\star(\mathsf{H}-\mathsf{E})^{\star(r-1)}=(\mathsf{H}- \mathsf{E})^{\star r}+\mathsf{E}\star(\mathsf{H}-\mathsf{E})^{\star(r-1)}.\] This concludes the proof. ### Proof of Theorem 14 Finally, we can compute \[\mathsf{vTev}^{X}_{g,n,d\mathsf{H}^{\vee}+k\mathsf{E}^{\vee}} =\mathrm{Coeff}(\mathsf{P}^{\star n}\star\Delta^{\star g},\mathsf{ P}q^{d\mathsf{H}^{\vee}+k\mathsf{E}^{\vee}})\] \[=\sum_{m=0}^{g}(2r)^{g-m}(1-r)^{m}\binom{g}{m}\mathrm{Coeff}( \mathsf{P}^{\star n+g-m}\star\mathsf{E}^{\star m},\mathsf{P}q^{d\mathsf{H}^{ \vee}+(k+m)\mathsf{E}^{\vee}}),\] and the proof of Theorem 14 is reduced to the following. **Lemma 59**.: _For \(m,k\in\mathbb{Z}_{\geq 0}\), \(\ell,d\in\mathbb{Z}_{>0}\) with \(\ell-d-m>0\), \(d\geq k\) and satisfying_ \[(r+1)d-(r-1)k=r(\ell-1), \tag{22}\] _we have_ \[\mathrm{Coeff}(\mathsf{P}^{\star\ell-m}\star\mathsf{E}^{\star m},\mathsf{P}q^ {d\mathsf{H}^{\vee}+(k+m)\mathsf{E}^{\vee}})=\binom{\ell-d-m-1}{k}.\] **Remark 60**.: _Note that_ \[\binom{\ell-d-m-1}{k}=0\] _unless \(\ell-d-m-1\geq k\), which, using (22), is equivalent to \(d\geq(2r-1)k\). In particular, we get \(0\) unless \(\ell\geq r+2\)._ Proof of Lemma 59.: We proceed by induction on \(m\). Suppose that \(m=0\). Then, conditions (1), (3), and (5) are all satisfied with \(n=\ell\), so \[\mathrm{Coeff}(\mathsf{P}^{\star\ell},\mathsf{P}q^{d\mathsf{H}^{\vee}+k \mathsf{E}^{\vee}})=\mathsf{vTev}^{X}_{0,\ell,d\mathsf{H}^{\vee}+k\mathsf{E}^ {\vee}}=\binom{\ell-d-1}{k}\] by Theorems 12 and 23. Suppose that \(m=1\). If \(\ell>r+1\), then we can write \[\mathsf{P}^{\star(\ell-1)}\star\mathsf{E} =\mathsf{P}^{\star(\ell-2)}\star(\mathsf{H}-\mathsf{E})^{\star(r- 1)}q^{\mathsf{H}^{\vee}+\mathsf{E}^{\vee}}\] \[=\mathsf{P}^{\star(\ell-3)}\star(\mathsf{H}-\mathsf{E})^{\star(r -2)}q^{2\mathsf{H}^{\vee}+\mathsf{E}^{\vee}}\] \[\vdots\] \[=\mathsf{P}^{\star(\ell-r-1)}q^{r\mathsf{H}^{\vee}+\mathsf{E}^{ \vee}}.\] Taking the coefficient of \(\mathsf{P}q^{d\mathsf{H}^{\vee}+(k+1)\mathsf{E}^{\vee}}\) on both sides and applying the \(m=0\) case gives the claim. 
If instead \(\ell<r+2\), then the same chain of equalities yields \[\mathsf{P}^{\star(\ell-1)}\star\mathsf{E}=(\mathsf{H}-\mathsf{E})^{\star(r+1 -\ell)}q^{(\ell-1)\mathsf{H}^{\vee}+\mathsf{E}^{\vee}}.\] In particular, the coefficient of \(\mathsf{P}q^{d\mathsf{H}^{\vee}+(k+1)\mathsf{E}^{\vee}}\) is \(0\). The claimed equality then follows from Remark 60. Finally, suppose that \(m\geq 2\). Write \[\mathsf{P}^{\star(\ell-m)}\mathsf{E}^{\star m} =\mathsf{P}^{\star(\ell-m-1)}\mathsf{E}^{\star(m-1)}(\mathsf{H}- \mathsf{E})^{\star(r-1)}q^{\mathsf{H}^{\vee}+\mathsf{E}^{\vee}}\] \[=\mathsf{P}^{\star(\ell-m)}\mathsf{E}^{\star(m-2)}q^{\mathsf{H}^ {\vee}+\mathsf{E}^{\vee}}-\mathsf{P}^{\star(\ell-m-1)}\mathsf{E}^{\star(m-1) }q^{\mathsf{H}^{\vee}}.\] Taking the coefficient of \(\mathsf{P}q^{d\mathsf{H}^{\vee}+(k+1)\mathsf{E}^{\vee}}\) on both sides and using the inductive hypothesis completes the proof. This concludes the proof of Theorem 14. **Remark 61**.: _The condition \(n-d\geq 1\) in Theorem 14 (i.e., the condition \(\ell-m-d\geq 1\) in Lemma 59 ) is necessary. For example, when \(r=2\) and \(n=g=d=k=1\), we have_ \[\mathsf{vTev}^{X}_{g,n,d\mathsf{H}^{\vee}+k\mathsf{E}^{\vee}} =\mathrm{Coeff}(\mathsf{P}\star\Delta,q^{d\mathsf{H}^{\vee}+k \mathsf{E}^{\vee}}\mathsf{P})\] \[=4\cdot\mathrm{Coeff}(q^{\mathsf{H}^{\vee}}\mathsf{H},q^{\mathsf{ H}^{\vee}+\mathsf{E}^{\vee}}\mathsf{P})-\mathrm{Coeff}(q^{\mathsf{H}^{\vee}+ \mathsf{E}^{\vee}}(\mathsf{H}-\mathsf{E}),q^{\mathsf{H}^{\vee}+2\mathsf{E}^{ \vee}}\mathsf{P})\] \[=0\] _while the right-hand side of Theorem 14 is equal to \(1\)._ **Remark 62**.: _When \(k=0\), in Lemma 59 we have_ \[d=\frac{r(\ell-1)}{r+1}\in\mathbb{Z},\] _and so it must be that \(\ell>r+1\). In particular, as one would expect, the same proof gives_ \[\mathsf{vTev}^{X}_{g,n,d\mathsf{H}^{\vee}}=(r+1)^{g}=\mathsf{vTev}^{\mathbb{ P}^{r}}_{g,n,d\mathsf{H}^{\vee}}\] _for all \(g\geq 0\) and \(n,d>1\) satisfying (1)._
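As a quick sanity check of the bookkeeping above, the coefficient extraction and the closed-form expansion obtained via the Vandermonde identity can be compared numerically. The minimal Python sketch below (assuming sympy is available; the sample tuples are arbitrary integers with \(n-1-d\geq 0\) and are not required to satisfy the geometric constraint (1)) does so, and also confirms that the \(k=0\) coefficient equals \((r+1)^{g}\), the value appearing in Remark 62.

```python
# A purely algebraic sanity check (illustration only, not part of the paper) that
#     Coeff( (1 + eta)^(n-1-d) * (2*r*eta + (r+1))^g ; eta^k )
# agrees with the closed-form expansion
#     sum_a C(g, a) * (2r)^a * (1 - r)^(g - a) * C(n - 1 - d + a, k)
# and that its k = 0 value is (r + 1)^g.
from math import comb
from sympy import symbols, expand

eta = symbols('eta')

def coefficient_form(r, g, n, d, k):
    # extract the coefficient of eta^k from the expanded polynomial
    expr = expand((1 + eta)**(n - 1 - d) * (2*r*eta + (r + 1))**g)
    return expr.coeff(eta, k)

def closed_form(r, g, n, d, k):
    # closed form obtained from the binomial expansion and the Vandermonde identity
    return sum(comb(g, a) * (2*r)**a * (1 - r)**(g - a) * comb(n - 1 - d + a, k)
               for a in range(g + 1))

for (r, g, n, d, k) in [(2, 1, 6, 3, 1), (3, 2, 9, 5, 2), (4, 0, 7, 4, 2)]:
    assert coefficient_form(r, g, n, d, k) == closed_form(r, g, n, d, k)
    assert coefficient_form(r, g, n, d, 0) == (r + 1)**g
print("coefficient form and closed form agree on the sample values")
```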
2309.16703
Incompatibilities Between Current Practices in Statistical Data Analysis and Differential Privacy
The authors discuss their experience applying differential privacy with a complex data set with the goal of enabling standard approaches to statistical data analysis. They highlight lessons learned and roadblocks encountered, distilling them into incompatibilities between current practices in statistical data analysis and differential privacy that go beyond issues which can be solved with a noisy measurements file. The authors discuss how overcoming these incompatibilities require compromise and a change in either our approach to statistical data analysis or differential privacy that should be addressed head-on.
Joshua Snoke, Claire McKay Bowen, Aaron R. Williams, Andrés F. Barrientos
2023-08-16T20:45:28Z
http://arxiv.org/abs/2309.16703v1
# Incompatibilities between current practices in statistical data analysis and differential privacy ###### Abstract The authors discuss their experience applying differential privacy with a complex data set with the goal of enabling standard approaches to statistical data analysis. They highlight lessons learned and roadblocks encountered, distilling them into incompatibilities between current practices in statistical data analysis and differential privacy that go beyond issues which can be solved with a noisy measurements file. The authors discuss how overcoming these incompatibilities require compromise and a change in either our approach to statistical data analysis or differential privacy that should be addressed head-on. Key words and phrases:differential privacy; statistical analysis + Footnote †: copyrighted: © J. Snoke, C.M. Bowen, A.R. Williams, and A.F. Barrientos ## 1. A Brief History of Differential Privacy in the Wild Researchers and data practitioners make many different claims concerning differential privacy (DP) and its impact on statistical analysis. Some maintain that DP provides the future for how government agencies and private companies will release public statistics and data sets. Others argue that pursuing DP is a mistake and will destroy how we disseminate information as we know it. These debates often center on notions of the trade-off between accuracy and utility, selecting privacy budgets, or the appropriate definition of privacy loss. While these questions matter, they often fail to acknowledge other underlying issues. While DP is a framework containing a wide variety of implementations, there exist fundamental incompatibilities between standard statistical analysis approaches and the possibilities within a DP framework1. Footnote 1: One might rightfully respond that some incompatibilities are not unique to DP and also exist for alternative statistical disclosure control methods, and we would agree. For the sake of simplicity, we do not dig into those questions in this piece. See Slavkovic and Seeman (2023) for a thorough article on the similarities and differences between DP and other statistical disclosure control approaches. From the position of either a statistical analyst or a privacy practitioner, dealing with these incompatibilities often comes across in the field as claims that the other side is "doing it wrong." In reality, we have two paradigms functioning under different assumed rules and making them compatible will require some changes to one or both frameworks. We write this perspective from the point of view of both privacy researchers and statistical analysts who are broadly approaching their analysis from a frequentist perspective with the goal of statistical inference. We also use the term statistical analyst as a catch-all term encompassing statisticians, economists, demographers, or any data scientist working in social statistics. Briefly considering the history of practical DP implementations, early applications of the framework generally obscured the incompatibilities due to the specific use-cases. For instance, numerous tech companies created interactive or query-based DP frameworks that allowed analysts to submit a question and receive a noisy statistic in return. 
Some examples include audience engagement statistics on LinkedIn (Rogers et al., 2021), SQL queries in Uber's driver and rider database (Johnson et al., 2018), people's movement on Google Maps within certain geographic regions (Aktay et al., 2020), and HealthKit usage on Apple products.2 These applications enable analysts to perform learning tasks or other tasks that do not rely on uncertainty estimates or explicit hypothesis testing, since statistical learning was the goal for the use-case. Footnote 2: Differential Privacy Team, Apple. n.d. “Learning with Privacy at Scale.” [https://docassets.developer.apple.com/ml-research/papers/learning-with-privacy-at-scale.pdf](https://docassets.developer.apple.com/ml-research/papers/learning-with-privacy-at-scale.pdf). When the U.S. Census Bureau announced their adoption of DP for the 2020 Census data products, it significantly increased the debate between DP advocates and DP skeptics of its applicability to demographic data. At a high level, the 2020 Disclosure Avoidance System added differentially private noise to thousands of statistics at different census geographic level and corrected for any inconsistencies (e.g., ensuring that population counts for census tracts sum to the county count) using a complex post-processing method formulated in the TopDown Algorithm. This change in statistical data privacy protection created huge backlash from the data user community, including lawsuits3 and a letter to the Director of the U.S. Census Bureau4 to stop the use of DP on the 2020 Census data products. While some simply urged not to add noise, constructive critiques pointed out that statistical analysts needed the ability to adjust for the noise in order to conduct their usual statistical analyses. In essence, analysts needed new tools that are not part of traditional statistical data analysis. The introduction of bias or increased uncertainty in estimates due to privacy protections was not new to those who had been working in statistical disclosure control. What was novel is that DP promised to account or track the added noise rather than ignoring it. But, for some time, it was not clear how this promise would be achieved. In response, several leading researchers requested the Census Bureau release the noisy measurements data set (Dwork et al., 2021). The researchers' reasoning was that access to the noisy measurement file (NMF) would allow analysts to account for the noise introduced from the TopDown Algorithm. In addition, even when statistical analysts have access to the noisy measurement file, most do not have the background understanding of DP or the computational ability to make the required corrections. Given this challenge, some privacy researchers organized a workshop on the "Analysis of Census Noisy Measurement Files and Differential Privacy." The purpose of the workshop was to convene experts from various fields and practices within social science, demography, public policy, statistics, and computer science to discuss the need for implementable tools that allowed analyses on privacy preserving noise induced data and statistics.5 Footnote 5: Workshop on the Analysis of Census Noisy Measurement Files and Differential Privacy,” Accessed February 14, 2023. 
[http://dimacs.rutgers.edu/events/details?eID=2038](http://dimacs.rutgers.edu/events/details?eID=2038) The debate concerning the decennial Census and the noisy measurement file helped illuminate the distinction between the way noise is added and the ability to account for it in statistical analyses. Some argued that reverting to previous statistical data privacy methods, such as swapping, would be preferable to DP. But fundamentally these methods have the same issues as the lack of an NMF, and using them would not provide statistical analysts with any better means of analyzing the data. Without a means of accounting for the noise introduced to protect privacy, any statistical analyses will be biased or falsely overpowered. ## 2. Differentially Private Query Systems for Statistical Inference At this point in the story, it may be tempting to argue that any issues performing statistical analysis under DP can be handled by the development of NMFs and tools to account for the additional noise. The widespread publicity of the Census Bureau's adoption of DP locked in, for many, a particular means of achieving DP, with the corresponding solution being access to the NMF. But these solutions only make sense under the scenario where a set of predetermined measurements is made and released non-interactively, such as publishing a public data set. In fact, the example of the decennial Census is still _rather similar_ to applications of DP in the tech space. The underlying data were very large and the queries were counts. Implementing DP on other products, such as smaller surveys or interactive query systems with the purpose of statistical inference, introduces an entirely new set of incompatibilities. The adoption of DP for settings where researchers make sequences of queries that encompass more complex statistical analyses will require the field to overcome more significant barriers than those that faced the Census Bureau. In our work, Barrientos et al. (2021), we explore creating a differentially private query system, known as a validation server, to allow tax researchers to estimate simple statistics and linear regressions on confidential IRS data. In contrast to the decennial census, this system must allow for interactive queries and include statistic-specific uncertainty estimates for hypothesis testing, such as confidence intervals. When conducting that study, the first difficulty we encountered was that only a small fraction of the published papers proposing DP algorithms for querying common statistics, such as means and regression coefficients, provide uncertainty estimates. Another challenge was that accurate queries required users to input substantial information about the distribution or the range of the underlying data, which is not commonly known. Additionally, an astute user needs to determine how to allocate their privacy budget. But, in many cases, they may not know the complete set of queries they want to perform before starting. Finally, designing an interactive query system that allows different types of queries with optimal performance under DP is not guaranteed to compose in a trivial manner, though it is theoretically possible (Rogers et al., 2016; Whitehouse et al., 2023). While still in progress, working with an interdisciplinary team has pushed us towards considering compromises that may dissatisfy either the data analyst or the privacy practitioner.
Under one of our proposed systems, users may develop and test analyses on non-formally private synthetic data that represent a more limited subset of individuals prior to submitting their analysis to the validation server. Conversely, our server includes a much more limited set of allowed queries than a typical tax economist would expect. In particular, methods for working with data from complex surveys that need to incorporate survey weights, the backbone of the federal statistical system, do not exist. Another proposed compromise is using two sets of privacy budgets, such that users can conduct some initial analyses prior to exhausting their budget on estimates that can be released. On the other hand, users still carry the burden of bringing prior information for their analyses, such as indicating the range of possible values or spending some privacy budget to estimate it. At this point, we do not know the full implications of these decisions or whether they will be featured in the eventual implementation of the validation server. We only highlight the various compromises we have wrestled through to enable statistical analysis and DP to function together. ## 3. Incompatibilities That Require Compromise Based on the lessons we have learned, we offer the following general incompatibilities between DP and normal statistical practice that must be addressed in practice implementations. When the goal is traditional statistical analysis, specifically inference tasks, overcoming the incompatibilities requires compromises, either from the statistical analysts, formal privacy practitioners, or both. (a) **Estimates for traditional statistical inference.** Frequentist methods for statistical inference rely on estimates, such as confidence intervals or \(p\)-values to perform hypothesis testing. DP methods have only been shown to provide guarantees for statistical accuracy in scenarios where the size of the data set is large enough (e.g., Chaudhuri et al., 2011, Sheffet, 2017, Pena and Barrientos, 2021, Sart, 2023), whereas other papers simply choose not to evaluate statistical inference tasks. While large sample properties, also referred to as asymptotic properties, are universally desirable, they frequently fail to provide substantial insight into what can be expected for specific finite sample sizes. This is because the condition of being "large enough" is difficult to define. Additionally, post-processing can frequently induce bias, and DP implementations have not been shown in practice to provide estimates with amounts of noise that statistical analysts would consider reasonable without sacrificing substantial amounts of privacy6. Furthermore, effective DP methods do not exist for more complex models involving survey weights, panel data, or methods for causal inference. A few compromise options exist to handle this incompatibility. First, a statistical data user can only query point estimates and decide not to perform frequentist hypothesis testing. Alternatively, users may opt for Bayesian approaches. While requiring privacy budgets similar to frequentist methods, Bayesian approaches can account for the assumptions and probabilistic nature behind DP, and they can be used for full inference about the parameters and predictions. Specifically, Bayesian techniques allow for the simultaneous consideration of various factors, such as the use of non-sufficient summary statistics, assumptions about data boundedness (clamping), and noise injection for privacy. 
These elements are difficult for traditional frequentist statistics to handle simultaneously. Though Bayesian methods offer a promising alternative, they reflect a compromise because they remain unused and unfamiliar to many in the statistical research community. The field needs concrete work answering this question. If we need to consider alternative approaches to frequentist inference, how will this impact statistical practice? If we can only conduct frequentist inference under different privacy definitions, is accurate statistical inference possible? (b) **Control or nuisance variables.** In estimating regression models, it is very common to include control variables for which the estimates are not really of interest to the person querying the model. Current differentially private methods do not account for this, nor is it clear what it would look like if they did. In regression, for example, queries that include uncertainty estimates, namely those that perturb the sufficient statistics, add noise that grows polynomially with the number of predictors. Other methods, such as Wang (2018), scale better with predictors but require a Bayesian inference approach and have not been shown to be practically implementable (Barrientos et al., 2021). Whatever the approach, queries which return these parameters lead to heightened noise added to estimates of interest as the number of control variables grows, and it can be argued that this noise is extraneous even if it only scales linearly. Can DP be reformulated to ignore the privacy-loss due to these control variables? Is including control variables in a regression query but not receiving coefficients possible? How would this impact the privacy guarantee? If so, is there still value in spending privacy budget on nuisance variables? Conversely, can statistical practice be changed such that appropriate analyses can be made without including control variables? (c) **Assumptions on the range of the data or other assumptions.** A common barrier for statistical analysts using DP methods is the need to place prior bounds on the distribution of data or statistics in order to calculate a finite sensitivity. Knowing this information or finding ways of estimating it apart from the data is not always part of statistical practice. In many cases, there are no good priors to help set these bounds, and the DP literature is largely silent on this problem. In some cases, analysts may be able to estimate the bounds under DP. For example, Wilson et al. (2019) propose a method for automatically estimating bounds on continuous data by taking advantage of the physical limitations of machine precision and minimizing the amount of data clipping. This helps select bounds without prior information, but a fundamental bias-variance trade-off exists when privately estimating bounds Amin et al. (2019). More problematically, the analysts cannot know where they fall on this bias-variance trade-off without knowing the real bounds of the data. This inhibits inferential methods to adjust for the fact that the final estimates include uncertainty both from the noise mechanism of the final query and the noise mechanism of the prior bound-setting query. Either statistical analysts will need to adapt their methodology to account for this, or DP methods will need to adapt to enable analysts to estimate the impact of privately setting bounds. 
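To make concrete how bound-setting, clamping, and noise accounting interact, the following is a minimal sketch (our illustration, not any system described above) of a differentially private mean computed with the Laplace mechanism; the bounds, budget, and normal-approximation intervals are assumptions made only for the example.

```python
import numpy as np

def dp_mean_with_ci(x, lower, upper, epsilon, rng=None):
    """Clamped, Laplace-noised mean with a naive and a noise-aware 95% interval.

    lower/upper are analyst-supplied bounds: clamping to them is what makes the
    sensitivity finite, and a poor choice introduces bias that neither interval
    below corrects for.
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.clip(np.asarray(x, dtype=float), lower, upper)
    n = len(x)
    sensitivity = (upper - lower) / n          # L1 sensitivity of the mean
    scale = sensitivity / epsilon              # Laplace scale for epsilon-DP
    noisy_mean = x.mean() + rng.laplace(0.0, scale)

    # Sampling standard error; in a real system estimating it would itself
    # consume privacy budget, which this sketch ignores.
    se = x.std(ddof=1) / np.sqrt(n)
    z = 1.96
    naive_ci = (noisy_mean - z * se, noisy_mean + z * se)      # ignores the DP noise
    se_total = np.sqrt(se**2 + 2.0 * scale**2)                 # Laplace variance = 2*scale^2
    aware_ci = (noisy_mean - z * se_total, noisy_mean + z * se_total)
    return noisy_mean, naive_ci, aware_ci

rng = np.random.default_rng(0)
data = rng.normal(50.0, 10.0, size=200)
print(dp_mean_with_ci(data, lower=0.0, upper=100.0, epsilon=0.1, rng=rng))
```

Even this toy example exhibits incompatibilities (a) and (c): the noise-aware interval can be dramatically wider at small privacy budgets or small samples, and both intervals silently inherit whatever bias the assumed bounds introduce.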
(d) **Performing exploratory data analysis.** Statistical analysts commonly explore the data using visualizations, marginal and multivariate summary statistics, and model diagnostics. Most statistical researchers, whom we assume are not trusted to access the private data, do not know ahead of time exactly what analyses will be run. In one sense, DP can help disincentivize bad exploratory data analysis (EDA) practices, such as \(p\)-hacking. For example, if an analyst must split their privacy budget across EDA queries, they may limit the amount of exploration they make, potentially making it less likely for them to find spurious results. Conversely, DP may make it more difficult to account for multiple testing, since the final inferences should account for the uncertainty propagated through all the analyses performed to select the final model. Given the issues discussed earlier concerning frequentist inference under DP and the lack of work on multiple testing, it is not clear whether this is feasible. Though some in the broader scientific community are moving towards pre-specifying every model in research7, this is far from the current reality in all disciplines. And not all EDA results in \(p\)-hacking. Data users need the ability to probe assumptions or look for data abnormalities, and they may run into serious problems without this ability. For example, without understanding the data, a user may request a regression model where a predictor has no variance. Under DP, this query may return random noise or a null result with some probability, and statistical analysts are not prepared for this type of response. Either statistical analysts will need to adapt their research without the ability to do EDA, or DP methods need to find ways to allow EDA in a private setting. Footnote 7: For example, [https://plos.org/open-science/preregistration/](https://plos.org/open-science/preregistration/). (e) **Limited queries and the privacy budget.** Finally, there is a broader set of issues that comes from performing statistical analyses with a limited privacy budget. This concept is both unfamiliar to statistical analysts and carries significant implications. For example, how should data maintainers allocate budgets to numerous analysts? How do analysts determine how much of their budget to allocate to multiple model specifications, robustness checks, and the final models? What should analysts do if a journal reviewer asks for alternative model specifications or other requests, such as reproducing the results, and there is no more privacy budget to spend? More challenges occur when multiple data users or analysts submit the same analysis. As an illustrative example, imagine there are two data users (A and B), who submit the same analysis on the same part of the confidential data, but user A submitted before data user B. The validation server could handle this situation in two ways. One approach is to consider the analyses from users A and B as separate analyses. In other words, these analyses would likely produce two different results regardless of whether users A and B used the same or different privacy loss budgets. Under this approach, neither data user would be notified about the other's analysis, ensuring greater confidentiality and encouraging analysts to use the validation server results more confidently. Knowing their specific research ideas will not be revealed mitigates concerns about being scooped or having their ideas preempted.
But producing different answers for the same query creates communication and education problems in explaining to both data users that their answers are valid. It also means, from a societal point of view, that we are sacrificing more privacy (or accuracy) to run the same analysis twice. The other approach is to apply the same result from data user A to data user B. Unlike the first approach, there would be no confusion from having two different results for the same analysis. The data users could also share the cost of the privacy loss budget, spending less of their individual budgets. However, both data users would be informed that the analysis was conducted twice. We could even make this more complex and extend it to the situation where data user B wants a more accurate result and spends more privacy loss budget than data user A. These are just some of the many issues we need to address. In any case, either statistical analysts will need to develop novel means of optimally allocating their budget for their research or DP will need to rethink budgets in interactive query systems. ## 4. Future Steps We hope this perspective will serve as a means of calling out the broader set of issues beyond problems that can be solved using a noisy measurements file. As of now, it is not clear whether the compromise comes from the way we perform statistical analyses, the way we implement DP, or both. We hope that future papers on DP will wrestle through these practical questions in a larger way than has been typically done to this point. ## Acknowledgments This research was funded by the Alfred P. Sloan Foundation [G-2020-14024] and National Science Foundation National Center for Science and Engineering Statistics [49100420C0002 and 49100422C0008].
2303.03355
Dissipative phase transitions in $n$-photon driven quantum nonlinear resonators
We investigate and characterize the emergence of finite-component dissipative phase transitions (DPTs) in nonlinear photon resonators subject to $n$-photon driving and dissipation. Exploiting a semiclassical approach, we derive general results on the occurrence of second-order DPTs in this class of systems. We show that for all odd $n$, no second-order DPT can occur while, for even $n$, the competition between higher-order nonlinearities determines the nature of the criticality and allows for second-order DPTs to emerge only for $n=2$ and $n=4$. As pivotal examples, we study the full quantum dynamics of three- and four-photon driven-dissipative Kerr resonators, confirming the prediction of the semiclassical analysis on the nature of the transitions. The stability of the vacuum and the typical timescales needed to access the different phases are also discussed. We also show a first-order DPT where multiple solutions emerge around zero, low, and high-photon numbers. Our results highlight the crucial role played by strong and weak symmetries in triggering critical behaviors, providing a Liouvillian framework to study the effects of high-order nonlinear processes in driven-dissipative systems, that can be applied to problems in quantum sensing and information processing.
Fabrizio Minganti, Vincenzo Savona, Alberto Biella
2023-03-06T18:42:13Z
http://arxiv.org/abs/2303.03355v2
# Dissipative phase transitions in \(n\)-photon driven quantum nonlinear resonators ###### Abstract We investigate and characterize the emergence of finite-component dissipative phase transitions (DPTs) in nonlinear photon resonators subject to \(n\)-photon driving and dissipation. Exploiting a semiclassical approach, we derive general results on the occurrence of second-order DPTs in this class of systems. We show that for all odd \(n\), no second-order DPT can occur while, for even \(n\), the competition between higher-order nonlinearities determines the nature of the criticality, allowing for second-order DPTs to emerge only for \(n=2\) and \(n=4\). As pivotal examples, we study the full quantum dynamics of three- and four-photon driven-dissipative Kerr resonators, confirming the prediction of the semiclassical analysis on the nature of the transitions. The stability of the vacuum and the typical timescales needed to access the different phases are also discussed. We also show a first-order DPT where multiple solutions emerge around zero, low, and high photon numbers. Our results highlight the crucial role played by _strong_ and _weak_ symmetries in triggering critical behaviors, providing a Liouvillian framework to study the effect of high-order nonlinear processes in driven-dissipative systems, that can be applied to problems in quantum sensing and information processing. ## 1 Introduction Nonlinear bosonic systems, such as optical cavities, polaritonic systems, optomechanical resonators, and superconducting circuits, represent an extremely rich and versatile tool to explore and simulate nonequilibrium quantum physics [1, 2, 3]. These systems are intrinsically _open_, meaning that particles, energy, and correlations can be gained or lost through the coupling with the environment [4]. Drives are then applied to these systems, bringing them out of their thermal equilibrium, and compensating for the losses induced by the environment. As a result, the complex interplay between driving, dissipation, and Hamiltonian terms gives rise to a nontrivial open-system dynamics, leading to nonequilibrium stationary states whose properties differ from those of closed quantum systems at equilibrium [5, 6, 7]. The symmetries of the drive and dissipators play a fundamental role in determining both the nature of the steady state and the dynamical properties of a quantum system [8, 9, 10]. For instance, in a nonlinear photonic cavity the possibility to exploit nonlinear and engineered pumping schemes, in the presence of moderate single-particle dissipation, opened avenues for the generation and stabilization of nonclassical states [11, 12, 13]. A pivotal example in this field is the use of two-photon drives to generate, stabilize, and control photonic Schrodinger cat states [14, 15], which have been proposed as a fundamental building block of quantum computing devices [16]. Beyond their interest for quantum information, parametric processes have been at the center of intense research, leading to the exploration of their properties both in classical [17, 18] and quantum configurations [11, 19, 20, 21]. The study and characterization of dissipative phase transitions (DPTs), and their peculiarities, has been the focus of intense theoretical and experimental research, especially concerning the connection of DPTs to multimodality and metastability [22]. In this scenario, two main distinctions have been drawn in the characterization of DPTs [10].
First-order DPTs are abrupt and discontinuous changes in the properties of the system's steady state [23]. These have been associated with hysteresis and critical slowing down [24, 11, 25], which allow one to observe the emergence of metastable dynamics and to study its competition with the other typical timescales of the system. Key to understanding second-order DPTs - where the steady state changes continuously but is characterized by a divergent response - are symmetries and their breaking [26, 11, 27]. In particular, DPTs can be associated with weak and strong symmetry breaking [28, 29]. Second-order DPTs have been shown to be key for several technological tasks. The cross-fertilization between quantum information processing and open system criticality led to innovative ideas to protect quantum information [28, 30], enhance quantum sensing [31, 32, 33, 34], and review laser theory [35, 36, 37]. In the panorama of DPTs, the parametrically-driven (or two-photon) Kerr resonator has attracted considerable interest [10, 11, 26, 27, 38]. Indeed, it provided an ideal test model that displays both first- and second-order criticalities in different regions of the parameter space [11], and represents one of the few cases for which a steady state can be analytically found [39, 40]. Thus, DPTs of this model were extensively studied [11, 19, 26, 38], in connection with the spectral properties of the Liouvillian [10] and more exotic phenomena, such as exceptional points and parity-time symmetry breaking [41]. These findings represented the natural extension of the well-known results about the first-order transition in the coherently-driven (one-photon) Kerr resonator [11, 24, 42, 43, 44, 45, 25], pioneered by Drummond and Walls [46], and showed how the presence of multi-photon driving and dissipation can drastically modify the physics of nonlinear bosonic resonators [11, 26, 28]. Remarkably, all these results obtained at the single-resonator level provided a guideline to investigate emergent phenomena in more complex lattice architectures [47, 48, 49, 50, 27], allowing one to draw important parallels. In this work, we advance these ideas, and explore the DPTs of nonlinear photonic resonators in the presence of parametric \(n\)-photon drive and losses, going beyond the aforementioned \(n=1,2\) cases. Besides the fundamental theoretical interest, our research is further motivated by the nontrivial achievement of higher-order photon pumping schemes exploiting strong nonlinearities in superconducting circuits [51, 52, 53], making the study of these phase transitions timely. We provide general results showing that second-order DPTs can only emerge in even-driven Kerr-like resonators, and we point out the strong technical limitations to their realization in \(n>4\) driving schemes. We detail how the nature of DPTs is deeply connected to the presence of strong and weak symmetries and can be analyzed exploiting a general theoretical framework based on the structure and on the spectral properties of the Liouvillian superoperator. We also demonstrate that it is possible to witness nontrivial phenomena such as multistability, emerging from the competition between drive, dissipation, and nonlinearity in these models. Thanks to these results, we discuss possible technological limitations that can affect protocols based on \(n>2\) driving schemes. We provide a detailed numerical analysis of the full quantum model for the \(n=3\) and \(n=4\) cases.
In the former, we confirm that the system can only undergo a first-order phase transition accompanied by the breaking of the discrete weak \(Z_{3}\) symmetry, as the system parameters are properly varied. For the 4-photon driven resonator, we show that both a first- and second-order DPT can occur, accompanied by a breaking of the \(Z_{4}\) symmetry, possibly incurring multistability. We discuss analogies and differences between the strong and weak symmetric cases. We verify these results also within the Liouvillian theory, and we derive a precise phase diagram. The paper is structured as follows. In Sec. 2 we introduce the model, the master equation governing the driven-dissipative dynamics, the symmetry properties of the problem, and their consequences on the Liouvillian spectrum. In Sec. 3 we discuss the emergence of DPTs in the semiclassical limit, while in Secs. 4 and 5 we study the full quantum dynamics for \(n=3,4\) resonators, respectively. Finally, in Sec. 6 we draw our conclusions and discuss some future perspectives. ## 2 The model We consider a bosonic \(n\)-photon driven nonlinear resonator, whose Hamiltonian reads \[\hat{H}_{n}=\sum_{m=1}^{m_{\text{max}}}\frac{U_{m}}{m}\left(\hat{a}^{\dagger}\right)^{m}\hat{a}^{m}+G_{n}\left[\hat{a}^{n}+\left(\hat{a}^{\dagger}\right)^{n}\right], \tag{1}\] where \(\hat{a}\) (\(\hat{a}^{\dagger}\)) is the bosonic annihilation (creation) operator. The interaction strengths \(U_{m}\) set the scale of \(m\)-photon processes. For instance, \(U_{1}\) characterizes the energy of one photon in the resonator (in the frame rotating at the pump frequency) and rescales the term \(\hat{a}^{\dagger}\hat{a}\), \(U_{2}\) is a standard Kerr interaction, and so on. As detailed also in Appendix A, for an \(n\)-photon process we should consider at least processes up to the order \(m_{\text{max}}=\lfloor n/2+1\rfloor\), where \(\lfloor A\rfloor\) indicates the integer part of the number \(A\). As we will see in the following, the high-order \(U_{m}\)s play a fundamental role in determining the nature of the DPTs, and for this reason need to be included in a minimal model. \(G_{n}\), instead, represents the \(n\)-photon drive amplitude. Given the dissipative nature of the system, and within the Born and Markov approximations, the system's dynamics is ruled by a Lindblad master equation reading (hereafter we set \(\hbar=1\)) \[\partial_{t}\hat{\rho}(t)=\mathcal{L}[\hat{\rho}(t)]=-i[\hat{H}_{n},\hat{\rho}(t)]+\gamma\mathcal{D}[\hat{a}]+\eta_{n}\mathcal{D}[\hat{a}^{n}], \tag{2}\] with \(\mathcal{D}[\hat{O}]=\hat{O}\hat{\rho}(t)\hat{O}^{\dagger}-\{\hat{O}^{\dagger}\hat{O},\hat{\rho}(t)\}/2\). The first term in Eq. (2) rules the coherent (unitary) part of the dynamics, and follows from Eq. (1), upon an appropriate rescaling of \(U_{m}\) due to the dressing of the cavity eigenmodes by the environment (Lamb-shift-like terms [4]). The second and the third terms in Eq. (2) account for the incoherent one- and \(n\)-photon losses, respectively. While one-photon dissipation is an unavoidable feature in any photonic resonator, emerging from the coupling of the cavity modes with the electromagnetic vacuum, \(n\)-photon losses naturally emerge as a byproduct of the engineered processes leading to \(n\)-photon drive. Notice that other \(m\)-photon (with \(m\neq 1,\,n\)) dissipative processes can be safely neglected, because their emergence is linked to the presence of engineered \(m\)-photon exchanges, and here we are considering only a single drive acting at a time.
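As an illustration of Eqs. (1)-(2) (our sketch, not part of the original text), the Hamiltonian and the Liouvillian can be assembled in a truncated Fock space with QuTiP; the parameter values below reuse those quoted in the caption of Fig. 1 as placeholders, and the cutoff must be checked for convergence.

```python
import numpy as np
import qutip as qt

def build_liouvillian(n, U, G_n, gamma, eta_n, n_fock=30):
    """Liouvillian of the n-photon driven nonlinear resonator, Eqs. (1)-(2).

    U = [U_1, U_2, ...] are the m-photon interaction strengths of Eq. (1).
    """
    a = qt.destroy(n_fock)
    ad = a.dag()
    H = G_n * (a**n + ad**n)                      # n-photon drive
    for m, Um in enumerate(U, start=1):
        H += (Um / m) * ad**m * a**m              # sum_m U_m/m (a^dag)^m a^m
    c_ops = [np.sqrt(gamma) * a,                  # one-photon losses
             np.sqrt(eta_n) * a**n]               # n-photon losses
    return qt.liouvillian(H, c_ops)

# Example: three-photon drive with placeholder parameters (units of eta_3).
L = build_liouvillian(n=3, U=[5.0, 3.0], G_n=9.0, gamma=1.0, eta_n=1.0)
rho_ss = qt.steadystate(L)
print(qt.expect(qt.num(30), rho_ss))              # steady-state photon number
```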
Although the Hamiltonian in Eq. (1) is quite general and platform-independent, we provide a brief discussion on how such terms can emerge in a superconducting circuit implementation in Appendix A. ### Liouvillian spectrum, symmetries, and their breaking Our analysis will mainly focus on the steady states \(\hat{\rho}_{\rm ss}^{(k)}\), i.e., the density matrices that do not evolve any more under the action of the Lindblad master equation (2), defined by \[\partial_{t}\hat{\rho}_{\rm ss}^{(k)}=\mathcal{L}\hat{\rho}_{\rm ss}^{(k)}=0. \tag{3}\] \(k\) is an index which labels these steady states, and in the present analysis will be solely tied to the presence of a strong symmetry (see below). Otherwise, the steady state is unique and will be simply called \(\hat{\rho}_{\rm ss}\). In a thermodynamic limit, formally defined as \(L\to\infty\), the steady state of an open quantum system can display a nonanalytical behavior as a function of a generic parameter \(\zeta\)[10, 23]. A transition is then the nonanalytical change of the expectation value of some operator \(\hat{\sigma}\) as \(\zeta\) crosses the critical point \(\zeta_{\rm c}\). Formally, we say that there is a phase transition of order \(M\) if [10] \[\lim_{\zeta\to\zeta_{\rm c}}\left|\frac{\partial^{M}}{\partial\zeta^{M}}\lim_{L\to\infty}\mbox{Tr}\Big[\hat{\rho}_{\rm ss}^{(k)}(\zeta,L)\hat{\sigma}\Big]\right|=+\infty. \tag{4}\] The \(n\)-photon driven Kerr resonator explicitly displays a \(Z_{n}\) symmetry 1. That is, the transformation Footnote 1: We will use the notation \(Z_{n}\) for the symmetry group, \(\hat{Z}_{n}\) for the operator associated with such a symmetry, and \(\mathcal{Z}_{n}\) for the corresponding superoperator. \[\hat{a}\to\hat{a}\ e^{i2\pi k/n},\quad k=0,\,1,\,\ldots,\,n-1, \tag{5}\] leaves the master equation (2) unchanged. However, one can define two types of symmetries in open quantum systems [8, 9]. For the model under consideration, these are defined according to the way the operator \(\hat{Z}_{n}=e^{i2\pi\hat{a}^{\dagger}\hat{a}/n}\) acts. One speaks of _strong symmetries_ if \(\hat{Z}_{n}\) commutes with both the Hamiltonian and the jump operators, i.e.: \[[\hat{Z}_{n},\hat{H}]=[\hat{Z}_{n},\hat{a}]=[\hat{Z}_{n},\hat{a}^{n}]=0. \tag{6}\] In this case, \(Z_{n}\) implies the existence of a corresponding conserved quantity \(\langle\hat{Z}_{n}\rangle_{t}\equiv\mbox{Tr}[\hat{\rho}(t)\hat{Z}_{n}]=\mbox{const.}\) The system will display \(n\) independent steady states, each one characterized by a different value of \(\langle\hat{Z}_{n}\rangle_{\rm ss}\equiv\lim_{t\to\infty}\langle\hat{Z}_{n}\rangle_{t}\). In our case, such a condition is fulfilled if and only if \(\gamma=0\) (i.e., the photons are never lost one-by-one). The presence of a strong symmetry implies that there exist _two superoperators_ \(\mathcal{Z}_{n}^{\rm L}=\hat{Z}_{n}\cdot\hat{\mathds{1}}\) and \(\mathcal{Z}_{n}^{\rm R}=\hat{\mathds{1}}\cdot\hat{Z}_{n}^{\dagger}\)2, such that Footnote 2: The \(\cdot\) notation for superoperators indicates that, if \(\mathcal{S}=\hat{A}\cdot\hat{C}\), then \(\mathcal{S}\hat{B}=\hat{A}\hat{B}\hat{C}\). Details can be found in Ref. [54]. \[[\mathcal{L},\mathcal{Z}_{n}^{\rm L}]=[\mathcal{L},\mathcal{Z}_{n}^{\rm R}]=0. \tag{7}\] A _weak symmetry_, instead, does not respect the conditions in Eq. (6), and as such the symmetry of the model does not entail a conserved quantity, meaning that \(\langle\hat{Z}_{n}\rangle_{t}\) changes in time [8, 9]. However, the superoperator \(\mathcal{Z}_{n}=\hat{Z}_{n}\cdot\hat{Z}_{n}^{\dagger}\) commutes with the Liouvillian, i.e.
\[[\mathcal{L},\mathcal{Z}_{n}]=0. \tag{8}\] As a consequence of the conditions in Eqs. (7) and (8), strong and weak symmetries constrain the structure of the Liouvillian \(\mathcal{L}\) and of its spectrum. A compact and convenient way to discuss symmetries and phase transitions is via the spectral properties of the Liouvillian [10]. Given any Liouvillian \(\mathcal{L}\), we can introduce its eigenvalues \(\lambda_{i}\) and right eigenoperators \(\hat{\rho}_{i}\), defined via the relation \[\mathcal{L}\hat{\rho}_{i}=\lambda_{i}\hat{\rho}_{i}, \tag{9}\] where \(\mathrm{Re}[\lambda_{i}]\leq 0\ \forall i\), and \(|\mathrm{Re}[\lambda_{i}]|\) represents the decay rates induced by the dissipative dynamics [4, 55]. #### 2.1.1 DPTs and weak symmetries The presence of a weak symmetry allows refining the discussion on the spectral properties of the system. The eigenvalues \(z_{n}^{(k)}\) of \(\mathcal{Z}_{n}\) are the \(n\)-th roots of unity [indeed, \((\mathcal{Z}_{n})^{n}=1\)], that is \(z_{n}^{(k)}=e^{2i\pi k/n}\) for \(k=0,\,1,\ldots n-1\). Since all the eigenoperators of \(\mathcal{L}\) must also be eigenoperators of \(\mathcal{Z}_{n}\), we can introduce the "quantum number" \(k\), such that, for a weak \(Z_{n}\) symmetry, \[\mathcal{Z}_{n}\hat{\rho}_{i}^{(k)}=z_{n}^{(k)}\hat{\rho}_{i}^{(k)},\quad\mathcal{L}\hat{\rho}_{i}^{(k)}=\lambda_{i}^{(k)}\hat{\rho}_{i}^{(k)}. \tag{10}\] We sort the eigenvalues in such a way that \(|\mathrm{Re}[\lambda_{0}^{(k)}]|<|\mathrm{Re}[\lambda_{1}^{(k)}]|<\ldots<|\mathrm{Re}[\lambda_{n}^{(k)}]|\). The presence of a symmetry thus implies that the Liouvillian does not mix eigenoperators with different values of \(k\), and therefore the Liouvillian can be partitioned (block-diagonalized) into different symmetry sectors \(\mathcal{L}_{k}\), i.e., \[\mathcal{L}=\bigoplus_{k}\mathcal{L}_{k}. \tag{11}\] For this reason, the eigenvalues \(\lambda_{j}^{(k)}\) and eigenoperators \(\hat{\rho}_{j}^{(k)}\) describe the whole physics within each of the Liouvillian symmetry sectors. Weak symmetries fix the structure of the eigenoperators, which, in the number (Fock) basis, read \[\hat{\rho}_{j}^{(k)}=\sum_{p,q}c_{p,q}\left|p\right\rangle\left\langle q\right|\,,\quad\mathrm{mod}(p-q,n)=k, \tag{12}\] where \(\mathrm{mod}(p-q,n)\) indicates the modulo operation. In other words, \(\hat{\rho}_{j}^{(k)}\) must be an operator containing only elements such that \((p-q)\) is either \(k\), or \(k\pm n\), or \(k\pm 2n\), etc. For example, for a \(Z_{2}\) symmetry, this implies \((p-q)\) either even or odd, and therefore the eigenoperators of the Liouvillian must be characterised by a checkerboard-like structure. We show a typical steady-state structure for weak \(Z_{3}\) and \(Z_{4}\) symmetries in Figs. 1(a) and (b), respectively. As also demonstrated in Ref. [9], in the case of a weak symmetry, \(\hat{\rho}_{\rm ss}\) is generally unique and thus _must_ belong to the \(k=0\) symmetry sector of the Liouvillian. For this reason, for any finite number of photons in the system, the \(n\)-photon driven Kerr resonator with weak \(Z_{n}\) symmetry will admit a unique steady state \(\hat{\rho}_{\rm ss}\propto\hat{\rho}_{0}^{(0)}\). Furthermore, the discontinuous behavior of the steady state in Eq. (4) is signalled by the Liouvillian spectral properties. In the thermodynamic limit, a second eigenoperator, which is stationary under the action of the Liouvillian, emerges.
Accordingly, an eigenvalue \(\lambda_{m}^{(k)}\) becomes exactly zero, both in its real and imaginary parts, as a function of the parameter \(\zeta\). In finite-size systems, phase transitions cannot be observed, and \(\lambda_{m}^{(k)}\neq 0\) if \(m\neq 0\) or \(k\neq 0\). Nevertheless, the study of the Liouvillian spectral properties provides much useful information about the scaling and nature of the transition [47]. Within this formalism and notation, a first-order phase transition can be seen as a change in the \(k=0\) symmetry sector, where the steady state \(\hat{\rho}_{\rm ss}\propto\hat{\rho}_{0}^{(0)}\) and the eigenoperator \(\hat{\rho}_{1}^{(0)}\) display level touching (details can be found in Ref. [10]). More specifically, \(\lambda_{1}^{(0)}=0\) at the critical point, and the minimum of \(|\lambda_{1}^{(0)}|\) reaches zero as the system scales towards the thermodynamic limit. A spontaneous symmetry breaking of \(Z_{n}\), instead, means the emergence of \(n-1\) states, each one belonging to a different \(k\)-symmetry sector, that do not evolve any more under the action of the Liouvillian. In this case, the phase transition is associated with \(\lambda_{0}^{(1)},\ldots,\lambda_{0}^{(n-1)}\) becoming and remaining zero in a whole region where the symmetry is broken. For instance, in the case of a \(\mathcal{Z}_{2}\) breaking, \(\lambda_{0}^{(1)}=0\) after the transition, while for \(\mathcal{Z}_{3}\) one has \(\lambda_{0}^{(1)}=\lambda_{0}^{(2)}=0\). The corresponding states \(\hat{\rho}_{0}^{(k)}\), belonging to different symmetry sectors with respect to \(\hat{\rho}_{0}^{(0)}\), allow one to construct the symmetry-breaking steady states. Indeed, by choosing the correct superposition of the form \(\hat{\bar{\rho}}_{j}=\sum_{k=0}^{n-1}c_{j,k}\hat{\rho}_{0}^{(k)}\), one can obtain well-defined density matrices such that \(\mathcal{Z}_{n}\hat{\bar{\rho}}_{j}\neq z_{n}^{(k)}\hat{\bar{\rho}}_{j}\) but \(\mathcal{L}\hat{\bar{\rho}}_{j}=0\). #### 2.1.2 DPTs and strong symmetries In the case of a strong symmetry any eigenoperator is characterized by two quantum numbers \((k_{\rm L},k_{\rm R})\), such that \(\mathcal{Z}_{n}^{\rm L,\;R}\hat{\rho}_{i}^{(k_{\rm L},k_{\rm R})}=e^{\pm 2i\pi k_{\rm L,\;R}/n}\hat{\rho}_{i}^{(k_{\rm L},k_{\rm R})}\), where, again, \(k_{\rm L,\;R}=0,1,\ldots,n-1\). We deduce that \[\begin{split}\hat{\rho}_{i}^{(k_{\rm L},k_{\rm R})}=\sum_{p,q}c_{p,q}\left|p\right>\left<q\right|\,,\\ {\rm mod}(p,n)=k_{\rm L},\quad{\rm mod}(q,n)=k_{\rm R}.\end{split} \tag{13}\] Such a structure is shown in Figs. 1(c) and (d) for two steady states of the \((0,0)\) symmetry sector for \(Z_{3}\) and \(Z_{4}\) symmetries. Notice now that we can define two different types of eigenoperators: those which describe the evolution of populations, for which \(k_{\rm L}=k_{\rm R}\), and the coherences, for which \(k_{\rm L}\neq k_{\rm R}\). Consequently, the symmetry sectors are \(\mathcal{L}_{k_{\rm L},k_{\rm R}}\), i.e., \[\mathcal{L}=\bigoplus_{k_{\rm L},k_{\rm R}}\mathcal{L}_{k_{\rm L},k_{\rm R}}. \tag{14}\] Figure 1: Sketch of the structure of (one of the) steady-state density matrix (matrices) \(\hat{\rho}_{0}^{(0)}\) (\(\hat{\rho}_{0}^{(0,0)}\)) for weak (a, b) and strong (c, d) \(Z_{3}\) (a, c) and \(Z_{4}\) (b, d) symmetries. White indicates that the matrix element is zero. Comparing the weak and strong symmetric cases, one notices the effect of one-photon dissipation: the breaking of the strong symmetry by \(\gamma\) results in a mixing of the populations, and thus in more nonzero elements.
Nonetheless, being an incoherent process, no coherences between different symmetry sectors can be retained, thus resulting in the coarser steady state structure. Hamiltonian parameters: (a, c) \(U_{1}=5\eta_{3}\), \(U_{2}=3\eta_{3}\), \(G_{3}=9\eta_{3}\); (b, d) \(U_{1}=5\eta_{4}\), \(U_{2}=3\eta_{4}\), \(U_{3}=\eta_{4}/5\), \(G_{4}=20\eta_{4}\). Dissipation: (a) \(\gamma=\eta_{3}\), (b) \(\gamma=\eta_{4}\). For each of the population sectors, there must exist a well-defined steady state \(\hat{\rho}_{\rm ss}^{(k)}\propto\hat{\rho}_{0}^{(k,k)}\) (a trace-one, Hermitian, and positive semidefinite matrix which does not evolve under the action of the Liouvillian), while coherences are always traceless matrices. Accordingly, the definition of the phase transition and of the spontaneous symmetry breaking takes into account the presence of multiple disconnected eigenspaces. A first-order DPT occurs in the population sectors, and it is associated with the presence of an eigenoperator \(\hat{\rho}_{1}^{(k,k)}\) whose eigenvalue \(\lambda_{1}^{(k,k)}\) becomes zero in the thermodynamic limit. A spontaneous symmetry breaking, instead, implies that the eigenoperators \(\hat{\rho}_{0}^{(k_{\rm L},k_{\rm R})}\) acquire an eigenvalue \(\lambda_{0}^{(k_{\rm L},k_{\rm R})}=0\). Spontaneous symmetry breaking thus implies that quantum superpositions between the states composing \(\hat{\rho}_{\rm ss}^{(k_{\rm L})}\propto\hat{\rho}_{0}^{(k_{\rm L},k_{\rm L})}\) and \(\hat{\rho}_{\rm ss}^{(k_{\rm R})}\propto\hat{\rho}_{0}^{(k_{\rm R},k_{\rm R})}\), i.e., two steady states of different symmetry sectors, can be maintained indefinitely. Indeed, not only the populations do not evolve, but also the coherences remain stationary. In this regard, DPTs accompanied by spontaneous breaking of strong symmetries bear a closer resemblance to Hamiltonian transitions, and this is the reason for their use in quantum information [28, 30]. ## 3 Semiclassical analysis of the \(n\)-photon driven resonator The equation of motion for the expectation value of the observable \(\hat{a}\) evolving under Eq. (2) is \[\begin{split}\partial_{t}\langle\hat{a}\rangle_{t}&=-i\sum_{m=1}^{m_{\rm max}}U_{m}\langle\left(\hat{a}^{\dagger}\right)^{m-1}\hat{a}^{m}\rangle_{t}-inG_{n}\langle\left(\hat{a}^{\dagger}\right)^{n-1}\rangle_{t}\\ &\quad-\frac{\gamma}{2}\langle\hat{a}\rangle_{t}-\frac{n\eta_{n}}{2}\langle\left(\hat{a}^{\dagger}\right)^{n-1}\hat{a}^{n}\rangle_{t}.\end{split} \tag{15}\] Due to the presence of non-quadratic terms, these equations of motion cannot be closed, leading to a hierarchy of coupled equations. ### The thermodynamic limit and finite-component phase transitions We now introduce the dimensionless parameter \(L\) such that \[G_{n}=\tilde{G}_{n}/\sqrt{L^{n-2}},\,U_{m}=\tilde{U}_{m}/L^{m-1},\,\eta_{n}=\tilde{\eta}_{n}/L^{n-1}, \tag{16}\] and we will consider the thermodynamic limit \(L\to\infty\). In such a limit \((G_{n})^{\alpha}U_{m}\) and \((G_{n})^{\beta}\eta_{n}\) are constants [for \(\alpha\) and \(\beta\) such that \((-n/2+1)\alpha-m+1=0\) and \((-n/2+1)\beta-n+1=0\)], but the number of excitations diverges. Such a rescaling of the system parameters can be seen as the generalization of the scaling proposed in Ref. [56] for the \(n=1\) case. The semiclassical (coherent state) approximation amounts to assuming that the state of the resonator is coherent, i.e., \[\hat{\rho}(t)=\left|\alpha(t)\right\rangle\left\langle\alpha(t)\right|, \tag{17}\] where \(\hat{a}\left|\alpha(t)\right\rangle=\alpha(t)\left|\alpha(t)\right\rangle\).
Accordingly, the equation of motion for the rescaled coherent field \(\tilde{\alpha}(t)=\left\langle\hat{a}\right\rangle/\sqrt{L}\) leads to a generalized driven-dissipative Gross-Pitaevskii-like equation \[\begin{split}\partial_{t}\tilde{\alpha}&=\left[-i\sum_{m}\tilde{U}_{m}|\tilde{\alpha}|^{2(m-1)}-\frac{n}{2}\tilde{\eta}_{n}|\tilde{\alpha}|^{2(n-1)}\right.\\ &\quad\left.-\frac{\gamma}{2}\right]\tilde{\alpha}-in\tilde{G}_{n}\left(\tilde{\alpha}^{*}\right)^{n-1}.\end{split} \tag{18}\] Equation (18) is independent of \(L\), and the photon number scales as \(N=|\alpha|^{2}\propto L\), confirming that \(L\to\infty\) corresponds to a well-defined thermodynamic limit with an infinite number of photons. In general, we expect the semiclassical approximation (17) to be valid and predictive in the \(L\to\infty\) limit, and far from the critical points where nonlinear processes inducing quantum fluctuations cannot be neglected. The parameter \(L\) allows introducing the idea of finite-component phase transitions -- where the thermodynamic limit is replaced by a scaling of the system parameters [56, 57, 58, 59, 11, 24, 36, 11, 60]. ### Analysis of the transition properties Given the invariance of Eq. (18) to the transformations in Eq. (16), in the following analysis we will work with the _bare_ quantities \(\{U_{m},\eta_{n},G_{n}\}\). Despite the simplification introduced by the semiclassical approximation, Eq. (18) cannot yet be analytically solved. At the steady state, i.e., \(\partial_{t}\alpha=0\), Eq. (18) reads \[\left(\sum_{m=1}^{m_{\rm max}}U_{m}N^{m-1}-i\frac{\gamma+n\eta_{n}N^{n-1}}{2}\right)\alpha=nG_{n}\left(\alpha^{*}\right)^{n-1}. \tag{19}\] In general, Eq. (19) gives rise to multiple solutions for the photonic field \(\alpha\). The onset of new _stable_ solutions of Eq. (19) can be associated with the emergence of phase transitions. For example, when \(n=1\), the cubic equation for \(\alpha\) (obtained by considering \(m_{\rm max}=2\)) gives rise to the well-known S-shaped curve for the photon number [46], signaling the presence of a first-order phase transition (between a low- and high-density state) accompanied by a hysteresis region with multiple stable solutions in the thermodynamic limit [56, 11, 24]. If \(n\geq 2\), Eq. (19) always admits the solution \(\alpha=\alpha_{\rm vac}=0\). However, the other solutions \(\alpha\) of Eq. (19) cannot be analytically found. Our strategy is thus to solve an _inverse problem_. Being interested in studying the emergence of criticalities as the driving strength is varied, by multiplying both sides by their complex conjugate, one finally obtains the equation \[G_{n}(N)=\sqrt{\frac{4\left(\sum_{m}U_{m}N^{m-1}\right)^{2}+\left(\gamma+n\eta_{n}N^{n-1}\right)^{2}}{4n^{2}N^{n-2}}}, \tag{20}\] where we selected the positive branch of the square root since, up to a phase, one can always choose \(G_{n}\in\mathbb{R}^{+}\)3. Footnote 3: This amounts to a change of the initial condition by sending \(\hat{a}\to\hat{a}e^{i\varphi_{0}}\). #### 3.2.1 Stability of the vacuum In this section we show that, within the semiclassical picture, the solution \(\alpha_{\rm vac}=0\) is always asymptotically stable for \(n>2\) if \(\gamma\neq 0\). Consider \(\alpha=\alpha_{\rm vac}+\delta\alpha\), where \(\delta\alpha\in\mathbb{C}\) is a small perturbation (\(|\delta\alpha|\ll 1\)) around the vacuum solution. Plugging the above parametrization into Eq.
(18), and expanding it to first order in \(\delta\alpha\), we get \[\partial_{t}(\delta\vec{\alpha})=\mathsf{M}\cdot\delta\vec{\alpha}, \tag{21}\] where \(\delta\vec{\alpha}=(\mathrm{Re}[\delta\alpha],\mathrm{Im}[\delta\alpha])^{\intercal}\) and \[\mathsf{M}=\begin{pmatrix}-\gamma/2&-2\delta_{n,2}G_{2}+U_{1}\\ -2\delta_{n,2}G_{2}-U_{1}&-\gamma/2\end{pmatrix} \tag{22}\] is the so-called stability matrix. The solutions of Eq. (21) are given by \(\delta\vec{\alpha}(t)=\exp(\lambda_{\pm}t)\,\delta\vec{\alpha}(0)\), where \(\lambda_{\pm}=-\gamma/2\pm\sqrt{4\delta_{n,2}(G_{2})^{2}-U_{1}^{2}}\) are the eigenvalues of \(\mathsf{M}\). Thus it is straightforward to conclude that for \(n>2\) the vacuum solution \(\alpha_{\rm vac}=0\) is always stable at a semiclassical level for finite single-photon losses since \[\mathrm{Re}[\lambda_{\pm}]=-\frac{\gamma}{2}<0. \tag{23}\] For \(n=2\) the vacuum becomes unstable when \[\mathrm{Re}\left[\sqrt{4(G_{2})^{2}-U_{1}^{2}}\right]>\frac{\gamma}{2}, \tag{24}\] which implies \(\mathrm{Re}[\lambda_{+}]>0\). Equation (23) has important consequences since it implies that, contrary to the \(n=1,2\) cases, the semiclassical dynamics never triggers a transition from the vacuum to a high-density solution if \(\gamma\neq 0\). However, as we will see in Secs. 4 and 5, quantum fluctuations in finite-size systems can make the vacuum solution unstable and allow for the onset of phase transitions. Finally, we note that in the case of a strong symmetry (\(\gamma=0\)), the vacuum is marginally stable, and higher-order perturbation theory is needed to assess the stability of the vacuum. #### 3.2.2 Second-order phase transitions and behavior around \(\alpha=0\) For the class of systems under consideration a second-order phase transition occurs when the state changes from \(N=\langle\hat{a}^{\dagger}\hat{a}\rangle=0\) to \(N>0\) continuously as a function of the driving strength \(G_{n}\)[11]. In other words, if a second-order DPT occurs, semiclassically the critical point must correspond to a solution of Eq. (20) where \[G_{n}^{(c)}\equiv\lim_{N\to 0^{+}}G_{n}(N). \tag{25}\] We note that the limit \(N\to 0^{+}\) must be taken since Eq. (20) is defined only for \(N\neq 0\). At this specific value of \(G_{n}\) the system is thus allowed to pass from the semiclassical solution \(\alpha_{\rm vac}=0\) to another stable solution with \(\alpha\neq 0\). Notice that Eq. (20) admits at most three possible behaviors around \(N=0\) as sketched in Fig. 2: * The curve \(G_{n}\) intersects zero with a positive derivative (red line in Fig. 2). In this case, the system can undergo a second-order DPT, passing continuously from the zero solution to a nonzero one. * The curve \(G_{n}\) intersects zero with a negative derivative (blue line in Fig. 2). This resembles the S-like shape of bistability in one-photon driven systems. Since the photon number should monotonically increase by increasing the photon drive, this solution can never be stable, and therefore the system can only undergo a first-order DPT. * The curve \(G_{n}\) never intersects zero for a finite value of \(N\) (green line in Fig. 2). Also in this case the system can never experience a second-order DPT. Figure 2: (a) Possible different behaviors of \(G_{n}\) as a function of \(N\) according to Eq. (20). The marker indicates the minima of the function \(G_{n}\), while the hatching indicates the unphysical solutions \(N<0\). (b) By simply inverting the plot, we can gain information on \(N\) as a function of \(G_{n}\).
We conclude that a necessary (but not sufficient) condition to observe second-order DPTs, according to the semiclassical theory, is \[\text{(i)}\quad 0<G_{n}^{(c)}<\infty. \tag{26a}\] \[\text{(ii)}\quad\frac{\partial G_{n}(N)}{\partial N}\bigg|_{G_{n}=G_{n}^{(c)}}\geq 0. \tag{26b}\] Notice that, in the case of a vertical-tangent point, higher-order derivatives need to be computed. #### 3.2.3 Universal features and a no-go theorem From the remarks in the previous section and using Eqs. (26a) and (26b), we can already draw some important conclusions about the nature of DPTs in this class of systems. In particular, we formulate the following no-go theorem. **No-go theorem.**_Consider an \(n\)-photon driven-dissipative resonator, with nonvanishing Kerr nonlinearity, governed by the Lindbladian (2); then: (a) a second-order DPT never occurs for odd \(n\); (b) If \(\gamma\neq 0\) (weak symmetry), \(n=2\) is the only case with a second-order DPT; (c) If \(\gamma=0\), and \(U_{1}\neq 0\), again \(n=2\) is the only case where a second-order DPT can emerge; (d) A second-order DPT for \(n=4\) can be found only if \(U_{1}=\gamma=0\). (e) For \(U_{2}\neq 0\), no second-order DPTs can occur if \(n>4\)._ Proof.: The semiclassical solutions of the stationary Gross-Pitaevskii equation (19) must satisfy Eq. (20). The behaviour of this function around \(N=0^{+}\) for \(U_{1}\neq 0\) or \(\gamma\neq 0\) is given by \[G_{n}(N)\simeq\frac{1}{2n}N^{\frac{2-n}{2}}\sqrt{4U_{1}^{2}+\gamma^{2}}\left[1+\mathcal{O}(N)\right]. \tag{27}\] In the case where \(U_{1}=\gamma=0\), the expansion leads to \[G_{n}(N)\simeq\frac{|U_{2}|}{n}N^{\frac{4-n}{2}}\left[1+\mathcal{O}(N)\right]. \tag{28}\] To prove _(a)_, we consider odd \(n\), and from Eq. (27) we get \[G_{n}^{(c)}=\begin{cases}0&\text{if }n=1\\ \infty&\text{if }n=3,5,\ldots\end{cases} \tag{29}\] and therefore the condition (26a) for the occurrence of a second-order DPT is never satisfied. If \(U_{1}=\gamma=0\), instead, Eq. (28) gives \[G_{n}^{(c)}=\begin{cases}0&\text{if }n=1,3\\ \infty&\text{if }n=5,7,\ldots\end{cases}. \tag{30}\] We have therefore proven the statement _(a)_. Let us now consider the case of even \(n\). From Eq. (27) we find that for \(U_{1}\neq 0\) or \(\gamma\neq 0\) a second-order DPT is possible only for \(n=2\), with a critical point given by \[G_{2}^{(c)}=\frac{\sqrt{4U_{1}^{2}+\gamma^{2}}}{4}. \tag{31}\] Higher \(n\) results in \(G_{n}^{(c)}=\infty\). Condition (26b) reads \[\frac{\partial G_{2}(N)}{\partial N}\bigg|_{G_{2}=G_{2}^{(c)}}=\frac{U_{1}U_{2}+4\gamma\eta_{2}}{2\sqrt{4U_{1}^{2}+\gamma^{2}}} \tag{32}\] and thus it can be satisfied for an appropriate choice of the parameters. These equations prove _(b)_ and _(c)_. Let us now assume \(U_{1}=\gamma=0\). From Eq. (28) it follows that, for \(n=4\), a second-order DPT can also take place, for \[G_{4}^{(c)}=\frac{|U_{2}|}{4}. \tag{33}\] Therefore, condition (i) in Eq. (26a) is satisfied in the case \(U_{1}=\gamma=0\). As for condition (ii) in Eq. (26b), we have \[\frac{\partial G_{4}(N)}{\partial N}\bigg|_{G_{4}=G_{4}^{(c)}}=\frac{U_{3}}{4}\operatorname{Sign}\left(U_{2}\right), \tag{34}\] which can be satisfied by choosing \(U_{2}\) and \(U_{3}\) with the same sign. Finally, one can easily show that, for \(n>4\), Eq. (28) gives \(G_{n}^{(c)}=\infty\), demonstrating the impossibility of a second-order DPT, thus proving _(e)_.
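The conditions above are straightforward to probe numerically from Eq. (20); the following sketch (our illustration, with placeholder parameters, reusing for the last example the couplings later quoted for Fig. 3) evaluates \(G_{n}(N)\) on a grid, reads off its small-\(N\) limit for different \(n\), and counts the turning points whose presence signals multiple semiclassical branches, anticipating the discussion of Sec. 3.2.4.

```python
import numpy as np

def G_of_N(N, n, U, gamma, eta_n):
    """Semiclassical drive amplitude G_n(N) of Eq. (20); U = [U_1, U_2, ...]."""
    poly = sum(Um * N**(m - 1) for m, Um in enumerate(U, start=1))
    num = 4.0 * poly**2 + (gamma + n * eta_n * N**(n - 1))**2
    return np.sqrt(num / (4.0 * n**2 * N**(n - 2)))

N = np.logspace(-8, 2, 4000)

# Small-N limit with U_1 = gamma = 0: finite (= |U_2|/4) only for n = 4,
# in agreement with Eqs. (28) and (33); n = 2, 3 give 0 and n = 6 diverges.
for n in (2, 3, 4, 6):
    g = G_of_N(N, n, U=[0.0, 1.0, 0.3], gamma=0.0, eta_n=0.1)
    print(n, g[0])

# Turning points of G_4(N): sign changes of the numerical derivative mark the
# overlapping branches responsible for multistability (cf. Eq. (35)).
g4 = G_of_N(N, 4, U=[10.0, -25.0, 3.0], gamma=1.0, eta_n=0.1)
print("turning points:", int(np.sum(np.diff(np.sign(np.diff(g4))) != 0)))
```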
Before dealing with the analysis of the full quantum results, let us remark that, while \(\gamma=0\) is impossible to achieve in actual realizations, for many practical purposes one can consider system "sizes" \(L\) where, to a reasonable approximation, the role of \(\gamma\) can be neglected, and thus the approximation \(\gamma=0\) faithfully recovers the results of finite-time experiments. Furthermore, the detuning terms can be easily manipulated, therefore making it possible to approximately fulfil the condition \(\gamma=U_{1}=0\) necessary to witness the second-order DPT for the \(n=4\) case. We also notice that the mechanism enabling second-order DPTs for \(n=4\) (i.e., the fact that \(U_{2}\) and \(U_{3}\) have the same sign) is the same as the one at play in the two-photon Kerr resonator, where this role is played by the detuning \(U_{1}\) and the two-photon interaction potential \(U_{2}\) in Eq. (32). Finally we stress that, although second-order DPTs could in principle emerge also for even \(n>4\), these would require setting \(U_{2}=0\), which, contrary to the detuning \(U_{1}\), cannot be easily manipulated. #### 3.2.4 Multistability of solutions with different numbers of photons The solution around \(N=0\) predicts either a first- or a second-order phase transition describing the passage of the system from the vacuum to a nonzero population phase. This analysis does not predict the behavior far from \(N=0\), and nothing prevents several Hamiltonian terms from competing with each other, thus resulting in multiple stable solutions. In particular, the presence of this multistability would imply an overlap of "S-like" curves of the semiclassical solution, so that, for the same drive intensity, there are multiple solutions with different photon numbers. To understand which mechanism can enable multistability, let us consider again Eq. (20). In the semiclassical formalism, multistability implies the presence of multiple solutions at the semiclassical level with different photon numbers [cf. Fig. 3 (b)]. This translates into the presence of multiple local minima (or maxima) of the function \(G_{n}(N)\), as shown in Fig. 3 (a). Therefore, one can study the equation \[\frac{\partial G_{n}(N)}{\partial N}=0. \tag{35}\] The number of maxima and minima signals the presence of multiple semiclassical solutions. Following Descartes' rule of signs - i.e., the maximal number of positive roots of a polynomial is the number of sign changes between consecutive coefficients - we deduce that a necessary condition to have multiple solutions is the presence of alternating signs among the various \(U_{m}\). Physically speaking, the underlying mechanism is quite straightforward: different \(U_{m}\) terms can compete with each other in determining the energy of one photon in the system, while drive and dissipation favour solutions with more or fewer photons. Since the relevance of each interaction term can change in different occupation regimes, several solutions can emerge. The stability of the semiclassical solutions with respect to quantum fluctuations needs to be numerically assessed. ## 4 Three-photon Kerr resonator Having discussed the general properties of DPTs, we turn now to specific examples, to demonstrate the validity of the semiclassical analysis, and show the quantum properties around criticality.
Throughout the next two sections, we will diagonalize the Liouvillian superoperator. We take full advantage of the system's symmetry, as detailed in Appendix B, to reduce the computational complexity and enhance the precision of the results. For the most numerically demanding simulations, we resort to the recently-developed Arnoldi-Lindblad method [61], in conjunction with the algorithm detailed in Appendix B. Figure 3: Multistability according to the semiclassical analysis in a four-photon driven resonator (\(n=4\)), where we fixed \(U_{1}=10\gamma\), \(U_{2}=-25\gamma\), \(U_{3}=3\gamma\), \(\eta_{4}=0.1\gamma\). Figure 4: Onset of a first-order phase transition for increasing \(L\) (see legend) with symmetry breaking in the three-photon Kerr resonator. Panel (a): mean number of photons in the steady state \(\langle\hat{a}^{\dagger}\hat{a}\rangle\), renormalised by the scaling parameter \(L\). Panel (b): Real part of \(\lambda_{1}^{(0)}\), i.e., the Liouvillian gap in the same symmetry sector as the steady state, inducing the first-order transition. The three vertical lines indicate the scaling values studied in Fig. 6. Panel (c): Real part of \(\lambda_{0}^{(1)}\), i.e., the Liouvillian eigenvalue signalling the spontaneous symmetry breaking. Parameters: \(U_{1}/\gamma=-10\), \(U_{2}/\gamma=10\), \(\eta_{3}/\gamma=1\). Here, we consider the three-photon driven Kerr resonator governed by the master equation \[\partial_{t}\hat{\rho}(t)=-i\left[\hat{H}_{3},\hat{\rho}(t)\right]+\gamma\mathcal{D}[\hat{a}]+\eta_{3}\mathcal{D}[\hat{a}^{3}] \tag{36}\] with \[\hat{H}_{3}=U_{1}\hat{a}^{\dagger}\hat{a}+\frac{U_{2}}{2}\left(\hat{a}^{\dagger}\right)^{2}\hat{a}^{2}+G_{3}\left[\hat{a}^{3}+\left(\hat{a}^{\dagger}\right)^{3}\right]. \tag{37}\] We focus on the \(\gamma\neq 0\) case, where the system displays a \(Z_{3}\) weak symmetry. According to the semiclassical analysis, we expect a first-order dissipative phase transition accompanied by the spontaneous breaking of the weak \(Z_{3}\) symmetry. ### Semiclassical vs quantum solution First, we analyze the photon number as a function of the driving strength \(G_{3}\). In Fig. 4 we show the results of the full quantum analysis (colored lines), and compare them to the prediction of the semiclassical analysis (dashed black line). For a weak drive \(G_{3}\), the system is in the vacuum, and \(\hat{\rho}_{\text{ss}}\simeq\ket{0}\bra{0}\). Increasing the drive intensity, the system's photon number deviates from the vacuum and approaches the high-photon number branch predicted by the semiclassical theory. In this symmetry broken phase, the stationary state is well-approximated by a statistical mixture of three coherent states, i.e., \[\hat{\rho}_{\text{ss}}\simeq\frac{\ket{\alpha_{1}}\bra{\alpha_{1}}+\ket{\alpha_{2}}\bra{\alpha_{2}}+\ket{\alpha_{3}}\bra{\alpha_{3}}}{3}, \tag{38}\] where \(\ket{\alpha_{1,2,3}}\) are coherent states with the same number of photons and a relative phase difference of \(\pm 2\pi/3\), i.e. \[\alpha_{j+1}=\alpha_{j}\ e^{i\frac{2\pi}{3}}. \tag{39}\] The change in the steady state population becomes more and more abrupt as we increase the parameter \(L\), demonstrating that, indeed, the phase transition is of the first order. ### Analysis of the first-order transition To confirm the presence of a first-order phase transition, we plot in Fig. 4(b) the Liouvillian eigenvalue \(\lambda_{1}^{(0)}\) associated with the slowest relaxation rate in the steady-state symmetry sector.
### Analysis of the first-order transition

To confirm the presence of a first-order phase transition, we plot in Fig. 4(b) the Liouvillian eigenvalue \(\lambda_{1}^{(0)}\) associated with the slowest relaxation rate in the steady-state symmetry sector. This eigenvalue signals hysteresis and critical slowing down, and the fact that it tends to zero in the thermodynamic limit proves the presence of a first-order DPT [10]. We then investigate the properties of the eigenoperator \(\hat{\rho}_{1}^{(0)}\) associated with such a state. According to Ref. [10], in the critical region one can use the eigendecomposition of \(\hat{\rho}_{1}^{(0)}\) to recast \[\hat{\rho}_{1}^{(0)}\simeq\hat{\rho}_{1}^{+}-\hat{\rho}_{1}^{-}, \tag{40}\] where \(\hat{\rho}_{1}^{\pm}\) represent the density matrices of the metastable states. As such, we expect that, in the thermodynamic limit, \(\hat{\rho}_{1}^{\pm}\) recover the two stable solutions of the semiclassical theory. We show the eigendecomposition in Fig. 5. We indeed find that the semiclassical approximation qualitatively recovers the results of the eigendecomposition. As discussed in Sec. 3.2.1, the semiclassical analysis predicts the presence of a stable vacuum in the whole symmetry-broken region. This analysis is confirmed both by the eigendecomposition in Fig. 5 (the vacuum remains long-lived even far from the transition point) and by the spectral analysis in Fig. 4(b). Considering larger values of \(L\) results in slower timescales. We confirm the scaling towards the thermodynamic limit of the Liouvillian gap \(\lambda_{1}^{(0)}\) in Fig. 6. We consider a point before the transition (red line), at the minimum of the gap (blue line), and after the transition (green line). The same three lines correspond to the vertical lines in Fig. 4(b). In all the cases, after an initial transient, we see an exponential closure of the gap as a function of \(L\). The green curve confirms the vacuum metastability predicted by the semiclassical theory.

Figure 5: Eigendecomposition of \(\hat{\rho}_{1}^{(0)}\) for \(L=15\) and the parameters in Fig. 4. The solid black line describes the result of the full quantum solution. The predictions of the semiclassical solution are the dotted light blue curve (few photon numbers, stable), the dashed red curve (high-photon number, stable), and the dashed black curve (unstable solution). Finally, the results of the eigendecomposition are plotted with a solid blue line and a dashed red line.

Figure 6: Scaling towards the thermodynamic limit for the three vertical lines in Fig. 4(b), demonstrating the presence of a first-order DPT and the emergent stability of the vacuum.
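A compact numerical sketch of this eigendecomposition is given below: the Liouvillian matrix is built in the convention of Eq. (53) (Appendix B), its slowest non-stationary eigenoperator is extracted, and Eq. (40) is applied by splitting it into positive and negative spectral parts. Cutoff and parameters are illustrative placeholders, and for a clean separation one should first restrict to the population symmetry sector as in Appendix B.

```python
import numpy as np

# Sketch of Eq. (40): build the Liouvillian matrix as in Eq. (53), extract the
# slowest non-stationary eigenoperator rho_1, and split it into positive and
# negative spectral parts, rho_1 ~ rho_1^+ - rho_1^-.
Nc = 25
a = np.diag(np.sqrt(np.arange(1, Nc)), 1)       # truncated annihilation operator
ad = a.conj().T
Id = np.eye(Nc)
gamma, U1, U2, eta3, G3 = 1.0, -10.0, 10.0, 1.0, 8.0
H = U1 * ad @ a + 0.5 * U2 * ad @ ad @ a @ a + G3 * (a @ a @ a + ad @ ad @ ad)

def dissipator(c):
    cdc = c.conj().T @ c
    return np.kron(c, c.conj()) - 0.5 * (np.kron(cdc, Id) + np.kron(Id, cdc.T))

L = -1j * (np.kron(H, Id) - np.kron(Id, H.T)) + gamma * dissipator(a) + eta3 * dissipator(a @ a @ a)

vals, vecs = np.linalg.eig(L)
order = np.argsort(-vals.real)                  # steady state first, then slowest decay
rho1 = vecs[:, order[1]].reshape(Nc, Nc)        # unvectorize (row stacking, as in Eq. (53))
rho1 = (rho1 + rho1.conj().T) / 2               # symmetrize; exact within the population sector
w, v = np.linalg.eigh(rho1)
rho_p = v[:, w > 0] @ np.diag(w[w > 0]) @ v[:, w > 0].conj().T
rho_m = -(v[:, w < 0] @ np.diag(w[w < 0]) @ v[:, w < 0].conj().T)
nop = ad @ a
print("n(rho_1^+) =", (np.trace(nop @ rho_p) / np.trace(rho_p)).real)
print("n(rho_1^-) =", (np.trace(nop @ rho_m) / np.trace(rho_m)).real)
```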
### Spontaneous symmetry breaking

The spontaneous symmetry breaking implies that, for strong enough pumping, each of the states \(\ket{\alpha_{i}}\bra{\alpha_{i}}\) in Eq. (38) becomes a steady state of the system, since they are not eigenstates of \(\mathcal{Z}_{3}\)[10]. We confirm this picture in Fig. 4(c), where we show that also \(\lambda_{0}^{(1)}\) becomes zero. As expected, we obtain an identical result for \(\lambda_{0}^{(2)}\) (not shown). This implies that the system's coherent states have become metastable.

## 5 Four-photon Kerr resonator

Here, we consider the four-photon driven-dissipative Kerr resonator, reading \[\partial_{t}\hat{\rho}(t)=-i\left[\hat{H}_{4},\hat{\rho}(t)\right]+\gamma\mathcal{D}[\hat{a}]+\eta_{4}\mathcal{D}[\hat{a}^{4}] \tag{41}\] with \[\begin{split}\hat{H}_{4}=U_{1}\hat{a}^{\dagger}\hat{a}&+\frac{U_{2}}{2}\left(\hat{a}^{\dagger}\right)^{2}\hat{a}^{2}+\frac{U_{3}}{3}\left(\hat{a}^{\dagger}\right)^{3}\hat{a}^{3}\\ &+G_{4}\left[\hat{a}^{4}+\left(\hat{a}^{\dagger}\right)^{4}\right].\end{split} \tag{42}\]

### Strong symmetry and second-order phase transition

We start by considering the strongly symmetric case \(\gamma=0\) with \(U_{1}=0\). For this set of parameters, the semiclassical analysis predicts a second-order phase transition associated with the spontaneous breaking of a \(Z_{4}\) strong symmetry. We analyze it in Fig. 7. We recall that, since the system has a strong \(Z_{4}\) symmetry, the number of Liouvillian sectors is \(4\times 4\), being characterized by the two quantum numbers \(k_{\mathrm{L}}\) and \(k_{\mathrm{R}}\). The 4 sectors with \(k_{\mathrm{L}}=k_{\mathrm{R}}\) describe the evolution of the populations, while the remaining 12 with \(k_{\mathrm{L}}\neq k_{\mathrm{R}}\) describe the evolution of the coherences. First, we consider the re-scaled photon number of the steady state for each of the symmetry sectors \((j,j)\) with \(j\in[0,3]\), and increasing the thermodynamic parameter \(L\). Calling \(\hat{\rho}_{\mathrm{ss}}^{(j)}\propto\hat{\rho}_{0}^{(j,j)}\) the steady state in each symmetry sector, \(\langle\hat{a}^{\dagger}\hat{a}\rangle_{j}=\mathrm{Tr}\!\left[\hat{a}^{\dagger}\hat{a}\hat{\rho}_{\mathrm{ss}}^{j}\right]\) are plotted in Figs. 7(a-d). For low drive amplitudes, the system is in the \(Z_{n}\) symmetric vacuum. Indeed, the states need to respect the strong symmetry condition in Eq. (13), and thus \[\hat{\rho}_{\mathrm{ss}}^{(j)}=\ket{\mathrm{vac}_{j}}\bra{\mathrm{vac}_{j}}=\ket{j}\bra{j}, \tag{43}\] where \(j\) labels the symmetry sector and \(\ket{j}\) is the Fock state with \(j\) photons. For large drive, instead, the system transitions towards \[\hat{\rho}_{\mathrm{ss}}^{(j)}\simeq\ket{\mathcal{K}_{j}}\bra{\mathcal{K}_{j}} \tag{44}\] where the Schrodinger cats \(\ket{\mathcal{K}_{j}}\) are \[\ket{\mathcal{K}_{j}}=\frac{1}{\mathcal{N}}\sum_{n=0}^{3}e^{i\pi jn/2}\ket{\alpha_{n}}, \tag{45}\] where \(\ket{\alpha_{n}}\) are coherent states such that \(\alpha_{n}=e^{i\pi n/2}\alpha\) and \(\mathcal{N}\) is a normalization factor. Increasing the value of \(L\) towards the thermodynamic limit, we observe that the passage between the \(Z_{n}\) vacua and the cat states becomes sharper and sharper, but remains continuous. This analysis corroborates the semiclassical one, and by appropriately taking into account the system's symmetry, we observe a second-order DPT. To further demonstrate that, indeed, the transition is of the second and not of the first order, we plot \(\lambda_{1}^{(j,j)}\) in Figs. 7(e-h), i.e., the Liouvillian gap of the \((j,j)\) symmetry sector. We observe no closure of the Liouvillian gap, indicating that no critical slowing down or hysteresis occurs for the Liouvillian populations. Finally, we plot the smallest Liouvillian eigenvalue \(\lambda_{0}^{(j,j+1)}\) for the sectors \((j,j+1)\) (where \(j+1=0\) if \(j=3\)) in Figs. 7(i-l). These represent the decay rates of coherences between the sectors \(j\) and \(j+1\), and their closure indicates the possibility of retaining everlasting coherences. In this case, we observe that, after the critical point, these eigenvalues progressively become smaller, indicating that the system undergoes a second-order phase transition. We obtain similar results for the other \((j,k)\) sectors with \(j\neq k\) (not shown). This is associated with a spontaneous breaking of the strong \(Z_{4}\) symmetry, because it results in \[\begin{split}\hat{\rho}&\propto\left(\ket{\mathcal{K}_{j}}+\ket{\mathcal{K}_{k}}\right)\left(\bra{\mathcal{K}_{j}}+\bra{\mathcal{K}_{k}}\right),\\ \mathcal{L}\hat{\rho}&=0\quad\text{but}\quad\mathcal{Z}_{4}^{\mathrm{L,R}}\hat{\rho}\neq z_{4}^{\mathrm{L,R}}\hat{\rho}.\end{split} \tag{46}\]

Figure 7: Study of the strongly-symmetric four-photon Kerr resonator, and of the onset of a second-order dissipative phase transition. For different values of the thermodynamic rescaling parameter \(L\): (a-d) photon number in the \((j,j)\) symmetry sector; (e-h) Liouvillian gap in the \((j,j)\) symmetry sector; (i-l) smallest Liouvillian eigenvalue in the \((j,j+1)\) symmetry sector. Parameters: \(\gamma=U_{1}=0\), \(U_{2}=10\eta_{4}\), \(U_{3}=\eta_{4}\).
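A minimal QuTiP sketch of Eqs. (44)-(45) is given below: it builds the four cat states \(\ket{\mathcal{K}_{j}}\) from coherent states with \(\alpha_{n}=e^{i\pi n/2}\alpha\) and verifies that each one is supported on a single photon-number class modulo 4, as required by the strong-symmetry condition. The amplitude \(\alpha\) and the cutoff are arbitrary illustrative choices.

```python
import numpy as np
from qutip import coherent

# Sketch of Eqs. (44)-(45): Z_4 cat states |K_j> built from coherent states
# alpha_n = exp(i*pi*n/2)*alpha.  For each j we print the total population in
# the four photon-number classes modulo 4; only one class is occupied.
Nc, alpha = 80, 3.0
for j in range(4):
    terms = [np.exp(1j * np.pi * j * n / 2) * coherent(Nc, alpha * np.exp(1j * np.pi * n / 2))
             for n in range(4)]
    K = (terms[0] + terms[1] + terms[2] + terms[3]).unit()
    pops = np.abs(K.full().ravel())**2
    print(j, [round(float(pops[k::4].sum()), 6) for k in range(4)])
```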
### Weak symmetry and multistability

We now consider a weakly symmetric case in the presence of detuning \(U_{1}\), and with competing terms giving rise to multistability according to the semiclassical solution. First, in Fig. 8(a), we compare the results of the semiclassical analysis with those of the full quantum simulation. We find that, although the semiclassical solution has three stable solutions, the full quantum simulation is characterized by a single first-order DPT, from the vacuum to the highest-populated manifold. Indeed, if we analyze the Liouvillian gap \(\lambda_{1}^{(0)}\) in Fig. 8(b) we clearly see the closure of the Liouvillian gap associated with a first-order DPT. If, however, we also consider the second eigenvalue \(\lambda_{2}^{(0)}\) as in Fig. 8(c), we see that a second slow timescale emerges. That is, despite the presence of a single phase transition, the dynamics of the population of the system is characterized by _two slow timescales_. We corroborate this phenomenon by analyzing the symmetry sectors responsible for spontaneous symmetry breaking. In Fig. 8(d), we plot \(\lambda_{0}^{(1)}\) showing that, indeed, this phenomenon is accompanied by the breaking of the weak \(Z_{4}\) symmetry. Noticeably, the spontaneous symmetry breaking takes place before the occurrence of the first-order transition. Furthermore, we also observe a second slow timescale for this symmetry sector, i.e., \(\lambda_{1}^{(1)}\) in Fig. 8(e). This slow timescale reflects the fact that there exist multiple symmetry-broken states, and there is a slow rate at which the system switches between them. We observe similar results for the other symmetry sectors (not shown). The picture we derive is one in which, although there are only two truly stable states of the dynamics, either the vacuum or the one at large photon number, there exists a third metastable state to which the system can be initialized. Such a state is characterized by a broken symmetry, but it cannot be reached by quantum fluctuations alone. To further demonstrate this picture, in Fig. 9(a) we use the eigendecomposition to express the eigenoperators associated with the slowest eigenvalues as \[\hat{\rho}_{1}^{(0)}=\hat{\rho}_{1}^{(0),+}-\hat{\rho}_{1}^{(0),-},\quad\hat{\rho}_{2}^{(0)}=\hat{\rho}_{2}^{(0),+}-\hat{\rho}_{2}^{(0),-}. \tag{47}\] As one can see, these metastable density matrices recover the results of the semiclassical analysis, and the region in which there is a closure of these Liouvillian eigenvalues roughly corresponds to the region of multistability according to the semiclassical analysis. Overall, the system displays 9 metastable coherent-like states, approximated by \(|\alpha_{\rm vac}\rangle\), \(|\alpha_{\rm low}e^{i\phi_{j}}\rangle\), and \(|\alpha_{\rm high}e^{i\phi_{j}}\rangle\), with \(|\alpha_{\rm vac}|<|\alpha_{\rm low}|<|\alpha_{\rm high}|\) and \(\phi_{j}=j\pi/2\).

Figure 8: Analysis of the classically multistable system. (a) Photon number as a function of the drive for different values of the thermodynamic scaling parameter \(L\). The black dashed line represents the semiclassical solution. (b) Liouvillian gap and (c) second Liouvillian eigenvalue in the \(k=0\) sector, demonstrating the presence of two slow timescales. (d) Smallest and (e) second smallest Liouvillian eigenvalues in the \(k=1\) sector, demonstrating the presence of SSB and of a slow timescale. Parameters: \(U_{1}=10\gamma\), \(U_{2}=-25\gamma\), \(U_{3}=3\gamma\), \(\eta_{4}=0.1\gamma\).

Figure 9: Eigendecomposition and comparison with the semiclassical solution. (a) Photon number of the full quantum solution (black solid line) compared to the semiclassical solution (black dashed line) and the results of the eigendecomposition [red and blue markers correspond to the red and blue curves in panel (b)]. (b) The two smallest Liouvillian eigenvalues, whose behavior across the transition has been reconstructed using the continuity of the associated eigenoperators. Parameters as in Fig. 8 for \(L=20\).
## 6 Conclusions and outlook

In this work we explored the critical properties of \(n\)-photon driven-dissipative nonlinear quantum resonators. We found that the symmetries of the model, fixed by driving and dissipation, determine the nature of the phase transitions in the steady state. We characterize such criticalities, providing general results for this class of models. We attack the problem using a semiclassical approach valid in a well-defined thermodynamic limit with an infinite number of excitations. In such a limit the state of the system approaches a coherent state and quantum fluctuations are suppressed, leading to a generalized version of the driven-dissipative Gross-Pitaevskii equation. Studying its stationary properties, we formulate and prove a no-go theorem stating that no second-order phase transitions are possible when \(n\) is odd, while, for even \(n\), second-order transitions can take place only for \(n=2\) and \(n=4\). We then perform a full quantum analysis of the three- and four-photon driven Kerr resonators. We find that quantum fluctuations trigger the transition between semiclassical solutions in the thermodynamic limit, validating the results obtained in the semiclassical limit. While the semiclassical approximation has been proved to be reliable for \(n=1,2\), for higher \(n\) there are no strong arguments supporting its validity. Indeed, the systematic inclusion of small quantum fluctuations on top of the mean-field semiclassical solution can be obtained via truncated Wigner methods [62, 63] and Gaussian expansions [49]. This is not the case for \(n>2\), because the drive and \(n\)-photon dissipation can, in principle, introduce non-Gaussian correlations above the coherent-state solution [64]. The emergence of these dissipative phase transitions is understood and characterized within the spectral theory of the Liouvillian, highlighting the role of weak and strong symmetries. These results could also be relevant in the field of quantum technologies and quantum information encoding. Symmetry breaking in second-order DPTs has been demonstrated to be a resource to improve the sensitivity of quantum measurement protocols [34, 38]. Our work proves that this kind of enhancement can only be attained for \(n=2\) or \(n=4\). Furthermore, our results may pose constraints for the exploitation of nonlinear driven resonators for the encoding of bosonic codes. As has been recently proposed [28, 30], detuning and critical phenomena may play a key role in storing quantum information. The metastability of the vacuum may also prove an obstacle to a rapid and reliable initialization of bosonic qubits.
This work paves the way to future intriguing research directions. Among them, we mention the study of the dynamical properties of these systems in connection with quantum trajectory approaches, and the emergence of chaotic behavior in highly nonlinear quantum resonators.

## Acknowledgements

We thank G. Rastelli and L. Gravina for the useful discussions. We acknowledge the help of A. Mercurio in the optimization of the numerical codes. This work was supported by the Swiss National Science Foundation through Project No. 20002_185015, Provincia Autonoma di Trento, and was conducted with the financial support of the EPFL Science Seed Fund 2021 and from PNRR MUR project PE0000023-NQSTI.

## Appendix A Interaction, nonlinearities, and \(n\)-photon drives in superconducting circuits

Let us consider a standard LC resonator characterized by the Hamiltonian: \[\hat{H}_{\mathrm{cav}}=\omega\hat{a}^{\dagger}\hat{a}, \tag{48}\] where \(\omega\) is the resonator frequency. Non-quadratic (i.e., interaction) terms can emerge by considering the action of nonlinear elements. For instance, nonlinearity can be obtained by quantizing the flux in Josephson junction potentials of the form \[E_{J}\cos\left(\frac{\phi}{\phi_{0}}\right)\simeq E_{J}\left(1-\frac{\phi^{2}}{2\phi_{0}^{2}}+\frac{\phi^{4}}{24\phi_{0}^{4}}-\frac{\phi^{6}}{720\phi_{0}^{6}}+\ldots\right), \tag{49}\] where \(\phi\) is the flux coordinate of the circuit at the junction and \(\phi_{0}\) is the magnetic flux quantum. In most implementations, the expansion in Eq. (49) can be stopped at the \(\phi^{4}\) order. If the Josephson junction belongs to a single resonator, by substituting \(\phi/\phi_{0}\propto\hat{a}+\hat{a}^{\dagger}\), and discarding counter-rotating terms, which are out of resonance, one obtains the Kerr resonator Hamiltonian, reading \[\hat{H}_{\mathrm{Kerr}}=\tilde{U}_{1}\hat{a}^{\dagger}\hat{a}+\frac{U_{2}}{2}\left(\hat{a}^{\dagger}\right)^{2}\left(\hat{a}\right)^{2}. \tag{50}\]
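A quick way to see how the quartic term produces Eq. (50) is to compare the number-conserving (i.e., diagonal in the Fock basis) part of \((\hat{a}+\hat{a}^{\dagger})^{4}\) with \(6(\hat{a}^{\dagger})^{2}\hat{a}^{2}+12\hat{a}^{\dagger}\hat{a}+3\). The following minimal numpy sketch (cutoff value arbitrary) verifies this identity away from the truncation edge:

```python
import numpy as np

# Check of the rotating-wave step behind Eq. (50): the number-conserving part
# of (a + a^dag)^4, i.e. its diagonal in the Fock basis, equals
# 6 a^dag^2 a^2 + 12 a^dag a + 3.  Rows near the truncation edge are excluded
# because they are affected by the cutoff.
Nc = 40
a = np.diag(np.sqrt(np.arange(1, Nc)), 1)      # truncated annihilation operator
ad = a.conj().T
x4 = np.linalg.matrix_power(a + ad, 4)
rwa = 6 * (ad @ ad @ a @ a) + 12 * (ad @ a) + 3 * np.eye(Nc)
print(np.max(np.abs(np.diag(x4 - rwa)[:Nc - 4])))   # ~0 up to rounding
```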
In \(n\)-photon driven systems, photons are coherently exchanged between the resonator and a set of external fields, \(n\) at a time. While a single-photon drive (i.e., of the form \(\hat{a}+\hat{a}^{\dagger}\)) can emerge by, e.g., capacitively coupling an incoming wave-guide with the cavity, higher-order drives need to be mediated by nonlinear elements. For instance, such \(n\)-drive terms can be derived from the expansion in Eq. (49) if the Josephson junction is shared by several modes, so that \(\phi=\sum_{k}\phi_{k}\), where \(\phi_{k}\) represents the flux coordinate of each one of the modes. For instance, two-photon drives can be achieved by standard four-wave mixing, rewriting \(\phi=\phi_{a}+\phi_{b}+\phi_{c}\), where \(\phi_{a}\propto\hat{a}+\hat{a}^{\dagger}\) is the field within the resonator, and \(\phi_{b}\propto\hat{b}+\hat{b}^{\dagger}\) and \(\phi_{c}\propto\hat{c}+\hat{c}^{\dagger}\) are auxiliary modes. If the mode \(b\) (\(c\)) is driven, and evolves on a timescale much faster than the typical timescales of the \(a\) mode, one can substitute the operators \(\hat{b}\) (\(\hat{c}\)) with a c-number oscillating at the driving frequency \(\omega_{b}\) (\(\omega_{c}\)) via an adiabatic elimination, reading \(\hat{b}\to be^{i\omega_{b}t}\) (\(\hat{c}\to ce^{i\omega_{c}t}\)). All in all, discarding again out-of-resonance terms, the fourth-order expansion of the potential \(\cos(\phi)\) results in a nonlinear Hamiltonian \(\hat{H}_{\mathrm{NL}}\) for the resonator, reading \[\hat{H}_{\mathrm{NL}}=\hat{H}_{\mathrm{Kerr}}+G_{2}\left[\hat{a}^{2}e^{2i\omega_{p}t}+\left(\hat{a}^{\dagger}\right)^{2}e^{-2i\omega_{p}t}\right]. \tag{51}\] By passing to the frame rotating at the drive frequency, and re-absorbing the contribution to the energy frequency in the term \(U_{1}=\tilde{U}_{1}-\omega_{p}=-\Delta\), the Hamiltonian finally reads \[\hat{H}_{n=2}=U_{1}\hat{a}^{\dagger}\hat{a}+\frac{U_{2}}{2}\left(\hat{a}^{\dagger}\right)^{2}\hat{a}^{2}+G_{2}\left[\hat{a}^{2}+\left(\hat{a}^{\dagger}\right)^{2}\right]. \tag{52}\] Through similar procedures, higher-order expansions of the nonlinear terms (\(k\)-wave mixing with \(k>n\)) can result (in principle) in \(n\)-photon drives. _By including such terms, one needs to include also the corresponding nonlinearities_. As detailed in the main text, such nonlinearities can play a fundamental role in determining the nature of the transition.

## Appendix B An efficient algorithm for block-diagonalizing the Liouvillian in the presence of \(Z_{n}\) symmetries

We introduce here a simple algorithm to block-diagonalize systems displaying a \(Z_{n}\) symmetry. Although we describe it for a weakly symmetric case, its extension to a strong symmetry is straightforward. The Liouvillian admits an abstract definition of its spectrum via Eq. (9). To numerically obtain the eigenvalues and eigenoperators one needs to make the matrix form of \(\mathcal{L}\) explicit. For a finite-dimensional Hilbert space, one can construct such a matrix via \[\mathcal{L} =-i\left(\hat{H}\otimes\hat{\mathds{1}}-\hat{\mathds{1}}\otimes\hat{H}^{\mathrm{T}}\right)\] \[\quad+\sum_{j=1}^{3}\left(\hat{L}_{j}\otimes\hat{L}_{j}^{*}-\frac{\hat{L}_{j}^{\dagger}\hat{L}_{j}\otimes\hat{\mathds{1}}+\hat{\mathds{1}}\otimes\hat{L}_{j}^{\mathrm{T}}\hat{L}_{j}^{*}}{2}\right), \tag{53}\] where \(\hat{L}_{j}^{\mathrm{T}}\) represents the transpose of \(\hat{L}_{j}\). The spectrum of the Liouvillian can then be directly obtained by diagonalizing the matrix representation of \(\mathcal{L}\). For infinite-dimensional spaces (i.e., those of bosonic systems), one needs to introduce a cutoff in the Hilbert space \(N_{c}\). That is, one projects the true infinite-dimensional Hamiltonian and jump operators onto the space spanned by the Fock states \(|n\rangle\) for \(n\in[0,N_{c})\), and assumes that the matrix elements of any operator for \(n\in[N_{c},\infty)\) are zero.
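As a consistency check before describing the algorithm, the following minimal QuTiP sketch (three-photon model of Eqs. (36)-(37); cutoff and parameters are illustrative, and taking \(\hat{U}=e^{i2\pi\hat{a}^{\dagger}\hat{a}/n}\) is an assumed representation of the symmetry) verifies that the weak \(Z_{n}\) symmetry survives the truncation, i.e., that the superoperator \(\mathcal{Z}_{n}\hat{\rho}=\hat{U}\hat{\rho}\hat{U}^{\dagger}\) commutes with the truncated \(\mathcal{L}\):

```python
import numpy as np
from qutip import destroy, liouvillian, sprepost

# Numerical check that the truncated Liouvillian retains the weak Z_n symmetry:
# Z_n[rho] = U rho U^dag, with U = exp(i*2*pi*n_hat/n), should commute with L.
n, Nc = 3, 40
a = destroy(Nc)
gamma, U1, U2, eta, G = 1.0, -10.0, 10.0, 1.0, 5.0
H = U1 * a.dag() * a + 0.5 * U2 * a.dag()**2 * a**2 + G * (a**n + a.dag()**n)
L = liouvillian(H, [np.sqrt(gamma) * a, np.sqrt(eta) * a**n])
U = (1j * 2 * np.pi / n * a.dag() * a).expm()
Z = sprepost(U, U.dag())
print(np.max(np.abs((Z * L - L * Z).full())))   # ~1e-13: [Z_n, L] = 0 survives truncation
```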
Since \([\mathcal{Z}_{n},\mathcal{L}]=0\), all the \(\hat{\rho}_{i}\) are eigenoperators of \(\mathcal{Z}_{n}\). And since \(\mathcal{Z}_{n}\) admits \(n\) different eigenvalues, it is always possible to block-diagonalize the Liouvillian into (at least) \(n\) smaller blocks. Each block \(\mathcal{L}_{i}\) completely describes the physics of one symmetry sector of the full Liouvillian \(\mathcal{L}\). Normally, to put the Liouvillian in its block-diagonal form one would construct the basis of a symmetry sector by determining the eigenoperators \(\hat{\zeta}_{i}\) of \(\mathcal{Z}_{n}\) and project the Liouvillian onto the correct symmetry sector, obtaining the matrix elements \[\mathcal{L}_{i,j}=\mathrm{Tr}\left[\hat{\zeta}_{i}^{\dagger}\left(\mathcal{L}\hat{\zeta}_{j}\right)\right]. \tag{54}\] Even if correct in principle, this process is extremely slow and inefficient, since the Liouvillian is a very sparse and large matrix. Instead of applying Eq. (54), we notice that the Fock basis is already the basis of eigenstates of \(\mathcal{Z}_{n}\), as it follows from Eq. (12). That is, when using Eq. (53), we are already using the correct basis to obtain the block-diagonal form of the Liouvillian; we are simply considering the basis elements in the wrong order. Hence, the Liouvillian is a permutation of rows and columns away from being block diagonal, and the algorithm that we seek is one which efficiently finds the correct permutation matrix \(\mathcal{P}\) which transforms \(\mathcal{L}\) into its block-diagonal form, whenever such a transformation is possible. The main idea is to model the block diagonalization problem as an equivalent graph-theoretic problem.

1. \(\mathcal{L}\) is written as the adjacency matrix of an undirected graph;
2. Each block in the block-diagonal form is a single connected component in the graph; thus, the problem boils down to finding each connected component in the graph.
3. We then use the Breadth-First/Depth-First search algorithm consecutively to obtain the permutation matrices and the indices of the blocks. The time to perform this task (i.e., its computational complexity) is linear in the number of nodes in the graph.
4. We use the permutation matrix to produce each block \(\mathcal{L}_{i}\) such that \(\mathcal{L}=\mathcal{P}\,\mathrm{diag}\{\mathcal{L}_{1}\ldots\mathcal{L}_{n}\}\mathcal{P}^{\mathrm{T}}\).

The key factor in the numerical speedup comes from the fact that obtaining the permutation matrix \(\mathcal{P}\) requires a number of operations linear in the number of nonzero elements of the Liouvillian, which is a very sparse matrix [c.f. Eq. (53)]; a minimal implementation sketch is given below.
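The following scipy sketch illustrates steps 1-4 above on a stand-in sparse matrix (a randomly permuted block matrix, for illustration only); in practice one would pass the Liouvillian built as in Eq. (53).

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

# Minimal sketch of steps 1-4: treat the nonzero pattern of the matrix as the
# adjacency matrix of an undirected graph, label its connected components, and
# reorder the basis by label.
rng = np.random.default_rng(0)
dims = (3, 4, 5)
L_dense = np.zeros((sum(dims), sum(dims)))
start = 0
for d in dims:                                    # three dense blocks on the diagonal
    L_dense[start:start + d, start:start + d] = rng.standard_normal((d, d))
    start += d
scramble = rng.permutation(sum(dims))             # hide the block structure
L_matrix = csr_matrix(L_dense[np.ix_(scramble, scramble)])

graph = abs(L_matrix) + abs(L_matrix).T           # step 1: undirected coupling graph
n_blocks, labels = connected_components(graph, directed=False)   # steps 2-3
perm = np.argsort(labels, kind="stable")          # step 4: permutation to block form
L_block = L_matrix[perm][:, perm]

print("number of blocks:", n_blocks)
print("block sizes:", np.bincount(labels))
```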
2310.04502
Off-shell duality invariance of Schwarzschild perturbation theory
We explore the duality invariance of the Maxwell and linearized Einstein-Hilbert actions on a non-rotating black hole background. On shell these symmetries are electric-magnetic duality and Chandrasekhar duality, respectively. Off shell they lead to conserved quantities; we demonstrate that one of the consequences of these conservation laws is that even- and odd-parity metric perturbations have equal Love numbers. Along the way we derive an action principle for the Fackerell-Ipser equation and Teukolsky-Starobinsky identities in electromagnetism.
Adam R. Solomon
2023-10-06T18:00:11Z
http://arxiv.org/abs/2310.04502v2
# Off-shell duality invariance of Schwarzschild perturbation theory

###### Abstract We explore the duality invariance of the Maxwell and linearized Einstein-Hilbert actions on a non-rotating black hole background. On shell these symmetries are electric-magnetic duality and Chandrasekhar duality, respectively. Off shell they lead to conserved quantities; we demonstrate that one of the consequences of these conservation laws is that even- and odd-parity metric perturbations have equal Love numbers. Along the way we derive an action principle for the Fackerell-Ipser equation and Teukolsky-Starobinsky identities in electromagnetism.

###### Contents
* 1 Introduction
* 2 Schwarzschild background in the \(2+2\) and GHP formalisms
* 2.1 \(2+2\) decomposition
* 2.2 Geroch-Held-Penrose (GHP) formalism
* 3 Massless scalar
* 4 Electromagnetism
* 4.1 Electric-magnetic duality
* 4.2 Maxwell in GHP
* 5 Gravity
* 5.1 Odd sector
* 5.2 Even sector
* 6 Chandrasekhar duality
* 6.1 A complex master variable
* 6.2 Flat-space limit: linearized gravitational duality
* 6.3 Chandrasekhar duality off-shell
* 7 Physical implications: Love numbers
* 7.1 Equality of Love numbers from gravitational duality
* 8 Discussion
* A \(2+2\) Ricci tensor components
* A.1 Odd perturbations
* A.2 Even perturbations

## 1 Introduction

The black holes of nature are the most perfect macroscopic objects there are in the universe: the only elements in their construction are our concepts of space and time. Chandrasekhar [1]

The advent in the past decade of gravitational-wave astronomy and black hole imaging has spurred a renewed observational interest in the foundational and endlessly fascinating black hole solutions of general relativity (GR). The Schwarzschild metric describing non-rotating black holes is in a sense gravity's analog of the hydrogen atom in quantum mechanics: it was the first exact solution of Einstein's equations to be discovered,1 and is still often the first solution taught to students of GR. Footnote 1: The history is remarkable. Einstein published his field equations and an approximate solution accounting for Mercury's observed perihelion advance in November 1915. Schwarzschild read this work while serving on the Russian front, and by December 1915 had obtained his exact solution. Half a year later he died of an autoimmune disease acquired at the front. The humble Schwarzschild metric is, of course, far from sufficient for modelling gravitational-wave events: astrophysical black holes rotate and so are more accurately described by the significantly more complicated Kerr metric, and the two-body problem in general relativity is highly non-linear and requires numerical techniques to solve near the merger. But some progress can be made analytically, particularly during the inspiral and ringdown phases, through a variety of perturbative schemes. Among the simplest is _black hole perturbation theory_, in which the metric is a small perturbation around a black hole background, analogous to the flat-space perturbation theory which is itself an essential topic in introductory GR courses. Black hole perturbation theory, in other words, is a fundamental problem in GR with significant relevance to modern experiments. In this paper we explore some of the symmetries of this theory, particularly the _Chandrasekhar duality_ between even- and odd-parity modes (which arrive at Earth as \(+\) and \(\times\) polarizations), which most famously manifests itself in the fact that the quasinormal mode spectra of both sectors are identical [1, 2].
We will have a particular emphasis on symmetries which hold _off shell_, that is, symmetries of the action rather than just of the equations of motion. Our principal motivation for this is the role played by the action in Noether's theorem; it is also relevant for the quantum theory, e.g., [3, 4, 5, 6]. For linear theories, which we consider in this work, it is always possible to construct an action from the equations of motion, so the distinction between on- and off-shell symmetries may seem somewhat artificial. Nevertheless there are interesting differences, as is illustrated by the classical example of electric-magnetic duality in Maxwell's theory. The electromagnetic field is described by the vector potential \(A=A_{\mu}\mathrm{d}x^{\mu}\). In terms of the field strength \(F=\mathrm{d}A\), the field equations in vacuum are2 Footnote 2: Here \(\star\) is the Hodge star, which in coordinates is \(\star F_{\mu\nu}=\frac{1}{2}\epsilon_{\mu\nu\alpha\beta}F^{\alpha\beta}\). \[\mathrm{d}\star F=0,\qquad\mathrm{d}F=0. \tag{1.1}\] The former is Maxwell's equation, and the latter is the Bianchi identity, which is satisfied for all field configurations since \(\mathrm{d}^{2}=0\). If we perform a _duality transformation_, by sending \(F\to\star F\) and \(\star F\to-F\),3 then the Maxwell equation becomes the Bianchi identity and vice versa, leaving the full set of equations invariant. This is a particular case (\(\theta=-\pi/2\)) of an \(SO(2)\) duality invariance of Maxwell's equations, Footnote 3: In terms of the electric and magnetic fields this is \((E,B)\to(B,-E)\). Note that \(\star^{2}=-1\) on 2-forms in \(3+1\) dimensions. \[\begin{pmatrix}F\\ \star F\end{pmatrix}\to\begin{pmatrix}\cos\theta&-\sin\theta\\ \sin\theta&\cos\theta\end{pmatrix}\begin{pmatrix}F\\ \star F\end{pmatrix}. \tag{1.2}\] Since electric-magnetic duality is a continuous symmetry, Noether's theorem tells us there must be an associated conservation law. To find this, one varies the action under a duality transformation with a spacetime-dependent parameter. However this does _not_ mean simply varying the Maxwell action \(S=\frac{1}{4}\int\mathrm{d}^{4}x\sqrt{-g}F_{\mu\nu}^{2}\) and setting \(\delta F_{\mu\nu}=\epsilon(x)\star F_{\mu\nu}\), because \(A_{\mu}\) rather than \(F_{\mu\nu}\) is the dynamical variable which we vary in the action to obtain Maxwell's equations. The Noether procedure requires us to vary \(A\) by a functional \(\delta A[A]\) implementing the duality symmetry, but it is impossible to construct a \(\delta A[A]\) such that \(\mathrm{d}\delta A=\star F\). If there were, we could take an exterior derivative to find \(\mathrm{d}\star F=0\), i.e., Maxwell's equation for \(A\), which is precisely what we do not want to assume.4 The best we can do is construct a symmetry operator \(\delta A[A]\) which is only a duality transformation (in the sense that \(\mathrm{d}\delta A=\star\mathrm{d}A\)) on shell; the full expression contains additional terms which vanish when the Maxwell equations are satisfied [7, 8, 9]. Footnote 4: Equivalently, note that the Maxwell Lagrangian \(E^{2}-B^{2}\) naively does not appear to be invariant under rotations of \(E\) and \(B\), which are indeed a symmetry of the action, but does appear to be invariant under _hyperbolic_\((E,B)\) rotations, which are _not_ a genuine symmetry. Interestingly the off-shell duality transformation is typically _non-local_. 
To see this we note that we could flip the roles of the Maxwell equation \(\mathrm{d}\star F=0\) and the Bianchi identity \(\mathrm{d}F=0\) by taking the former to define a potential, \(\star F=\mathrm{d}\tilde{A}\), and the latter to be the field equation for this "dual potential" \(\tilde{A}_{\mu}\). This dual potential is precisely the symmetry transformation, \[\delta A[A]=\tilde{A}, \tag{1.3}\] where \(\tilde{A}\) is a solution to the first-order equation \(\mathrm{d}\tilde{A}=\star\mathrm{d}A\). Since solving this equation requires integration, in general \(\tilde{A}\) will depend non-locally on \(A\). For instance, in a gauge where \(\delta A_{0}=0\), the off-shell duality transformation of \(A_{i}\) is [8] \[\delta A^{i}=\nabla^{-2}(\epsilon^{ijk}\partial_{j}F_{0k}), \tag{1.4}\] with \(\nabla^{-2}\) the inverse spatial Laplacian. This is a genuine symmetry of the Maxwell action, which can be used to derive conserved quantities, and which coincides with duality transformations \(\delta F=\star F\) on shell, i.e., when the Maxwell equations are satisfied. The goal of this work is to discuss a similar story for the Chandrasekhar duality in black hole perturbation theory. Along the way we will investigate the dynamics of scalar, electromagnetic, and gravitational fields on the Schwarzschild background in two covariant languages designed to exploit its symmetries, the \(2+2\) and Geroch-Held-Penrose (GHP) formalisms. These approaches are complementary: the \(2+2\) formulation is more intuitive but specifically adapted to a non-rotating black hole, while GHP generalizes straightforwardly to the full Kerr solution and is in a sense "more fundamental" in that it is based on the algebraically-special structure of black hole spacetimes. We will further see that objects arising naturally when studying dynamics in the \(2+2\) formulation have simple interpretations in GHP language. The rest of this paper is organized as follows. In section 2 we review the Schwarzschild solution and introduce the \(2+2\) and GHP formalisms. We study the dynamics of a massless scalar field on Schwarzschild in section 3, the electromagnetic field in section 4, and linearized gravity in section 5. In section 6 we discuss the off-shell Chandrasekhar duality and in section 7 explore its physical consequences for tidal Love numbers, before concluding in section 8. **Conventions:** We work with vacuum general relativity in \(3+1\) spacetime dimensions with metric signature \((-,+,+,+)\) and choose an orientation such that \(\epsilon_{0123}=\sqrt{-g}\). We will use Greek letters \(\mu,\nu,\cdots\) for four-dimensional spacetime indices, lower-case Latin letters \(a,b,...\) for the \((t,r)\) subspace \(\mathcal{M}_{2}\), and upper-case letters \(A,B,...\) for the 2-sphere \(S^{2}\). ## 2 Schwarzschild background in the \(2+2\) and GHP formalisms The black hole solutions in vacuum four-dimensional general relativity are highly symmetrical. In this section we will review the Schwarzschild metric, on which we will place various field theories, in two formalisms designed to exploit these symmetries in a coordinate-independent manner. The first is the \(2+2\) formalism, which treats objects covariantly on the two-sphere and on the \((t,r)\) plane. The second is the GHP formalism, which takes advantage of the algebraically-special (type D) structure of black hole spacetimes in general relativity. 
The Schwarzschild metric in Boyer-Lindquist (or Schwarzschild) coordinates is \[g_{\mu\nu}\mathrm{d}x^{\mu}\mathrm{d}x^{\nu}=-f(r)\mathrm{d}t^{2}+\frac{1}{f(r )}\mathrm{d}r^{2}+r^{2}\underbrace{\left(\mathrm{d}\theta^{2}+\sin^{2}\theta \mathrm{d}\phi^{2}\right)}_{\mathrm{d}\Omega_{S^{2}}^{2}},\qquad f(r)\equiv 1- \frac{r_{\mathrm{s}}}{r} \tag{2.1}\] with \(r_{\mathrm{s}}=2GM\) the Schwarzschild radius and \(\mathrm{d}\Omega_{S^{2}}^{2}\) the line element on the unit 2-sphere. As we will see, kinetic terms for fields on a Schwarzschild background are often more conveniently phrased in terms of a "tortoise coordinate" \(r_{\star}\) defined by \[\mathrm{d}r_{\star}=\frac{\mathrm{d}r}{f(r)}. \tag{2.2}\] The horizon \(r=r_{\mathrm{s}}\) is located at \(r_{\star}=-\infty\) and spatial infinity \(r=\infty\) at \(r_{\star}=\infty\). ### \(2+2\) decomposition The Schwarzschild spacetime factorizes naturally into two submanifolds: the \((t,r)\) plane \(\mathcal{M}_{2}\) and the 2-sphere \(S^{2}\). This is the basis of the \(2+2\) decomposition [10, 11, 12]. Let us write the four-dimensional coordinates as \(x^{\mu}=(x^{a},\theta^{A})\), where lower-case Latin letters \(a,b,...\) run over \((t,r)\) and upper-case letters \(A,B,...\) run over \((\theta,\phi)\). The metric factorizes into \[g_{\mu\nu}\mathrm{d}x^{\mu}\mathrm{d}x^{\nu}=g_{ab}\mathrm{d}x^{a}\mathrm{d}x ^{b}+r^{2}\Omega_{AB}\mathrm{d}\theta^{A}\mathrm{d}\theta^{B}, \tag{2.3}\] with \[g_{ab}=\begin{pmatrix}-f&0\\ 0&\frac{1}{f}\end{pmatrix},\qquad\Omega_{AB}=\begin{pmatrix}1&0\\ 0&\sin^{2}\theta\end{pmatrix}. \tag{2.4}\] To avoid a clutter of notation, we will use \(\nabla_{\mu}\), \(\nabla_{a}\), and \(D_{A}\) for the covariant derivatives with respect to \(g_{\mu\nu}\), \(g_{ab}\), and \(\Omega_{AB}\), respectively, and raise and lower indices with these metrics. We also use the same symbol for \(g_{\mu\nu}\) and \(g_{ab}\); which one is meant should be clear from context.5 Footnote 5: In particular, \(\sqrt{-g}\) represents the square root of the determinant of \(g_{\mu\nu}\) in \(\int\mathrm{d}^{4}x\sqrt{-g}\) and of \(g_{ab}\) in \(\int\mathrm{d}^{2}x\sqrt{-g}\). The \(r\) appearing in eq. (2.3) is a spacetime scalar on \(\mathcal{M}_{2}\) and need not be aligned with one of the coordinate directions, though it is in Boyer-Lindquist coordinates. It and the 2-metric \(g_{ab}\) obey the background Einstein equations, \[rR=2\Box r,\qquad\nabla^{a}(r\nabla_{a}r)=r\Box r+(\partial r)^{2}=1,\qquad \nabla_{a}\nabla_{b}r=\frac{1}{2}\Box rg_{ab}, \tag{2.5}\] where \(\Box=g^{ab}\nabla_{a}\nabla_{b}\) and \((\partial r)^{2}=g^{ab}\partial_{a}r\partial_{b}r\). In coordinates, the Ricci scalar and the norm of \(\partial_{a}r\) are \[R=\frac{2r_{\mathrm{s}}}{r^{3}},\qquad(\partial r)^{2}=f. \tag{2.6}\] Note in particular that the latter of these allows us to use \(f(r)\) in coordinate-invariant expressions. We will find it convenient at times to use the shorthand \[r_{a}=\partial_{a}r. \tag{2.7}\] As a consequence of its high degree of symmetry, equations of motion on the Schwarzschild background admit fully separable solutions [13]. For a field of integer spin \(s\), the general solution for the field variable or an observable constructed from it can be written in the schematic form (e.g., omitting indices) \[\phi(x^{\mu})=\sum_{\ell=|s|}^{\infty}\sum_{m=-\ell}^{\ell}\underbrace{\int \mathrm{d}\omega e^{-i\omega t}R_{\ell\omega}(r)}_{\phi_{\ell m}(x^{a})} \underbrace{\Theta_{\ell m}(\theta)e^{im\phi}}_{S_{\ell m}(\theta^{A})}. 
\tag{2.8}\] A further consequence of symmetry is that the radial and angular functions \(R_{\ell\omega}(r)\) and \(\Theta_{\ell m}(\theta)\) obey remarkably similar equations. The main difference is that the periodic boundary conditions on the angular coordinates constrain \(S_{\ell m}(\theta,\phi)\) to the class of spherical harmonic functions, which are eigenfunctions of the Laplacian on \(S^{2}\), while \(R_{\ell\omega}(r)\) obeys a Schrodinger-like equation (typically in terms of the tortoise coordinate \(r_{\star}\) rather than \(r\)). The spherical harmonics can be categorized by their transformation properties under rotations. In four dimensions, there are two such classes: scalars and vectors.6 The scalar harmonics are the familiar spherical harmonics, Footnote 6: Degrees of freedom transforming under the tensor representation are non-dynamical in \(D=4\) but are present in higher dimensions. \[S_{\ell m}=Y_{\ell m}(\theta,\phi)\propto P_{\ell}^{m}(\cos\theta)e^{im\phi}, \tag{2.9}\] with \(P_{\ell}^{m}(x)\) the associated Legendre polynomials. The vector harmonics decompose into longitudinal and transverse, or electric and magnetic, pieces, which are related to the scalar harmonics by \[E_{A,\ell m} =D_{A}Y_{\ell m}, \tag{2.10a}\] \[B_{A,\ell m} =-\epsilon_{AB}D^{B}Y_{\ell m}, \tag{2.10b}\] with \(\epsilon_{AB}\) the Levi-Civita tensor on the 2-sphere, \(\epsilon_{\theta\phi}=\sin\theta\). In coordinates these are \[E_{A,\ell m}\mathrm{d}\theta^{A} =\partial_{\theta}Y_{\ell m}\mathrm{d}\theta+\partial_{\phi}Y_{ \ell m}\mathrm{d}\phi, \tag{2.11a}\] \[B_{A,\ell m}\mathrm{d}\theta^{A} =-\csc\theta\partial_{\phi}Y_{\ell m}\mathrm{d}\theta+\sin\theta \partial_{\theta}Y_{\ell m}\mathrm{d}\phi. \tag{2.11b}\] The scalar harmonics obey the Laplace equation on the 2-sphere with eigenvalue \(-\ell(\ell+1)\), \[D^{2}Y_{\ell m} =\frac{1}{\sqrt{\Omega}}\partial_{A}\left(\sqrt{\Omega}\Omega^ {AB}\partial_{B}Y_{\ell m}\right)\] \[=-\ell(\ell+1)Y_{\ell m}, \tag{2.12}\] where \(\Omega\equiv\det(\Omega_{AB})=\sin^{2}\theta\), while the vector harmonics \(V_{A}=(E_{A},B_{A})\) are eigenfunctions with eigenvalue \(1-\ell(\ell+1)\), \[D^{2}V_{A}^{\ell m}=-\left[\ell(\ell+1)-1\right]V_{A}^{\ell m}. \tag{2.13}\] The spacetime integration measure appearing in a four-dimensional action contains the 2-sphere integration measure \(\mathrm{d}\Omega\),7 Footnote 7: We remind the reader that in our notation, \(\int\mathrm{d}^{4}x\sqrt{-g}=\int\mathrm{d}^{4}x\sqrt{-\det g_{\mu\nu}}\) while \(\int\mathrm{d}^{2}x\sqrt{-g}=\int\mathrm{d}^{2}x\sqrt{-\det g_{ab}}\). \[\int\mathrm{d}^{4}x\sqrt{-g}=\int\mathrm{d}^{2}x\sqrt{-g}r^{2}\mathrm{d}\Omega,\qquad\int\mathrm{d}\Omega\equiv\int_{\theta=0}^{\pi}\int_{\phi=0}^{2\pi} \sin\theta\mathrm{d}\theta\mathrm{d}\phi. \tag{2.14}\] We will be able to integrate over \(S^{2}\) in actions on Schwarzschild using the orthonormality relations of the spherical harmonics, \[\int\mathrm{d}\Omega Y_{\ell m}Y_{\ell^{\prime}m^{\prime}} =\delta_{\ell\ell^{\prime}}\delta_{mm^{\prime}}, \tag{2.15a}\] \[\int\mathrm{d}\Omega V_{A,\ell m}V_{\ell^{\prime}m^{\prime}}^{A} =\ell(\ell+1)\delta_{\ell\ell^{\prime}}\delta_{mm^{\prime}},\] (2.15b) \[\int\mathrm{d}\Omega E_{A,\ell m}B_{\ell^{\prime}m^{\prime}}^{A} =0. \tag{2.15c}\] ### Geroch-Held-Penrose (GHP) formalism In this subsection we describe an alternative formalism for leveraging the symmetry of black hole backgrounds: the Geroch-Held-Penrose (GHP) formalism, which is itself built on the famous Newman-Penrose (NP) approach. 
While this approach is somewhat more arcane than the \(2+2\) formalism,8 it more directly makes use of the fundamental property underpinning the "magic" of the Schwarzschild and Kerr spacetimes, namely the fact that they are _algebraically special_. Footnote 8: Due at least in part to its heavy use of Icelandic runes. #### 2.2.1 Newman-Penrose Recall that the Weyl tensor \(C_{\mu\nu\alpha\beta}\) of a generic spacetime has four principal null directions;9 algebraically-special spacetimes are those where one or more of the four are degenerate. The Kerr black hole is of algebraic _type D_, with two singly-degenerate principal null directions. These special vectors, \(l^{\mu}\) and \(n^{\mu}\), point along outgoing and ingoing null rays, respectively. In the Schwarzschild case they live on \(\mathcal{M}_{2}\), Footnote 9: Principal null directions are null vectors \(l^{\mu}\) satisfying \(l^{\nu}l_{[\rho}C_{\mu]\nu\alpha[\beta}l_{\sigma]}l^{\alpha}=0\)[14]. \[l_{\mu}\mathrm{d}x^{\mu}=l_{a}\mathrm{d}x^{a},\quad n_{\mu}\mathrm{d}x^{\mu}= n_{a}\mathrm{d}x^{a}, \tag{2.16}\] and in fact can be thought of as zweibeins for the 2-metric, \[g_{ab}=-l_{a}n_{b}-n_{a}l_{b}. \tag{2.17}\] To complete the picture, we include null vectors parametrizing \(S^{2}\): a complex vector \(m^{\mu}\) and its complex conjugate \(\bar{m}^{\mu}\), with \(m_{\mu}\mathrm{d}x^{\mu}=m_{A}\mathrm{d}\theta^{A}\). These four vectors together comprise a complex null tetrad \(e^{\mathbf{a}}_{\mu}=(l_{\mu},n_{\mu},m_{\mu},\bar{m}_{\mu})\), in the sense that10 Footnote 10: This is the usual vielbein relation \(g_{\mu\nu}=\eta_{\mathbf{a}\mathbf{b}}e^{\mathbf{a}}_{\mu}e^{\mathbf{b}}_{\nu}\) with the internal Minkowski metric written in the form \[\eta_{\mathbf{a}\mathbf{b}}=\begin{pmatrix}0&-1&0&0\\ -1&0&0&0\\ 0&0&0&1\\ 0&0&1&0\end{pmatrix}.\] Here bold lowercase Latin letters represent 4D internal Lorentz indices. \[g_{\mu\nu}=-2l_{(\mu}n_{\nu)}+2m_{(\mu}\bar{m}_{\nu)}. \tag{2.18}\] The vielbeins are normalized so that all of their inner products vanish except for \[l_{\mu}n^{\mu}=-1,\quad m_{\mu}\bar{m}^{\mu}=1. \tag{2.19}\] This setup does not completely fix \((l^{\mu},n^{\mu},m^{\mu},\bar{m}^{\mu})\), as there is some residual Lorentz invariance. Insisting that \(\ell^{\mu}\) and \(n^{\mu}\) remain principal null directions leaves a two-parameter symmetry comprising boosts of \(l\) and \(n\), \[l^{\mu}\to\alpha l^{\mu},\quad n^{\mu}\to\alpha^{-1}n^{\mu}, \tag{2.20}\] and rotations of \(m\) and \(\bar{m}\), \[m^{\mu}\to e^{i\beta}m^{\mu},\quad\bar{m}^{\mu}\to e^{i\beta}\bar{m}^{\mu}, \tag{2.21}\] with \(\alpha\) and \(\beta\) real functions. We will choose the _Carter tetrad_[15], \[l_{\mu}\mathrm{d}x^{\mu} =\frac{1}{\sqrt{2}}\left(-\sqrt{f}\mathrm{d}t+\frac{1}{\sqrt{f}} \mathrm{d}r\right), \tag{2.22a}\] \[n_{\mu}\mathrm{d}x^{\mu} =\frac{1}{\sqrt{2}}\left(-\sqrt{f}\mathrm{d}t-\frac{1}{\sqrt{f}} \mathrm{d}r\right),\] (2.22b) \[m_{\mu}\mathrm{d}x^{\mu} =\frac{r}{\sqrt{2}}\left(\mathrm{d}\theta+i\sin\theta\mathrm{d} \phi\right),\] (2.22c) \[\bar{m}_{\mu}\mathrm{d}x^{\mu} =\frac{r}{\sqrt{2}}\left(\mathrm{d}\theta-i\sin\theta\mathrm{d} \phi\right). \tag{2.22d}\] The frequently-used Kinnersley tetrad [16] is related by a rescaling (2.20) with \(\alpha=\sqrt{f/2}\). The Carter tetrad is particularly useful for our purposes as it maintains symmetries of the background which can be obscured in other bases [17]. In the Newman-Penrose formalism one works with spacetime scalars obtained by projection along the null directions. 
For instance the Weyl tensor \(C_{\mu\nu\alpha\beta}\) is efficiently encoded in five complex Weyl scalars, which are the "components" of the Weyl tensor in the complex null basis, \[\Psi_{0}=C_{lmlm},\quad\Psi_{1}=C_{lnlm},\quad\Psi_{2}=C_{lm\bar{m}n},\quad \Psi_{3}=C_{ln\bar{m}n},\quad\Psi_{4}=C_{n\bar{m}n\bar{m}}, \tag{2.23}\] where \(C_{lm\bar{m}n}=C_{\mu\nu\alpha\beta}l^{\mu}m^{\nu}\bar{m}^{\alpha}n^{\beta}\) and so on. (In general we will use the notation \(V_{\mu}l^{\mu}=V_{l}\), etc.) For type-D spacetimes the only non-vanishing Weyl scalar is \(\Psi_{2}\), providing a remarkably compact characterization of the full Riemann tensor. In the Schwarzschild case, the value of \(\Psi_{2}\) in coordinates is11 Footnote 11: The resemblance to the Ricci scalar on \(\mathcal{M}_{2}\), cf. eq. (2.6), is not accidental. Using \(R_{aAbB}=-r\nabla_{a}\nabla_{b}r\Omega_{AB}\)[12], we find \(\Psi_{2}=R_{aABb}l^{a}n^{b}m^{A}\bar{m}^{B}=\frac{1}{r}\nabla_{a}\nabla_{b}rl ^{a}n^{b}=-\frac{1}{4}R\). \[\Psi_{2}=-\frac{r_{\mathrm{s}}}{2r^{3}}. \tag{2.24}\] #### 2.2.2 Geroch-Held-Penrose The GHP formalism soups up the NP formalism by working only with quantities and operators which have simple transformation properties under the residual Lorentz invariance (2.20)-(2.21). Defining \(\lambda^{2}=\alpha e^{i\beta}\), we will insist on working with tensors \(\Phi\) that transform under eqs. (2.20) and (2.21) as \[\Phi\to\lambda^{p}\bar{\lambda}^{q}\Phi. \tag{2.25}\] Such a quantity is said to have _GHP type_\(\{p,q\}\). They are also called spin- and/or boost-weighted, where the spin weight is \(s=(p-q)/2\) and the boost weight is \(b=(p+q)/2\). The residual Lorentz transformations (2.20)-(2.21) do not exhaust the symmetry in choosing a tetrad, which is invariant under several discrete tetrad interchanges: _complex conjugation_, which swaps \(m^{\mu}\) and \(\bar{m}^{\mu}\); the _prime_ (\({}^{\prime}\)) operation, which interchanges both \(l\leftrightarrow n\) and \(m\leftrightarrow\bar{m}\); and, less obviously, the _star_ (\(\star\)) operation, \((l,n,m,\bar{m})\rightarrow(m,-\bar{m},-l,n)\), which we will not use. These discrete invariances allow for a particularly economical description of field equations, since one equation implies its prime, conjugate, and prime conjugate versions. Scalars with well-defined GHP type include the Weyl scalars, which inherit their GHP types from the various factors of \(l^{\mu}\), etc., in their definitions (2.23),12 as well as the spin coefficient \(\rho\),13 Footnote 12: Tensors like \(C_{\mu\nu\alpha\beta}\) are _a priori_ unweighted. Footnote 13: And by extension \(\rho^{\prime}\), \(\bar{\rho}\), and \(\bar{\rho}^{\prime}\), although for Schwarzschild \(\rho\) and \(\rho^{\prime}\) are real. \[\rho=-\bar{m}^{\mu}m^{\nu}\nabla_{\mu}l_{\nu}, \tag{2.26}\] which is of GHP type \(\{1,1\}\). Examples of scalars _without_ a well-defined GHP type include the spin coefficients \(\beta\) and \(\epsilon\) (and their primes and conjugates), \[\beta =\frac{1}{2}\left(m^{\mu}\bar{m}^{\nu}\nabla_{\mu}m_{\nu}-m^{\mu} n^{\nu}\nabla_{\mu}l_{\nu}\right), \tag{2.27}\] \[\epsilon =\frac{1}{2}\left(l^{\mu}\bar{m}^{\nu}\nabla_{\mu}m_{\nu}-l^{ \mu}n^{\nu}\nabla_{\mu}l_{\nu}\right). \tag{2.28}\] These are the only non-zero spin coefficients for Schwarzschild and completely describe the spin connection. 
In the Carter tetrad they take the coordinate values \[\rho=-\rho^{\prime}=-\frac{\sqrt{f}}{\sqrt{2}r},\quad\beta=\beta^{\prime}=\frac{\cot\theta}{2\sqrt{2}r},\quad\epsilon=-\epsilon^{\prime}=\frac{r_{\rm s}}{4\sqrt{2}\sqrt{f}r^{2}}. \tag{2.29}\] Analogously to the non-coordinate-invariant Christoffel symbols, \(\beta\) and \(\epsilon\) can be used to construct covariant derivative operators with well-defined GHP type. Unfortunately, the use of Icelandic runes for these operators is firmly embedded in the literature: \[\mathtt{p}=l^{\mu}\nabla_{\mu}-p\epsilon-q\bar{\epsilon},\qquad\eth=m^{\mu}\nabla_{\mu}-p\beta+q\bar{\beta}^{\prime}, \tag{2.30}\] \[\mathtt{p}^{\prime}=n^{\mu}\nabla_{\mu}+p\epsilon^{\prime}+q\bar{\epsilon}^{\prime},\qquad\eth^{\prime}=\bar{m}^{\mu}\nabla_{\mu}+p\beta^{\prime}-q\bar{\beta}.\] The operator \(\mathtt{p}\) sends a GHP type \(\{p,q\}\) object to one with type \(\{p+1,q+1\}\), \(\mathtt{p}^{\prime}\) to \(\{p-1,q-1\}\), \(\eth\) to \(\{p+1,q-1\}\), and \(\eth^{\prime}\) to \(\{p-1,q+1\}\). Note that \(\mathtt{p}\) and \(\mathtt{p}^{\prime}\) raise and lower the boost weight, while \(\eth\) and \(\eth^{\prime}\) raise and lower the spin weight. For the Carter tetrad in Schwarzschild, the GHP derivatives take the coordinate form [17] \[\mathtt{p}=\frac{1}{\sqrt{2f}}\left(\partial_{t}+f\partial_{r}-\frac{br_{\rm s}}{2r^{2}}\right),\qquad\eth=\frac{1}{\sqrt{2}r}\left(\partial_{\theta}+i\csc\theta\partial_{\phi}-s\cot\theta\right), \tag{2.31}\] \[\mathtt{p}^{\prime}=\frac{1}{\sqrt{2f}}\left(\partial_{t}-f\partial_{r}-\frac{br_{\rm s}}{2r^{2}}\right),\qquad\eth^{\prime}=\frac{1}{\sqrt{2}r}\left(\partial_{\theta}-i\csc\theta\partial_{\phi}+s\cot\theta\right).\] Note also that these derivatives have non-trivial commutators, \[[\mathtt{p},\mathtt{p}^{\prime}]=-2b\Psi_{2},\quad[\mathtt{p},\eth]=\rho\eth,\quad[\eth,\eth^{\prime}]=2s(\Psi_{2}+\rho\rho^{\prime})=-\frac{s}{r^{2}}, \tag{2.32}\] along with their primes and complex conjugates. In this language, the scalar spherical harmonics are eigenfunctions of \(\eth\eth^{\prime}\), \[\eth\eth^{\prime}Y=-\frac{\ell(\ell+1)}{2r^{2}}Y, \tag{2.33}\] and are a special case of the _spin-weighted_ spherical harmonics, \[\frac{1}{2}\left(\eth^{\prime}\eth+\eth\eth^{\prime}\right)Y_{s}=-\frac{\ell(\ell+1)-s^{2}}{2r^{2}}Y_{s}, \tag{2.34}\] which can be obtained from the scalar harmonics by raising and lowering the spin weight with \(\eth\) and \(\eth^{\prime}\), \[\eth Y_{s}=-\frac{\sqrt{\ell(\ell+1)-s(s+1)}}{\sqrt{2}r}Y_{s+1},\quad\eth^{\prime}Y_{s}=\frac{\sqrt{\ell(\ell+1)-s(s-1)}}{\sqrt{2}r}Y_{s-1}. \tag{2.35}\] The \(|s|=1\) spin-weighted harmonics are related to the vector harmonics by [17] \[E_{A}=\frac{\sqrt{\ell(\ell+1)}}{2}\left(Y_{-1}\tilde{m}_{A}-Y_{1}\bar{\tilde{m}}_{A}\right), \tag{2.36a}\] \[B_{A}=-i\frac{\sqrt{\ell(\ell+1)}}{2}\left(Y_{-1}\tilde{m}_{A}+Y_{1}\bar{\tilde{m}}_{A}\right), \tag{2.36b}\] where \(\tilde{m}_{A}\mathrm{d}\theta^{A}=\mathrm{d}\theta+i\sin\theta\mathrm{d}\phi\).

## 3 Massless scalar

We want to compute the action for linearized gravity on Schwarzschild, performing separation of variables and utilizing the \(2+2\) decomposition. Many of the basic steps of the computation are present in the simpler cases of a scalar and vector field, so we will work our way up to gravity one integer step in spin at a time.
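Throughout the following sections the angular algebra reduces to the eigenvalue relations quoted in section 2.1; as a quick sanity check, the following minimal sympy sketch verifies the scalar relation (2.12) for a representative mode (the choice \(\ell=2\), \(m=1\) is arbitrary).

```python
import sympy as sp

# Minimal check of Eq. (2.12): D^2 Y_{lm} = -l(l+1) Y_{lm} on the unit 2-sphere,
# for the representative mode (l, m) = (2, 1).
theta, phi = sp.symbols('theta phi', positive=True)
l, m = 2, 1
Y = sp.Ynm(l, m, theta, phi).expand(func=True)

lap = (sp.diff(sp.sin(theta) * sp.diff(Y, theta), theta) / sp.sin(theta)
       + sp.diff(Y, phi, 2) / sp.sin(theta)**2)
residual = lap + l * (l + 1) * Y

print(sp.simplify(residual))                                   # expected to simplify to 0
print(complex(residual.subs({theta: 0.7, phi: 0.3}).evalf()))  # ~0 numerically
```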
The action for a massless scalar is14 Footnote 14: We remind the reader that to avoid a clutter of notation we are using \(\sqrt{-g}\) for both \(\sqrt{-\det g_{\mu\nu}}\) and \(\sqrt{-\det g_{ab}}\), with the meaning clear depending whether we are integrating over \(\mathrm{d}^{4}x\) or \(\mathrm{d}^{2}x\). Note also that in Boyer-Lindquist coordinates, \(\sqrt{-\det g_{ab}}=1\). \[S=-\frac{1}{2}\int\mathrm{d}^{4}x\sqrt{-g}(\partial_{\mu}\phi)^{2}. \tag{3.1}\] The field \(\phi\) admits a spherical harmonic expansion of the form (2.8), \[\phi(x^{\mu})=\sum_{\ell=0}^{\infty}\sum_{m=-\ell}^{\ell}\phi_{\ell m}(x^{a}) Y_{\ell m}(\theta^{A}). \tag{3.2}\] Inserting this into eq. (3.1) and integrating over \(S^{2}\) we find a sum over actions for each \((\ell,m)\) mode, \[S =-\frac{1}{2}\int\mathrm{d}^{2}x\sqrt{-g}r^{2}\int\mathrm{d}\Omega \left[(\partial_{a}\phi)^{2}+r^{-2}(\partial_{A}\phi)^{2}\right]\] \[=-\frac{1}{2}\int\mathrm{d}^{2}x\sqrt{-g}\sum_{\ell\ell^{\prime} mm^{\prime}}\int\mathrm{d}\Omega\left[r^{2}\partial_{a}\phi_{\ell m} \partial^{a}\phi_{\ell^{\prime}m^{\prime}}+\ell(\ell+1)\phi_{\ell m}\phi_{ \ell^{\prime}m^{\prime}}\right]Y_{\ell m}Y_{\ell^{\prime}m^{\prime}}\] \[=-\frac{1}{2}\int\mathrm{d}^{2}x\sqrt{-g}\sum_{\ell,m}\left[r^{2 }(\partial\phi_{\ell m})^{2}+\ell(\ell+1)\phi_{\ell m}^{2}\right]\] \[\equiv\sum_{\ell,m}S_{\ell m}. \tag{3.3}\] To simplify notation, we will drop the \(\ell m\) subscripts and focus on an individual mode, with the summation over all modes implied. This is kosher because in linear theories modes of different \((\ell,m)\) decouple. The \(2D\) field \(\phi\) is not canonically normalized, as its kinetic term is multiplied by a factor of \(r^{2}\). We can remove this with a field redefinition [12, 18], \[\psi\equiv r\phi, \tag{3.4}\] in terms of which the action is \[\boxed{S=\int\mathrm{d}^{2}x\sqrt{-g}\left[-\frac{1}{2}(\partial \psi)^{2}-\frac{1}{2r^{2}}\left(\ell(\ell+1)+\frac{r_{\mathrm{s}}}{r}\right) \psi^{2}\right].} \tag{3.5}\] We identify the usual scalar potential on a Schwarzschild background [2], \[V(r)=\frac{\ell(\ell+1)}{r^{2}}+\frac{r_{\mathrm{s}}}{r^{3}}. \tag{3.6}\] If we drop our insistence on covariance and write the action in terms of the coordinates \((t,r)\), \[S=\int\mathrm{d}t\mathrm{d}r\left(\frac{1}{2}f^{-1}(\partial_{t} \psi)^{2}-\frac{1}{2}f(\partial_{r}\psi)^{2}-\frac{1}{2}V(r)\psi^{2}\right), \tag{3.7}\] we find that the kinetic and gradient terms again have nonstandard factors in front. To canonically normalize we transform to the tortoise coordinate \(\mathrm{d}r=f\mathrm{d}r_{\star}\)[18], \[S=\int\mathrm{d}t\mathrm{d}r_{\star}\left(\frac{1}{2}(\partial_{t} \psi)^{2}-\frac{1}{2}(\partial_{r_{\star}}\psi)^{2}-\frac{1}{2}V(r)\psi^{2} \right). \tag{3.8}\] For completeness let us write the action (3.1) in GHP language. Writing the metric in terms of the null vectors, cf. eq. (2.18), we have \[S =-\frac{1}{2}\int\mathrm{d}^{4}x\sqrt{-g}g^{\mu\nu}\partial_{\mu} \phi\partial_{\nu}\phi\] \[=\int\mathrm{d}^{4}x\sqrt{-g}\left(l^{\mu}n^{\nu}-m^{\mu}\bar{m}^ {\nu}\right)\partial_{\mu}\phi\partial_{\nu}\phi\] \[=\int\mathrm{d}^{4}x\sqrt{-g}\left(\mathrm{p}\,\phi\,\mathrm{p}^ {\prime}\,\phi-\eth\phi\eth^{\prime}\phi\right). \tag{3.9}\] If we separate variables and integrate over the 2-sphere, then the action for a single mode is \[S_{\ell m}=\int\mathrm{d}^{2}x\sqrt{-g}\left(r^{2}\,\mathrm{p}\,\phi\,\mathrm{ p}^{\prime}\,\phi-\frac{\ell(\ell+1)}{2}\phi^{2}\right). 
\tag{3.10}\] ## 4 Electromagnetism The next step on the road to gravity, which is the spin-2 case, is the spin-1 case, which is electromagnetism. The Maxwell action is \[S=-\frac{1}{4}\int\mathrm{d}^{4}x\sqrt{-g}F_{\mu\nu}^{2},\qquad F_{\mu\nu}=2 \partial_{[\mu}A_{\nu]}. \tag{4.1}\] The vector potential is a superposition of separable solutions: \[A^{\mu}=\sum_{\ell=1}^{\infty}\sum_{m=-\ell}^{\ell}A_{\ell m}^{\mu}. \tag{4.2}\] Herein we will focus on a single mode and drop \(\ell m\) subscripts, with the summation implied. Under a \(2+2\) decomposition the vector potential is \[A_{\mu}\mathrm{d}x^{\mu}=A_{a}(x^{a})Y\mathrm{d}x^{a}+a(x^{a})B_{A}\mathrm{d} \theta^{A}. \tag{4.3}\] Here we have used our gauge freedom to remove the longitudinal mode, which is proportional to \(E_{A}\mathrm{d}\theta^{A}\). Gauge invariance adds a wrinkle that was not present for the scalar: in order to avoid losing information when fixing a gauge at the level of the action rather than the equations of motion, one must make a _complete_ gauge fixing, in the sense that there are no integration constants left when fixing a gauge vector (rather than necessarily that all gauge freedom is exhausted, although we will insist on this too) [19, 20]. Our gauge choice satisfies this requirement [18]. Performing separation of variables and integrating over the 2-sphere, we obtain \[S=\int\mathrm{d}^{2}x\sqrt{-g}\mathcal{L}, \tag{4.4}\] where \[\boxed{\mathcal{L}=\underbrace{-\frac{1}{4}r^{2}F_{ab}^{2}-\frac{1}{2}\ell(\ell+1)A _{a}^{2}}_{\mathcal{L}_{\text{even}}}-\frac{1}{2}\ell(\ell+1)\left[(\partial a) ^{2}+\frac{\ell(\ell+1)}{r^{2}}a^{2}\right]}\,. \tag{4.5}\] We see that the even-parity (or electric) field \(A_{a}\) and the odd-parity (or magnetic) field \(a\) decouple. The even sector has only one dynamical degree of freedom but depends on two variables \(A_{a}\). To isolate this dynamical field we integrate in an auxiliary variable \(\lambda(t,r)\): \[\mathcal{L}_{\text{even,aux}} =\mathcal{L}_{\text{even}}+\frac{1}{4}r^{2}\left(F_{ab}+r^{-2} \lambda\epsilon_{ab}\right)^{2}\] \[=\lambda\epsilon^{ab}\partial_{a}A_{b}-\frac{1}{2}\frac{\lambda^ {2}}{r^{2}}-\frac{1}{2}\ell(\ell+1)A_{a}^{2}. \tag{4.6}\] The \(\lambda\) equation of motion fixes it to be proportional to \(F_{ab}\) on-shell, \[\lambda=r^{2}\epsilon^{ab}\partial_{a}A_{b}=-r^{2}F_{tr}. \tag{4.7}\] Inserting this back into \(\mathcal{L}_{\text{even,aux}}\) we obtain \(\mathcal{L}_{\text{even}}\), establishing their dynamical equivalence. However we can also obtain an action for \(\lambda\) alone by integrating out \(A_{a}\) using its equation of motion, \[A^{a}=\frac{1}{\ell(\ell+1)}\epsilon^{ab}\partial_{b}\lambda, \tag{4.8}\] and plugging back into the action, \[\mathcal{L}=-\frac{1}{2\ell(\ell+1)}(\partial\lambda)^{2}-\frac{1}{2}\frac{ \lambda^{2}}{r^{2}}-\frac{1}{2}\ell(\ell+1)(\partial a)^{2}-\frac{1}{2}\frac{ \ell^{2}(\ell+1)^{2}}{r^{2}}a^{2}. \tag{4.9}\] We canonically normalize the fields by scaling out appropriate factors of \(\sqrt{\ell(\ell+1)}\), \[\psi_{+}\equiv\frac{\lambda}{\sqrt{\ell(\ell+1)}},\qquad\psi_{-}\equiv\sqrt{ \ell(\ell+1)}a, \tag{4.10}\] so that \[\mathcal{L}=\sum_{\pm}\left[-\frac{1}{2}(\partial\psi_{\pm})^{2}-\frac{1}{2} \frac{\ell(\ell+1)}{r^{2}}\psi_{\pm}^{2}\right]. \tag{4.11}\] We conclude that \(\psi_{\pm}\) are the "master variables" for the electric (\(+\)) and magnetic (\(-\)) sectors (see also Ref. [18]), each satisfying a Schrodinger equation with the usual vector potential [2]. 
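As a small numerical illustration (the values of \(\ell\) and \(r_{\mathrm{s}}\) are arbitrary), one can tabulate this potential against the tortoise coordinate; the closed form \(r_{\star}=r+r_{\mathrm{s}}\ln(r/r_{\mathrm{s}}-1)\) follows from integrating eq. (2.2), and the barrier that appears once the radial equation is written in terms of \(r_{\star}\), namely \(f(r)\ell(\ell+1)/r^{2}\), peaks at \(r=3r_{\mathrm{s}}/2\).

```python
import numpy as np

# Sketch: master-variable potential l(l+1)/r^2 tabulated against the tortoise
# coordinate r_* = r + r_s*log(r/r_s - 1), which integrates dr_* = dr/f.
# In terms of r_* the barrier is f(r)*V(r); we locate its peak numerically.
l, rs = 1, 1.0
r = np.linspace(1.001 * rs, 60.0 * rs, 20001)
f = 1.0 - rs / r
rstar = r + rs * np.log(r / rs - 1.0)
V = l * (l + 1) / r**2

i_peak = np.argmax(f * V)
print("peak of f*V at r/rs =", r[i_peak] / rs)          # ~1.5
print("corresponding tortoise coordinate r_* =", rstar[i_peak])
```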
### Electric-magnetic duality The Lagrangian (4.11) is manifestly invariant under electric-magnetic duality, which acts as a rotation on the vector \((\psi_{+},\psi_{-})^{T}\). The infinitesimal version is \[\delta(\psi_{+},\psi_{-})=(\psi_{-},-\psi_{+}), \tag{4.12}\] that is, \[\delta\lambda =\ell(\ell+1)a, \tag{4.13a}\] \[\delta a =-\frac{\lambda}{\ell(\ell+1)}. \tag{4.13b}\] Since eq. (4.11) is dynamically equivalent to the original Maxwell action (4.5), related by auxiliary variables, a symmetry of one is a symmetry of the other. To construct the symmetry operators \(\delta A_{a}\) and \(\delta a\) for eq. (4.5) we need only use eqs. (4.7) and (4.8) relating \(A_{a}\) and \(\lambda\) on shell to find \[\delta A_{a} =\epsilon_{ab}\partial^{b}a, \tag{4.14a}\] \[\delta a =-\frac{r^{2}}{\ell(\ell+1)}\epsilon^{ab}\partial_{a}A_{b}. \tag{4.14b}\] This is an _off-shell_ symmetry of the action (4.5). As discussed in the introduction, this symmetry is non-local. This is reflected in the transformation law for \(a\), which contains the inverse spherical Laplacian in the form \(1/\ell(\ell+1)\).15 Interestingly the symmetry transformation for \(A_{a}\) is local. Footnote 15: Recalling that \(-\ell(\ell+1)\) is the eigenvalue of the spherical Laplacian \(D^{2}\) for scalar spherical harmonics, we see that \(D^{2}(\delta a\,Y)=r^{2}\epsilon^{ab}\partial_{a}A_{b}\,Y\). The transformation law (4.14) has a natural interpretation in terms of Hodge duality. Consider the dual field strength tensor, \[\star F_{\mu\nu}=\frac{1}{2}\epsilon_{\mu\nu\alpha\beta}F^{\alpha\beta}. \tag{4.15}\] The Maxwell equation is \(\mathrm{d}\star F=0\), so that on-shell \(\star F=\mathrm{d}\tilde{A}\) can be expressed in terms of a dual potential \(\tilde{A}_{\mu}\). It turns out that the off-shell duality transformation \(\delta A_{\mu}\) is just such a dual potential, that is, \[\delta A_{\mu}=\tilde{A}_{\mu} \tag{4.16}\] where \[\tilde{A}_{\mu}\mathrm{d}x^{\mu}=\epsilon_{ab}\partial^{b}a(x)Y(\theta) \mathrm{d}x^{a}-\frac{r^{2}}{\ell(\ell+1)}\epsilon^{ab}\partial_{a}A_{b}(x)B_ {A}(\theta)\mathrm{d}\theta^{A} \tag{4.17}\] solves \[\star F_{\mu\nu}=\partial_{\mu}\tilde{A}_{\nu}-\partial_{\nu}\tilde{A}_{\mu} \tag{4.18}\] on shell. The fact that \(A_{\mu}\) and \(\tilde{A}_{\mu}\) are related by integration, \(\star{\rm d}A={\rm d}\tilde{A}\), underlies the non-local nature of \(\delta A_{\mu}\). If we further package the electric and magnetic master variables into a complex scalar, \[\psi\equiv\frac{\psi_{+}-i\psi_{-}}{\sqrt{2}}, \tag{4.19}\] then the action (4.11) is simply \[S=\int{\rm d}^{2}x\sqrt{-g}\left[-\partial_{a}\psi\partial^{a}\bar{\psi}-\frac {\ell(\ell+1)}{r^{2}}\psi\bar{\psi}\right]. \tag{4.20}\] Electric-magnetic duality acts as \(\delta\psi=i\psi\), which is manifestly a symmetry. It is straightforward to obtain the conserved current via the standard Noether procedure, \[J_{a} =i(\bar{\psi}\partial_{a}\psi-\psi\partial_{a}\bar{\psi})\] \[=\psi_{+}\partial_{a}\psi_{-}-\psi_{-}\partial_{a}\psi_{+}. \tag{4.21}\] Intriguingly, the complex master field \(\psi\), which we obtained by integrating out non-dynamical fields and canonically normalizing, turns out to be proportional to \((\ell,m)\) modes of the middle Newman-Penrose scalar \(\phi_{1}=(1/2)(F_{ln}-F_{m\bar{m}})\), \[\sqrt{\frac{\ell(\ell+1)}{2}}\psi_{\ell m}=r^{2}(\phi_{1})_{\ell m}. \tag{4.22}\] For this reason, it will be illuminating to recontextualize the foregoing \(2+2\) calculation in the GHP formalism. 
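Before switching formalisms, here is a quick component-level sketch (again in Boyer-Lindquist coordinates, and not part of the argument) confirming the algebra behind eqs. (4.19)-(4.21): the complex current reduces to \(\psi_{+}\partial_{a}\psi_{-}-\psi_{-}\partial_{a}\psi_{+}\), and its divergence is \(\psi_{+}\Box\psi_{-}-\psi_{-}\Box\psi_{+}\), so it is conserved once both fields satisfy the wave equation following from (4.11).

```python
# Small algebraic check of the complex packaging (4.19)-(4.21); the function
# names psi_p, psi_m are placeholders for the real fields psi_+ and psi_-.
import sympy as sp

t, r, rs = sp.symbols('t r r_s', positive=True)
f = 1 - rs/r
pp = sp.Function('psi_p')(t, r)      # psi_+
pm = sp.Function('psi_m')(t, r)      # psi_-
psi = (pp - sp.I*pm)/sp.sqrt(2)      # eq. (4.19)
psib = (pp + sp.I*pm)/sp.sqrt(2)     # its conjugate (psi_+, psi_- real)

def box(u):                          # 2D wave operator, sqrt(-g) = 1
    return sp.diff(-sp.diff(u, t)/f, t) + sp.diff(f*sp.diff(u, r), r)

# J_a = i(psib d_a psi - psi d_a psib) = psi_+ d_a psi_- - psi_- d_a psi_+
for x in (t, r):
    Jx = sp.I*(psib*sp.diff(psi, x) - psi*sp.diff(psib, x))
    print(sp.expand(Jx - (pp*sp.diff(pm, x) - pm*sp.diff(pp, x))))   # -> 0

# divergence of J^a equals psi_+ Box psi_- - psi_- Box psi_+, hence zero on shell
Jt = pp*sp.diff(pm, t) - pm*sp.diff(pp, t)
Jr = pp*sp.diff(pm, r) - pm*sp.diff(pp, r)
divJ = sp.diff(-Jt/f, t) + sp.diff(f*Jr, r)
print(sp.simplify(divJ - (pp*box(pm) - pm*box(pp))))                 # -> 0
```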
### Maxwell in GHP Analogously to the Weyl tensor, the electromagnetic field strength tensor \(F_{\mu\nu}\) can be fully encoded in three complex Maxwell scalars, \[\phi_{0}=F_{lm},\quad\phi_{1}=\frac{1}{2}\left(F_{ln}-F_{m\bar{m}}\right), \quad\phi_{2}=F_{\bar{m}n}, \tag{4.23}\] of GHP types \(\{2,0\}\), \(\{0,0\}\), and \(\{-2,0\}\), respectively. We remind the reader of the notation \(F_{lm}=F_{\mu\nu}l^{\mu}m^{\nu}\), etc. The Maxwell Lagrangian is \[\mathcal{L} =-\frac{1}{4}F_{\mu\nu}F^{\mu\nu}\] \[=-(-l^{(\mu}n^{\nu)}+m^{(\mu}\bar{m}^{\nu)})(-l^{\alpha}n^{\beta} +m^{\alpha}\bar{m}^{\beta})F_{\mu\alpha}F_{\nu\beta}\] \[=\phi_{1}^{2}-\phi_{0}\phi_{2}+\text{c.c.} \tag{4.24}\] Now we introduce an auxiliary complex scalar \(\lambda\) of GHP type \(\{0,0\}\), meant to equal \(\phi_{1}\) on-shell, by sending \(\mathcal{L}\rightarrow\mathcal{L}-(\phi_{1}-\lambda)^{2}-(\bar{\phi}_{1}-\bar {\lambda})^{2}\), \[\boxed{\mathcal{L}=2\phi_{1}\lambda-\lambda^{2}-\phi_{0}\phi_{2}+\text{c.c.}} \tag{4.25}\] Instead of decomposing \(A_{\mu}\) into \(\mathcal{M}^{2}\) tensors \(A_{a}(t,r)\) and \(a(t,r)\) as in the \(2+2\) decomposition, in the GHP formalism we encode it in the four scalars \((A_{l},A_{n},A_{m},A_{\bar{m}})\). The gauge choice we made earlier can be written in a GHP-invariant manner as \[\eth^{\prime}A_{m}+\eth A_{\bar{m}}=0. \tag{4.26}\] In this gauge, the even modes live in \(A_{l}\) and \(A_{n}\) while the odd modes live in \(A_{m}\) and \(A_{\bar{m}}\) through the combination \[\eth A_{\bar{m}}-\eth^{\prime}A_{m}=i\frac{\ell(\ell+1)}{r^{2}}aY. \tag{4.27}\] To work with the equations of motion coming from the Lagrangian (4.25), it is helpful to establish just a bit more notation. First, we write the Maxwell scalars in terms of operators \(\mathcal{T}_{i}\) acting on \(A\)[21], \[\phi_{0} =\mathcal{T}_{0}A=-\eth A_{l}+(\operatorname{\mathsf{p}}-\rho)A _{m}, \tag{4.28a}\] \[\phi_{1} =\mathcal{T}_{1}A=\frac{1}{2}\left(-\operatorname{\mathsf{p}}^{ \prime}A_{l}+\operatorname{\mathsf{p}}A_{n}+\eth^{\prime}A_{m}-\eth A_{\bar{m }}\right),\] (4.28b) \[\phi_{2} =\mathcal{T}_{2}A=\eth^{\prime}A_{n}-(\operatorname{\mathsf{p}} ^{\prime}-\rho^{\prime})A_{\bar{m}}. \tag{4.28c}\] Second, we introduce Wald's notion of _adjoint operators_[22]. The adjoint \(\mathcal{O}^{\dagger}\) of an operator \(\mathcal{O}\) satisfies \(A\mathcal{O}B-B\mathcal{O}^{\dagger}A=\nabla_{\mu}v^{\mu}\) for some vector \(v^{\mu}\) and tensors (with indices suppressed) \(A\) and \(B\), so that under an integral we obtain the adjoint when integrating by parts, \[\int\mathrm{d}^{4}x\sqrt{-g}A\mathcal{O}B=\int\mathrm{d}^{4}x\sqrt{-g}B \mathcal{O}^{\dagger}A. \tag{4.29}\] The adjoints of the GHP derivatives are \[\operatorname{\mathsf{p}}^{\dagger}=-\operatorname{\mathsf{p}}+2\rho,\quad \eth^{\dagger}=-\eth, \tag{4.30}\] along with their primes. The adjoints of \(\mathcal{T}^{i}\) are [21] \[\mathcal{T}_{0}^{\dagger} =l^{\mu}\eth-m^{\mu}(\operatorname{\mathsf{p}}-\rho), \tag{4.31a}\] \[\mathcal{T}_{1}^{\dagger} =\frac{1}{2}\left[l^{\mu}(\operatorname{\mathsf{p}}^{\prime}-2 \rho^{\prime})-n^{\mu}(\operatorname{\mathsf{p}}-2\rho)-m^{\mu}\eth^{\prime}+ \bar{m}^{\mu}\eth\right],\] (4.31b) \[\mathcal{T}_{2}^{\dagger} =-n^{\mu}\eth^{\prime}+\bar{m}^{\mu}(\operatorname{\mathsf{p}}^ {\prime}-\rho^{\prime}). 
\tag{4.31c}\] We now have the tools to vary the Maxwell Lagrangian (4.25) with respect to \(A\), \[\left(\mathcal{T}_{0}^{\dagger}\mathcal{T}_{2}+\mathcal{T}_{2}^{\dagger}\mathcal{T}_{0}\right)A+\mathrm{c.c.}=2\mathcal{T}_{1}^{\dagger}\lambda+\mathrm{c.c.} \tag{4.32}\] Note that this is a vector-valued equation, per the definitions of \(\mathcal{T}_{i}^{\dagger}\). The components along \(l\) and \(n\) determine \(A_{l}\) and \(A_{n}\) in terms of \(\lambda\) and its complex conjugate, \[A_{l} =-\frac{1}{2\eth\eth^{\prime}}(\operatorname{p}-2\rho)(\lambda+\bar{\lambda}-\mathfrak{g}) \tag{4.33a}\] \[A_{n} =\frac{1}{2\eth\eth^{\prime}}(\operatorname{p}^{\prime}-2\rho^{\prime})(\lambda+\bar{\lambda}+\mathfrak{g}) \tag{4.33b}\] where \[\mathfrak{g}\equiv\eth^{\prime}A_{m}+\eth A_{\bar{m}} \tag{4.34}\] is zero in the gauge used in the previous subsection; we will fix \(\mathfrak{g}=0\) herein. We can also integrate out \(A_{m}\) and \(A_{\bar{m}}\) using the imaginary part of the \(\lambda\) equation of motion, \[\lambda-\bar{\lambda} =\phi_{1}-\bar{\phi}_{1}\] \[=\eth^{\prime}A_{m}-\eth A_{\bar{m}}, \tag{4.35}\] which implies \[A_{m}=\frac{1}{2\eth\eth^{\prime}}\eth(\lambda-\bar{\lambda}),\qquad A_{\bar{m}}=-\frac{1}{2\eth\eth^{\prime}}\eth^{\prime}(\lambda-\bar{\lambda}). \tag{4.36}\] Now that we have solutions for each component of \(A_{\mu}\) in terms of \(\lambda\), we can plug them into the Lagrangian (4.25) to find a theory for \(\lambda\) alone. However, to avoid the complications of dealing with the inverse \(\eth\eth^{\prime}\) operator, we first perform a simple field redefinition, \[\lambda=\eth\eth^{\prime}\psi \tag{4.37}\] so that the solution for \(A_{\mu}\) is \[A_{l} =-\frac{1}{2}\operatorname{p}(\psi+\bar{\psi}), \tag{4.38a}\] \[A_{n} =\frac{1}{2}\operatorname{p}^{\prime}(\psi+\bar{\psi}), \tag{4.38b}\] \[A_{m} =\frac{1}{2}\eth(\psi-\bar{\psi}). \tag{4.38c}\] To integrate out \(A_{\mu}\) we plug this solution into eq. (4.25). The Maxwell scalars evaluated on this solution are \[\phi_{0} =\eth\operatorname{p}\psi, \tag{4.39a}\] \[\phi_{1} =\frac{1}{2}\left[\operatorname{p}\operatorname{p}^{\prime}\left(\psi+\bar{\psi}\right)+\eth\eth^{\prime}(\psi-\bar{\psi})\right], \tag{4.39b}\] \[\phi_{2} =\eth^{\prime}\operatorname{p}^{\prime}\psi. \tag{4.39c}\] Putting these in the action we find, freely integrating by parts,16 Footnote 16: It is helpful to recall the GHP commutators (2.32), particularly \([\operatorname{p},\eth]=\rho\eth\). On GHP type \(\{0,0\}\) objects such as \(\psi\), \([\operatorname{p},\operatorname{p}^{\prime}]=[\eth,\eth^{\prime}]=0\). Together with the adjoints (4.30), these imply that up to total derivatives \((\eth\operatorname{p}A)(\eth^{\prime}\operatorname{p}^{\prime}B)=(\operatorname{p}\operatorname{p}^{\prime}A)(\eth\eth^{\prime}B)=(\eth\eth^{\prime}A)(\operatorname{p}\operatorname{p}^{\prime}B)\). \[\mathcal{L} =2\phi_{1}(\eth\eth^{\prime}\psi)-(\eth\eth^{\prime}\psi)^{2}-\phi_{0}\phi_{2}+\text{c.c.}\] \[=\left(\operatorname{p}\operatorname{p}^{\prime}\psi-\eth\eth^{\prime}\psi\right)\eth\eth^{\prime}\bar{\psi}+\text{c.c.}\] \[=2\left(\operatorname{p}\operatorname{p}^{\prime}\psi-\eth\eth^{\prime}\psi\right)\eth\eth^{\prime}\bar{\psi}. \tag{4.40}\] This is a remarkably simple result.
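As an aside, the Maxwell-scalar algebra being used here can be verified directly with an explicit null tetrad. The sketch below uses the standard (Kinnersley-type) Schwarzschild tetrad, which is an illustrative choice not fixed by the text; any tetrad with \(l\cdot n=-1\), \(m\cdot\bar{m}=1\) adapted to the \(2+2\) split gives the same result for the identity (4.24), and the printed residual should vanish for a generic field strength.

```python
# Sanity check of the Maxwell-scalar decomposition (4.23)-(4.24) with an explicit
# Schwarzschild null tetrad (an illustrative assumption, not taken from the text).
import sympy as sp

r, th, rs = sp.symbols('r theta r_s', positive=True)
f = 1 - rs/r
g = sp.diag(-f, 1/f, r**2, r**2*sp.sin(th)**2)          # g_{mu nu} in (t, r, theta, phi)
ginv = g.inv()

l = sp.Matrix([1/f, 1, 0, 0])                            # l^mu
n = sp.Matrix([sp.Rational(1, 2), -f/2, 0, 0])           # n^mu
m = sp.Matrix([0, 0, 1, sp.I/sp.sin(th)])/(sp.sqrt(2)*r) # m^mu
mb = m.conjugate()

Fc = sp.symbols('F0:6', real=True)                       # generic antisymmetric F_{mu nu}
F = sp.Matrix([[0,      Fc[0],  Fc[1],  Fc[2]],
               [-Fc[0], 0,      Fc[3],  Fc[4]],
               [-Fc[1], -Fc[3], 0,      Fc[5]],
               [-Fc[2], -Fc[4], -Fc[5], 0]])

def con(A, u, v):                                        # A_{mu nu} u^mu v^nu
    return sum(A[i, j]*u[i]*v[j] for i in range(4) for j in range(4))

phi0 = con(F, l, m)
phi1 = sp.Rational(1, 2)*(con(F, l, n) - con(F, m, mb))
phi2 = con(F, mb, n)

Fup = ginv*F*ginv                                        # F^{mu nu}
lagr = -sp.Rational(1, 4)*sum(F[i, j]*Fup[i, j] for i in range(4) for j in range(4))
np_form = phi1**2 - phi0*phi2 + sp.conjugate(phi1**2 - phi0*phi2)
print(sp.simplify(lagr - np_form))                       # -> 0
```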
To switch back to \(\lambda=\eth\eth^{\prime}\psi\), we integrate by parts and use the GHP commutators, \[\eth\eth^{\prime}(\operatorname{p}\operatorname{p}^{\prime}-\eth\eth^{\prime})=\left[(\operatorname{p}-2\rho)(\operatorname{p}^{\prime}-2\rho^{\prime})-\eth\eth^{\prime}\right]\eth\eth^{\prime}, \tag{4.41}\] to write \[\boxed{\mathcal{L}=2\bar{\psi}\left[(\operatorname{p}^{\prime}-2\rho^{\prime})(\operatorname{p}-2\rho)-\eth\eth^{\prime}\right]\lambda,} \tag{4.42}\] where \(\bar{\psi}=(\eth\eth^{\prime})^{-1}\bar{\lambda}\). The equation of motion obtained by varying with respect to \(\bar{\psi}\) is \[\left[(\operatorname{p}^{\prime}-2\rho^{\prime})(\operatorname{p}-2\rho)-\eth\eth^{\prime}\right]\lambda=0. \tag{4.43}\] On shell \(\lambda=\phi_{1}\), for which this is the _Fackerell-Ipser equation_[23] in GHP notation [24]. Electric-magnetic duality transformations act as complex rotations on the Maxwell scalars, \(\phi_{i}\to e^{i\theta}\phi_{i}\), essentially since they are the components of the (anti-)self-dual parts of the Maxwell tensor. The action (4.42) is indeed manifestly invariant under \(\lambda\to e^{i\theta}\lambda\), or infinitesimally \(\delta\lambda=i\lambda\) (along with \(\delta\bar{\psi}=-i\bar{\psi}\)).17 Footnote 17: The Lagrangian (4.42) does not look real, but it is up to a total derivative, as can be explicitly checked using the commutators and adjoints of the GHP derivatives, and in particular the identity \(\operatorname{p}^{\dagger}(\operatorname{p}^{\prime})^{\dagger}\eth\eth^{\prime}=\eth\eth^{\prime}\operatorname{p}\operatorname{p}^{\prime}\). A natural extension of the setup with \(\phi_{1}\) as an auxiliary field is to introduce auxiliary fields for all three Maxwell scalars, that is, a triplet \((\lambda_{0},\lambda_{1},\lambda_{2})\) which on-shell satisfy \(\lambda_{i}=\phi_{i}\).18 Footnote 18: This is essentially the construction of, e.g., Ref. [25] for the chiral formulation of Maxwell theory. First let us note that we can "chop off" the \(+\)c.c. in the real Maxwell Lagrangian (4.24) by adding the total derivative \((i/4)F_{\mu\nu}(\star F)^{\mu\nu}=\phi_{0}\phi_{2}-\phi_{1}^{2}-\text{c.c.}\), \[\mathcal{L} =-\frac{1}{4}F_{\mu\nu}^{2}+\frac{i}{4}F_{\mu\nu}(\star F)^{\mu\nu}\] \[=2\left(\phi_{1}^{2}-\phi_{0}\phi_{2}\right). \tag{4.44}\] Now we add in the full triplet of auxiliary fields, \[\mathcal{L} \to\mathcal{L}-2(\phi_{1}-\lambda_{1})^{2}+2(\phi_{0}-\lambda_{0})(\phi_{2}-\lambda_{2})\] \[=2\left(2\phi_{1}\lambda_{1}-\lambda_{0}\phi_{2}-\lambda_{2}\phi_{0}-\lambda_{1}^{2}+\lambda_{0}\lambda_{2}\right).
\tag{4.45}\] The \(\lambda_{i}\) equations of motion set \(\lambda_{i}=\phi_{i}\) as desired, while the \(A\) equation of motion is \[2\mathcal{T}_{1}^{\dagger}\lambda_{1}-\mathcal{T}_{0}^{\dagger}\lambda_{2}-\mathcal{T}_{2}^{\dagger}\lambda_{0}=0, \tag{4.46}\] or in vector notation, \[0 =l^{\mu}\left[(\operatorname{p}^{\prime}-2\rho^{\prime})\lambda_{1}-\eth\lambda_{2}\right]+m^{\mu}\left[(\operatorname{p}-\rho)\lambda_{2}-\eth^{\prime}\lambda_{1}\right]\] \[-n^{\mu}\left[(\operatorname{p}-2\rho)\lambda_{1}-\eth^{\prime}\lambda_{0}\right]-\bar{m}^{\mu}\left[(\operatorname{p}^{\prime}-\rho^{\prime})\lambda_{0}-\eth\lambda_{1}\right]\] \[\equiv\mathcal{E}_{l}l^{\mu}+\mathcal{E}_{n}n^{\mu}+\mathcal{E}_{m}m^{\mu}+\mathcal{E}_{\bar{m}}\bar{m}^{\mu}. \tag{4.47}\] This formulation yields first-order constraints among the \(\phi_{i}\) on-shell. These are equivalent to the _Teukolsky-Starobinsky identities_, which are second-order differential relations between \(\phi_{0}\) and \(\phi_{2}\), or equivalently fourth-order relations for \(\phi_{0}\) and \(\phi_{2}\) separately. To obtain the Teukolsky-Starobinsky identities we therefore need to take combinations of derivatives of \(\mathcal{E}_{a}\) to remove \(\phi_{1}\). The correct combinations are \[\eth^{\prime}\mathcal{E}_{n}-(\operatorname{p}-3\rho)\mathcal{E}_{m} =0 \implies\qquad(\operatorname{p}-2\rho)(\operatorname{p}-2\rho)\phi_{2} =\eth^{\prime 2}\phi_{0}, \tag{4.48a}\] \[\eth\mathcal{E}_{l}-(\operatorname{p}^{\prime}-3\rho^{\prime})\mathcal{E}_{\bar{m}} =0 \implies\qquad(\operatorname{p}^{\prime}-2\rho^{\prime})(\operatorname{p}^{\prime}-2\rho^{\prime})\phi_{0} =\eth^{2}\phi_{2}, \tag{4.48b}\] \[\eth\mathcal{E}_{m}+\eth^{\prime}\mathcal{E}_{\bar{m}} =0 \implies\qquad(\operatorname{p}^{\prime}-2\rho^{\prime})\eth^{\prime}\phi_{0} =(\operatorname{p}-2\rho)\eth\phi_{2}, \tag{4.48c}\] where the T-S identities following the arrows can be found in, e.g., eq. 43 of Ref. [21]. The third identity can also be obtained from \((\operatorname{p}-2\rho)\mathcal{E}_{l}+(\operatorname{p}^{\prime}-2\rho^{\prime})\mathcal{E}_{n}\). Here we have used the background equation \(\operatorname{p}\rho^{\prime}=\operatorname{p}^{\prime}\rho=\rho\rho^{\prime}-\Psi_{2}\). We note that \(\phi_{1}\) is special not just because it appeared naturally in the dynamical construction of the previous subsection, but also because it is closely related to the _Killing-Yano 2-form_ and its dual, \[\phi_{1}=\frac{i}{4r}F^{\mu\nu}\left(Y_{\mu\nu}-i\star Y_{\mu\nu}\right), \tag{4.49}\] where \[Y=r^{3}\sin\theta\mathrm{d}\theta\wedge\mathrm{d}\phi,\quad\star Y=r\mathrm{d}t\wedge\mathrm{d}r. \tag{4.50}\] The Killing tensor, which underlies separability, is the square of the Killing-Yano tensor, in coordinates, \[k_{AB}=-r^{4}\Omega_{AB},\quad k_{a\mu}=0. \tag{4.51}\] To connect explicitly to the \(2+2\) formulation of the previous subsection, we note the useful identities \[m_{A}\bar{m}_{B} =\frac{r^{2}}{2}\left(\Omega_{AB}-i\epsilon_{AB}\right), \tag{4.52a}\] \[l_{a}n_{b} =\frac{1}{2}\left(-g_{ab}+\epsilon_{ab}\right).
\tag{4.52b}\] Using these we can calculate the Maxwell scalars in terms of \(2+2\) quantities, \[\phi_{1} =\frac{1}{2}\left(\epsilon^{ab}\nabla_{a}A_{b}-i\frac{\ell(\ell+1)} {r^{2}}a\right)Y, \tag{4.53a}\] \[\phi_{0} =\left(\partial_{a}aB_{A}-A_{a}E_{A}\right)l^{a}m^{A},\] (4.53b) \[\phi_{2} =-\left(\partial_{a}aB_{A}-A_{a}E_{A}\right)n^{a}\bar{m}^{A},\] (4.53c) \[\phi_{0}\phi_{2} =\frac{1}{4}\left(A_{a}^{2}E_{A}^{2}+\left(\partial a\right)^{2} B_{A}^{2}\right). \tag{4.53d}\] We conclude with speculation about the structure discussed in this section and its generalization to Kerr. There the Fackerell-Ipser equation is not separable, which is why it is typical to work with the _Teukolsky equations_[26] for the extreme-weight scalars \(\phi_{0}\) and \(\phi_{2}\), which are separable due to the aforementioned Killing tensor structure [13]. It would be very interesting to obtain an action principle for the Teukolsky equations analogously to the one we have constructed for the Fackerell-Ipser equation and Teukolsky-Starobinsky identities. We note that in Ref. [27] such an action was constructed using the fact that the Teukolsky equations are linear, which may provide a hint: the Teukolsky Lagrangian derived there is of the form \(\mathcal{L}\sim\rho^{-2}\phi_{2}\mathcal{O}\phi_{0}\), where \(\mathcal{O}\) is the Teukolsky operator for \(\phi_{0}\). It would also be interesting to understand how the Debye and Hertz potentials which appear in reconstruction methods [28, 29, 30, 22] arise from the action formulation. We leave these important open questions for future work. ## 5 Gravity Consider linear perturbations around the Schwarzschild metric \(\bar{g}_{\mu\nu}\),19 Footnote 19: For black hole perturbation theory in \(2+2\) language see, e.g., Refs. [31, 10, 11, 12]. The factor of \(2/M_{\rm Pl}\) is to canonically normalize the metric fluctuation. \[g_{\mu\nu}=\bar{g}_{\mu\nu}+\frac{2}{M_{\rm Pl}}h_{\mu\nu}, \tag{5.1}\] and expand the Einstein-Hilbert action to quadratic order in \(h_{\mu\nu}\), \[S =\frac{M_{\rm Pl}^{2}}{2}\int{\rm d}^{4}x\sqrt{-g}R[g]\] \[=\bar{S}+\delta_{1}S+\delta_{2}S+\mathcal{O}(h^{3}). \tag{5.2}\] The even- and odd-parity perturbations decouple at this order, so each is described by a separate quadratic action: \[\delta_{2}S=\sum_{\ell=2}^{\infty}\sum_{m=-\ell}^{\ell}\left(S_{\rm even}^{ \ell m}+S_{\rm odd}^{\ell m}\right). \tag{5.3}\] Herein we will drop bars on background quantities, since we will only be interested in \(\delta_{2}S\). Expanding the Ricci scalar to second order in perturbations is a non-trivial task, and ultimately not necessary, since we can write the action in first-order form. To see this, consider a metric variation \(g\to g+\delta g\) and Taylor expand the action, \[S[g+\delta g]=S[g]+\delta S+\frac{1}{2}\delta^{2}S+\cdots. \tag{5.4}\] Matching to eq. (5.2) we see that \[\delta_{2}S=\frac{1}{2}\delta^{2}S. \tag{5.5}\] It is a foundational result in GR that \(\delta\int\mathrm{d}^{4}x\sqrt{-g}R=\int\mathrm{d}^{4}x\sqrt{-g}G_{\mu\nu} \delta g^{\mu\nu}\). 
Taking a second variation we obtain \[\delta_{2}S =\frac{M_{\mathrm{Pl}}^{2}}{4}\int\mathrm{d}^{4}x\sqrt{-g}\delta G _{\mu\nu}\delta g^{\mu\nu}\] \[=-\int\mathrm{d}^{4}x\sqrt{-g}h^{\mu\nu}G[h]_{\mu\nu} \tag{5.6}\] where \(G[h]_{\mu\nu}\equiv\delta G_{\mu\nu}[g+h]\) is the linear-in-\(h\) part of the Einstein tensor for \(g_{\mu\nu}+h_{\mu\nu}\), \[G[h]_{\mu\nu}=\nabla_{\alpha}\nabla_{(\mu}h^{\alpha}_{\nu)}-\frac{1}{2}\Box h _{\mu\nu}-\frac{1}{2}\nabla_{\mu}\nabla_{\nu}h-\frac{1}{2}\left(\nabla_{\mu} \nabla_{\nu}h^{\mu\nu}-\Box h\right)g_{\mu\nu}. \tag{5.7}\] For simplicity (and to facilitate comparison to the literature) we will continue to call this \(\delta G_{\mu\nu}\), with the understanding that it is evaluated on \(g_{\mu\nu}+h_{\mu\nu}\) rather than \(g_{\mu\nu}+2M_{\mathrm{Pl}}^{-1}h_{\mu\nu}\). Integrating by parts we recover the standard Fierz-Pauli Lagrangian for a spin-2 field, \[\delta_{2}S=\int\mathrm{d}^{4}x\sqrt{-g}\left(-\frac{1}{2}\nabla_{\alpha}h_{ \mu\nu}\nabla^{\alpha}h^{\mu\nu}+\nabla_{\alpha}h_{\mu\nu}\nabla^{\nu}h^{\mu \alpha}-\nabla_{\mu}h\nabla_{\nu}h^{\mu\nu}+\frac{1}{2}\nabla_{\mu}h\nabla^{ \mu}h\right). \tag{5.8}\] The \(2+2\) components of \(\delta G_{\mu\nu}[g+h]\) are standard and can be found in, e.g., Refs. [10, 11, 12].20 We present relevant components in appendix A. The quadratic action (5.6) is expanded as Footnote 20: We leave the analogous GHP analysis for future work. \[h^{\mu\nu}\delta G_{\mu\nu}=h^{ab}\delta G_{ab}+\frac{2}{r^{2}}h^{aA}\delta G _{aA}+\frac{1}{r^{4}}h^{AB}\delta G_{AB}. \tag{5.9}\] We remind the reader that \(\mathcal{M}_{2}\) indices are raised with \(g^{ab}\) and \(S^{2}\) indices with \(\Omega^{AB}\). There are at least two useful gauges which can be safely fixed at the level of the action [20]. One is the standard Regge-Wheeler gauge, in which \(h_{aA}\) is purely odd and \(h_{AB}=r^{2}K\Omega_{AB}\). Another is the "\(\alpha\) gauge" used in, e.g., Refs. [18, 32, 33], where \(h_{aA}\) contains both even and odd pieces and \(h_{AB}=0\). The gauge choice affects the auxiliary structure of the action. To see this, consider the gauge-invariant variables \(\tilde{h}_{ab}\) and \(\tilde{K}\) defined in Ref. [11], which correspond (by construction) to \(h_{ab}\) and \(K\) in the Regge-Wheeler gauge, and in \(\alpha\) gauge contain derivatives, \[\tilde{h}_{ab} =h_{ab}-2\nabla_{(a}\left(r^{2}r_{b)}\alpha\right), \tag{5.10a}\] \[\tilde{K} =-2fr\alpha. \tag{5.10b}\] We will remain agnostic about which of these two gauges to pick, and write down expressions for both. In these gauges, the components of \(h_{\mu\nu}\) are \[h_{ab} =\sum_{\ell,m}h_{ab}^{\ell m}Y_{\ell m}, \tag{5.11a}\] \[h_{aA} =\sum_{\ell,m}r^{2}\left(\alpha_{\ell m}r_{a}E_{A}^{\ell m}+h_{a }^{\ell m}B_{A}^{\ell m}\right),\] (5.11b) \[h_{AB} =\sum_{\ell,m}r^{2}K_{\ell m}Y_{\ell m}\Omega_{AB}, \tag{5.11c}\] where we remind the reader that \(r_{a}\equiv\partial_{a}r\). As usual we will drop the summation and the subscripts and focus on a single \((\ell,m)\) mode. In Regge-Wheeler gauge we set \(\alpha=0\), and in \(\alpha\) gauge we set \(K=0\). 
We will also find it convenient to decompose \(h_{ab}\) into its trace and tracefree parts, \[h_{ab}=\hat{h}_{ab}+\frac{1}{2}hg_{ab},\quad\hat{h}^{a}{}_{a}=0, \tag{5.12}\] and to work with the Ricci tensor rather than the Einstein tensor, \[\delta G_{\mu\nu}=\delta R_{\mu\nu}-\frac{1}{2}\delta Rg_{\mu\nu},\quad\delta R =g^{ab}\delta R_{ab}+r^{-2}\Omega^{AB}\delta R_{AB} \tag{5.13}\] In terms of these variables, the even and odd actions are \[S_{\text{even}} =\int\mathrm{d}^{2}x\sqrt{-g}\mathrm{d}\Omega\left[r^{2}(Kg^{ab}- \hat{h}^{ab})\delta R_{ab}+\frac{1}{2}h\Omega^{AB}\delta R_{AB}-2r^{2}r^{a} \alpha E^{A}\delta R_{aA}\right], \tag{5.14a}\] \[S_{\text{odd}} =-2\int\mathrm{d}^{2}x\sqrt{-g}\mathrm{d}\Omega r^{2}h^{a}B^{A} \delta R_{aA}. \tag{5.14b}\] To integrate over the 2-sphere, we note that the \(S^{2}\) scalars \(\delta R_{ab}\) and \(\Omega^{AB}\delta R_{AB}\) are expanded in \(Y_{\ell m}\), while the even and odd parts of \(\delta R_{aA}\) can be written as \[\delta R_{aA}=\delta R_{a}^{E}E_{A}+\delta R_{a}^{B}B_{A}. \tag{5.15}\] Performing the integral over \(S^{2}\) and writing the actions as \(S=\int\mathrm{d}^{2}x\sqrt{-g}\mathcal{L}\), the Lagrangians are \[\mathcal{L}_{\mathrm{even}} =r^{2}(Kg^{ab}-\hat{h}^{ab})\delta R_{ab}+\frac{1}{2}h\Omega^{AB} \delta R_{AB}-2\ell(\ell+1)r^{2}r^{a}\alpha\delta R_{a}^{E}, \tag{5.16a}\] \[\mathcal{L}_{\mathrm{odd}} =-2\ell(\ell+1)r^{2}h^{a}\delta R_{a}^{B}, \tag{5.16b}\] where \(h_{ab}\) denotes \(h_{ab}^{\ell m}\), etc. Let us treat the odd and even sectors separately. ### Odd sector The odd piece of the Ricci tensor is (see appendix A) \[\delta R_{a}^{B}=\frac{1}{2r^{2}}\nabla^{b}\left(r^{4}F_{ab}\right)+\frac{( \ell+2)(\ell-1)}{2}h_{a}, \tag{5.17}\] where \[F_{ab}=\partial_{a}h_{b}-\partial_{b}h_{a}, \tag{5.18}\] so the Lagrangian (5.16b) is \[\mathcal{L}_{\mathrm{odd}} =-2\ell(\ell+1)r^{2}h^{a}\delta R_{a}^{B}\] \[=-\ell(\ell+1)\left(h^{a}\nabla^{b}\left(r^{4}F_{ab}\right)+( \ell+2)(\ell-1)r^{2}h_{a}^{2}\right)\] \[=-\ell(\ell+1)\left(\frac{1}{2}r^{4}F_{ab}^{2}+(\ell+2)(\ell-1)r^ {2}h_{a}^{2}\right), \tag{5.19}\] where in the last line we have integrated by parts. Note that \((\ell+2)(\ell-1)=\ell(\ell+1)-2\). Finally we rescale \[h_{a}\to\frac{h_{a}}{\sqrt{2\ell(\ell+1)}} \tag{5.20}\] so the action takes the form \[\boxed{\mathcal{L}_{\mathrm{odd}}=-\frac{1}{4}r^{4}F_{ab}^{2}-\frac{(\ell+2)( \ell-1)}{2}r^{2}h_{a}^{2}.} \tag{5.21}\] In coordinates this is [18] \[\mathcal{L}_{\mathrm{odd}}=\frac{1}{2}r^{4}(\dot{h}_{1}-h_{0}^{\prime})^{2}+ \frac{1}{2}(\ell+2)(\ell-1)r^{2}\left(\frac{1}{f}h_{0}^{2}-fh_{1}^{2}\right), \tag{5.22}\] where \(h_{a}\mathrm{d}x^{a}=h_{0}\mathrm{d}t+h_{1}\mathrm{d}r\), and overdots and primes denote \(\partial_{t}\) and \(\partial_{r}\), respectively. Physically we can think of eq. (5.21) as describing a two-dimensional vector with an \(r\)-dependent mass,21 where we remind the reader that \(r\) is a background scalar rather than necessarily a coordinate direction. Note the close resemblance to \(\mathcal{L}_{\text{even}}\) for the Maxwell field (4.5). We can repeat the same trick to integrate out the two fields \(h_{a}\) in favor of a single dynamical field. We integrate in an auxiliary variable \(\lambda(x^{a})\) via a perfect square so as not to affect the dynamics, \[\mathcal{L}_{\text{odd,aux}} =\mathcal{L}_{\text{odd}}+\frac{1}{4}\left(r^{2}F_{ab}+\lambda \epsilon_{ab}\right)^{2}\] \[=\frac{1}{2}\left(r^{2}\lambda\epsilon^{ab}F_{ab}-\lambda^{2}-( \ell+2)(\ell-1)r^{2}h^{2}\right). 
\tag{5.23}\] This is dynamically equivalent to \(\mathcal{L}_{\text{odd}}\), which is recovered by plugging in the solution to the \(\lambda\) equation of motion, \(\lambda=(1/2)r^{2}\epsilon^{ab}F_{ab}\), and we will write it as \(\mathcal{L}_{\text{odd}}\) accordingly. The introduction of \(\lambda\) gives us the option to integrate out \(h_{a}\) by solving its equation of motion, \[(\ell+2)(\ell-1)r^{2}h_{a}=\epsilon_{ab}\partial^{b}(r^{2}\lambda). \tag{5.24}\] Substituting this into the action we have \[\mathcal{L}_{\text{odd}}=-\frac{1}{2}\frac{[\partial(r^{2}\lambda)]^{2}}{(\ell+2)(\ell-1)r^{2}}-\frac{1}{2}\lambda^{2}. \tag{5.25}\] We perform a further rescaling to canonically normalize the kinetic term, \[\lambda=\frac{\sqrt{(\ell+2)(\ell-1)}}{r}\Psi_{-}, \tag{5.26}\] so that the action becomes, using the background equations of motion (2.5), \[\mathcal{L}_{\text{odd}}=-\frac{1}{2}(\partial\Psi_{-})^{2}-\frac{1}{2}\left(\frac{\ell(\ell+1)}{r^{2}}-\frac{3}{2}R\right)\Psi_{-}^{2}. \tag{5.27}\] The mass term explicitly evaluates to \[\frac{\ell(\ell+1)}{r^{2}}-\frac{3}{2}R =\frac{\ell(\ell+1)}{r^{2}}-\frac{3r_{\text{s}}}{r^{3}}\] \[=\frac{V_{-}(r)}{f(r)}, \tag{5.28}\] where \(V_{-}(r)\) is the _Regge-Wheeler potential_[34]. Putting everything together we obtain the odd-sector Regge-Wheeler action, \[\boxed{S_{\text{odd}}=\int\text{d}^{2}x\sqrt{-g}\left[-\frac{1}{2}(\partial\Psi_{-})^{2}-\frac{1}{2}\frac{V_{-}}{f}\Psi_{-}^{2}\right].} \tag{5.29}\] The equation of motion, \[\Box\Psi_{-}=\frac{V_{-}}{f}\Psi_{-}, \tag{5.30}\] where \[\Box =\partial_{a}(g^{ab}\partial_{b})\] \[=\frac{1}{f}\left(-\partial_{t}^{2}+\partial_{r_{*}}^{2}\right), \tag{5.31}\] is the usual Regge-Wheeler equation [34] for \(\Psi_{-}\), \[\left(-\partial_{t}^{2}+\partial_{r_{*}}^{2}\right)\Psi_{-}=V_{-}\Psi_{-}. \tag{5.32}\] This means that \(\Psi_{-}\) must be proportional to the Regge-Wheeler variable up to time derivatives. Indeed, recalling Martel and Poisson's [11] gauge-invariant definition of the Cunningham-Price-Moncrief variable [35], which is itself a time integral of the original Regge-Wheeler variable [34], we find agreement with \(\Psi_{-}\) up to a numerical factor: \[\Psi_{\mathrm{CPM}} =\frac{r^{3}}{(\ell+2)(\ell-1)}\epsilon^{ab}F_{ab}\] \[=\frac{2r}{(\ell+2)(\ell-1)}\lambda\] \[=\frac{2}{\sqrt{(\ell+2)(\ell-1)}}\Psi_{-}. \tag{5.33}\] We conclude the discussion of the odd sector by noting an interesting alternative approach discussed in, e.g., Ref. [10]. Consider the \(\mathcal{M}^{2}\) 1-form \(h=h_{a}\mathrm{d}x^{a}\). The action is \[S_{\mathrm{odd}}=-\frac{1}{2}\int\mathrm{d}^{2}x\left(r^{4}F\wedge\star F+(\ell+2)(\ell-1)r^{2}h\wedge\star h\right), \tag{5.34}\] and the equation of motion is \[\mathcal{E}=\mathcal{E}_{a}\mathrm{d}x^{a}=-\star\mathrm{d}(r^{4}\star F)-(\ell+2)(\ell-1)r^{2}h. \tag{5.35}\] Taking a divergence by applying \(\mathrm{d}\star\), we find that the 1-form \(r^{2}\star h\) is closed, \[\mathrm{d}(r^{2}\star h)=0. \tag{5.36}\] By the Poincaré lemma we can write it in terms of a scalar potential \(\phi\), \[r^{2}h=\star\mathrm{d}\phi, \tag{5.37}\] or in index notation, \[r^{2}h_{a}=-\epsilon_{ab}\partial^{b}\phi. \tag{5.38}\] Comparing to eq. (5.24) we see that this potential is related to our auxiliary variable \(\lambda\) by \[\phi=-\frac{r^{2}}{(\ell+2)(\ell-1)}\lambda. \tag{5.39}\] The auxiliary field method is a technique for consistently implementing eq. (5.37) at the level of the action.
In particular, if we were to naively plug the solution (5.37) directly into the original action (5.21), the resulting theory would be of fourth order in derivatives of \(\phi\), and could not describe the same physics: it contains two degrees of freedom rather than one, and possesses an Ostrogradski ghost instability [36]. ### Even sector The action for the even sector is given by eq. (5.16a). Expressions for relevant components of the perturbed Ricci tensor are in appendix A. The resulting actions after many intergrations by parts are \[\mathcal{L}^{\text{RW}}_{\text{even}} =-2rr^{c}\hat{h}^{ab}\nabla_{a}\hat{h}_{bc}-2r\hat{h}^{ab}r_{a} \partial_{b}h-\frac{r^{2}R+\ell(\ell+1)+1}{2}\hat{h}_{ab}^{2}+\frac{\ell(\ell+ 1)+2}{4}h^{2} \tag{5.40a}\] \[\quad-2r^{2}\nabla_{a}\hat{h}^{ab}\nabla_{b}K+r^{2}\partial h \cdot\partial K+2rr^{a}K\partial_{a}h+\ell(\ell+1)hK+r^{2}(\partial K)^{2},\] \[\mathcal{L}^{\alpha}_{\text{even}} =-2rr^{c}\hat{h}^{ab}\nabla_{a}\hat{h}_{bc}-2r\hat{h}^{ab}r_{a} \partial_{b}h-\frac{r^{2}R+\ell(\ell+1)+1}{2}\hat{h}_{ab}^{2}+\frac{\ell(\ell +1)+2}{4}h^{2} \tag{5.40b}\] in Regge-Wheeler gauge and in \(\alpha\) gauge, respectively. We begin by noting the well-known fact that these expressions are significantly more complicated than eq. (5.21).22 Footnote 22: One is compelled to wonder who ordered this, especially given that, as we will see, the two sectors have essentially the same dynamics. It is convenient to perform a coordinate-like decomposition on objects with indices by projecting along \(r_{a}\) and the timelike direction \(t_{a}=\epsilon_{ab}r^{b}=f\partial_{a}t\), in terms of which the metric is [11] \[fg_{ab}=r_{a}r_{b}-t_{a}t_{b}. \tag{5.41}\] In particular, we do not lose any information by projecting the traceless perturbation \(\hat{h}\) once along \(r_{a}\)[12], \[\hat{h}_{a}\equiv\hat{h}_{ab}r^{b}, \tag{5.42}\] as we can reconstruct \(\hat{h}_{ab}\) via23 Footnote 23: To see this, consider all contractions with \(r_{a}\) and \(t_{a}\). \[f\hat{h}_{ab}=2r_{\langle a}\hat{h}_{b\rangle}=r_{a}\hat{h}_{b}+r_{b}\hat{h}_{ a}-(r\cdot\hat{h})g_{ab}, \tag{5.43}\] where angular brackets denote traceless symmetrization, \(T_{\langle ab\rangle}=T_{(ab)}-\frac{1}{2}Tg_{ab}\). This simplifies the actions somewhat, \[\mathcal{L}_{\text{even}}^{\text{RW}} =-2r\hat{h}^{ab}\nabla_{a}\hat{h}_{b}-\frac{\ell(\ell+1)+1}{f}\hat{h} _{a}^{2}-2r\hat{h}^{a}\partial_{a}h+\frac{\ell(\ell+1)+2}{4}h^{2}\] \[\quad-2r^{2}\nabla_{a}\hat{h}^{ab}\nabla_{b}K+r^{2}\partial h \cdot\partial K+2rr^{a}K\partial_{a}h+\ell(\ell+1)hK+r^{2}(\partial K)^{2}, \tag{5.44a}\] \[\mathcal{L}_{\text{even}}^{\alpha} =-2r\hat{h}^{ab}\nabla_{a}\hat{h}_{b}-\frac{\ell(\ell+1)+1}{f}\hat {h}_{a}^{2}-2r\hat{h}^{a}\partial_{a}h+\frac{\ell(\ell+1)+2}{4}h^{2}\] \[\quad+\ell(\ell+1)r^{2}\left(r^{2}(t^{a}\partial_{a}\alpha)^{2}+2 f\alpha^{2}-2r_{b}\alpha\nabla_{a}\hat{h}^{ab}-\frac{1}{r^{4}}h\nabla_{a} \left(r^{4}r^{a}\alpha\right)\right). \tag{5.44b}\] For concreteness, let us fix \(\alpha\) gauge. We will discuss Regge-Wheeler gauge at the end of the section. After the gauge freedom has been used up, there are four fields for one underlying dynamical degree of freedom. Two auxiliary variables are apparent by inspection of the action (5.44): \(t^{a}\hat{h}_{a}\sim h_{tr}\) and \(h\). Here we will essentially follow the procedure of Ref. [18] and begin by integrating out the former. To isolate the components of \(\hat{h}_{a}\) we decompose it as \[\hat{h}_{a}=\hat{h}_{0}t_{a}+\hat{h}_{1}r_{a}. 
\tag{5.45}\] We will also need to perform some simple field redefinitions to demix fields. We begin by shifting \(h\), \[h=\tilde{h}-2\hat{h}_{1}. \tag{5.46}\] Note that \(h\) contains both \(h_{tt}\) and \(h_{rr}\), whereas \(\tilde{h}\sim h_{rr}\). In this field basis the action is \[\mathcal{L}_{\text{even}}^{\alpha} =\ell(\ell+1)\hat{h}_{0}^{2}+\frac{\ell(\ell+1)+2}{4}\tilde{h}^{2 }-\ell(\ell+1)\tilde{h}\hat{h}_{1}-2rt^{a}\hat{h}_{0}\partial_{a}\tilde{h}+2 rr^{a}\tilde{h}\partial_{a}\hat{h}_{1}\] \[\quad-\frac{\ell(\ell+1)}{r^{2}}\tilde{h}\nabla_{a}\left(r^{4}r^ {a}\alpha\right)+\ell(\ell+1)r^{2}\left[r^{2}(t^{a}\partial_{a}\alpha)^{2}+2f \alpha^{2}\right]\] \[\quad+2\ell(\ell+1)\left[\hat{h}_{0}t^{a}\partial_{a}(r^{2} \alpha)+2rr^{a}\hat{h}_{1}\partial_{a}(r\alpha)+(1+3f)r\hat{h}_{1}\alpha\right] \tag{5.47}\] We can integrate out \(\hat{h}_{0}\) using its equation of motion, \[\ell(\ell+1)\hat{h}_{0}=rt^{a}\partial_{a}\left(\tilde{h}-\ell(\ell+1)r\alpha \right), \tag{5.48}\] to find \[\mathcal{L}_{\text{even}}^{\alpha} =-\frac{r^{2}}{\ell(\ell+1)}t^{a}t^{b}\partial_{a}\tilde{h} \partial_{b}(\tilde{h}-2\ell(\ell+1)r\alpha)+\frac{\ell(\ell+1)+2}{4}\tilde{h }^{2}-\ell(\ell+1)\tilde{h}\hat{h}_{1}\] \[\quad+2rr^{a}\tilde{h}\partial_{a}\hat{h}_{1}-\frac{\ell(\ell+1) }{r^{2}}\tilde{h}\nabla_{a}\left(r^{4}r^{a}\alpha\right)\] \[\quad+2\ell(\ell+1)\hat{h}_{1}\left(2rr^{a}\partial_{a}(r\alpha)+( 1+3f)r\alpha\right)+2\ell(\ell+1)r^{2}f\alpha^{2}. \tag{5.49}\] Now we perform a second field redefinition,24 comprising a shift to demix \(\alpha\) and \(\tilde{h}\) and an overall rescaling, Footnote 24: The reason for this particular order of operations is that integrating out \(\hat{h}_{0}\) simplifies the kinetic term for \(\alpha\). \[\alpha=\frac{\Lambda}{r^{2}f}\psi+\frac{\tilde{h}}{2\ell(\ell+1)r}, \tag{5.50}\] where we have introduced the function [11] \[\Lambda(r)\equiv\ell(\ell+1)+1-3f. \tag{5.51}\] The action becomes \[\mathcal{L}^{\alpha}_{\text{even}} =\left(\frac{\ell(\ell+1)+1}{4}+\left(\frac{1}{2\ell(\ell+1)}-1 \right)f\right)\tilde{h}^{2}-\Lambda\tilde{h}\tilde{h}_{1}-\frac{\ell(\ell+1) \Lambda}{f}r^{a}\tilde{h}\partial_{a}\psi\] \[\quad+\frac{2\Lambda r}{f}t^{a}t^{b}\partial_{a}\tilde{h}\partial _{b}\psi-\frac{(\ell+2)(\ell-1)(\ell(\ell+1)+\Lambda)}{r}\tilde{h}\psi+\frac{ 2\ell(\ell+1)\Lambda^{2}}{r^{2}f}\psi^{2}\] \[\quad+\frac{2\ell(\ell+1)}{f}\hat{h}_{1}\left[2\Lambda r^{a} \partial_{a}\psi+\frac{3f(\ell(\ell+1)-f)-\ell(\ell+1)-1}{r}\psi\right]. \tag{5.52}\] Note that \(\psi\) is precisely the gauge-invariant Zerilli-Moncrief function defined in Ref. [11], multiplied by \(-1/4\). The upshot of all these field redefinitions is that two of the remaining three fields are manifestly non-dynamical: \(\hat{h}_{1}\) is a Lagrange multiplier (it appears linearly) and \(\tilde{h}\) is auxiliary (it appears quadratically but without derivatives). The constraint obtained by varying with respect to \(\hat{h}_{1}\) fixes \(\tilde{h}\) in terms of \(\psi\), \[\tilde{h}=\frac{2\ell(\ell+1)}{f}\left[2r^{a}\partial_{a}+\frac{3f(\ell(\ell+ 1)-f)-\ell(\ell+1)-1}{r\Lambda}\right]\psi, \tag{5.53}\] while the equation of motion for \(\tilde{h}\) is \[0 =\Lambda\hat{h}_{1}+\frac{(\ell(\ell+1)+1-f)\Lambda}{f}r^{a} \partial_{a}\psi+\frac{(\ell+2)(\ell-1)(\ell(\ell+1)+\Lambda)}{r}\psi\] \[\quad+\frac{2r\Lambda}{f}t^{a}t^{b}\nabla_{a}\nabla_{b}\psi- \left(\frac{\ell(\ell+1)+1}{2}+\left(\frac{1}{\ell(\ell+1)}-2\right)f\right) \tilde{h}. \tag{5.54}\] This fixes \(\hat{h}_{1}\) once we use eq. 
(5.53), although we do not need to know \(\hat{h}_{1}\) in order to integrate it out of the action, as it multiplies the constraint (5.53) that it enforces. We will however need this equation in order to construct off-shell duality operators for the metric perturbations. Plugging eq. (5.53) into the action we finally obtain, after some integrations by parts and algebra, \[\mathcal{L}^{\alpha}_{\text{even}}=-4\ell(\ell+1)(\ell+2)(\ell-1)\left[( \partial\psi)^{2}+\frac{V_{+}}{f}\psi^{2}\right], \tag{5.55}\] where \[V_{+}=\frac{f}{3r^{2}}\left(\Lambda+\frac{2(\ell+2)^{2}(\ell-1)^{2}\left(1+\ell( \ell+1)\right)}{\Lambda^{2}}\right) \tag{5.56}\] is the _Zerilli potential_[37]. Finally we canonically normalize, \[\psi=\frac{1}{2\sqrt{\ell(\ell+1)(\ell+2)(\ell-1)}}\Psi_{+}, \tag{5.57}\] to obtain the Zerilli action for the even sector: \[\boxed{\mathcal{L}_{\text{even}}=-\frac{1}{2}(\partial\Psi_{+})^{2}-\frac{1}{ 2}\frac{V_{+}}{f}\Psi_{+}^{2}.} \tag{5.58}\] The main benefit of working with \(\alpha\) gauge is that the field redefinitions we needed to perform did not involve derivatives, but a choice of gauge is not a choice of physics, and indeed in Regge-Wheeler gauge we can follow a similar procedure to reduce the action (5.40a) to the Zerilli action (5.58). We begin again by integrating out \(h_{tr}\sim\hat{h}_{0}\), while \(h_{tt}\sim h-2\hat{h}_{1}\) is a Lagrange multiplier that imposes a constraint on \(K\) and \(h_{rr}\sim h+2\hat{h}_{1}\) (and in turn drops out of the action). To demix the remaining two variables and canonically normalize we perform a field redefinition, \[h+2\hat{h}_{1}=\sqrt{\frac{\ell(\ell+1)}{2(\ell+2)(\ell-1)}}\frac{\Lambda}{rf} \Psi_{+}+\frac{1}{f}\left(2rr^{a}\partial_{a}-\Lambda\right)K. \tag{5.59}\] The even sector is inordinately complicated, and the procedure we have done is not unique, and may not be the simplest or clearest. Alternative approaches would therefore be interesting to explore. An obvious alternative is to integrate out \(h\) first rather than \(\hat{h}_{0}\). Furthermore, the decomposition (5.45) can be swapped for a more elegant argument in terms of differential forms analogously to the odd sector [12], which may therefore admit an auxiliary variable formulation. And of course an approach eliding the Regge-Wheeler and Zerilli equations altogether in favor of the Teukolsky equation would be of exceptional interest. ## 6 Chandrasekhar duality The linearized Einstein-Hilbert action is a complicated functional of the metric perturbations (cf. eqs. (5.21) and (5.40)), but by integrating out the non-dynamical degrees of freedom we obtained a simple action in terms of the Regge-Wheeler and Zerilli variables, \[\boxed{S=\sum_{\ell=2}^{\infty}\sum_{m=-\ell}^{\ell}\sum_{\pm}\int\mathrm{d}^{ 2}x\sqrt{-g}\left(-\frac{1}{2}(\partial\Psi_{\pm})^{2}-\frac{1}{2f}V_{\pm} \Psi_{\pm}^{2}\right),} \tag{6.1}\] where \(V_{+}\) and \(V_{-}\) are the usual Zerilli [37] and Regge-Wheeler [34] potentials, respectively. It is important to pause here to emphasize the difference between on-shell and off-symmetries. We could have constructed eq. (6.1) directly from the Regge-Wheeler and Zerilli equations, but it was a non-trivial exercise to get there from the Einstein-Hilbert action using standard field theory tools. Having done this exercise, we will be able to construct an off-shell duality symmetry of the original action (5.2). First let us demonstrate the duality invariance of the Regge-Wheeler/Zerilli action (6.1). 
It is a remarkable fact that the two seemingly-disparate potentials \(V_{\pm}\) (cf. eqs. (5.28) and (5.56)) can be written in a unified form in terms of a single _superpotential_[1, 2, 38, 39, 40, 41],25 Footnote 25: Note also the relation \(V_{+}=V_{-}-2\partial_{r_{\star}}^{2}\Lambda\)[42]. For electric-magnetic duality on charged black hole backgrounds, see Ref. [43]. \[V_{\pm}=W^{2}\mp r^{a}\partial_{a}W+\beta, \tag{6.2}\] where the superpotential \(W(r)\) and constant \(\beta\) are given by \[W(r)=-\left(\frac{3}{2}\frac{rRf}{\Lambda}+\sqrt{-\beta}\right),\quad\beta \equiv-\left(\frac{\ell(\ell+1)(\ell+2)(\ell-1)}{6r_{\rm s}}\right)^{2}. \tag{6.3}\] It is straightforward to check that the action (6.1) is invariant under the duality symmetry \[\boxed{\delta\Psi_{\pm}=\left(r^{a}\partial_{a}\mp W\right)\Psi_{\mp}.} \tag{6.4}\] The transformation (6.4) is an off-shell symmetry of the action, and coincides on shell with the venerable _Chandrasekhar duality_[1, 38, 39, 40].26 This "hidden" symmetry of the linearized Einstein equations relates a solution \(\Psi_{\pm}\) to the Regge-Wheeler or Zerilli equation to a solution \(\Psi_{\mp}\) to the other equation, which can be constructed in frequency space via27 Footnote 26: For Chandrasekhar duality beautifully visualized, see Ref. [44]. Footnote 27: The prefactor is not strictly necessary since any constant multiple of this will also be a solution. However we include this prefactor to emphasize the existence of _algebraically-special_ modes for which \(\omega^{2}=\beta\), where special care must be taken. We will not discuss algebraically-special modes in this work. \[\Psi_{\pm}=\frac{1}{\beta-\omega^{2}}\left(\frac{\mathrm{d}}{\mathrm{d}r_{ \star}}\Psi_{\mp}\mp W\Psi_{\mp}\right). \tag{6.5}\] We note that, intriguingly, this symmetry structure also appears in _supersymmetric quantum mechanics_, the theory of \(0+1\)-dimensional supersymmetry [45].28 The Chandrasekhar duality is responsible for the crucial result that, for four-dimensional black holes in GR, the even and odd sectors are _isospectral_, meaning they share the same quasinormal mode spectrum.29 Footnote 29: This is a consequence of the fact that, if \(\Psi_{\pm}\) satisfies the boundary conditions which define a quasinormal mode, then the \(\Psi_{\mp}\) generated by eq. (6.5) does as well, so the Chandrasekhar transformation relates quasinormal modes of even and odd parity without changing the frequency (excluding algebraically-special modes). As we will see, this is also true for the infalling boundary conditions used to calculate Love numbers. With the off-shell symmetry (6.4) in hand, we can compute conserved quantities using the Noether procedure. The conservation law, in coordinates, is \[\partial_{t}J^{t}+\partial_{r_{\star}}J^{r_{\star}}=0, \tag{6.8}\] with the current \[J^{t} =\dot{\Psi}_{+}A^{\dagger}\Psi_{-}-\dot{\Psi}_{-}A\Psi_{+}\] \[=-\Psi_{+}^{\prime}\dot{\Psi}_{-}-\dot{\Psi}_{+}\Psi_{-}^{\prime} +W\left(\Psi_{-}\dot{\Psi}_{+}-\Psi_{+}\dot{\Psi}_{-}\right), \tag{6.9a}\] \[J^{r_{\star}} =\dot{\Psi}_{+}\dot{\Psi}_{-}-(A\Psi_{+})(A^{\dagger}\Psi_{-})- \beta\Psi_{+}\Psi_{-}\] \[=\dot{\Psi}_{+}\dot{\Psi}_{-}+\Psi_{+}^{\prime}\Psi_{-}^{\prime} +W\left(\Psi_{+}\Psi_{-}^{\prime}-\Psi_{-}\Psi_{+}^{\prime}\right)-\left(W^{2 }+\beta\right)\Psi_{+}\Psi_{-}. \tag{6.9b}\] Here overdots denote derivatives with respect to \(t\), and primes denote \(\partial_{r_{\star}}\) derivatives. 
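This unified form is easy to verify symbolically. The sketch below (not part of the derivation) takes the Regge-Wheeler and Zerilli potentials from eqs. (5.28) and (5.56), uses \(R=2r_{\rm s}/r^{3}\) for the \(2D\) Ricci scalar and \(r^{a}\partial_{a}=f\partial_{r}=\partial_{r_{\star}}\), and confirms eq. (6.2) together with the fact, used in section 7, that \(W^{2}+\beta\) vanishes at both the horizon and infinity. All printed quantities should be zero.

```python
# Symbolic check of the superpotential relation (6.2)-(6.3); a sketch assuming
# R = 2 r_s / r^3 for the 2D Ricci scalar and d/dr_* = f d/dr.
import sympy as sp

r, rs, ell = sp.symbols('r r_s ell', positive=True)
f = 1 - rs/r
L = ell*(ell + 1)
Lam = L + 1 - 3*f                                      # eq. (5.51)

Vm = f*(L/r**2 - 3*rs/r**3)                            # Regge-Wheeler potential
Vp = (f/(3*r**2))*(Lam + 2*(ell + 2)**2*(ell - 1)**2*(1 + L)/Lam**2)   # Zerilli potential

b = L*(ell + 2)*(ell - 1)/(6*rs)                       # b = sqrt(-beta)
beta = -b**2
R2 = 2*rs/r**3
W = -(sp.Rational(3, 2)*r*R2*f/Lam + b)                # eq. (6.3)
dW = f*sp.diff(W, r)                                   # r^a d_a W = dW/dr_*

print(sp.simplify(Vp - (W**2 - dW + beta)))            # V_+ = W^2 - dW/dr_* + beta
print(sp.simplify(Vm - (W**2 + dW + beta)))            # V_- = W^2 + dW/dr_* + beta
print(sp.simplify((W**2 + beta).subs(r, rs)))          # vanishes at the horizon
print(sp.limit(W**2 + beta, r, sp.oo))                 # and at infinity
```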
### A complex master variable Similarly to the spin-1 case, we can combine the Regge-Wheeler and Zerilli variables into a complex variable, \[\Psi\equiv\frac{\Psi_{+}+i\Psi_{-}}{\sqrt{2}}, \tag{6.10}\] in terms of which the Lagrangian (6.1) takes a very simple form, \[\mathcal{L} =-\frac{1}{2}\sum_{\pm}\left((\partial\Psi_{\pm})^{2}+\frac{V_{ \pm}}{f}\Psi_{\pm}^{2}\right)\] \[=-\partial_{a}\Psi\partial^{a}\bar{\Psi}+\frac{1}{2f}r^{a} \partial_{a}W(\Psi^{2}+\bar{\Psi}^{2})-\frac{W^{2}+\beta}{f}\Psi\bar{\Psi} \tag{6.11}\] as does the duality transformation, \[\delta\Psi=i\left(r^{a}\partial_{a}\bar{\Psi}+W\Psi\right). \tag{6.12}\] Let us confirm this is a symmetry. Under a general variation, the Lagrangian changes as \[\delta\mathcal{L}=\bar{\mathcal{E}}\delta\Psi+\mathcal{E}\delta\bar{\Psi}, \tag{6.13}\] where the equation of motion \(\mathcal{E}\) is \[\mathcal{E}\equiv\Box\Psi+\frac{1}{f}r^{a}\partial_{a}W\bar{\Psi}-\frac{W^{2}+ \beta}{f}\Psi. \tag{6.14}\] In terms of the quantity \[\bar{Q}=r^{a}\partial_{a}\bar{\Psi}+W\Psi, \tag{6.15}\] the variation of the Lagrangian under \(\delta\Psi=i\bar{Q}\) is \[\delta\mathcal{L} =i\left(\bar{Q}\bar{\mathcal{E}}-Q\mathcal{E}\right)\] \[=2\operatorname{Im}Q\mathcal{E}. \tag{6.16}\] Now we calculate \(Q\mathcal{E}\) and freely integrate by parts, \[Q\mathcal{E} =\left(r^{a}\partial_{a}\Psi+W\bar{\Psi}\right)\left(\Box\Psi+ \frac{1}{f}r^{a}\partial_{a}W\bar{\Psi}-\frac{W^{2}+\beta}{f}\Psi\right)\] \[=W\left(-\partial_{a}\Psi\partial^{a}\bar{\Psi}+\frac{r^{a} \partial_{a}W}{f}\left(\Psi^{2}+\bar{\Psi}^{2}\right)-\frac{W^{2}+\beta}{f} \Psi\bar{\Psi}\right). \tag{6.17}\] The last line is manifestly real, so that the variation of the Lagrangian vanishes as expected, \[\delta\mathcal{L}=2\operatorname{Im}Q\mathcal{E}=0. \tag{6.18}\] Using similar manipulations we can also calculate the conserved current, \[J_{a}=v_{a}-\bar{v}_{a}+W\left(\bar{\Psi}\partial_{a}\Psi-\Psi\partial_{a}\bar {\Psi}\right)-\frac{W^{2}+\beta}{2f}r_{a}\left(\Psi^{2}-\bar{\Psi}^{2}\right) \tag{6.19}\] where we have defined \[v^{a}\partial_{a}=-(\partial_{r}\Psi)(\partial_{t}\Psi)\partial_{t}+\frac{f^{ 2}(\partial_{r}\Psi)^{2}+(\partial_{t}\Psi)^{2}}{2}\partial_{r} \tag{6.20}\] such that \(r^{a}\partial_{a}\Psi\Box\Psi=\nabla_{a}v^{a}\). Analogously to the spin-1 case, it is natural to wonder whether this complex master variable is related to the middle-weight Weyl scalar, \(\Psi_{2}\). A new complication in the gravitational case is that \(\Psi_{2}\) has a background value, and accordingly its perturbation \(\delta\Psi_{2}\) is not gauge-invariant. Nevertheless one can construct a gauge-invariant version \(\widetilde{\delta\Psi_{2}}\) which contains the Regge-Wheeler and Zerilli variables [46], \[\widetilde{\delta\Psi_{2}}=\frac{\sqrt{(\ell-1)\ell(\ell+1)(\ell+2)}}{2r^{3}} \left[\frac{\Lambda}{(\ell+2)(\ell-1)}\Psi_{+}+i\Psi_{-}\right]Y(\theta) \tag{6.21}\] This is not quite our master variable \(\Psi\), as the real (even) piece is a rescaling of the Zerilli variable. We leave a further exploration of this question for future work. ### Flat-space limit: linearized gravitational duality We can gain some physical insight by looking at the flat-space limit, \(r_{\rm s}\to 0\). The expression (6.4) for \(\delta\Psi_{\pm}\) diverges due to the \(1/r_{\rm s}\) scaling in \(W(r)\), which can be remedied by sending \(\delta\Psi_{\pm}\to r_{\rm s}\delta\Psi_{\pm}\) before taking the limit. 
In this limit we have an \(SO(2)\) symmetry acting on \((\Psi_{+},\Psi_{-})\) similar to the electromagnetic case, \[\delta\Psi_{+} =-\Psi_{-}, \tag{6.22a}\] \[\delta\Psi_{-} =\Psi_{+}. \tag{6.22b}\] Direct calculation shows that, on shell, this duality generates rotations between the Riemann tensor and its dual, \[\delta R_{\mu\nu ab} =\star R_{\mu\nu ab}, \tag{6.23a}\] \[\delta\star R_{\mu\nu ab} =-R_{\mu\nu ab}, \tag{6.23b}\] where the dual Riemann tensor is defined as \[\star R_{\mu\nu ab}=\frac{1}{2}\epsilon_{\mu\nu\rho\sigma}R^{\rho\sigma}{}_{ ab}. \tag{6.24}\] This is the well-known gravitational "electric-magnetic" duality, lifted to an off-shell symmetry for linear perturbations around flat space [47]. We conclude that the symmetry (6.4) is an extension of electromagnetic duality to Schwarzschild backgrounds. An off-shell duality symmetry has also been found to hold for Minkowski [47], de Sitter [48], and anti-de Sitter backgrounds [49]. Adding to this list Schwarzschild, which is less symmetric than the others, raises interesting questions: which other backgrounds possess a linearized duality symmetry, and what physical mechanism underlies these symmetries? ### Chandrasekhar duality off-shell The symmetry (6.4) can be lifted to a symmetry of the linearized Einstein-Hilbert action in terms of the metric perturbations, eqs. (5.21) and (5.40), analogously to electromagnetism. The calculation itself is cumbersome and not especially enlightening, so we will outline the steps without presenting full expressions. Let us begin with the transformation of the odd-sector variable \(h_{a}\). Using its solution (5.24) and undoing various rescalings, we have \[\delta h_{a}=\frac{1}{\sqrt{2\ell(\ell+1)r^{2}}}\epsilon_{ab}\partial^{b}\left( r\delta\Psi_{-}\right), \tag{6.25}\] where \(\delta\Psi_{-}\) is given by eq. (6.4). That expression is constructed from \(\Psi_{+}\), which we in turn write in terms of even-sector metric perturbations by following the chain of field redefinitions. For the even sector, we vary the expressions in terms of \(\Psi_{+}\) for \(h_{ab}\) and \(\alpha\) or \(K\), use eq. (6.4), and relate \(\Psi_{-}\) to \(h_{a}\) via \[\Psi_{-}=\frac{r^{3}}{2\sqrt{(\ell+2)(\ell-1)}}\epsilon^{ab}F_{ab}. \tag{6.26}\] In this way we construct (rather complicated) expressions \(\delta h_{\mu\nu}[h]\) which one can verify by explicit calculation comprise an off-shell symmetry of eqs. (5.21) and (5.40). Interestingly they can be simplified somewhat using the equations of motion, in which case the expressions become entirely local. A natural question for future investigation is whether the \(\delta h_{\mu\nu}\) constructed this way is equal to a dual potential \(\tilde{h}_{\mu\nu}\). Since only the electric part of the Weyl tensor has a non-vanishing background value, the linearized duality transformations do not simply rotate \(C_{\mu\nu\alpha\beta}\) and \(\tilde{C}_{\mu\nu\alpha\beta}\). ## 7 Physical implications: Love numbers Another aspect of black hole perturbation theory in which symmetry has recently been found to play a crucial role is in the computation of _tidal Love numbers_. In particular, the puzzle over the unexpected vanishing of black hole Love numbers [50, 51, 52, 53, 54] spurred the discovery of underlying symmetry structures [55, 56, 57, 58]. It turns out that the duality symmetry which is the focus of this paper also plays a role in the symmetry story for Love numbers. 
Consider the Regge-Wheeler action (5.22) in the static sector, i.e., setting time derivatives to zero, \[\mathcal{L}^{\omega=0}_{\text{odd}}=\frac{1}{2}r^{4}h_{0}^{\prime 2}+\frac{( \ell+2)(\ell-1)}{2}r^{2}\left(\frac{1}{f}h_{0}^{2}-fh_{1}^{2}\right), \tag{7.1}\] where primes denote \(r\) derivatives. In the static limit \(h_{1}\) is auxiliary and decouples from \(h_{0}\), so can be consistently set to zero. The Regge-Wheeler variable \(\Psi_{-}\) is related to \(h_{0}\) by \[\Psi_{-}=r^{3}h_{0}^{\prime},\quad h_{0}=\frac{f}{r^{2}}\partial_{r}\left(r \Psi_{-}\right). \tag{7.2}\] In Ref. [55] it was shown that the static Regge-Wheeler equation is invariant under _ladder symmetries_ which are responsible for the vanishing of tidal Love numbers in the odd sector. These come in the form of raising and lowering operators which relate solutions of the Regge-Wheeler equation to a solution with \(\ell\) raised or lowered by one, \[D_{\ell}^{+}=-r^{2}f\partial_{r}+\frac{\ell^{2}+3}{2(\ell+1)}r_{ \text{s}}-\ell r, \tag{7.3a}\] \[D_{\ell}^{-}=r^{2}f\partial_{r}+\frac{\ell^{2}(r_{\text{s}}-2r)- 2\ell(r-r_{\text{s}})+4r_{\text{s}}}{2\ell}. \tag{7.3b}\] At the lowest rung of the ladder, \(\ell=2\), there is a further symmetry given by \(\delta\Psi_{-}^{\ell=2}=Q_{2}\Psi_{-}^{\ell=2}\), with \[Q_{2}=r^{6}f\partial_{r}-3r^{5}f. \tag{7.4}\] It follows that any \(\ell\) mode is symmetric under the "horizontal" ladder symmetry \[\delta\Psi_{-}=Q_{\ell}\Psi_{-}, \tag{7.5}\] where \(Q_{\ell}\) is built recursively from \(Q_{2}\), \[Q_{\ell}\equiv D_{\ell-1}^{+}Q_{\ell-1}D_{\ell}^{-}. \tag{7.6}\] Transforming from \(\Psi_{-}\) to \(h_{0}\), we see that the metric transforms under the horizontal odd-sector ladder symmetry as \[h_{0}\to h_{0}+\frac{f}{r^{2}}\partial_{r}\left[rQ_{\ell}(r^{3}h_{0}^{ \prime})\right],\quad h_{1}\to h_{1}. \tag{7.7}\] It is straightforward to check that eq. (7.7) is a symmetry of eq. (7.1). However, such a symmetry of the Zerilli equation is not apparent. Indeed, the argument for the vanishing of Love numbers for the Zerilli equation in Ref. [55] relied on the fact, as we will show, that the duality invariance (6.4) implies that the even and odd Love numbers are equal.30 Footnote 30: This is to some extent an artifact of our insistence on working with the Regge-Wheeler and Zerilli master equations. The main result in Ref. [55] worked with the Teukolsky equation, which, besides not being limited to the Schwarzschild case, contains both even and odd modes. Ladder operators for the Zerilli equation can be constructed straightforwardly by sandwiching a Regge-Wheeler ladder operator between two applications of the duality symmetry, e.g., for the horizontal operators, \[\delta\Psi_{+,\ell}=\left(\partial_{r_{*}}-W\right)Q_{\ell}\left(\partial_{r _{*}}+W\right)\Psi_{+,\ell}. \tag{7.8}\] It would be very interesting to know whether this symmetry is responsible for universal relations such as I-Love-Q [59, 60]. ### Equality of Love numbers from gravitational duality Let us finish by establishing that the vanishing of the duality Noether current requires the tidal Love numbers in the even and odd sectors to be equal. Following Ref. 
[61], we calculate the Love numbers for static solutions by imposing regularity at the horizon and examining the behavior of the fields at infinity, \[\Psi_{\pm}\to\bar{\Psi}_{\pm}\left(r^{\ell+1}+\hat{\lambda}_{\pm}r^{-\ell} \right), \tag{7.9}\] where \(\hat{\lambda}_{\pm}\) are the Love numbers for the even (\(+\)) and odd (\(-\)) sectors and \(\bar{\Psi}_{\pm}\) are constants. Since we are looking at static solutions, conservation of the Noether current (6.8) becomes the statement that the \(r_{\star}\) component (6.9b) is constant. First we need to ensure that the duality transformation (6.4) preserves the boundary conditions, namely, if \(\Psi_{\pm}\) is regular at the horizon, then so is \(\partial_{r_{\star}}\Psi_{\pm}\mp W(r)\Psi_{\pm}\). From eq. (6.3) we see that \(W(r_{\rm s})\) is finite, which leaves us to check that \(\partial_{r_{\star}}\Psi_{\pm}=f(r)\partial_{r}\Psi_{\pm}\) is regular at \(r=r_{\rm s}\). We can see this by solving the Regge-Wheeler and Zerilli equations perturbatively near the horizon, \[f\partial_{r}(f\partial_{r}\Psi_{\pm})=V_{\pm}\Psi_{\pm}. \tag{7.10}\] It is convenient to use \(f\) as our radial coordinate, so that we can simply expand around \(f=0\) to look at the horizon. Using the fact that the Regge-Wheeler and Zerilli potentials both scale as \(f\) near the horizon, and that \(\partial_{r}=f^{\prime}(r)\partial_{f}\approx\partial_{f}/r_{\rm s}\), we have \[f\partial_{f}(f\partial_{f}\Psi_{\pm})\approx\frac{(\ell-1)\ell(\ell+1)(\ell+ 2)\pm 3}{\ell(\ell+1)+1}f\Psi_{\pm}. \tag{7.11}\] Near the horizon this is solved by \[\Psi_{\pm}=c_{1}^{\pm}\left(1+\mathcal{O}(f)\right)+c_{2}^{\pm}\ln f\left(1+ \mathcal{O}(f)\right). \tag{7.12}\] Regularity at the horizon demands \(c_{2}^{+}=c_{2}^{-}=0\), so that \(f\partial_{r}\Psi_{\pm}\approx f\partial_{f}\Psi_{\pm}\to 0\) as \(f\to 0\).31 So if \(\Psi_{\pm}\) is a solution with boundary conditions suitable for computing Love numbers, then \(\tilde{\Psi}_{\pm}\equiv\Psi_{\pm}+\delta\Psi_{\pm}\) is as well. Footnote 31: We must also check that the subdominant terms do not blow up at the horizon. Assuming that the subdominant terms on the log side go as \(f^{n}\ln f\), then these contribute harmlessly to \(f\partial_{f}\Psi_{\pm}\) for \(n>0\), and moreover they vanish upon perturbatively solving the equation of motion. Now we simply need to compute \(J^{r_{\star}}\) at the horizon and at infinity and equate the two, where for static solutions \[J^{r_{\star}}=\Psi_{+}^{\prime}\Psi_{-}^{\prime}+W\left(\Psi_{+}\Psi_{-}^{ \prime}-\Psi_{-}\Psi_{+}^{\prime}\right)-(W^{2}+\beta)\Psi_{+}\Psi_{-}. \tag{7.13}\] We begin by evaluating this at the horizon. Primes denote \(r_{\star}\) derivatives, and we are assuming that \(\Psi_{\pm}\) are regular at the horizon, so \(\Psi_{\pm}^{\prime}=(1-r_{\rm s}/r)\partial_{r}\Psi_{\pm}=0\) at \(r=r_{\rm s}\). From eq. (6.3) we see \(W^{2}(r_{\rm s})+\beta=0\), so that the current vanishes for static solutions with regular boundary conditions, \[J^{r_{\star}}=0. \tag{7.14}\] At infinity, we again have \(W^{2}(\infty)+\beta=0\), so the leading-order terms will be those with only one derivative, \[J^{r_{\star}} \to W(\infty)\left(\Psi_{+}\Psi_{-}^{\prime}-\Psi_{-}\Psi_{+}^{ \prime}\right)\] \[=-\frac{\bar{\Psi}_{+}\bar{\Psi}_{-}}{6r_{\rm s}}\ell(\ell+1)( \ell-1)(\ell+2)(2\ell+1)\left(\hat{\lambda}_{+}-\hat{\lambda}_{-}\right). 
\tag{7.15}\] Since \(J^{r_{\star}}=0\) everywhere, we conclude that \[\hat{\lambda}_{+}=\hat{\lambda}_{-}, \tag{7.16}\] i.e., the even and odd sectors are forced to have equal Love numbers as a consequence of symmetry. It turns out that both of these Love numbers are strictly zero [50, 51, 52, 53, 54], which is a consequence of a different symmetry than the duality considered in this paper [55, 56, 57, 58], but these conclusions are distinct from each other, i.e., eq. (7.16) does not just say \(0=0\). The equality of Love numbers follows from the invariance under duality of the boundary conditions. This is clearly the case for black holes, where regularity at the horizon implies the duality-invariant condition \(c_{2}^{+}=c_{2}^{-}=0\), but one could also in principle imagine a horizonless compact object with non-zero but (approximately) equal Love numbers, provided that whatever boundary conditions are chosen at its surface are invariant under duality and that the object is sufficiently compact that \(J^{r_{\star}}\approx 0\). ## 8 Discussion We have computed the actions for scalar, electromagnetic, and linearized gravitational fields on a Schwarzschild background in the \(2+2\) formalism. In each case we focused on isolating and canonically normalizing the underlying dynamical degrees of freedom. In the cases of electromagnetism and gravity, this exercise revealed a manifest electric-magnetic duality symmetry, which holds off shell and accordingly can be used to construct conserved quantities. As a physical application of the Noether current associated to linearized gravitational duality, we showed that duality forces the even- and odd-parity perturbations to have identical tidal responses. Combining this duality with a "ladder" symmetry [55], which causes the odd Love numbers to vanish, therefore extends that particular argument for vanishing Love numbers to even perturbations. It would be interesting to explore whether these symmetries play a role in universal relations for compact objects. In the case of electromagnetism, we found a clear connection to objects arising in the Newman-Penrose and Geroch-Held-Penrose formalisms: the dynamical master variable is related to the middle-weight Maxwell scalar \(\phi_{1}\). This observation enabled us to derive actions for the Fackerell-Ipser equation and Teukolsky-Starobinsky identities. It would be quite interesting to extend these constructions to the Teukolsky equation for the extreme-weight Maxwell scalars, to gravity, and to Kerr, which is the case of prime astrophysical interest. We leave these questions for future work. ## Acknowledgements I am grateful to Lam Hui, Austin Joyce, Riccardo Penco, and Luca Santoni for collaboration, and many insightful discussions, on duality and other topics in black hole perturbation theory. This work made substantial use of xAct and the diffgeo Mathematica package by Matthew Headrick. My research is partially supported by funds from the Natural Sciences and Engineering Research Council (NSERC) of Canada. Research at the Perimeter Institute is supported in part by the Government of Canada through NSERC and by the Province of Ontario through MRI. ## Appendix A \(2+2\) Ricci tensor components In this appendix we reproduce the components of the linearized Ricci tensor \(\delta R_{ab}\) in the \(2+2\) split [10, 11, 12], using the partially gauge-fixed metric perturbation (5.11).
We do not present a complete list but focus on components of \(\delta R_{ab}\) necessary to compute the linearized Einstein-Hilbert action, cf. eqs. (5.16a) and (5.16b). ### Odd perturbations In the odd sector, the only non-zero component is \[\delta R_{aA}^{B} =\left(\frac{1}{2r^{2}}\nabla^{b}\left(r^{4}F_{ab}\right)-h_{a} \right)B_{A}+h_{a}D^{B}D_{[A}B_{B]}\] \[=\underbrace{\left(\frac{1}{2r^{2}}\nabla^{b}\left(r^{4}F_{ab} \right)+\frac{(\ell+2)(\ell-1)}{2}h_{a}\right)}_{\delta R_{a}^{B}}B_{A},\] (A.1) where \[F_{ab}=\partial_{a}h_{b}-\partial_{b}h_{a}.\] (A.2) In going to the second line we used the identity \[D^{B}D_{[A}B_{B]}=\frac{1}{2}\ell(\ell+1)B_{A}.\] (A.3) To prove this, notice that \(D_{[A}B_{B]}\propto\epsilon_{AB}\) by symmetry, where the coefficient is \[D_{[A}B_{B]} =\frac{\epsilon^{CD}D_{C}B_{D}}{2}\epsilon_{AB}\] \[=\frac{\ell(\ell+1)}{2}\epsilon_{AB},\] (A.4) where we have used the definition (2.10b) of \(B_{A}\). Taking a derivative and using the definition again, the result follows. ### Even perturbations To compute the Lagrangian (5.16a) we need the following components of the perturbed Ricci tensor: \[\delta R_{ab},\quad\Omega^{AB}\delta R_{AB},\quad\delta R_{a}^{E}.\] (A.5) Note that we do not need the piece of \(\delta R_{a}^{E}\) involving \(K\). The relevant pieces of the relevant components are \[\delta R_{ab} =\left(\frac{1}{2}r^{-2}\nabla_{c}\left(r^{2}\nabla_{d}\hat{h}^{cd} \right)-\frac{1}{4}\Box h\right)g_{ab}+\frac{2}{r}r_{c}\hat{C}^{c}{}_{\langle ab \rangle}+\frac{1}{r}r_{\langle a}\partial_{b\rangle}h\] \[\quad-r^{-2}\nabla_{(a}\left(r^{2}\nabla_{b)}K\right)-\frac{\ell( \ell+1)}{r^{2}}\nabla_{(a}\left(r^{2}r_{b)}\alpha\right)+\frac{1}{2}R\hat{h}_{ ab}+\frac{\ell(\ell+1)}{2r^{2}}h_{ab},\] (A.6a) \[\Omega^{AB}\delta R_{AB} =2\nabla_{a}(rr_{b}\hat{h}^{ab})+\frac{\ell(\ell+1)+2}{2}h-r^{2} \nabla_{a}\left(r^{4}\nabla^{a}K\right)+(\ell+2)(\ell-1)K\] \[\quad-\ell(\ell+1)\left[r^{2}r^{a}\partial_{a}\alpha+(1+3f)r \alpha\right],\] (A.6b) \[\delta R_{a}^{E} =\frac{1}{2}\nabla^{b}\hat{h}_{ab}-\frac{1}{4}\nabla_{a}h+\frac{ 1}{2}\partial_{a}\ln rh-\frac{1}{r^{2}}\nabla^{b}\left[r^{4}r_{[a}\nabla_{b]} \alpha\right]-r_{a}\alpha.\] (A.6c) Angular brackets denote tracefree symmetrization, \[T_{\langle ab\rangle}=T_{(ab)}-\frac{1}{2}T^{c}{}_{c}g_{ab}.\] (A.7) In the above we have defined \[\hat{C}^{c}{}_{ab}=\nabla_{(a}\hat{h}_{b)}^{c}-\frac{1}{2}\nabla^{c}\hat{h}_{ ab}.\] (A.8) The expression for \(\delta R_{ab}\) contains the term \(\nabla_{c}\hat{C}^{c}{}_{ab}\) (cf. Ref. [11]), which we have simplified using the identity \[\nabla^{c}\nabla_{(a}p_{b)c}-\frac{1}{2}\Box p_{ab}-\frac{1}{2}g_{ab}\nabla^{ c}\nabla^{d}p_{cd}=\frac{R}{2}p_{ab}\] (A.9) for symmetric traceless tensors \(p_{ab}\) in \(D=2\), \[\nabla_{c}\hat{C}^{c}{}_{ab}=\frac{1}{2}R\hat{h}_{ab}+\frac{1}{2}\nabla_{c} \nabla_{d}\hat{h}^{cd}g_{ab}.\] (A.10)
2310.12360
GRI: Graph-based Relative Isomorphism of Word Embedding Spaces
Automated construction of bilingual dictionaries using monolingual embedding spaces is a core challenge in machine translation. The end performance of these dictionaries relies upon the geometric similarity of individual spaces, i.e., their degree of isomorphism. Existing attempts aimed at controlling the relative isomorphism of different spaces fail to incorporate the impact of semantically related words in the training objective. To address this, we propose GRI that combines the distributional training objectives with attentive graph convolutions to unanimously consider the impact of semantically similar words required to define/compute the relative isomorphism of multiple spaces. Experimental evaluation shows that GRI outperforms the existing research by improving the average P@1 by a relative score of up to 63.6%. We release the codes for GRI at https://github.com/asif6827/GRI.
Muhammad Asif Ali, Yan Hu, Jianbin Qin, Di Wang
2023-10-18T22:10:47Z
http://arxiv.org/abs/2310.12360v1
# GRI: Graph-based Relative Isomorphism of Word Embedding Spaces ###### Abstract Automated construction of bilingual dictionaries using monolingual embedding spaces is a core challenge in machine translation. The end performance of these dictionaries relies upon the geometric similarity of individual spaces, i.e., their degree of isomorphism. Existing attempts aimed at controlling the relative isomorphism of different spaces fail to incorporate the impact of semantically related words in the training objective. To address this, we propose GRI that combines the distributional training objectives with attentive graph convolutions to unanimously consider the impact of semantically similar words required to define/compute the relative isomorphism of multiple spaces. Experimental evaluation shows that GRI outperforms the existing research by improving the average P\(@1\) by a relative score of up to 63.6%. We release the codes for GRI at [https://github.com/asif6827/GRI](https://github.com/asif6827/GRI). ## 1 Introduction Bilingual Lexical Induction (BLI) aims at the construction of lexical dictionaries using different mono-lingual word embeddings. Automated construction of bilingual dictionaries plays a significant role, especially for resource-constrained languages where hand-crafted dictionaries are almost non-existent. It is also a key tool to bootstrap the performance of many down-streaming applications, e.g., cross-lingual information retrieval Artetxe et al. (2018), neural machine translation (Lample et al., 2018). The most prevalent way for the construction of cross-lingual embeddings is to map the monolingual embeddings in a shared space using linear and/or non-linear transformations, also known as mapping-based methods Conneau et al. (2017); Joulin et al. (2018); Patra et al. (2019). A core limitation of the mapping-based methods is their reliance on the approximate isomorphism assumption, i.e., the underlying monolingual embedding spaces are geometrically similar. This severely limits the applicability of the mapping-based methods to closely related languages and similar data domains. This isomorphism assumption does not hold, especially in case of domain-mismatch and for languages exhibiting different characteristics Conneau et al. (2017); Sogaard et al. (2018); Glavas et al. (2019); Patra et al. (2019). Other dominant factors identified in the literature that limit the end performance of BLI systems include: (i) linguistic differences (ii) algorithmic mismatch, (iii) variation in data size, (iv) parameterization etc. Similar to the supervised models, the unsupervised variants of BLI are also unable to cater to the above-mentioned challenges Kim et al. (2020); Marie and Fujita (2020). Instead of relying on embedding spaces trained completely independent of each other, in the recent past there have been a shift in explicitly using the isomorphism measures alongside distributional training objective Marchisio et al. (2022). In order to control the relative isomorphism of monolingual embedding spaces, these models use existing bilingual dictionaries as training seeds. However, one core limitation of these models is their inability to incorporate the impact of semantically relevant tokens into the training objective. This severely deteriorates the relative isomorphism of the resultant cross-lingual embedding space. Figure 1: Semantically related tokens for English and Ukrainian languages. 
These words, though lexically varying, carry the same semantics and their impact should be unanimously considered. This phenomenon is illustrated in Figure 1 for the English and Ukrainian languages. For example, in English we often use the terms {_"terrible"_, _"horrible"_} within the same context without a significant change in the meaning of the sentence. For these terms, the corresponding terms in the Ukrainian language {"страшний", "жахливий"} may also be used interchangeably without a significant change in the context. Likewise, for the bottom row in Figure 1, the words {_"good"_, _"great"_, _"excellent"_} are semantically related words in the English language, with {"відмінно", "чудово", "добре"} as the corresponding semantically related words in the Ukrainian language. To address these challenges, in this paper we propose a novel framework named Graph-based Relative Isomorphism (GRI). GRI uses attentive graph convolutions to pay attention to semantically related tokens, followed by using isomorphism metrics to inject this information into the model training. Later, it combines the isomorphism loss with the distributional training objective to train the complete model. We argue GRI offers a better alternative for BLI, as it allows injecting information about the semantic variations of tokens into the training objective, which is a more natural setting in order to control the relative isomorphism of linguistic data. An immediate benefit of the proposed model is obvious in the domain-mismatch settings, where the attentive graph convolution mechanism of GRI offers the ability to unanimously analyze and/or model similar concepts represented by lexically varying terms across different corpora. This is also evident from the relatively stable performance of GRI for both domain-sharing and domain-mismatch settings (Section 6.1). We summarize the core contributions of this paper as follows: 1. We propose GRI that combines isomorphism loss functions (guided by graph convolutions) along with the distributional training objective for BLI. 2. We propose attentive graph convolutions for GRI in order to control the relative isomorphism by sharing information across semantically related tokens. 3. We illustrate the effectiveness of GRI via comprehensive experimentation. For benchmark data sets, GRI outperforms the existing state of the art by approximately 63.6% for average P\(@\)1. ## 2 Related Work Due to limited space, we primarily categorize the related work on relative isomorphism of cross-lingual embeddings into: (i) mapping to shared space, and (ii) joint training. Mapping to shared space. These models aim to find a linear and/or non-linear transformation for pre-trained word embeddings in order to map them to a shared space. These models rely on the assumption that the embedding models share similar structure across different languages (Mikolov et al., 2013), which allows them to independently train embeddings for different languages and learn mapping functions to align them in a shared space. Supervised variants in this regard use existing bilingual resources, such as parallel dictionaries (Xing et al., 2015; Joulin et al., 2018; Jawanpuria et al., 2019). The unsupervised variants use distributional matching (Zhang et al., 2017; Conneau et al., 2017; Artetxe et al., 2018; Zhou et al., 2019). These models have also been applied to contextualized embeddings (Aldarmaki and Diab, 2019; Schuster et al., 2019).
Joint TrainingThese models put additional constraints on model learning, i.e., a hard or soft cross-lingual constraints in addition to the monolingual training objectives. Similar to the mapping-based models, early works in this domain include the supervised variants relying on bilingual dictionaries (Ammar et al., 2016; Luong et al., 2015; Gouws et al., 2015). Recently, the unsupervised approaches have gained attention because of their ease of implementation. For instance, Lample et al. (2018) analyzed the performance for concatenated monolingual corpora with shared vocabulary without any additional cross-lingual resources. Results show that this setting outperforms many carefully crafted alignment based strategies for unsupervised machine translation. Other unsupervised approaches with good results on benchmark data sets include zero-shot cross-lingual transfer by Artetxe and Schwenk (2019) and cross-lingual pre-training by Lample and Conneau (2019). Marchisio et al. (2022) proposed IsoVec that introduces multiple different losses along with the skip-gram loss function to control the relative isomorphism of monolingual spaces. A major limitation of these methods is their inability to incorporate the lexico-semantic variations of word pairs across different languages in the model training, which severely limits the end performance of these models. ## 3 Background In this section, we discuss the notation and mathematical background of the tools and techniques used in this paper. ### Notation Throughout this paper, we use \(\mathbf{U}\in\mathbf{R}^{p\times d}\) and \(\mathbf{V}\in\mathbf{R}^{q\times d}\) to represent the embeddings of the source and target languages. We assume the availability of seeds pairs for both source and target languages, represented by: \(\{(u_{0},v_{0}),(u_{1},v_{1}),...(u_{s},v_{s})\}\). ### VecMap toolkit For mapping across different embedding spaces, we use vecmap toolkit1. We follow Zhang et al. (2019) to pre-process the embeddings, i.e., the embeddings are unit-normed, mean-centered and unit-normed again. For bilingual induction, we follow the steps outlined by Artetxe et al. (2018), i.e., whitening each space, and solving Procrustes. This is followed by re-weighting, de-whitening, and mapping of translation pairs via nearest-neighbor retrieval. For details, refer to the original work by Artetxe et al. (2018). Footnote 1: [https://github.com/artetxem/vecmap](https://github.com/artetxem/vecmap) ## 4 Proposed Approach ### Problem Definition In this paper, we address a core challenge in controlling the relative isomorphism for cross-lingual data sets, i.e., incorporate the impact of semantically coherent words for BLI. ### Overview We propose Graph-based Relative Isomorphism GRI, shown in Figure 2, that aims to learn distributional information in the source embedding space \(\mathbf{U}\), in such a way: (i) \(\mathbf{U}\) is geometrically similar to the target embedding space \(\mathbf{V}\) to the best possible extent, (ii) \(\mathbf{U}\) captures information about the semantically related terms in \(\mathbf{V}\). In order to capture the distributional information GRI uses skip-gram with negative sampling. In order to control the geometry and isomorphism of embedding space \(\mathbf{U}\) relative to space \(\mathbf{V}\), GRI uses attentive graph convolutions. Finally, it uses multiple different isomorphism metrics along with the skip-gram loss function for model training. 
We claim the proposed model provides the provision to perform BLI in a performance-enhanced fashion by using attentive graph convolutions for effective propagation of semantic relatedness of tokens across different languages. ### Gri In order to learn the distributional embeddings for the source language that are geometrically similar to the target embeddings GRI incorporates attentive graph convolutions along with the distributional training objective. GRI relies on the assumption that each language possesses multiple variations of semantically similar tokens that may be used interchangeably. And, in order to effectively model the relative isomorphism for the multi-lingual data sets this phenomenon needs to be captured explicitly. The proposed model (GRI) is based on the assumption that sharing information among semantically related words is inevitable in order to control the relative isomorphism of the embedding spaces. From the linguistic perspective, there are an arbitrary number of words semantically related to a given word, which makes graphs a natural choice to unanimously consider the impact of these words in the end model. We explain the individual components of GRI in the following sub-sections: #### 4.3.1 Distributional Representation In order to learn the distributional representations GRI uses skip-gram with negative sampling. For training the skip-gram model, we follow the same settings as outlined by Mikolov et al. (2013), i.e., embed a word close to its contextual neighbors and far from a set of words randomly chosen from the vocabulary. The formulation of the skip-gram loss function is illustrated in Equation 1. Figure 2: Proposed framework for Graph-based Relative Isomorphism(GRI). It combines attentive graph convolutions with the skip-gram to control the relative isomorphism for source \(\mathbf{U}\) and target \(\mathbf{V}\) embeddings. \[\begin{split}\mathcal{L}_{SG}=\log\sigma({u^{{}^{\prime}}_{c_{O}}}^{ \top}{u_{c_{I}}})+\\ \sum_{i}^{k}\mathbf{E}_{c_{i}\sim P_{n}(c)}\big{[}\log\sigma(-{u^{ {}^{\prime}}_{c_{i}}}^{\top}{u_{c_{I}}})\big{]}\end{split} \tag{1}\] Here \(u_{c_{O}}\) and \(u_{c_{I}}\) correspond to the output and input vector representation of the word \(c\). \(u^{{}^{\prime}}_{c_{i}}\) the embedding vectors for noise terms. \(P_{n}(c)\) corresponds to the noise distribution, \(k\) is the number of noisy samples. We use \(k=10\) in our case. #### 4.3.2 Capturing Semantics In order to control the relative isomorphism across the source and target embeddings GRI uses attentive graph convolutions under transductive settings in order to share information among semantically related words. The graph construction is summarized in Algorithm 1, and explained as follows: Graph Construction.Inputs for the graph construction include: (i) the supervision seed pairs for the target language, (ii) existing pre-trained word2vec embeddings2Mikolov et al. (2013). The graph construction process proceeds as follows: Footnote 2: [https://code.google.com/archive/p/word2vec/](https://code.google.com/archive/p/word2vec/), trained using Google-News Corpus of 100 billion words. Firstly, we organize the target words into all possible pairs, i.e., combinations of two words at a time. For each word pair, we compute a score (cosine similarity) of their corresponding embedding vectors. The word pairs with scores higher than a threshold (\(thr\)) are stored as the probable semantically related terms (\(\text{Pairs}_{prob}\)), illustrated in lines (2-6). 
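To make the pair-scoring step concrete, a minimal Python sketch of this part of Algorithm 1 is given below. It is a sketch only: it assumes `emb` is a dict-like mapping from words to pre-trained vectors (for instance, gensim KeyedVectors behaves this way), and the function name is ours.

```python
import itertools
import numpy as np
import networkx as nx

def build_semantic_graph(target_words, emb, thr=0.5):
    """Connect target-language words whose pre-trained embeddings have cosine
    similarity >= thr; the similarity score is kept as the fixed attention weight."""
    G = nx.Graph()
    G.add_nodes_from(target_words)
    for w1, w2 in itertools.combinations(target_words, 2):
        if w1 not in emb or w2 not in emb:
            continue  # skip out-of-vocabulary words
        v1, v2 = np.asarray(emb[w1]), np.asarray(emb[w2])
        score = float(v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2)))
        if score >= thr:
            G.add_edge(w1, w2, weight=score)
    return G
```

The weighted adjacency matrix \(\Gamma\) used by the attentive convolutions below is simply the weighted adjacency of this graph.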
We observed that using a significantly higher value for \(thr\) is beneficial, because: (i) it helps in capturing the most confident word pairs thus overcoming noise, and (ii) only a few word pairs end up in \(\text{Pairs}_{prob}\) which makes it computationally efficient. Finally, for all word pairs in \(\text{Pairs}_{prob}\), we formulate edges to construct the graph G. For each word pair, we use the cosine score of corresponding embedding vectors as the attention weight. ``` 0: Embedding (EMB); \(\text{D}_{tar}=\text{Target}(D_{tr+dev+st})\) 0: Graph: G 1:\(\text{Pairs}_{prob}\leftarrow\textbf{0}\); \(\text{G}\leftarrow\emptyset\) 2:for\((w_{1},w_{2})\leftarrow\text{Pairs}(\text{D}_{tar})\)do 3:\(y^{*}=\text{score}_{\text{EMB}}(w_{1},w_{2})\) 4:if\(y^{*}\geq thr\)then 5:\(\text{Pairs}_{prob}\leftarrow\text{Pairs}_{prob}\cup(w_{1},w_{2})\) 6:endif 7:endfor 8:for\(pair\in\text{Pairs}_{prob}\)do 9:\(\text{G}\leftarrow\text{G}\cup\{edge(pair)\}\) 10:endfor 11:return G ``` **Algorithm 1** Graph Construction Attentive Graph Convolutions.Depending upon the value of \(thr\), graph G surrounds each word by a set of highly confident semantically related words (including their lexical variations). The degree of similarity is controlled by the cosine similarity of embedding vectors. Later, for each word, we aggregate the information contained in the neighbors to come up with a new representation of the word that accommodates information from semantically related neighbors. Note, in our setting, unlike the existing work by Kipf and Welling (2016), we propose attentive graph convolutions with pair-wise distributional similarity scores as the hard attention weights. The attention weights are not updated during the model training. Specifically, we use the following layer-wise propagation mechanism: \[L^{(i+1)}=\rho(\tilde{\Gamma}L^{(i)}W_{i}) \tag{2}\] where \(\tilde{\Gamma}=\bar{D}^{-1/2}(\Gamma+I)\bar{D}^{-1/2}\) is the normalized symmetric matrix, \(\bar{D}\) is the degree matrix of \(\Gamma\), \(\Gamma\) is the weighted adjacency matrix learned from graph G with pair-wise scores as the attention weights, \(L^{(i)}\) is the input from previous layer, with \(L^{(0)}=\mathbf{U}\in\mathbf{R}^{p\times d}\) as the input matrix corresponding to the embeddings of the source terms, \(W_{i}\) is the learnable weight matrix, \(\rho\) is the non-linear activation function. Note, the end goal of the attentive convolutions is to analyze each word in G in relation to the weighted combination of its semantic neighbors. For this, we surround each word (node) with a group of semantically related words (nodes in the graph) and perform weighted aggregation to recompute the representation of the word. We also allow self-connections for each word, i.e., adding an identity matrix \(I\) to \(\Gamma\). This will enforce "semantically related words" to get similar representations. We observed, that for our problem settings, this attentive graph convolution framework outperforms the basic settings with equal contribution from all neighbors Kipf and Welling (2016). For GRI, we use a two-layered network to learn the final embeddings of each word \(\mathbf{U}_{m}\in\mathbf{R}^{p\times d}\) as follows: \[\mathbf{U}_{m}=(\tilde{\Gamma})(ReLU((\tilde{\Gamma})\mathbf{U}W_{0}))W_{1} \tag{3}\] ### Isomorphism Loss functions In order to train the GRI, we experiment with multiple different isomorphism loss functions on top of the attentive graph convolution network. 
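Before turning to the individual losses, here is a minimal numpy sketch of the two-layer propagation in eqs. (2)-(3); the guard against zero-degree rows is our addition and not part of the paper.

```python
import numpy as np

def attentive_gcn(U, Gamma, W0, W1):
    """Eq. (3): U_m = Gamma_norm ReLU(Gamma_norm U W0) W1, with
    Gamma_norm = D^{-1/2} (Gamma + I) D^{-1/2} as in eq. (2).
    Gamma holds the cosine-similarity attention weights (zero where no edge)."""
    deg = Gamma.sum(axis=1)                                   # node degrees from Gamma
    d_inv_sqrt = np.where(deg > 0, 1.0 / np.sqrt(deg), 0.0)   # guard for isolated words
    A_norm = d_inv_sqrt[:, None] * (Gamma + np.eye(len(Gamma))) * d_inv_sqrt[None, :]
    H = np.maximum(A_norm @ U @ W0, 0.0)                      # ReLU of the first layer
    return A_norm @ H @ W1                                    # U_m
```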
Details about each loss function are as follows: L2 loss. We use the L2-norm averaged over the number of data samples. \[\mathcal{L}_{2}=\frac{1}{N}||\mathbf{U}_{m}-\mathbf{V}||_{2} \tag{4}\] Orthogonal Procrustes loss. The orthogonal Procrustes loss aims to find a linear transformation \(W\) to solve: \[\mathcal{L}_{proc}=\operatorname*{arg\,min}_{\mathbf{W}\in\mathbf{R}^{d\times d},\mathbf{W}^{T}\mathbf{W}=I}\frac{1}{N}||\mathbf{U}_{m}\mathbf{W}-\mathbf{V}||_{2} \tag{5}\] Schonemann (1966) proposed the solution \(\mathbf{W}=\mathbf{Q}\mathbf{P}^{T}\), where \(\mathbf{P}\Sigma\mathbf{Q}^{T}\) is the singular value decomposition of \(\mathbf{V}^{T}\mathbf{U}_{m}\). Procrustes loss with initialization. It follows the same process as the Procrustes loss, with the exception that we initialize the embeddings of the source words with the embedding vectors of their corresponding target-word translations. The end goal of this setting is to analyze the ability of GRI to propagate the knowledge of the initialized embeddings during model training. We also allow updating the initialized word embeddings during model training. We denote this loss by \(\mathcal{L}_{proc_{init}}\). We use the symbol \(\mathcal{L}_{ISO}\) to represent the different variations of the isomorphism loss, i.e., \(\mathcal{L}_{2}\), \(\mathcal{L}_{proc}\) and \(\mathcal{L}_{proc_{init}}\). ### 4.5 The Complete Model Finally, we combine the distributional training objective with the isomorphism loss function to compute the complete loss of GRI, as follows: \[\mathcal{L}_{GRI}=\alpha\mathcal{L}_{SG}+(1-\alpha)\mathcal{L}_{ISO} \tag{6}\] where \(\alpha\) is the parameter used to control the relative contributions of \(\mathcal{L}_{SG}\) and \(\mathcal{L}_{ISO}\). ## 5 Experiments and Results ### 5.1 Datasets In order to set up a uniform platform for comparative analysis, we use the data settings of Marchisio et al. (2022). We use the first 1 million lines of the newscrawl-2020 data set for English ("en"), Bengali ("bn") and Tamil ("ta"), and the entire newscrawl-2020 data set for Ukrainian ("uk"), to train word embeddings. We used Moses scripts for data pre-processing3. For evaluation, we used the publicly available train, dev, and test splits provided by MUSE (Conneau et al., 2017). Out of approximately 8000 word pairs for each language, we used word pairs 0-5000, 5001-6500, and 6501-8000 as the train, test and dev sets respectively. The train set is used for model training, the dev set is used for parameter tuning, and final results are reported using the test set. All these data splits are non-overlapping. Footnote 3: Moses script ### 5.2 Baseline Models For comparative evaluation, we use independently trained distributional embeddings for the source and target languages as a baseline model. Amongst the existing research, we compare GRI against the prevalent state-of-the-art work on BLI, i.e., IsoVec by Marchisio et al. (2022). IsoVec uses the skip-gram training objective along with isomorphism training objectives. Note, Marchisio et al. (2022) used exactly the same data settings as our proposed model (i.e., GRI), so for performance comparison we use the numbers reported in the original paper. ### 5.3 Experimental Settings For model training, we use the Adam optimizer (Kingma and Ba, 2014) with learning rate = 0.001; \(\alpha\) in Equation 6 is set to 0.7; the value of \(thr\) in Algorithm 1 is set to 0.5.
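For concreteness, the closed-form Procrustes step of eq. (5) and the combined objective of eq. (6), with the \(\alpha\) above, can be sketched in a few lines of numpy. This is a minimal sketch: the Frobenius-norm reading of eq. (5) and the per-pair averaging are our assumptions, not details spelled out in the paper.

```python
import numpy as np

def procrustes_map(U_m, V):
    """Closed-form minimiser of eq. (5) (Schonemann): W = Q P^T,
    where P Sigma Q^T is the SVD of V^T U_m."""
    P, _, Qt = np.linalg.svd(V.T @ U_m)
    return Qt.T @ P.T

def procrustes_loss(U_m, V):
    """Eq. (5) evaluated at its minimiser, averaged over the N seed pairs
    (Frobenius norm assumed)."""
    W = procrustes_map(U_m, V)
    return np.linalg.norm(U_m @ W - V) / len(U_m)

def gri_loss(loss_sg, loss_iso, alpha=0.7):
    """Eq. (6): convex combination of the skip-gram and isomorphism objectives."""
    return alpha * loss_sg + (1 - alpha) * loss_iso
```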
For experiments, we use embeddings learnt for English language as the target embeddings, and embeddings for other languages, i.e., "ta", "uk", and "bn", as the source embeddings. For mapping across different spaces, we use Vecmap toolkit with process-flow explained in Section 3.2. We use average P@1 as the evaluation metric. We report mean \((\mu)\) and standard deviation \((\sigma)\) of the results over 5 runs of the experiment. All experiments are performed using Intel Core-i9-10900X CPU, and Nvidia 3090Ti GPU. ### Main Results The results for the proposed model (GRI) compared with the baseline models are shown in Table 1. We boldface the overall best-performing scores with the previous state-of-the-art underlined. These results show the GRI has a relatively stable performance (with low variance), it consistently outperforms the baseline and previous state-of-the-art scores by a significant margin. For "bn", "uk", and "ta", GRI outperforms the IsoVec (Marchisio et al., 2022) by 21.4%, 63.6% and 60.7% respectively. Especially noteworthy is the performance improvement gained by GRI for the Ukrainian language. We attribute this performance improvement to the fact that the semantic relatedness of the words corresponding to the Ukrainian embedding space is relatively better compared to other languages. The performance comparison of different isomorphism loss functions shows that \(\mathcal{L}_{proc}\) consistently outperforms the \(\mathcal{L}_{proc_{init}}\) and \(\mathcal{L}_{2}\) across all data sets. A relatively low performance of \(\mathcal{L}_{proc_{init}}\) compared to the \(\mathcal{L}_{proc}\) may be attributed to the fact that randomly initialized embeddings are a better choice compared to the initialization from the seed pairs. The initialization from the seed pairs may not be helpful for the model training to improve the performance at later stages. Overall results show the significance of using attentive graph convolutions in controlling the relative geometry of source language for BLI. Especially, the ability of the attentive convolutions to accumulate the contribution of semantically related terms plays a vital role in controlling the relative geometry of the source embeddings relative to the target embeddings, as is evident from the results in Table 1. ## 6 Discussion In this sub-section, we perform a detailed analysis of the performance of GRI. Specifically, we analyze: (i) Domain mis-match settings (ii) Impact of attentive convolutions, (iii) Isometric metrics, and (iv) Error cases. ### Domain mismatch Domain-mismatch has been identified as one of the core limitations of existing BLI methods. These methods fail badly in inferring bilingual information for embeddings trained on data sets from different domains (Sogaard et al., 2018; Marchisio et al., 2020). We claim that incorporating lexical variations for semantically related tokens makes GRI robust to the domain mismatch settings. In order to validate these claims for GRI, we re-run the experiments using target embeddings trained on 33.8 million lines of web-crawl data from the English Common Crawl data. The embeddings for the source languages ("bn", "uk" and "ta") are trained using the newscrawl-2020 data. The results for the domain-mismatch experiments for different isomorphism loss functions are reported in Table 2. These results are compared against the baseline distributional embeddings and best-performing scores of the existing work, i.e., IsoVec by Marchisio et al. (2022). 
Note, for the domain-mismatch experiments we use exactly the same data settings as Marchisio et al. (2022), so we report the same numbers as in the original paper. Comparing the results of our model against IsoVec, GRI improves the performance by 27.74%, 53.12% and 74.22% for the "bn", "uk" and "ta" languages respectively. Comparing these results against the main experiments reported in Table 1, we can see that GRI yields a stable performance for both domain-shared and domain-mismatch settings. These results show that the attentive graph convolutions indeed allow information sharing across semantically related tokens along with their lexical variations, which is in turn helpful in controlling the relative isomorphism of the embedding spaces. Comparing the results for different loss functions, we can see that, similar to the main experiments, the performance of the model for the Procrustes loss (\(\mathcal{L}_{proc}\)) is relatively higher than for \(\mathcal{L}_{2}\) and \(\mathcal{L}_{proc_{init}}\). \begin{table} \begin{tabular}{l|c|c|c} \hline Methodology & bn & uk & ta \\ \hline Baseline & 13.1 (\(\pm\) 0.51) & 13.9 (\(\pm\) 0.45) & 10.8 (\(\pm\) 0.42) \\ \hline IsoVec (L2) & 16.3 (\(\pm\) 0.4) & 16.5 (\(\pm\) 0.4) & 11.1 (\(\pm\) 0.5) \\ IsoVec (Proc-L2) & 16.6 (\(\pm\) 0.7) & 16.0 (\(\pm\) 0.8) & 10.7 (\(\pm\) 0.3) \\ IsoVec (Proc-L2-Init) & 16.9 (\(\pm\) 0.2) & 17.1 (\(\pm\) 0.6) & 11.8 (\(\pm\) 0.3) \\ \hline GRI (\(\mathcal{L}_{2}\)) & 17.28 (\(\pm\) 0.02) & 18.75 (\(\pm\) 0.41) & 13.47 (\(\pm\) 0.04) \\ GRI (\(\mathcal{L}_{proc_{init}}\)) & 19.83 (\(\pm\) 0.05) & 21.37 (\(\pm\) 0.08) & 15.27 (\(\pm\) 0.01) \\ GRI (\(\mathcal{L}_{proc}\)) & **20.52** (\(\pm\) 0.02) & **27.97** (\(\pm\) 2.63) & **18.97** (\(\pm\) 0.2) \\ \hline \end{tabular} \end{table} Table 1: GRI results for the proposed model. We compare results with IsoVec (Marchisio et al., 2022). \begin{table} \begin{tabular}{l|c|c|c} \hline Methodology & bn & uk & ta \\ \hline Baseline & 9.7 (\(\pm\) 0.72) & 10.2 (\(\pm\) 0.43) & 7.5 (\(\pm\) 0.39) \\ \hline IsoVec (Proc-L2-Init) & 15.5 (\(\pm\) 0.7) & 17.3 (\(\pm\) 0.4) & 10.9 (\(\pm\) 0.5) \\ \hline GRI (\(\mathcal{L}_{2}\)) & 13.97 (\(\pm\) 0.02) & 17.32 (\(\pm\) 0.32) & 11.93 (\(\pm\) 0.01) \\ GRI (\(\mathcal{L}_{proc_{init}}\)) & 19.75 (\(\pm\) 0.01) & 21.32 (\(\pm\) 0.10) & 17.12 (\(\pm\) 0.59) \\ GRI (\(\mathcal{L}_{proc}\)) & **19.80** (\(\pm\) 0.50) & **26.49** (\(\pm\) 0.50) & **18.99** (\(\pm\) 0.20) \\ \hline \end{tabular} \end{table} Table 2: GRI results for the domain-mismatch experiments compared with the baseline models and IsoVec (Marchisio et al., 2022). ### 6.2 Impact of attentive convolutions In this sub-section, we analyze in detail the performance improvement of GRI attributable to the attentive graph convolutions. For this, we primarily compare the performance of GRI with and without attentive graph convolutions. The results of these experiments are reported in Table 3. These results show the significance of the attentive graph convolutions, which help in improving the performance across all three languages. The improvement in performance for the "bn", "uk" and "ta" languages is 24.36%, 64.72% and 62.13% respectively. To gain further insight, we also analyzed the output of the model with and without graph convolutions in order to identify which classes of translation instances were correctly translated specifically due to the attentive convolutions part of GRI.
We run this analysis only for the Ukrainian language because GRI yields a higher score for the Ukrainian language than for the other languages. All the analyses were performed under the direct supervision of a linguistic expert. Detailed analyses show that a major portion (approximately 51%) of the pairs corrected specifically by the graph convolutions are nouns, with 21% verbs and 20% adjectives. The remaining 7% belong to other classes. This analysis shows that the phenomenon of lexical variation is dominant among nouns, which results in the better performance of GRI compared to the baseline models. ### 6.3 Isometric metrics We also correlate the results of GRI with different widely used isomorphism metrics. Specifically, we use two metrics, namely: (a) Pearson's correlation, and (b) eigenvector similarity. Details about these metrics and the corresponding experimental settings are as follows: Pearson's Correlation. We compute Pearson's correlation between the pairwise cosine similarities of the source words in the seed translation pairs and the pairwise cosine similarities of their target-language translations, as an indicator of the relative isomorphism of the corresponding spaces. We expect our P@1 results to correlate positively (\(\uparrow\)) with Pearson's correlation. We compute Pearson's correlation over the first 1000 translation seed pairs. The corresponding results are shown in the first half of Table 4. We boldface the best scores. These results show that, for all languages, Pearson's correlation for the model GRI (\(\mathcal{L}_{proc}\)) is slightly higher compared to the other models. Although these results are aligned with our findings in Table 1, one noteworthy observation is that Pearson's correlation is not a true indicator of the relative performance improvement across different isomorphism losses. Eigenvector Similarity. In order to compute the eigenvector similarity of two spaces, we compute the Laplacian spectra of the corresponding k-nearest-neighbor graphs. This setting is similar to Sogaard et al. (2018) and is summarized as follows. For the seed pairs, we construct unweighted nearest-neighbor graphs and compute their graph Laplacians. We then compute the eigenvalues of the graph Laplacians and retain the first \(k\) eigenvalues summing to less than 90% of the total sum of eigenvalues. Finally, we compute the eigenvector similarity as the sum of squared differences between the partial spectra. Graphs with similar eigenvalue spectra are supposed to have similar structures (a measure of relative isomorphism). We expect our eigenvector similarity results to correlate negatively (\(\downarrow\)) with P@1. The experimental results are shown in the right half of Table 4, with the best scores boldfaced. These results show that the eigenvector similarity scores for the model GRI (\(\mathcal{L}_{proc_{init}}\)) are better than those of the other two models. This is in contrast to our findings in Table 1, where GRI (\(\mathcal{L}_{proc}\)) shows relatively better performance. Generally speaking, the results of the isometric metrics do not truly correlate with P@1. These findings are aligned with earlier studies by Marchisio et al. (2022) that also emphasized the need for better metrics to compute the relative isomorphism of the embedding spaces.
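For clarity, the eigenvector similarity computation described above can be sketched as follows. This is a rough sketch: the cosine-based kNN construction and the largest-first ordering of the spectrum are our assumptions about details the text leaves open.

```python
import numpy as np
import networkx as nx

def knn_graph(emb, k=10):
    """Unweighted k-nearest-neighbour graph over the rows of an (n, d) embedding
    matrix, using cosine similarity to pick neighbours."""
    X = emb / np.linalg.norm(emb, axis=1, keepdims=True)
    sim = X @ X.T
    np.fill_diagonal(sim, -np.inf)
    G = nx.Graph()
    G.add_nodes_from(range(len(X)))
    for i in range(len(X)):
        for j in np.argsort(-sim[i])[:k]:
            G.add_edge(i, int(j))
    return G

def partial_spectrum(G, ratio=0.9):
    """Leading Laplacian eigenvalues whose cumulative sum stays below `ratio` of the total."""
    L = nx.normalized_laplacian_matrix(G).toarray()
    vals = np.sort(np.linalg.eigvalsh(L))[::-1]
    k = max(1, int(np.searchsorted(np.cumsum(vals), ratio * vals.sum())))
    return vals, k

def eigenvector_similarity(emb_src, emb_tgt, k=10):
    """Sum of squared differences between the partial spectra of the two kNN graphs."""
    v1, k1 = partial_spectrum(knn_graph(emb_src, k))
    v2, k2 = partial_spectrum(knn_graph(emb_tgt, k))
    m = min(k1, k2)
    return float(np.sum((v1[:m] - v2[:m]) ** 2))
```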
\begin{table} \begin{tabular}{l|c c c|c c c} \hline & \multicolumn{3}{c|}{Pearson Correlation (\(\uparrow\))} & \multicolumn{3}{c}{Eigenvector Similarity (\(\downarrow\))} \\ \hline Methodology & bn & uk & ta & bn & uk & ta \\ \hline GRI (\(\mathcal{L}_{2}\)) & 0.47 & 0.36 & 0.42 & 35.55 & 30.64 & 69.72 \\ GRI (\(\mathcal{L}_{proc_{init}}\)) & 0.47 & 0.36 & 0.43 & **31.23** & **10.92** & **45.56** \\ GRI (\(\mathcal{L}_{proc}\)) & **0.49** & **0.37** & **0.44** & 32.16 & 29.53 & 62.81 \\ \hline \end{tabular} \end{table} Table 4: Analysis of different isometry metrics for GRI. \begin{table} \begin{tabular}{l|c|c|c} \hline Methodology & bn & uk & ta \\ \hline GRI w/o G-Conv (\(\mathcal{L}_{2}\)) & 16.10 (\(\pm\) 0.35) & 16.35 (\(\pm\) 0.30) & 11.25 (\(\pm\) 0.45) \\ GRI w/o G-Conv (\(\mathcal{L}_{proc_{init}}\)) & 16.75 (\(\pm\) 0.20) & 16.98 (\(\pm\) 0.30) & 11.70 (\(\pm\) 0.25) \\ GRI w/o G-Conv (\(\mathcal{L}_{proc}\)) & 16.50 (\(\pm\) 0.5) & 16.10 (\(\pm\) 0.70) & 10.65 (\(\pm\) 0.20) \\ \hline GRI (\(\mathcal{L}_{2}\)) & 17.28 (\(\pm\) 0.02) & 18.75 (\(\pm\) 0.41) & 13.47 (\(\pm\) 0.01) \\ GRI (\(\mathcal{L}_{proc_{init}}\)) & 19.83 (\(\pm\) 0.05) & 21.37 (\(\pm\) 0.08) & 15.27 (\(\pm\) 0.01) \\ GRI (\(\mathcal{L}_{proc}\)) & **20.52** (\(\pm\) 0.02) & **27.97** (\(\pm\) 2.63) & **18.97** (\(\pm\) 0.20) \\ \hline \end{tabular} \end{table} Table 3: Analyzing the impact of attentive graph convolutions for GRI. ### 6.4 Error Analyses We also analyze the error cases of GRI in order to understand the limitations of the model and the room for future improvement. Note, similar to Section 6.2, we only perform the error analyses for the Ukrainian language and the Procrustes loss (\(\mathcal{L}_{proc}\)). All experiments were performed with the help of linguistic experts. We separately analyze the errors for the variants of GRI with and without attentive graph convolutions (i.e., GRI; GRI w/o G-Conv) in order to quantify the reduction in error attributable to the attentive graph convolutions. In order to better understand the errors from a semantic perspective, we categorize the errors into the following types: Type-a: The predicted target word for P@1 is semantically close to the true target word. Type-b: The predicted target word is a k-nearest neighbor of the true word for k=5. We limit the error cases to only the above-mentioned simple types because these types give a rough picture of the relative isomorphism of the different spaces from the semantic perspective. The percentage error counts for both models are shown in Table 5. For the model GRI w/o G-Conv (\(\mathcal{L}_{proc}\)), 21.3% of the errors fall in Type-a and 6.5% belong to Type-b. For the model GRI (\(\mathcal{L}_{proc}\)), 50.2% of the errors fall in Type-a and 16.6% belong to Type-b. As expected, the variant of GRI with graph convolutions shows a higher percentage for both categories, i.e., Type-a and Type-b. These numbers clearly indicate that the attentive graph convolutions were not only able to correct a major portion of the errors made by GRI w/o G-Conv, but also that the errors made by the model GRI are either highly semantically related to the target words or a nearest neighbor of the target word. In order to gain further insight, we manually look at the error cases. For both models, a few examples are shown in Table 6. The majority of the predictions made by GRI are indeed correct and closely related to the true target words.
For example, it predicts {"mailing", "sharing", "windows"} in place of {"mail", "shared", "window"} respectively. These results clearly indicate that the current performance of GRI is under-reported and there is a need for better quantification measures (other than P@1) in order to compute and/or report the true potential of the model. Overall, the error analyses show the significance of using attentive graph convolutions to incorporate the lexical variations of semantically related tokens in order to control the relative isomorphism and perform BLI in a performance-enhanced way. \begin{table} \begin{tabular}{c c c|c c c} \hline \hline \multicolumn{3}{c|}{GRI (\(\mathcal{L}_{proc}\))} & \multicolumn{3}{c}{GRI w/o G-Conv (\(\mathcal{L}_{proc}\))} \\ \hline source & target & target\({}^{\prime}\) & source & target & target\({}^{\prime}\) \\ \hline numtra & mail & mailing & smuc.m & gone & shattered \\ crist-mult & shared & sharing & crist & coll & 60g \\ nixon & window & windows & canus & em & merchants \\ nixon & college & teaching & nic & nose & rubbing \\ hintos & walked & went & station & replacing & overpriced \\ nivryuman & manuals & templates & ray.scan & volcano & 100mph \\ pedepta & reform & reforms & picr & growth & decline \\ \hline \hline \end{tabular} \end{table} Table 6: Example error cases for the Ukrainian vs English language for the models: GRI (\(\mathcal{L}_{proc}\)); GRI w/o G-Conv (\(\mathcal{L}_{proc}\)). For each model, the first column (source) corresponds to the Ukrainian words, the second column (target) represents the true target word, and the third column (target\({}^{\prime}\)) represents the model predictions for the target words. \begin{table} \begin{tabular}{l|c c} \hline & Type-a & Type-b \\ \hline GRI w/o G-Conv (\(\mathcal{L}_{proc}\)) & 21.3\% & 6.5\% \\ GRI (\(\mathcal{L}_{proc}\)) & 50.2\% & 16.6\% \\ \hline \hline \end{tabular} \end{table} Table 5: Classification of error types. ## 7 Conclusion and Future Directions In this paper, we propose Graph-based Relative Isomorphism (GRI) to incorporate the impact of lexical variations of semantically related tokens in order to control the relative isomorphism of cross-lingual embeddings. GRI uses multiple different isomorphism losses (guided by the attentive graph convolutions) along with the distributional loss to perform BLI in a performance-enhanced fashion. Experimental evaluation shows that GRI outperforms the existing research on BLI by a significant margin. Some probable future directions include: (i) extending the concepts learned in this research to contextualized embeddings, and (ii) augmenting GRI to focus more on preserving lexico-semantic relations. ### Limitations Some of the core limitations of the proposed approach are as follows: (i) the current formulation of GRI is not defined and/or implemented for deep contextualized embeddings, which are more prevalent and a better alternative to distributional embeddings; (ii) existing limitations of the distributional embeddings are inherited by the model, which limits the end performance of GRI. For example, as pointed out by Ali et al. (2019), the distributional embedding space tends to inter-mix different lexico-semantic relations and yields poor performance on a specific task. This phenomenon has a direct impact on GRI, especially on controlling the relative isomorphism of highly interlinked relation pairs, e.g., antonyms vs synonyms.
Acknowledgements. Di Wang, Yan Hu and Muhammad Asif Ali are supported in part by the baseline funding BAS/1/1689-01-01, funding from the CRG grant URF/1/4663-01-01, FCC/1/1976-49-01 from CBRC, and funding from the AI Initiative REI/1/4811-10-01 of King Abdullah University of Science and Technology (KAUST). Di Wang is also supported by the funding of the SDAIA-KAUST Center of Excellence in Data Science and Artificial Intelligence (SDAIA-KAUST AI).
2305.19083
Defense Against Shortest Path Attacks
Identifying shortest paths between nodes in a network is an important task in applications involving routing of resources. Recent work has shown that a malicious actor can manipulate a graph to make traffic between two nodes of interest follow their target path. In this paper, we develop a defense against such attacks by modifying the weights of the graph that users observe. The defender must balance inhibiting the attacker against any negative effects of the defense on benign users. Specifically, the defender's goals are: (a) to recommend the shortest paths possible to users, (b) for the lengths of the shortest paths in the published graph to be close to those of the same paths in the true graph, and (c) to minimize the probability of an attack. We formulate the defense as a Stackelberg game in which the defender is the leader and the attacker is the follower. In this context, we also consider a zero-sum version of the game, in which the defender's goal is to minimize cost while achieving the minimum possible attack probability. We show that this problem is NP-hard and propose heuristic solutions based on increasing edge weights along target paths in both the zero-sum and non-zero-sum settings. Relaxing some constraints of the original problem, we formulate a linear program for local optimization around a feasible point. We present defense results with both synthetic and real network datasets and show that these methods often reach the lower bound of the defender's cost.
Benjamin A. Miller, Zohair Shafi, Wheeler Ruml, Yevgeniy Vorobeychik, Tina Eliassi-Rad, Scott Alfeld
2023-05-30T14:46:27Z
http://arxiv.org/abs/2305.19083v1
# Defense Against Shortest Path Attacks+ ###### Abstract Identifying shortest paths between nodes in a network is an important task in applications involving routing of resources. Recent work has shown that a malicious actor can manipulate a graph to make traffic between two nodes of interest follow their target path. In this paper, we develop a defense against such attacks by modifying the weights of the graph that users observe. The defender must balance inhibiting the attacker against any negative effects of the defense on benign users. Specifically, the defender's goals are: (a) to recommend the shortest paths possible to users, (b) for the lengths of the shortest paths in the published graph to be close to those of the same paths in the true graph, and (c) to minimize the probability of an attack. We formulate the defense as a Stackelberg game in which the defender is the leader and the attacker is the follower. In this context, we also consider a zero-sum version of the game, in which the defender's goal is to minimize cost while achieving the minimum possible attack probability. We show that this problem is NP-hard and propose heuristic solutions based on increasing edge weights along target paths in both the zero-sum and non-zero-sum settings. Relaxing some constraints of the original problem, we formulate a linear program for local optimization around a feasible point. We present defense results with both synthetic and real network datasets and show that these methods often reach the lower bound of the defender's cost. Introduction In numerous applications involving the routing of resources through a network, finding the shortest path between two nodes is an important problem. A malicious actor with the capacity to modify the graph could entice users to follow a particular path that could put them at risk. To counter adversarial activity, it is important to consider defensive measures against such behavior. Recent work has proposed an algorithm to manipulate the shortest path when the attacker is able to remove edges. In this paper, taking inspiration from differential privacy, we propose a defensive technique based on perturbing edge weights. Users are presented an altered set of edge weights that aims to provide the shortest paths possible while making the attacker's target more expensive. The contributions of this paper are as follows: (1) We define a defender cost based on the impact on user experience and probability of attack. (2) We formulate a Stackelberg game to optimize the defender's expected cost. (3) In a zero-sum setting, we show that this optimization is NP-hard. (4) We propose a heuristic algorithm called PATHDEFENSE that greedily increments edge weights until the user's cost is sufficiently low. (5) We present results on simulated and real networks demonstrating the cost improvement PATHDEFENSE provides. ## 2 Method In our problem setting, a graph \(G\) has weights \(w\), and an attacker intends to remove edges to make a particular target path be the shortest between its endpoints. The defender's goal is to publish an approximate set of weights that provide users with short paths to their destinations while also increasing the burden on the adversary, making an attack less likely. This method is inspired by a differential privacy technique for approximating shortest paths without revealing true weights [22], though here we consider the weight perturbations in an optimization context. 
We refer to the problem of minimizing the defender's cost in this context as the _Cut Defense_ problem. The analysis over the remainder of the paper makes the following assumptions: (1) The attacker has a single target path \(p^{*}\) and uses PATHATTACK to optimize the attack. (2) If PATHATTACK identifies an attack within the attacker's budget, the attack will occur. (3) True edge weights and removal costs are known to the attacker. ### 2.1 Notation We consider a graph \(G=(V,E)\), which may be directed or undirected. Each edge has a nonnegative weight \(w:E\rightarrow\mathbb{R}_{\geq 0}\). These are the weights denoting the true traversal distance. The defender publishes weights \(w^{\prime}:E\rightarrow\mathbb{R}_{\geq 0}\), which may be different from \(w\). For a given source-destination pair \(s,t\in V\), let \(p(G,\hat{w},s,t)\) be the shortest path in \(G\) from \(s\) to \(t\) using weights \(\hat{w}\). For a given path \(p\) between two nodes, let \(\ell(G,\hat{w},p)\) be the length of \(p\) in \(G\) using weights \(\hat{w}\). We denote by \(p^{*}\) and \(b\) the attacker's target path and budget, respectively. When determining the impact on users, we consider the distribution of source-destination pairs, \(\mathcal{D}\), as this will help determine how often paths are disrupted. In addition, we assume the defender has uncertainty about \(p^{*}\) and \(b\). The defender considers a distribution \(\mathcal{P}\) of possible target paths and a distribution \(\mathcal{B}\) of possible budgets. These distributions result in a distribution of user-observed graphs, \(\mathcal{G}\), which we describe in the next section. The defender's cost (loss) function is denoted by \(L\). A notation table is provided in Appendix A. ### 2.2 Stackelberg Game We frame our method as a Stackelberg game, in which the defender is the leader and the attacker is the follower. The defender has full knowledge of the attacker's action set, and chooses the optimal solution given the attacker's assumed response. Here, we briefly describe the two players in the game. Attacker. The attacker will observe a graph \(G=(V,E)\) with weights \(w^{\prime}\) published by the defender, and may also know the true weights \(w\). Each edge \(e\in E\) has a cost of removal \(c(e)>0\) that is known to the attacker. The attacker has a target path \(p^{*}\), which goes from a source node \(s\) to a destination node \(t\), and a budget \(b\) specifying the greatest cost of edge removal that the attacker can expend. The attacker runs the version of PATHATTACK called PATHATTACK-LP in [21]. This algorithm iteratively solves a relaxed version of the integer program \[\hat{\Delta}=\operatorname*{arg\,min}_{\Delta}\mathbf{c}^{\top}\Delta \tag{1}\] \[\text{s.t.}\ \Delta\in\{0,1\}^{|E|} \tag{2}\] \[\mathbf{x}_{p}^{\top}\Delta\geq 1,\ \forall p\in P_{p^{*}} \tag{3}\] \[\mathbf{x}_{p^{*}}^{\top}\Delta=0. \tag{4}\] Here, \(\Delta\) is an indicator vector for edges to remove from the graph, \(P_{p^{*}}\) is the set of paths that compete with \(p^{*}\) to be the shortest (i.e., all paths from \(s\) to \(t\) that are not longer than \(p^{*}\), using the published weights \(w^{\prime}\)), \(\mathbf{x}_{p}\) is an indicator vector for the edges in path \(p\), and \(\mathbf{c}\) is the vector of edge removal costs. The algorithm uses constraint generation to identify a subset of paths to use for constraint (3) and performs a randomized rounding procedure to get an integer solution to the relaxed problem.
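To illustrate how the relaxation and constraint generation fit together, the following Python sketch (using networkx and scipy) solves the LP relaxation of (1)-(4) over an incrementally grown set of competing paths. It is a schematic illustration only, not the PATHATTACK-LP procedure of [21]: a crude threshold rounding stands in for the randomized rounding step, and edge costs and weights are assumed to be keyed by sorted node pairs.

```python
import numpy as np
import networkx as nx
from scipy.optimize import linprog

def relaxed_cut_attack(G, w_pub, cost, p_star, max_iter=50):
    """Schematic constraint generation for the relaxed version of eqs. (1)-(4).
    p_star is a node list; w_pub and cost map sorted edge tuples to numbers."""
    G = G.copy()
    edges = [tuple(sorted(e)) for e in G.edges()]
    idx = {e: i for i, e in enumerate(edges)}
    c = np.array([cost[e] for e in edges])
    star_edges = {tuple(sorted(e)) for e in zip(p_star[:-1], p_star[1:])}
    nx.set_edge_attributes(G, {e: w_pub[tuple(sorted(e))] for e in G.edges()}, "w_pub")
    target_len = sum(w_pub[e] for e in star_edges)
    # constraint (4): edges on p* may never be removed; others relaxed to [0, 1]
    bounds = [(0.0, 0.0) if e in star_edges else (0.0, 1.0) for e in edges]
    rows, delta = [], np.zeros(len(edges))
    for _ in range(max_iter):
        cut = [e for e in edges if delta[idx[e]] > 0.5]      # crude rounding
        H = G.copy()
        H.remove_edges_from(cut)
        q = nx.shortest_path(H, p_star[0], p_star[-1], weight="w_pub")
        q_edges = {tuple(sorted(e)) for e in zip(q[:-1], q[1:])}
        if q_edges == star_edges or nx.path_weight(H, q, "w_pub") > target_len:
            return cut                                       # p* is now the shortest path
        row = np.zeros(len(edges))
        row[[idx[e] for e in q_edges]] = 1.0                  # constraint (3) for path q
        rows.append(row)
        res = linprog(c, A_ub=-np.array(rows), b_ub=-np.ones(len(rows)), bounds=bounds)
        delta = res.x
    return None  # no attack found within the iteration budget
```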
The algorithm outputs a set of edges \(E^{\prime}\), indicated by \(\hat{\Delta}\), whose size is within a logarithmic factor of the smallest possible solution. If \(c(E^{\prime})=\sum_{e\in E^{\prime}}c(e)\leq b\), then the attacker executes the attack, and the graph used by all parties becomes \(G^{\prime}=(V,E\setminus E^{\prime})\). If \(c(E^{\prime})>b\), the attack is not worth the cost to the attacker, so \(G^{\prime}=G\). Defender. In addition to the true nodes and edges of the graph \(G=(V,E)\), the defender has knowledge of the true weights \(w:E\rightarrow\mathbb{R}_{\geq 0}\) that will be experienced by users traversing the graph. The defender will publish a different set of weights \(w^{\prime}\). While the defender knows that the attacker will use PATHATTACK, we assume there is uncertainty with respect to the attacker's target path \(p^{*}\) and budget \(b\). The defender has a distribution over both of these variables, as defined above. The distributions \(\mathcal{P}\) and \(\mathcal{B}\) combine with the published weights \(w^{\prime}\) to create a distribution over graphs \(\mathcal{G}\) as follows. For a given \(p^{*}\) in \(\mathcal{P}\), let \(E^{\prime}\) be the solution given by PATHATTACK using the published weights, and assume this solution is unique across target paths. (If multiple target paths have the same solution, the probability of the resulting graph integrates across the paths.) Then the probability that users observe graph \(G^{\prime}=(V,E\setminus E^{\prime})\) is \[\Pr_{\mathcal{G}}[G^{\prime}]=\Pr_{P\sim\mathcal{P}}[P=p^{*}]\cdot\Pr_{B\sim\mathcal{B}}[c(E^{\prime})\leq B]. \tag{5}\] The defender's goal is to publish a set of weights that has minimal expected cost, i.e., \[\hat{w}^{\prime}=\operatorname*{arg\,min}_{w^{\prime}}\mathbb{E}\left[L(G,w,w^{\prime},\mathcal{D},\mathcal{P},\mathcal{B})\right]. \tag{6}\] There are several considerations when defining the defender's cost, which we discuss in detail next. ### 2.3 Defender's Cost Function The attacker's cost function is simple: after running PATHATTACK, if the cost of edge removal is within the budget, the attack is carried out. When determining the best course of action, the defender has three considerations. The first is the cost incurred by users of the network: the distance they must travel to get from their origin points to their destinations. If the users must travel longer distances, the cost to the defender is higher. Note that this is the actual distance traveled: the user selects a path \(p\) based on the perturbed weights \(w^{\prime}\), but the distance is computed based on the original weights \(w\). There is also a cost associated with the user traveling a different distance than advertised. If the length of \(p\) is \(\ell^{\text{true}}\), but the user is told the length is \(\ell^{\text{obs}}\), this may negatively affect the user's experience. If \(\ell^{\text{obs}}<\ell^{\text{true}}\), then the user will likely be dissatisfied with traversing a longer distance than advertised. The case where \(\ell^{\text{obs}}>\ell^{\text{true}}\) is less clear. If the advertised distance is only slightly greater than the true distance, the user may be happy to experience a shorter distance than advertised. If, on the other hand, the advertised distance is drastically larger, this may induce an additional burden on users. Since deviations between the true and observed weights may cause user dissatisfaction, this is an additional cost for the defender.
Finally, there may be situations where there is some additional cost to the defender if the adversary is successful. This would be a cost _in addition_ to the cost due to longer distances experienced by users after the attack. If, for example, the new traffic route allows the adversary to gain a competitive advantage over the defender, this would have a broader negative consequence for the defender than the specific issue of users experiencing longer distances. If this is an issue for the defender, there will be another component to the cost function to account for the expected cost of attacker success. To mathematically formalize the cost function, we consider the three costs described above: 1. \(L_{d}\): The average _distance_ traveled by users 2. \(L_{e}\): The average cost of the _error_ between advertised and true path distances 3. \(L_{s}\): The expected cost of attacker _success_ Cost 1 takes the expected value across source-destination pairs \(u,v\sim\mathcal{D}\). While the path \(p\) from \(u\) to \(v\) is determined using the observed weights \(w^{\prime}\), the distance experienced by users is based on the true weights \(w\). Thus, for a user traveling from \(u\) to \(v\), we use the path \(p(G^{\prime},w^{\prime},u,v)\), which has length \(\ell(G^{\prime},w,p(G^{\prime},w^{\prime},u,v))\). Aggregating across all pairs, cost 1 is expressed as \[L_{d}(G,w,w^{\prime},\mathcal{B},\mathcal{D},\mathcal{P})=\mathbb{E}_{u,v \sim\mathcal{D},G^{\prime}\sim\mathcal{G}(G,w^{\prime},\mathcal{B},\mathcal{P })}\left[\ell(G^{\prime},w,p(G^{\prime},w^{\prime},u,v))\right]. \tag{7}\] Cost 2 considers the same path as cost 1, but rather than the distance traveled, the defender considers some function of the error between the advertised and true path lengths. Let \(c_{\text{err}}\) denote this function. Then cost 2 is given by \[L_{e}(G,w,w^{\prime},\mathcal{B},\mathcal{D},\mathcal{P})=\mathbb{E}_{u,v \sim\mathcal{D},G^{\prime}\sim\mathcal{G}(G,w^{\prime},\mathcal{B},\mathcal{P })}\left[c_{\text{err}}(\ell^{\text{true}},\ell^{\text{obs}})\right], \tag{8}\] where \(\ell^{\text{true}}=\ell(G^{\prime},w,p(G^{\prime},w^{\prime},u,v))\) and \(\ell^{\text{obs}}=\ell(G^{\prime},w^{\prime},p(G^{\prime},w^{\prime},u,v))\). The shape of \(c_{\text{err}}\) will vary based on the defender's belief about users' degree of dissatisfaction with errors in reported path lengths. Here we use the function \[c_{\text{err}}(\ell^{\text{true}},\ell^{\text{obs}})=\begin{cases}f_{+}(\ell^ {\text{obs}}-\ell^{\text{true}})&\text{ if }\ell^{\text{obs}}\geq\ell^{\text{true}}\\ f_{-}(\ell^{\text{true}}-\ell^{\text{obs}})&\text{ if }\ell^{\text{obs}}<\ell^{ \text{true}}\end{cases}, \tag{9}\] where \(f_{+},f_{-}>0\) denote different marginal costs for overstating or understating, respectively, the length of the user's path. Finally, cost 3 occurs if the attack is successful. The defender has a parameter \(\lambda\geq 0\) that denotes the cost of attacker success. The cost to the defender is \[L_{s}=\lambda\Pr[p^{*}\text{ is the shortest observed path between its terminals in }G^{\prime}]. \tag{10}\] If the only cost of an attack is the direct disruption to users accounted for in \(L_{d}\) and \(L_{e}\), then the defender sets \(\lambda=0\). A pseudocode description for an algorithm to compute the cost is provided in Appendix B. ## 3 Optimization We begin by formally formulating the optimization to solve Cut Defense. 
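As a concrete reference for the cost terms used in the optimization, the following is a small sketch of how \(L_{d}\), \(L_{e}\), and \(L_{s}\) of eqs. (7)-(10) can be evaluated for finite approximations of \(\mathcal{D}\) and of the induced graph distribution \(\mathcal{G}\); the helper structure is illustrative and is not the pseudocode of Appendix B.

```python
import networkx as nx

def c_err(l_true, l_obs, f_plus=1.0, f_minus=1.0):
    """Piecewise-linear error cost of eq. (9)."""
    gap = l_obs - l_true
    return f_plus * gap if gap >= 0 else f_minus * (-gap)

def defender_cost(observed_graphs, pairs, lam, w_true='w_true', w_pub='w_pub',
                  f_plus=1.0, f_minus=1.0):
    """Evaluate L_d + L_e + L_s of eqs. (7)-(10).

    observed_graphs: iterable of (G_prime, prob, attack_succeeded) triples
        approximating the distribution over user-observed graphs.
    pairs: dict {(u, v): prob} approximating the source-destination
        distribution D.  Edge attributes hold the true and published weights."""
    L_d = L_e = L_s = 0.0
    for G_prime, p_g, attacked in observed_graphs:
        if attacked:
            L_s += lam * p_g                      # eq. (10)
        for (u, v), p_uv in pairs.items():
            # users route on the published weights but experience the true weights
            path = nx.shortest_path(G_prime, u, v, weight=w_pub)
            edges = list(zip(path[:-1], path[1:]))
            l_true = sum(G_prime.edges[e][w_true] for e in edges)
            l_obs = sum(G_prime.edges[e][w_pub] for e in edges)
            L_d += p_g * p_uv * l_true            # eq. (7)
            L_e += p_g * p_uv * c_err(l_true, l_obs, f_plus, f_minus)  # eq. (8)
    return L_d + L_e + L_s
```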
We then define a zero-sum version in which the defender's goal is to reduce cost given that the probability of attack is minimized. We propose a heuristic method that results in a feasible solution for a single target path, then extend its usage to multiple target paths. We finally derive a linear program for local optimization around a feasible point. ### Non-Convex Optimization Formulation We optimize cost while varying perturbed weights. Let \(\mathbf{w}\in\mathbb{R}_{\geq 0}^{|E|}\) be the vector of original edge weights, where each edge has been given an arbitrary index corresponding to its vector entry. The vector \(\mathbf{w}^{\prime}\) contains the perturbed weights, \(\mathbf{c}\) contains edge removal costs, and \(\mathbf{x}_{p}\) is a binary indicator vector for path \(p\), i.e., if the \(i\)th edge is in path \(p\), the \(i\)th entry in \(\mathbf{x}_{p}\) is \(1\), otherwise it is \(0\). Let \(P(u,v)\) be the set of all paths from \(u\) to \(v\), \(\mathrm{supp}(\mathcal{P})\) be the set of all paths with nonzero probability of being the target, and \(\mathcal{X}(G,\mathbf{w},p^{*})\) be the set of attacks against graph \(G\) with weights \(\mathbf{w}\) that make \(p^{*}\) the shortest path between its terminal nodes. We solve Cut Defense by optimizing as follows: \[\hat{\mathbf{w}}^{\prime}= \operatorname*{arg\,min}_{\mathbf{w}^{\prime}}\lambda\left(1-z_{\emptyset}\right)+\sum_{u,v\in V}L_{d}(u,v)+L_{e}(u,v) \tag{11}\] \[\mathrm{s.t.}\ L_{d}(u,v)=\Pr_{D\sim\mathcal{D}}[D=(u,v)]\cdot\sum_{p^{*}\in\mathrm{supp}(\mathcal{P})\cup\{\emptyset\}}z_{p^{*}}\cdot\ell_{uv,p^{*}}^{\mathrm{true}}\quad\forall u,v\in V\] (12) \[L_{e}(u,v)=\Pr_{D\sim\mathcal{D}}[D=(u,v)]\cdot\sum_{p^{*}\in\mathrm{supp}(\mathcal{P})\cup\{\emptyset\}}z_{p^{*}}\left(f_{+}\cdot d_{uv,p^{*}}^{\mathrm{pos}}+f_{-}\cdot d_{uv,p^{*}}^{\mathrm{neg}}\right)\quad\forall u,v\in V\] (13) \[\Delta_{p^{*}}\in\mathcal{X}(G,\mathbf{w}^{\prime},p^{*})\quad\forall p^{*}\in\mathrm{supp}(\mathcal{P})\] (14) \[\Delta_{\emptyset}=\mathbf{0}\] (15) \[\mathbf{c}^{\top}\Delta_{p^{*}}\leq\mathbf{c}^{\top}\Delta\quad\forall\Delta\in\mathcal{X}(G,\mathbf{w}^{\prime},p^{*}),p^{*}\in\mathrm{supp}(\mathcal{P})\] (16) \[z_{p^{*}}=\Pr(p^{*})\sum_{i\geq\mathbf{c}^{\top}\Delta_{p^{*}}}\Pr_{B\sim\mathcal{B}}[B=i]\quad\forall p^{*}\in\mathrm{supp}(\mathcal{P})\] (17) \[z_{\emptyset}=1-\sum_{p^{*}\in\mathrm{supp}(\mathcal{P})}z_{p^{*}}\] (18) \[z_{p^{*}}\geq 0\quad\forall p^{*}\in\mathrm{supp}(\mathcal{P})\cup\{\emptyset\}\] (19) \[p_{uv,p^{*}}=\operatorname*{arg\,min}_{p\in P(u,v)}\mathbf{x}_{p}^{\top}(\mathbf{w}^{\prime}+W\Delta_{p^{*}})\quad\forall u,v\in V,p^{*}\in\mathrm{supp}(\mathcal{P})\cup\{\emptyset\}\] (20) \[\ell_{uv,p^{*}}^{\mathrm{true}}=\mathbf{x}_{p_{uv,p^{*}}}^{\top}\mathbf{w}\quad\forall u,v\in V,p^{*}\in\mathrm{supp}(\mathcal{P})\cup\{\emptyset\}\] (21) \[\ell_{uv,p^{*}}^{\mathrm{obs}}=\mathbf{x}_{p_{uv,p^{*}}}^{\top}\mathbf{w}^{\prime}\quad\forall u,v\in V,p^{*}\in\mathrm{supp}(\mathcal{P})\cup\{\emptyset\}\] (22) \[d_{uv,p^{*}}^{\mathrm{pos}},d_{uv,p^{*}}^{\mathrm{neg}}\geq 0\quad\forall u,v\in V,p^{*}\in\mathrm{supp}(\mathcal{P})\cup\{\emptyset\}\] (23) \[d_{uv,p^{*}}^{\mathrm{pos}}-d_{uv,p^{*}}^{\mathrm{neg}}=\ell_{uv,p^{*}}^{\mathrm{obs}}-\ell_{uv,p^{*}}^{\mathrm{true}}\quad\forall u,v\in V,p^{*}\in\mathrm{supp}(\mathcal{P})\cup\{\emptyset\}. \tag{24}\] Note that \(p_{uv,p^{*}}\), defined in (20), is the shortest path from \(u\) to \(v\) according to the published weights after the attacker attacks when the target path is \(p^{*}\). 
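As an aside, with the Poisson budget distribution used in the experiments (Section 4) and integer-valued removal costs, the probability \(z_{p^{*}}\) defined in constraint (17) reduces to a one-line survival-function computation; the sketch below assumes SciPy is available and `budget_rate` names the assumed Poisson rate parameter.

```python
from math import ceil
from scipy.stats import poisson

def attack_probability(cut_cost, path_prob, budget_rate):
    """z_{p*} of eq. (17): Pr(p*) * Pr[B >= c^T Delta_{p*}] for B ~ Poisson(budget_rate)."""
    return path_prob * poisson.sf(ceil(cut_cost) - 1, budget_rate)
```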
The case where there is no attack is represented by \(p^{*}=\emptyset\). This means that \(\ell_{uv,p^{*}}^{\mathrm{true}}\) and \(\ell_{uv,p^{*}}^{\mathrm{obs}}\) correspond to \(\ell^{\text{true}}\) and \(\ell^{\text{obs}}\), respectively, in (8). One potential concern when calculating the expected cost across pairs of nodes is the possibility that the graph could become disconnected, leaving some inter-node distances infinite. The best attack, however, will never result in a disconnected graph. If the optimal solution disconnected a connected graph, there would be multiple connected components, one of which contains the target path \(p^{*}\). However, if a single edge that connected the component that includes \(p^{*}\) to another component were added back to the graph, \(p^{*}\) would remain the shortest path between its terminals. This contradicts the assumption that the proposed attack was optimal, and yields the following theorem. **Theorem 1**.: _The optimal \(\Delta_{p^{*}}\) in (16) will not disconnect the graph._ Since PATHATTACK is an approximation algorithm, there is a possibility that the resulting attack will result in a disconnected graph. Thus, we apply a slight modification to the original PATHATTACK from [21]: after the rounding procedure, if the attack results in multiple connected components, add the highest-cost edge between two connected components back to the graph until it is connected again. ### Zero-Sum Formulation In the prior section, we assumed a non-zero-sum game in which the optima for the attacker and defender may coincide. We gain additional insight into the problem by considering the zero-sum version of the problem, in which the defender's primary goal is ensuring the attack does not occur. In this case, we are given the same information as in Cut Defense except the cost of attack success \(\lambda\). Instead, the defender manipulates the weights \(w^{\prime}\) to minimize the probability of attack, i.e., \[z_{\min}=\min_{w^{\prime}}\sum_{p^{*}\in\mathcal{P}}\Pr_{P\sim\mathcal{P}}[P=p ^{*}]\cdot\Pr_{B\sim\mathcal{B}}[c(E^{\prime}(G,w^{\prime},p^{*}))\leq B]. \tag{25}\] Subject to attaining this minimum attack probability, however, the defender wants the cost to be as low as possible. Thus, the _Zero-Sum Cut Defense_ problem is given by \[\hat{w}^{\prime}= \operatorname*{arg\,min}_{w^{\prime}}L_{d}(G,w,w^{\prime}, \mathcal{B},\mathcal{D},\mathcal{P})+L_{e}(G,w,w^{\prime},\mathcal{B}, \mathcal{D},\mathcal{P})\] (26) s.t. \[\sum_{p^{*}\in\mathcal{P}}\Pr_{P\sim\mathcal{P}}[P=p^{*}]\cdot \Pr_{B\sim\mathcal{B}}[c(E^{\prime}(G,w^{\prime},p^{*}))\leq B]=z_{\min}. \tag{27}\] Note that \(L_{s}\) is not considered in the objective in this formulation, since the attack probability is fixed at its minimum possible value. We show that this version of the problem is NP-hard. **Theorem 2**.: _The Zero-Sum Cut Defense problem is NP-hard._ Proof Sketch.: We prove NP-hardness via reduction from the Knapsack problem. Given a set of \(n\) items with values \(\nu_{i}\in\mathbb{Z}_{+}\) and weights \(\eta_{i}\in\mathbb{Z}_{+}\), \(1\leq i\leq n\), and two thresholds \(U\) and \(H\), the Knapsack problem is to determine whether there is a subset of items with total value at least \(U\) and total weight no more than \(H\). For each item, we create a triangle in a graph, where consecutive triangles share a node as shown in Fig. 1. The \(i\)th triangle consists of the nodes \(u_{i-1}\), \(u_{i}\), and \(\omega_{i}\). Let \(s=u_{0}\) and \(t=u_{n}\). 
We create a Zero-Sum Cut Defense instance in which the support of \(\mathcal{P}\) consists of the single path from \(s\) to \(t\) that passes through no nodes \(\omega_{i}\) for any \(1\leq i\leq n\). For all \(i\), edge \(\{u_{i-1},u_{i}\}\) has weight 1 and removal cost 1, edge \(\{u_{i-1},\omega_{i}\}\) has weight 1 and cost \(\nu_{i}\), and edge \(\{\omega_{i},u_{i}\}\) has weight \(\eta_{i}\) and cost \(\nu_{i}\). The adversary's budget is \(U-1\) with probability 1. The defender's cost only considers traffic going from \(s\) to \(t\), i.e., \(\Pr_{(x,y)\sim\mathcal{D}}[(x,y)=(s,t)]=1\). Let \(f_{+}=1\) and \(f_{-}=H^{\prime}=\sum_{i=1}^{n}\eta_{i}\). Since the adversary's budget is \(U-1\), in order to minimize the attack probability (in this case, make it 0), the defender must force some subset of edges along \(p^{*}\) to be at least as long as the two-hop paths running parallel to them. The removal costs on the parallel paths of these edges must sum to at least \(U\): the defender's "value" is increased cost for the attacker. The increase in distance traveled for the user will be commensurate with the weights of the items associated with the perturbed edges. This provides a direct mapping between solving Knapsack and solving Zero-Sum Cut Defense on the generated graph. A detailed proof is provided in Appendix C. While the problem cannot be efficiently solved in general, we find feasible points fairly easily by increasing the length of \(p^{*}\) until the cost of edges used in \(\mathtt{PATHATTACK}\) is sufficiently high. Starting with weights \(w^{\prime}\) initialized to the true weights \(w\), the procedure is as follows. 1. \(E^{\prime}\leftarrow\mathtt{PATHATTACK}(G,w^{\prime},p^{*})\) 2. \(p\leftarrow\) 2nd shortest path between the terminals of \(p^{*}\), if it exists 3. pick an edge \(e\) from \(E_{p^{*}}\setminus E_{p}\) 4. increase \(w^{\prime}(e)\) by \(\delta=\ell(G,w^{\prime},p)-\ell(G,w^{\prime},p^{*})\) Here \(E_{p}\) is the set of edges on path \(p\). This procedure continues until either \(p^{*}\) is the longest path between its terminals or \(c(E^{\prime})\) exceeds the largest possible attack budget. This procedure yields a feasible point, assuming \(\mathtt{PATHATTACK}\) provides an optimal solution. If we continue until \(p^{*}\) is the longest path, we are definitely at a feasible point: all other paths that connect \(p^{*}\)'s endpoints need to be cut. This observation yields the following theorem. Figure 1: Reduction from Knapsack to Zero-Sum Cut Defense. The \(i\)th item in the set corresponds to a triangle \(\{u_{i-1},\omega_{i},u_{i}\}\). All target paths go from \(s=u_{0}\) to \(t=u_{n}\). The target path \(p^{*}\) traverses the bottom edges on the figure, highlighted in red. Keeping defender cost low while ensuring the probability of attack is 0 is equivalent to keeping the weight of the knapsack low while ensuring the value is sufficient. **Theorem 3**.: _When \(\mathcal{P}\) consists of a single path \(p^{*}\), performing the procedure above yields a feasible point for Zero-Sum Cut Defense._ While having multiple possible target paths complicates the problem, we use a similar principle to reduce the probability of attack. For each target path, we increase the edge weights as described above. We then apply PATHATTACK and use the cost of the removed edges to calculate the attack probability. Then, starting from the original graph, we consider the target paths in order of increasing attack probability; a sketch of the single-path increment loop is given below. 
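The sketch below implements the four-step increment loop for a single target path, assuming an undirected NetworkX graph whose edges carry a published weight `w_pub` and a removal `cost`, and an `attack_oracle` callable standing in for PATHATTACK; the names and the epsilon guard are illustrative choices, not the paper's implementation.

```python
import networkx as nx

def _plen(G, path, attr):
    return sum(G.edges[u, v][attr] for u, v in zip(path[:-1], path[1:]))

def raise_target_path(G, p_star, attack_oracle, max_budget,
                      w_pub='w_pub', max_iters=1000):
    """Greedy feasible-point search for one target path (steps 1-4 above).

    p_star: list of nodes.  attack_oracle(G, p_star) returns the set of edges
    the attacker would cut under the current published weights."""
    s, t = p_star[0], p_star[-1]
    star_edges = list(zip(p_star[:-1], p_star[1:]))
    for _ in range(max_iters):
        E_cut = attack_oracle(G, p_star)                       # step 1
        if sum(G.edges[u, v]['cost'] for u, v in E_cut) > max_budget:
            break                                              # attack is now too expensive
        gen = nx.shortest_simple_paths(G, s, t, weight=w_pub)
        next(gen)                                              # discard the shortest path
        p2 = next(gen, None)                                   # step 2: 2nd shortest, if any
        if p2 is None:
            break
        p2_edges = {frozenset(e) for e in zip(p2[:-1], p2[1:])}
        candidates = [e for e in star_edges if frozenset(e) not in p2_edges]
        if not candidates:
            break                                              # p_star cannot be lengthened past p2
        delta = _plen(G, p2, w_pub) - _plen(G, p_star, w_pub)  # step 4 increment
        u, v = candidates[0]                                   # step 3: an edge of p_star not on p2
        G.edges[u, v][w_pub] += max(delta, 1e-9)               # epsilon avoids zero-size steps
    return G
```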
We then increase the edge weights on each target path again, accumulating the new weights each time. This prioritizes the path at the end of the sequence, which has the highest probability of resulting in an attack. ### Heuristic Method From the zero-sum case, we see that increasing the weights on target paths is an effective strategy. Taking this as inspiration, we propose a heuristic algorithm that iteratively chooses an edge \(e\) from some target path \(p^{*}\) and increments its weight to add another path to \(P_{p^{*}}\). We call this algorithm PATHDEFENSE. At each iteration, the algorithm considers edges on which the smallest possible weight increase will provide one target path with a new competing path. For a given target path \(p^{*}\), these edges are identified by applying PATHATTACK and finding the second-shortest path \(p\) between the source and destination of \(p^{*}\), if such a path exists. The edges in \(p^{*}\) that are not part of \(p\) may be incremented to add \(p\) as a competing path that must be cut to make \(p^{*}\) shortest. Pseudocode for this subroutine is provided in Algorithm 1. The attack probability is evaluated after considering each possible perturbation, and whichever perturbation results in the smallest attack probability is kept. If multiple perturbations result in the same attack probability, the edge is chosen that maximizes the average length of \(p^{*}\). This procedure continues until (1) all target paths are the longest between their terminals, (2) a threshold is reached in terms of cost, attack probability, or number of iterations. Pseudocode for PATHDEFENSE is provided in Algorithm 2. ### Relaxation: Local Optimization Around a Feasible Point Once a feasible point is identified, we relax the hardest constraints to formulate a linear program for local optimization. In this case, we fix the attack that occurs for each \(p^{*}\), and ensure that the observed shortest path between each pair of nodes remains the same as the weights are varied. By fixing the attack, we are given a value for \(\Delta_{p^{*}}\) and \(z_{p^{*}}\), thus removing constraints (14)-(19) from the nonconvex optimization in Section 3.1. By fixing the shortest path, we are given a value for \(p_{uv,p^{*}}\), replacing constraint (20) with \[\ell^{\text{obs}}_{uv,p^{*}}\leq\mathbf{x}_{p}^{\top}\left(\mathbf{w}^{\prime} +W\Delta_{p^{*}}\right)\quad\forall u,v\in V,p^{*}\in\text{supp}(\mathcal{P}) \cup\{\emptyset\},p\in P(u,v) \tag{28}\] All remaining constraints in the nonconvex program are linear. This is not, however, sufficient to locally optimize: The attack \(\Delta_{p^{*}}\) must be both necessary (not cut superfluous edges) and sufficient (cut all paths that compete with \(p^{*}\)). To optimize within this context, we add the constraints \[\mathbf{x}_{p^{*}}^{\top}\mathbf{w}^{\prime} \leq\mathbf{x}_{p}^{\top}(\mathbf{w}^{\prime}+W\Delta_{p^{*}})- \epsilon_{p^{*}}\quad\forall p^{*}\in\operatorname{supp}(\mathcal{P}),p\in P(s _{p^{*}},t_{p^{*}}) \tag{29}\] \[\mathbf{x}_{p^{*}}^{\top}\mathbf{w}^{\prime} \geq\mathbf{x}_{p}^{\top}\mathbf{w}^{\prime}\quad p^{*}\in \operatorname{supp}(\mathcal{P}),p\in P_{p^{*}} \tag{30}\] Here \(s_{p^{*}}\) and \(t_{p^{*}}\) are the source and destination nodes, respectively, of \(p^{*}\) and \(P_{p^{*}}\) is the set of paths competing with \(p^{*}\) that were used by PATHATTACK to obtain \(\Delta_{p^{*}}\). To ensure sufficiency, (29) constrains all paths between the terminals of a target path \(p^{*}\) to be strictly longer than \(p^{*}\). 
The additional variables \(\epsilon_{p}^{*}\) may be measured based on the difference in lengths between \(p^{*}\) and the second-shortest path after the attack. Constraint (30) ensures necessity by making all paths that competed with \(p^{*}\) at the feasible point to remain competitive. Since all constraints are linear, we use constraint generation to explicitly state a subset of the necessary constraints, just as in the PATHATTACK algorithm. ## 4 Experiments We demonstrate the optimization procedure using 4 synthetic network generators and 4 real networks. All synthetic networks have 250 nodes and an average degree of approximately 12. We use Erdos-Renyi (ER) random graphs, Barabasi-Albert (BA) preferential attachment graphs, Watts-Strogatz (WS) small-world graphs, and stochastic blockmodel (SBM) graphs where nodes are separated into communities of size 200 and 50. All edges are given weights drawn from a Poisson distribution with rate parameter 20 and removal costs are set to 1. The real network datasets include two transportation networks, a social network, and a computer network. The transportation networks are United States airports (USAIR), where edge weights are the number of seats on flights between airports [9], and United Kingdom metro stops (UKMET), where weights are travel times between stops in minutes [13]. The social network used is interactions between users at the 2009 ACM Hypertext Conference (HT), where weights are the number of face-to-face interactions between users over the course of the conference [15]. The computer network is an autonomous system (AS) graph [18], with weights from a Poisson distribution as in the synthetic networks. Weights in the USAIR and HT graphs are inverted to create distances rather than similarities. More detailed statistics of the datasets and links to their web locations are provided in Appendix D. ### Experimental Setup For each experiment, we choose one dataset and 1, 2, 4, or 8 target paths, ranging from 5th shortest to 19th shortest. Source-destination pairs are chosen uniformly at random and the target paths include the 5th shortest and every second path thereafter. In some cases, all target paths have the same terminal nodes, in others, we choose independently for each path. For SBM and AS graphs, we also consider the case where the two terminal nodes are from one community of nodes, but the target path traverses nodes in another one. (We call this an _extra-community_ path.) This emulates a scenario where an outside attacker wants the traffic to take a relatively unnatural path, e.g., computer traffic unnecessarily crossing national boundaries. In each experiment, \(\mathcal{B}\) is a Poisson distribution whose rate parameter is set to the average number of edges removed by PATHATTACK across all target paths. The distribution of sources and destinations for users emphasizes the portions of the graph with target paths: with probability 0.5, we draw two nodes both either on a target path or on the true shortest path between its endpoints, and with probability 0.5 we do not. (Pairs are uniformly distributed within each category.) For each setting, results are aggregated across 10 trials. Experiments were run on a CentOS Linux cluster with 32 cores per machine, and each job was allocated 10 GB of memory. We used Gurobi 9.5.1 for optimization and NetworkX 2.4 for graph analysis, both within Python 3.8.1.1 Footnote 1: Gurobi: [https://www.gurobi.com](https://www.gurobi.com). NetworkX: [https://networkx.org](https://networkx.org). 
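For reproducibility of the synthetic setting described above, graphs with roughly these statistics can be generated as follows; the exact generator parameters (e.g., the WS rewiring probability and the SBM block densities) are assumptions chosen to match the stated sizes and average degree, not values taken from the paper.

```python
import networkx as nx
import numpy as np

def make_synthetic(kind, n=250, seed=0):
    """Synthetic graphs as in Sec. 4: ~250 nodes, average degree ~12,
    Poisson(20) edge weights, unit removal costs."""
    if kind == 'ER':
        G = nx.gnp_random_graph(n, 12.0 / (n - 1), seed=seed)
    elif kind == 'BA':
        G = nx.barabasi_albert_graph(n, 6, seed=seed)             # avg degree ~ 2m = 12
    elif kind == 'WS':
        G = nx.watts_strogatz_graph(n, 12, 0.1, seed=seed)        # rewiring prob. assumed
    elif kind == 'SBM':
        G = nx.stochastic_block_model([200, 50],
                                      [[0.055, 0.02], [0.02, 0.12]], seed=seed)
    else:
        raise ValueError(kind)
    rng = np.random.default_rng(seed)
    for u, v in G.edges():
        w = int(rng.poisson(20))                                  # true traversal weight
        G.edges[u, v]['w_true'] = w
        G.edges[u, v]['w_pub'] = w                                # published weights start at w
        G.edges[u, v]['cost'] = 1                                 # unit removal cost
    return G
```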
Code for PATHATTACK is available at [https://github.com/bamille1/PATHATTACK](https://github.com/bamille1/PATHATTACK). ### Results We first consider how the three components of the defender's cost vary over the course of running PATHDEFENSE. Representative results are shown in Fig. 2. In the early iterations, cost is dominated by the true distance traveled by users. Although the distribution is skewed toward the portion of the graph affected by the attack, the impact of errors is negligible in comparison. One reason for this phenomenon is that increasing edge weights discourages their use: when a path looks longer, fewer users will take it and it will not be considered in the cost. In the UKMET case, however, this changes after about 100 iterations, at which point the cost from errors drastically increases. The metro graph is somewhat tree-like, making it difficult to avoid traversing perturbed edges. In all cases, the overall reduction in cost comes from a large reduction in the probability of attack counterbalancing a small increase in the average distance traveled. The cost of PATHDEFENSE for three additional datasets is shown in Fig. 3. The plots include cases where the rate parameter of the budget distribution is doubled and where the cost of adversary success is reduced by a factor of five. We report all costs as a proportion of a lower bound, i.e., the cost when there is no attack and no perturbation. When the adversary's budget is doubled, the initial defender cost is much larger, but the cost eventually obtained by PATHDEFENSE is very similar. It is typical for PATHDEFENSE to outperform the zero-sum case at an early iteration when there are few target paths, though this does not always happen. When there are 8 target paths in the HT graph, the zero-sum procedure produces a better result than PATHDEFENSE. This may be due to the clustering that exists in the social network: it may promote oscillation between competing paths, whereas the zero-sum method focuses on one path at a time. Results on all datasets are provided in Appendix E. Results where the attacker targets a path that exits and re-enters a community are shown in Fig. 4. The lowest relative cost is higher in this case than when paths are chosen by enumerating consecutive shortest paths, which is consistent with intuition. In the SBM graph with extra-community target paths, we again see that the zero-sum method yields lower cost than PATHDEFENSE, suggesting that optimizing each target path in sequence is effective in this case as well. ## 5 Related Work The problem of releasing a graph that can be useful while not giving away sensitive information has received considerable attention since the problem of deanonymization was discovered [4]. Much of this research has been on privacy-preserving release of social network data, where nodes are anonymized with respect to topological features like degree [19], neighborhood [25], or cluster [5]. Sharing of sensitive graph data has been studied in the context of differential privacy [11]. Sealfon [22] applied differential privacy to graph weights in the context of computing shortest paths. While keeping the true weights of the graph secret, the algorithm provides an approximate shortest path between a pair of query nodes. Other recent methods have considered differential privacy for unweighted graphs. 
Two methods--a data-driven low-dimensional projection [3] and random low-dimensional projections [6]--have been applied for cut queries, i.e., calculating the number of edges that must be removed to disconnect two sets of vertices. Other recent work does not necessarily preserve distances between pairs of nodes, but maintains the distribution of distances [8]. Outside of differential privacy, work has been done on reliably finding shortest paths when a graph is located on an untrusted server [14]. Figure 2: Defender cost broken down by component. Results are shown for BA (left), WS (center), and UKMET (right) graphs. Lower cost is better for the defender. In all cases there are 4 target paths with the same terminal nodes. There is a substantial reduction in cost due to the probability of adversary success being reduced, and the cost due to errors in published distances is minimal for BA and WS, whereas \(L_{e}\) increases a substantial amount in the UKMET data, as it is difficult to avoid traversing perturbed edges. In other work, an actor wants to "buy" a path from \(s\) to \(t\), and the prices are only known to the current owners [2]. It has been shown in this setting that a buyer can be forced to overpay for the path [12], which is similar to the Cut Defense goal of forcing an attacker to expend extra resources, though in a different data access mechanism. The PATHATTACK algorithm is an example of inverse optimization [1], and specifically the inverse shortest path problem [24]: rather than optimizing the path length for a given graph, we change the graph to make a certain path the shortest. Inverse shortest path problems have proven useful in various navigation scenarios [7, 17]. Other research has considered inverse shortest path lengths [23] and other inverse optimizations, such as max flow/min cut [20, 16, 10]. Figure 3: Cost of PATHDEFENSE when all target paths share terminals. Results are shown for ER (top), USAIR (middle), and HT (bottom) graphs, with the original budget and \(\lambda\) (left), when the attacker budget is doubled (center), and when the cost of attacker success is reduced by five times (right). Plots include the average cost (solid line) and the cost range across trials (shaded area), as well as the average zero-sum result (dashed line). All costs are normalized by a lower bound. As expected, increasing the adversary's budget results in slower convergence, and decreasing the attack success cost reduces the improvement provided by PATHDEFENSE. ## 6 Conclusion This paper presents a framework and algorithms for defending against shortest path attacks. We formulate the defense as a Stackelberg game in which the defender alters the weights of the graph before the attacker removes edges to make the target path shortest. The defender's cost includes components to limit the average distance traveled by users, the error in the published distances, and the probability of attacker success. We show that the zero-sum version of this problem is NP-hard and provide a greedy edge weight increment procedure to find a feasible point. Using this same procedure in the more general context, we propose the PATHDEFENSE algorithm and apply it to several real and synthetic datasets. Across a wide set of experiments, we observe that PATHDEFENSE reduces the attack probability to a negligible level (typically less than \(10^{-6}\)) while only slightly increasing the cost borne by users (by less than \(5\%\) in over \(87\%\) of cases).
2308.07241
Context-Aware Planning and Environment-Aware Memory for Instruction Following Embodied Agents
Accomplishing household tasks requires to plan step-by-step actions considering the consequences of previous actions. However, the state-of-the-art embodied agents often make mistakes in navigating the environment and interacting with proper objects due to imperfect learning by imitating experts or algorithmic planners without such knowledge. To improve both visual navigation and object interaction, we propose to consider the consequence of taken actions by CAPEAM (Context-Aware Planning and Environment-Aware Memory) that incorporates semantic context (e.g., appropriate objects to interact with) in a sequence of actions, and the changed spatial arrangement and states of interacted objects (e.g., location that the object has been moved to) in inferring the subsequent actions. We empirically show that the agent with the proposed CAPEAM achieves state-of-the-art performance in various metrics using a challenging interactive instruction following benchmark in both seen and unseen environments by large margins (up to +10.70% in unseen env.).
Byeonghwi Kim, Jinyeon Kim, Yuyeong Kim, Cheolhong Min, Jonghyun Choi
2023-08-14T16:23:21Z
http://arxiv.org/abs/2308.07241v4
# Context-Aware Planning and Environment-Aware Memory ###### Abstract Accomplishing household tasks requires to plan step-by-step actions considering the consequences of previous actions. However, the state-of-the-art embodied agents often make mistakes in navigating the environment and interacting with proper objects due to imperfect learning by imitating experts or algorithmic planners without such knowledge. To improve both visual navigation and object interaction, we propose to consider the consequence of taken actions by **CAPEAM** (Context-Aware Planning and Environment-Aware Memory) that incorporates semantic context (_e.g._, appropriate objects to interact with) in a sequence of actions, and the changed spatial arrangement and states of interacted objects (_e.g._, location that the object has been moved to) in inferring the subsequent actions. We empirically show that the agent with the proposed CAPEAM achieves state-of-the-art performance in various metrics using a challenging interactive instruction following benchmark in both seen and unseen environments by large margins (up to \(+10.70\%\) in unseen env.). ## 1 Introduction For decades, the research community has been pursuing the goal of building a robotic assistant that can perform everyday tasks through language directives. Recent advancements in computer vision, natural language processing, and embodied AI have led to the development of several benchmarks aimed at encouraging research on various components of such robotic agents. These benchmarks include navigation [2, 7, 8, 24], object interaction [28, 42], and interactive reasoning [11, 16] in visually rich 3D environments [6, 23, 40]. However, for realistic assistants to be built, active research in interactive instruction following [16, 28, 36, 42] has been in progress. This requires agents to navigate, interact with objects, and complete long-horizon tasks by following natural language instructions with egocentric vision. To accomplish a given task, the agent needs to plan a sequence of actions to interact with specific task-relevant objects. However, the agent often plans to interact with irrelevant objects to the task. For instance, for the task "put an apple slice on the table", after slicing an apple, the agent might plan to pick up a bread slice, which can lead to the failure of the entire task, mainly due to a lack of contextual memory. To address this issue, we first propose a novel approach that Figure 1: **Overview of the proposed ‘Context-Aware Planning (CAP)’ and ‘Environment-Aware Memory (EAM)’. The CAP incorporates ‘context’ (_i.e._, task-relevant objects) of the task (denoted by ✓ in generating a sequence of sub-goals, compared with the output without the CAP, denoted by �). The detailed planners then predict a sequence of agent-executable actions for each respective sub-goal. The agent keeps the state changes of objects and their masks in the EAM and utilizes them when necessary. Even when the agent may not predict the mask of the plate due to occlusion, it can still interact with the plate thanks to the mask remembered in EAM, leading to successful task completion.** divides the long-horizon planning process into two distinct phases: (1) task-relevant prediction, treated as a _context_ prediction, and (2) detailed action planning that considers the contextual memory. We refer to the term 'context' as the objects that the command instructs the agent to manipulate. 
All actions performed by the agent need to focus on these objects, making them the overarching context for the entire plan's actions. By prioritizing the prediction of the context, we improve the agent's ability to plan a sequence of actions with less loss of environmental knowledge including objects and their receptacles. We then combine the generated actions with the context to boost the agent's efficiency in accomplishing long-term objectives by concentrating on interactive objects related to the task. In addition, changing the object states poses an additional challenge to the agent's ability to successfully complete tasks that involve object interaction [16, 42]. Failure to track the dynamic object states (_e.g_., if an object has been already moved or not) can result in unintended interactions and often lead to task failure. For example, for the task "move two apples in the table," once the agent moves an apple, the agent might try to move the same apple twice if the agent does not know the apple has already been moved and eventually fails at the task. To address the additional challenge, we further propose to use an environment-aware memory that stores information about the states of objects, as well as their masks for changed visual appearances mainly due to occlusion. This approach allows the agent to interact with objects in their proper states over time. By keeping track of object states and appearances, the agent can ensure interacting with the correct objects and conducting the appropriate actions, ultimately leading to more successful task completion. For training and evaluation, we use the widely used challenging benchmark for interactive instruction following [36]. We achieve the state-of-the-art success rates and the goal-condition success rates in _seen_ and _unseen_ environments by large margins (up to \(+10.70\%\) in unseen SR) and rank the first place in the leaderboard at the moment of submission. Also, CAPEAM with the templated approach with minor engineering won the \(1^{st}\) generalist language grounding agents challenge at the Embodied AI Workshop in CVPR 2023.1 Footnote 1: See our entry ‘[EA123] ECLAIR’ in [https://leaderboard.allenai.org/alfred/submissions/public](https://leaderboard.allenai.org/alfred/submissions/public) We summarize our contributions as follows: * We propose context-aware planning that plans a subgoal sequence with 'context' and conducts respective sub-goals with the corresponding detailed planners. * We propose environment-aware memory that stores states in spatial memory and object masks for better navigation and interaction with changed object states. * We achieve a state-of-the-art in a challenging interactive instruction following benchmark [36] in all metrics with better generalization to novel environments. ## 2 Related Work Action Planning.Embodied AI tasks require agents to reason at multiple levels of abstraction, taking into account the dynamic nature of the environment, their capabilities, and goal formulation [11, 14, 39, 16, 13]. Despite the complexity of tasks, many approaches [30, 32, 37] rely on flat reasoning that directly outputs low-level actions. Prior arts [12, 16, 41, 5, 10] attempted to address this issue by splitting the layer of actions into two, with the first layer composed of abstract natural language instructions and the second layer of agent-interpretable low actions. However, a significant semantic gap exists between natural language instruction and agent-interpretable actions. 
Several approaches [24, 25, 32] require large amounts of labeled data or trial-and-error learning to bridge this gap. [4] propose a deep model to reduce the semantic gap issue without requiring excessive labeled data or trial-and-error learning. However, their models often suffer from confusion in predicting the correct objects to interact with. Meanwhile, the templated approaches [19, 27] have been explored for data-efficient action planning. They rely on pre-designed templates for every task type in the dataset and match each task to the corresponding template. Despite the benefits of efficiency and accuracy, human experts must generate templates whenever a new task type is needed, which is time-consuming and resource-intensive. Additionally, this may not generate optimal plans, as it is restricted to predefined protocols and may not be suitable for tasks outside of the specified protocol. Instead, we propose a method that takes full advantage of the deep learning model to decrease the rigidity of the templated approaches. For effective planning, it would help agents focus on a task context to parse task-relevant objects from natural language instructions [9, 15]. [9] learn to localize instruction-relevant objects based on implicit language representation. [15] uses the implicit representation of language instructions to guide hallucination of the region outside of the input egocentric map. However, such implicit representations can be affected by language ambiguity (e.g., 'a red one,' 'the fruit,' and 'apple' all referring to 'Apple') as the agents are not supervised to recognize that these expressions refer to the same object. In contrast, we explicitly predict task-relevant objects (e.g., Apple, Knife, etc.) that can help the agent focus on the context and thus be robust to variations in language instructions. Aside from the learning-based approaches for planning, large language models (LLMs) [17, 18, 33, 1] have been actively investigated for their efficacy in reasoning. Despite their significant capabilities, however, LLMs lack physical grounding [17] due to the absent connection between the physical state of agents and environments (e.g., executable actions, environmental consequences of actions, etc.). This may result in unreasonable interpretations of instructions. For instance, for the task of 'turn on the lamp while holding a tennis racket,' the LLM may generate a plausible plan: 1) pick up the tennis racket, 2) plug in the cord of the lamp, and 3) turn on the lamp. However, the agent may not support the 'plug-in' action, leading to failure at the task. To address the physical grounding issue in LLMs, efforts have been made including re-prompting [33], success detectors [18], and skill affordance value functions [1], which implies that sub-goal planning with LLMs still requires further investigation. Memory for Object Interaction. Utilizing the semantic spatial map makes the process of searching for objects efficient. Previous works [19, 27] record the objects' positional information on the map when the agent finds key objects during exploration and use it when the current sub-goal matches one of these records. However, these records do not indicate whether the interaction has already been completed or whether the object should not be moved again. This may cause the agent to re-interact with objects that should not be interacted with. To address this issue, we propose environment-aware memory that keeps track of whether each object has been interacted with, which can reduce misinteraction with unsuitable objects. 
In addition to spatial representation encoding, interaction with the same object in multiple time steps is also challenging as the appearance of an object may change during the interaction (_e.g._, opened drawer vs. closed drawer) but the agent has to recognize the same object with different appearances. For multiple interactions with the same object, [37] proposes instance association in time (IAT) that introduces a memory to store the previous time step's mask for an additional mask-selection criterion based on the geological distance of masks to select the object mask closest to the previous time step's one. However, if the mask generator fails to recognize the object, the IAT then has no mask candidates for selection and therefore the agent is not able to choose the current object's mask. In contrast, we propose to memorize the previously interacted masks to reduce the effect of the appearance changes mainly due to occlusion. Semantic Spatial Representation.The earlier strategies to the problem [30, 32, 37] are to directly map visual observations and natural language instructions to a sequence of low-level actions and corresponding masks for object interaction by encoding their history in implicit representation. For instance, [37] proposes to use separate network branches for action and mask prediction and each branch learns to map visual and language features to a sequence of respective actions and masks. [32] proposes a Transformer-based agent that jointly learns to predict actions and masks based on the history of visual observations and language instructions. While they outperform the baseline [36] with large margins, they still lack the ability to complete tasks in unseen environments as observed in the performance gap Figure 2: **Model Architecture. Our agent consists of (1) ‘context-aware planning (CAP)’ and (2) ‘environment-aware memory (EAM)’. Taking the natural language instructions, the sub-goal planner in the CAP predicts ‘context’ (_i.e._, task-relevant objects) and generates a sequence of ‘sub-goal frames’ that are sub-goals with a predicted action and placeholders for which object should be used with it. Then the objects in the ‘sub-goal frames’ are completed with predicted objects (the context). For each planned sub-goal, a corresponding detailed planner generates a sequence of ‘executable actions.’ In the EAM, the agent maintains the semantic spatial map by integrating the predicted depths and masks into 3D world-coordinates along with the state changes of objects with their masks to utilize them during task completion.** from seen environments. Multiple recent works propose to build an explicit semantic spatial representation such as 2D top-down maps [19, 27], 3D voxel maps [5, 29, 20], or graphs [26]. Such representation enables the agent to accurately perceive 3D environments and plan actions to navigate to and interact with objects on the representation at the expense of additional supervision. Inspired by this, we maintain our agent's history in a semantic map for room layouts, objects, _etc_. ## 3 Approach Generalizing a learned embodied AI agent to the unseen environment is one of the key challenges of building a successful embodied AI agent [5, 29, 19, 27]. To achieve better generalization, recently proposed successful approaches [5, 29, 20] often build an explicit spatial map for the environment and use a hybrid approach of combining learning the environment with well-designed navigation strategies. 
Despite being successful, these agents often forget their task contexts and therefore attempt to interact with task-irrelevant objects, eventually leading to task failure. In addition, they might encounter different objects' states (_e.g_., appearances, positions, _etc_.) due to object interaction, which may require tracking the states while completing tasks. To address this issue, we introduce context-aware planning that plans a sequence of actions based on a context (_i.e_., task-relevant objects) and environment-aware memory that stores states in a spatial memory and object masks for better navigation and interaction with the objects in various states (_e.g_., an object containing another object inside) in a hybrid model of learning and crafted navigation algorithms. Figure 2 illustrates the architecture of our CAPEAM. We provide details for each component of our method below. ### Context-Aware Planning Upon receiving a natural language command, an agent needs to interpret and infer the requirements of the given task (_e.g_., to fetch the object of interest). Once the agent successfully interprets the command, the agent needs to create a plan to achieve the goal. Similar to [4, 5, 41], we propose a novel planning scheme that divides the goal into'sub-goals' and develops each sub-goal into a 'detailed action sequence' that the agent can execute. However, generating a proper sub-goal sequence for the complex task is not trivial as the generated sub-goals often contain irrelevant objects or undesirable actions. Particularly, if there is no proper error-correcting mechanism in place, which is not trivial either, an incorrect sub-goal leads to a failure of the entire task. Even if the agent corrects the error, the corrections take extra steps to complete the task, harming the efficiency of the agent. For the correct planning with task-relevant objects, we first define 'context' as a set of task-relevant objects shared across sub-goals of a given task. The proposed 'context-aware planning' (CAP) divides planning into two phases; 1) a'sub-goal planner' which generates sub-goals, and 2) a 'detailed planner' which is responsible for a sequence of detailed actions and objects for interaction for each sub-goal. The sub-goal planner further comprises two sub-modules: the context predictor, which predicts three task-relevant objects, and the sub-goal frame sequence generator, which generates a sequence of sub-goals that do not rely on particular objects, referred to as sub-goal frames. There are three task-relevant objects referred to as the context. The first object corresponds to the main object to be manipulated. The second object pertains to the container that holds the object. The last object relates to the target object where the object is to be placed in the task. We integrate these predicted task-specific objects into a sequence of sub-goal frames to produce a sub-goal sequence. This allows our agent to plan a sequence of sub-goals conditioned on the task-relevant objects, which helps the agent remember the context, _i.e_., objects, during action planning. #### 3.1.1 Sub-Goal Planner Given a language input \(l\), the sub-goal planner, \(f_{sub}(\cdot)\), generates human-interpretable sub-goals. 
Figure 3: **Context-Aware Planning (CAP). It consists of a ‘sub-goal planner’ and a set of ‘detailed planners’ for each sub-goal to generate ‘executable actions.’ The sub-goal planner first predicts a set of objects related to the task, which we call ‘Context.’ Then, the ‘sub-goal frame sequence generator’ in the sub-goal planner generates a sequence of ‘sub-goal frames.’ Finally, the ‘meta-classes’ in each sub-goal frame are replaced with the corresponding objects in the context, resulting in the final sub-goal. A ‘detailed planner’ translates the sub-goal to executable actions.** We can write the \(n^{\text{th}}\) sub-goal as a triplet of action, a small object, and a receptacle that contains the object to be interacted with as: \[\begin{split} f_{sub}(l)&=\{S_{n}\}_{n=1}^{N},\\ S_{n}&=(A_{n},O_{n},R_{n}),\end{split} \tag{1}\] where \(A_{n}\) denotes a human-interpretable action, such as 'clean' or 'heat'. \(O_{n}\) denotes a small object targeted for manipulation in the execution of \(A_{n}\). \(R_{n}\) refers to the location where \(O_{n}\) can be found. \(N\) refers to the number of sub-goals in a plan. We have two sub-modules to generate the triplets: the context predictors and the sub-goal frame sequence generator. The context predictors focus on extracting task-relevant objects from the natural language instruction. The sub-goal frame generator concentrates on a sequence of interactions that constitute the goal. We predict task-relevant objects so that all sub-goals in the plan share the same task-relevant object information, and we use a 'meta-class' to generate sub-goal sequences that only identify where these task-relevant objects should be placed. This ensures that all sub-goals share the same task-relevant objects. We refer to a sub-goal filled with the meta-classes as a 'sub-goal frame'. Context Prediction. Given a human-described instruction \(l\) as an input, we use three context predictors. \(f^{O}_{ctxt}(\cdot)\) predicts a primary object being targeted in the task, denoted as \(c_{O}\). \(f^{M}_{ctxt}(\cdot)\) predicts a required carrier of the object, denoted as \(c_{M}\). \(f^{R}_{ctxt}(\cdot)\) outputs a destination (_i.e_., receptacle) that contains the target object, referred to as \(c_{R}\): \[f^{O}_{ctxt}(l)=c_{O},\ \ f^{M}_{ctxt}(l)=c_{M},\ \ f^{R}_{ctxt}(l)=c_{R}, \tag{2}\] where \(c_{O}\) is the object assumed to be the main target in the task described by \(l\), and \(c_{M}\) and \(c_{R}\) are a container and a destination of \(c_{O}\), respectively. For instance, if the goal is to "place an apple in a mug on a table", the agent needs to move the 'apple' (\(c_{O}\)) using the container 'mug' (\(c_{M}\)) and subsequently place both of them onto the 'table' (\(c_{R}\)). To predict the context, we finetune a pretrained language model [13]. Sub-Goal Frame Sequence Generator. After identifying the objects that require manipulation in the task, we determine a series of interactions that will bring objects to the desired goal state. To do this, we generate a sequence of sub-goals _without_ specifying the context (_i.e_., objects) involved. This allows the sub-goals to be later filled with predicted task-relevant objects. This task is accomplished through the sub-goal frame sequence generator. We introduce a meta-class, which is mapped to one of the corresponding contexts. 
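As a toy illustration of how predicted contexts fill a sub-goal-frame sequence for the "place an apple in a mug on a table" example above (the context predictors and the frame generator are learned models in our pipeline; here their outputs are hard-coded, and the meta-class tokens and frame contents are illustrative):

```python
# Meta-class tokens standing for the main object, its carrier, and the receptacle.
X_O, X_M, X_R = "<obj>", "<carrier>", "<recep>"

def fill_frames(frames, context):
    """Replace meta-classes in (action, object, receptacle) sub-goal frames
    with the predicted context objects (c_O, c_M, c_R)."""
    return [(a, context.get(o, o), context.get(r, r)) for a, o, r in frames]

# Assumed predictor outputs and frame sequence for the example task.
context = {X_O: "Apple", X_M: "Mug", X_R: "DiningTable"}
frames = [("Pickup", X_O, None),   # third slot loosely holds the receptacle involved
          ("Put",    X_O, X_M),
          ("Pickup", X_M, None),
          ("Put",    X_M, X_R)]
print(fill_frames(frames, context))
# [('Pickup', 'Apple', None), ('Put', 'Apple', 'Mug'),
#  ('Pickup', 'Mug', None), ('Put', 'Mug', 'DiningTable')]
```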
The use of a meta-class instead of the specific object name allows the sub-goal frame sequence generator to focus more on the object's role (_i.e_., a main target, a carrier, and a destination) in the task rather than its own name, as illustrated in Figure 5. If a sub-goal frame includes meta-classes (\(x_{O}\), \(x_{M}\), \(x_{R}\)), it is later replaced with the 'contexts' (\(c_{O}\), \(c_{M}\), \(c_{R}\)) from the context predictors. Note that some contexts may contain 'None' indicating that a task does not need them. For instance, for a task, 'put an apple on the table,' the carrier (\(x_{M}\)) is not needed and therefore an action sequence is planned based only on the two contexts (_i.e_., \(x_{O}=\text{Apple and }x_{R}=\text{Table}\)). The sub-goal frame outputs a sub-goal with a placeholder (\(\langle\cdot\rangle\)), which is later filled with either an object or a meta-class by the generator. Formally, the sub-goal frame sequence generator \(f_{sf}(\cdot)\) takes as input a natural language instruction and generates a sequence of sub-goal frames: \[\begin{split} f_{sf}(l)&=\{F_{n}\}_{n=1}^{N},\\ F_{n}&=(A_{n},\langle O\rangle_{n},\langle R\rangle _{n}),\\ \langle\cdot\rangle&\in E\cup\{x_{O},x_{M},x_{R}\}, \end{split} \tag{3}\] where \(l\) denotes the instruction, \(A_{n}\) denotes an action of \(n^{th}\) sub-goal frame, \(\langle O\rangle_{n}\) and \(\langle R\rangle_{n}\) represent place holder of the small object and receptacle to be interacted with, respectively, \(E\) refers to a set of all objects in a given environment, and \(N\) refers to the number of sub-goal frames. All meta-class in place holder of sub-goal frames are then replaced with task-relevant objects (\(c_{O}\), \(c_{M}\), \(c_{R}\)) from context prediction, resulting in the final output \(\{(A_{n},O_{n},R_{n})\}_{n=1}^{N}\). #### 3.1.2 Detailed Planners To execute each sub-goal generated by the sub-goal planner \(f_{sub}\), an agent should render the agent-executable actions which we refer to as 'detailed actions' from the inferred'sub-goal plan', \(\{S_{n}\}_{n=1}^{N}\). We define a 'detailed planner' \(f^{g}_{dp}(\cdot)\) for a sub-goal action \(g\) to translate a sub-goal with \(A_{n}=g\) into a sequence of the detailed actions as: \[f^{g}_{dp}((A_{n},O_{n},R_{n}))=\{(a_{t},o_{t})\}_{t=1}^{T_{n}}, \tag{4}\] where \(a_{t}\) and \(o_{t}\) are the \(t^{\text{th}}\) detailed action and an object to interact in the given sub-goal. For instance, if a sub-goal is given as (Pickup, Plate, Cabinet), a favorable output of a detailed planner would be (Open, Cabinet), (Pickup, Plate), and (Close, Cabinet). We learn the detailed planner using a self-attention LSTM in a supervised manner [22]. ### Environment-Aware Memory Semantic Spatial Map.One of the challenges in the interactive instruction following task is to accurately perceive the 3D environment from the 2D RGB images captured by the agent for better navigation and interaction with objects. Following [5, 19, 20, 27, 29], we build a semantic spatial map from predicted depth maps and object masks by back-projecting them to 3D world coordinates. In particular, we use depth to maintain the environmental information such as obstacle area, object positions and classes, _etc_. However, without memory, the agent may not be able to keep track of the configurations of objects in an environment. The absence of such tracking poses an additional challenge for task completion. 
For instance, if the agent is asked to move multiple apples to a table, the agent should then remember which objects have been moved so far. Here, we propose to configure memory of past environmental information to remember the past information and predict a proper action sequence during a task. The memory helps the agent remember the current state of the environment and make proper decisions to facilitate task completion. Retrospective Object Recognition.While completing tasks, the agent often needs to interact with the same object in multiple time steps. For example, to move an apple with a plate, the agent has to interact with the plate twice: 1) put the apple in the plate and 2) pick up the plate with the apple. During multiple object interactions, however, the visual appearance of the object can change due to various reasons such as occlusion. For these reasons, the agent might not be able to recognize the object and would fail at interaction with the intended object, leading to the entire task's failure. To address changes in the visual appearance of an object during multiple interactions, we propose to retain the latest segmentation masks of objects and use them as the current object's mask if the agent is interacting with the same object but fails to recognize it. Exploiting the preserved masks allows the agent to keep interacting with the same object even with visual appearance changes during interactions. Object Relocation Tracking.In another task completion scenario, the agent often faces scenarios where it needs to relocate multiple objects of the same class to the same location (_e.g._, "Move two apples on the table."). While conducting this task, the agent may lose track of which objects have already been relocated. Consequently, the agent may attempt to navigate to and relocate objects that have already been relocated, which possibly leads to a repeated sequence of unintended object relocation or a task failure. To circumvent the issues, we propose to maintain the information about the most recent location of each relocated object in a 2D coordinate and exclude it in the semantic map as a future target for our agent's navigation. The agent can recognize relocated objects among all detected objects by comparing the locations in the memory and the semantic spatial map. This module allows the agent to avoid redundant interaction with already relocated objects, possibly leading to task failure, and therefore successfully navigate to and interact with those that have been not relocated. Object Location Caching.During another task completion, an agent may need to revisit object locations that it has previously visited to obtain an object with 'changed states'. Considering the task "put an apple slice on the table" as an example, after the agent has sliced the apple, the agent needs to navigate to the sliced object again at the location where the apple was sliced to interact with the object in its changed state. Without memorizing the locations and masks, however, the agent would need to re-explore the environment to locate the objects again, which could result in inefficient navigation and possible task failure. To alleviate such an issue, we propose to cache the 2D locations and the segmentation masks in memory for objects whose states change. By preserving the object locations and masks, the agent can efficiently navigate back to the remembered locations and interact with the remembered object masks when necessary. 
This can reduce the need for the agent to explore the environment again, which can lead to more efficient navigation and interaction and possibly reduce the possibility of navigation and interaction failure. ### Action Policy To conduct object interaction for task completion, the agent first needs to reach the target objects in close vicinity [31, 34, 39]. Recent approaches [5, 19, 20, 29] plan obstacle-free paths using either deterministic algorithms (_e.g._, A*, FMM [35], _etc._) or learning the path in the discrete navigation space [38], mostly by the imitation learning [3]. Since imitation learning requires a large number of expert trajectories for satisfactory performance, deterministic algorithms currently dominate the literature for significantly better performance, implying the amount of data in the current benchmark with imitation learning might be more limited than necessary. In addition, [19, 27] maintains the obstacle area larger than they actually perceive for safe path planning distant from the obstacles. Inspired by them, we plan navigation paths with the deterministic approach [35] in the discrete space on the expanded obstacle map. Figure 4: **Environment-Aware Memory (EAM).** The agent updates the semantic spatial map using predicted depths and object masks for scene information. ‘Retrospective Object Recognition’ preserves the latest object mask to approximate the current object’s mask when mask prediction fails. ‘Object Relocation Tracking’ stores the most recent location of each relocated object and discards it as a future navigation target. ‘Object Location Caching’ remembers the locations and masks of objects whose states change. ## 4 Experiments Dataset and Metrics.We employ ALFRED [36] as a challenging interactive instruction following benchmark. There are three splits of environments in ALFRED: 'train', 'validation', and 'test'. The validation and test environments are further divided into two folds, _seen_ and _unseen_, to assess the generalization capacity. For evaluation, we follow the standard evaluation protocol of the ALFRED benchmark [36]. The primary metric is the success rate, denoted by 'SR,' which measures the percentage of completed tasks. Another metric is the goal-condition success rate, denoted by 'GC,' which measures the percentage of satisfied goal conditions. Finally, path-length-weighted (PLW) scores penalize SR and GC by the length of the actions that the agent takes. We provide further details of this benchmark in the supplementary for space's sake. ### Comparison with the State of the Art We present a quantitative analysis of our method and prior arts in Table 1. For a fair comparison, we compare our method with prior arts that incorporate semantic spatial representation constructed followed by depth estimation. Following the recent approaches [27, 5], we compare methods that use only a high-level goal statement (_i.e_., without low-level instructions, denoted by 'Low Inst.' by \(\boldsymbol{\chi}\)). In addition, we compare the models that generate action sequences using prior knowledge of tasks and environments with the 'action template' ('Tem. Act.' by \(\boldsymbol{\check{\check{\bigvee}}}\)) [19, 27]. We first investigate the performance when the hand-designed action sequence templates are combined with our agent (\(\boldsymbol{\check{\bigvee}}\) in 'Tem. Act.'), which is an ablated version of our model. 
We observe that our agent outperforms all prior arts in novel environments in terms of success rates, which is the main metric of the benchmark. We observe [19] yields better performance in seen environments compared to our agent. We believe that this might be attributed to the strategies to enhance spatial perception such as pose adjustment based on accurate perception models (_e.g_., depth estimators, _etc_.) that generally perform well in seen environments. Nevertheless, we observe that agent has less performance gap between seen and unseen environments, implying better generalization of our agent to unseen environments. We then investigate the performance without using the templated action sequences (\(\boldsymbol{\check{\bigvee}}\) in 'Tem. Act.'). We observe that our method outperforms all prior arts by large margins in SR and GC for both seen and unseen environments. As we consistently observe the improvements with and without the low-level instructions, this would imply that our method does not heavily rely on the detailed description of a task. Note that [29] collects an additional dataset with the human-in-the-loop process for interaction failure recovery. With the additional expensive supervision, we observe that [29] achieves better PLW scores by efficiently completing tasks than CAPEAM, and the comparison is not quite fair. ### Ablation Study To investigate the benefits of each proposed component of our CAPEAM, we conduct a quantitative ablation study and summarize the result in Table 2. Without Context-Aware Planning.First, we ablate the CAP from our method and the agent therefore learns a \begin{table} \begin{tabular}{l c c c c c c} \hline \hline \multirow{2}{*}{**Model**} & \multirow{2}{*}{**Low**} & \multirow{2}{*}{**Tem.**} & \multicolumn{3}{c}{**Test Sem**} & \multicolumn{2}{c}{**Test Unseen**} \\ & & & & **SR** & **GC** & **SR** & **GC** \\ \hline PLM [27] & ✗ & ✓ & 25.77 (10.39) & 36.15 (14.17) & 24.6 (9.67) & 34.75 (13.13) \\ Promper [19] & ✗ & ✗ & **4.98** (**23.24**) & **35.59** (**20.86**) & **24.61** (**24.20**) & **9.95** (**25.80**) \\ **CAPEAM-** & ✗ & ✓ & 2.64 (**20.51**) & 25.22 (**22.42**) & **4.75** (**21.15**) & 22.22 (**22.32**) \\ FILM [27] & ✓ & ✓ & 28.83 (12.17) & 39.55 (15.59) & 27.80 (11.32) & 38.52 (15.13) \\ Promper [19] & ✓ & ✓ & **3.23** (**25.81**) & **6.33** (**30.37**) & **27.12** (20.76) & 38.76 (**26.22**) \\ **CAPEAM-** & ✓ & ✓ & 20.62 (**22.50**) & 20.42 (**22.42**) & **49.48** (**21.61**) & **6.10** (**27.16**) \\ \hline HLSM [3] & ✗ & ✗ & 25.31 (1.69) & 35.79 (11.53) & 16.29 (4.34) & 27.24 (8.45) \\ LG-SR [29] & ✗ & ✗ & 33.01 (**26.55**) & **33.11** (**24.87**) & 27.80 (12.29) & 38.55 (29.01) \\ Extra [26] & ✗ & ✗ & **2.96** (**26.56**) & 44.14 (**24.37**) & **26.07** (**29.27**) & 39.54 (**29.14**) \\ **CAPEAM** (Ours) & ✗ & ✗ & **47.36** (**19.03**) & **54.38** (**23.25**) & **33.09** (**17.64**) & **54.66** (**22.76**) \\ \hline HLSM [3] & ✓ & ✗ & 29.91 (**8.70**) & 41.21 (**11.45**) & 28.77 (**5.55**) & 30.31 (9.99) \\ MLM [27] & ✗ & 33.01 (**0.43**) & 43.65 (**10.48**) & 41.68 (**11.63**) & 24.13 (**24.31**) & **24.17** (**24.32**) \\ AMSLAM [21] & ✓ & ✗ & 29.48 (**23.29**) & 48.08 (**5.56**) & 23.48 (**23.36**) & 34.64 (**4.53**) \\ LG-SR [29] & ✓ & ✗ & 20.05 (**21.29**) & **38.52** (**25.97**) & **44.11** (**22.76**) & **4.12** (**22.76**) \\ **CAPEAM** (Ours) & ✓ & ✗ & **51.79** (**21.60**) & **60.95** (**22.58**) & **44.11** (**24.37**) & **57.33** (**24.06**) \\ Human & & & & & & & \\ \hline \hline \end{tabular} \end{table} 
Table 1: **Comparison with the state of the arts. The path-length-weighted (PLW) metrics are given in the parentheses for each value. The highest and second-highest values per fold and metric are shown in bold and underline, respectively. ‘Low Inst.’ refers to the step-by-step instructions aligned to the respective subgoals. ‘Tem. Act.’ refers to ‘templated action’ sequences designed in [27]. \(\boldsymbol{\check{\bigvee}}\)/\(\boldsymbol{\check{\bigvee}}\) denotes the corresponding module is used/not used, respectively. ‘CAPEAM\({}^{*}\)’ denotes our agents using the templated actions without action planning by our learned model.** \begin{table} \begin{tabular}{l c c c c c} \hline \hline \# & CAP & Param\# & **Valid. Seen** & **Valid. Unseen** \\ \hline \(a) & ✓ & \(712.60\)M & \(83.05\) & \(77.34\) \\ \hline \(b) & ✗ & \(651.01\)M & \(79.27\) & \(74.91\) \\ \(c) & ✗ & \(164.95\)M & \(80.12\) & \(74.18\) \\ \hline \hline \end{tabular} \end{table} Table 3: **Accuracy of the predicted action sequence compared to the ground-truth by the CAP module**_vs_**. naïve model size increase.** CAP refers to the ‘Context-Aware Planning.’ Param# denotes the number of parameters to be learned for planning. (b) and (c) share the same LSTM-based architecture of (a) but not use context (_i.e_., a set of task-relevant objects). CAP noticeably improves the success rate ((a) \(\rightarrow\) (c)) but this is not simply from the model size increase ((c) \(\rightarrow\) (b)). monolithic policy that directly maps natural language instructions to a sequence of agent-executable actions. At times, the agent attempts to interact with inappropriate objects to tasks. The result of the task may differ from the intended goal, leading to task failure. This eventually leads to noticeable performance drops (\(-2.34\%\), \(-3.86\%\) in SR) in both seen and unseen splits as evidenced in (#(a) _vs._ #(b)). Some may argue that the performance gain by the CAP may come from model size increase as we use separate networks for context prediction and sub-goal frame sequence generation. For this, we learn the same policy but with more learnable parameters such as the size of hidden states and embeddings to closely approach to the size of the final model and summarize the result in Table 3. We observe that even with the similar number of parameters (\(651.01\)M compared to \(712.60\)M), the planning accuracy of the policy with more parameters remains similar or even slightly drops in the seen split from the one with the smaller network (\(164.95\)M). It implies that the performance gain by the CAP may not be attributed to simple model size increase. Without Environment-Aware Memory.We then ablate the EAM from our agent. Without EAM, the agent has to conduct its actions based on the current state as the agent preserves only limited information about the changed states of objects and their masks. Due to the lack of environmental information, the agent may perform undesired actions (_e.g._, move an already relocated object) and thus fail. We observe significant performance drops (\(-5.35\%\) and \(-4.45\%\) in SR) in both seen and unseen environments (#(a) _vs._ #(c)). Without Both.Without any of the proposed components, the agent may interact with irrelevant objects with limited past environmental information. As expected, our agent without CAP and EAM achieves the lowest performance among the agents equipped with either or both (#(d) _vs._ #(a, b, c)). 
Moreover, we observe that using both CAP and EAM improves our agent more than using either of them ((#(d) \(\rightarrow\) #(b, c)) _vs._ (#(d) \(\rightarrow\) #(a))). This implies that the CAP and the EAM are complementary to each other.

### Qualitative Analysis

Context-Aware Planning. To illustrate the benefit of context-aware planning (CAP), we present two qualitative examples in Figure 5. The left example shows that a sub-goal planner without the CAP may generate sub-goals involving an irrelevant object (_i.e._, Potato), even though it is not mentioned in the given instruction. The agent keeps searching for a knife for the limited number of steps and fails. But when we use the context predictor, the sub-goal planner correctly infers the task-relevant objects and constructs the sequence with them (Sec. 3.1); the context predictor outputs task-relevant objects from the given human-described instruction, such as an 'egg' as \(o_{O}\), a 'bowl', the container to hold the object, as \(o_{M}\), and a 'counter', the place to put the object, as \(o_{R}\). With the task-relevant objects correctly inferred by our model, the sub-goal planner can generate a desirable sub-goal sequence, which leads to successful task completion. In addition, the right example in Figure 5 shows that our agent without CAP may predict objects (_i.e._, Knife) irrelevant to the task (_i.e._, "Put a watch in a bowl on the shelf."). As the agent without CAP, denoted by 'CAPEAM w/o CAP,' tries to find the unintended object (_i.e._, Knife), which is not present in this room, it keeps exploring the environment and eventually fails to reach the target object. In contrast, our agent with CAP, denoted by 'CAPEAM,' can generate a sequence of executable actions with relevant objects. By conducting all the actions with the intended objects, the agent finally succeeds in the task.

Figure 5: **Benefit of Context-Aware Planning (CAP). In the two qualitative examples, the ‘contexts’ are denoted by \(c_{O}\) in yellow, \(c_{M}\) in blue, and \(c_{R}\) in green colored boxes. While our CAPEAM plans a sub-goal sequence with task-relevant objects, ‘CAPEAM w/o CAP’ interacts with task-irrelevant objects (_i.e._, Potato or Knife) and consequently fails.**

Environment-Aware Memory. We now conduct a qualitative analysis to assess the impact of the modules in the EAM (Sec. 3.2). First, we investigate the benefit of the 'Retrospective Object Recognition' module, which allows the agent to continue interacting by utilizing a previously saved mask even when it cannot recognize the object. Prior arts [4, 27] may miss interactions with unrecognized objects because they have limited access to past information and typically rely only on current information during the interaction. For instance, in Figure 6, the memoryless model cannot detect the bowl's mask when it tries to pick up the bowl, because of occlusion caused by the watch placed on top of it. Thus, such agents may fail at the interaction, as they cannot obtain a mask to interact with. On the contrary, CAPEAM with the memory stores the bowl's mask from the previous action and uses it for the interaction. Consequently, even when the agent cannot perceive the bowl, it can interact with it through the saved mask. Finally, we investigate the benefit of the 'Object Relocation Tracking' module in the EAM. Figure 7 illustrates its benefit of keeping track of the relocated objects and preventing re-relocation of the same objects (Sec. 3.2).
As observed in 'CAPEAM' in the figure, after relocating a target object (_i.e._, 'TissueBox'), the agent remembers the relocated object's location and avoids interacting with the already relocated tissue box. On the contrary, as observed in 'CAPEAM w/o EAM,' our agent without EAM does not keep track of the relocated object's location. Thus, it interacts with the already relocated object again, which eventually leads to task failure.

## 5 Conclusion

We propose CAPEAM, which incorporates _context_ in planning and remembers environmental changes in memory for embodied AI agents. It improves navigation and object interaction by avoiding unnecessary exploration and by correctly planning actions with appropriate objects to interact with. We empirically validate the benefit of the proposed modules on ALFRED by showing that the proposed method outperforms existing methods, especially in unseen environments, even without human-designed plan templates.

Limitation and Future Work. Our context-aware planning fixes the anticipated context during a task execution as an inductive bias. But the context may change even within a single task execution. A prospective avenue for future investigation lies in modifying the context in response to input from the environment, thereby enhancing adaptability.

**Acknowledgment. This work is partly supported by the NRF grant (No.2022R1A2C4002300) 20%, IITP grants (No.2020-0-01361, AI GS Program (Yonsei University) 5%, No.2021-0-02068, AI Innovation Hub 5%, 2022-0-00077 10%, 2022-0-00113 10%, 2022-0-00959 10%, 2022-0-00871 20%, 2022-0-00951 20%) funded by the Korea government (MST).**

Figure 6: **Benefit of the ‘Retrospective Object Recognition’ in the Environment-Aware Memory (EAM).** \(\rightarrow\) indicates that the agent can preserve the object masks in EAM and utilize them, while ✗ indicates that it cannot. ‘CAPEAM w/o EAM’ fails in the interaction since it cannot recognize the bowl. In contrast, ‘CAPEAM’ can exploit the preserved bowl’s mask and therefore succeeds in the task.

Figure 7: **Benefit of the ‘Object Relocation Tracking’ in the EAM. While our agent without EAM (‘CAPEAM w/o EAM’) interacts with the already relocated object (‘TissueBox’) as it does not keep track of the relocated location (denoted by the dashed \(\rightarrow\) and ✗), EAM (‘CAPEAM’) allows the agent to avoid interacting with the already relocated one and therefore to interact with the intended one.**
2306.09918
No Strong Feelings One Way or Another: Re-operationalizing Neutrality in Natural Language Inference
Natural Language Inference (NLI) has been a cornerstone task in evaluating language models' inferential reasoning capabilities. However, the standard three-way classification scheme used in NLI has well-known shortcomings in evaluating models' ability to capture the nuances of natural human reasoning. In this paper, we argue that the operationalization of the neutral label in current NLI datasets has low validity, is interpreted inconsistently, and that at least one important sense of neutrality is often ignored. We uncover the detrimental impact of these shortcomings, which in some cases leads to annotation datasets that actually decrease performance on downstream tasks. We compare approaches of handling annotator disagreement and identify flaws in a recent NLI dataset that designs an annotator study based on a problematic operationalization. Our findings highlight the need for a more refined evaluation framework for NLI, and we hope to spark further discussion and action in the NLP community.
Animesh Nighojkar, Antonio Laverghetta Jr., John Licato
2023-06-16T15:45:08Z
http://arxiv.org/abs/2306.09918v1
# No Strong Feelings One Way or Another: Re-operationalizing Neutrality in Natural Language Inference

###### Abstract

Natural Language Inference (NLI) has been a cornerstone task in evaluating language models' inferential reasoning capabilities. However, the standard three-way classification scheme used in NLI has well-known shortcomings in evaluating models' ability to capture the nuances of natural human reasoning. In this paper, we argue that the operationalization of the _neutral_ label in current NLI datasets has low validity, is interpreted inconsistently, and that at least one important sense of neutrality is often ignored. We uncover the detrimental impact of these shortcomings, which in some cases leads to annotation datasets that actually _decrease_ performance on downstream tasks. We compare approaches of handling annotator disagreement and identify flaws in a recent NLI dataset that designs an annotator study based on a problematic operationalization. Our findings highlight the need for a more refined evaluation framework for NLI, and we hope to spark further discussion and action in the NLP community.

## 1 Introduction

With the rise of large language models like GPT-3 Brown et al. (2020), PaLM Chowdhery et al. (2022), and GPT-4,1 it has become increasingly necessary to evaluate their language understanding and reasoning abilities. One influential task in this regard is natural language inference (NLI) MacCartney and Manning (2009, 2014), which is used to examine the inferential and commonsense reasoning skills of language models Jeretic et al. (2020). NLI requires a model to determine the relationship between a statement, known as the _premise_ \(P\), and another statement, called the _hypothesis_ \(H\), by classifying it as _entailment_ (H must be true given P), _contradiction_ (H must be false given P), or _neutral_ (H can or cannot be true given P).2 NLI is crucial because it involves comprehending the logical properties of sentences, which is arguably a core capability of human reasoning and an important skill for language models to possess.

Footnote 2: Recognizing textual entailment (RTE) Dagan et al. (2006), a variant of NLI, only considers entailment and non-entailment.

Solving NLI requires the ability to perform textual inference between any two sentences (and in some cases, between any two arbitrarily long texts), making it a versatile framework for developing and evaluating reasoning benchmarks. Many NLP tasks, like question answering Demszky et al. (2018), dialog systems Gong et al. (2018), machine translation Poliak et al. (2018), identifying biased or misleading statements Nie et al. (2019), fake news detection Yang et al. (2019), paraphrase detection Nighojkar and Licato (2021), and fact verification Thorne et al. (2018), require understanding and reasoning about the meaning of text and can be re-framed as NLI problems.

Figure 1: Selected NLI items from SNLI with annotations (shown by colors). The diamonds on the right show the gold label for these items in SNLI; note item 4 is marked ‘-’ and is not assigned a gold label (hence it is ignored). We argue that items with all four annotation distributions should be considered neutral, but that there should be at least two sub-types of neutral.
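For reference, the standard three-way decision can be probed with any off-the-shelf NLI classifier. The snippet below is a minimal sketch, assuming the publicly available roberta-large-mnli checkpoint and the HuggingFace transformers library (an illustrative choice, not a model used in this paper); the premise-hypothesis pair is likewise invented for illustration:

```python
# Minimal sketch: a 3-way NLI prediction with an off-the-shelf model (illustrative only).
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

name = "roberta-large-mnli"  # assumed public checkpoint, not one of the models studied here
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

premise = "A man is playing a guitar on stage."   # invented pair, not an SNLI item
hypothesis = "A musician is performing."

inputs = tokenizer(premise, hypothesis, return_tensors="pt", truncation=True)
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1).squeeze()

# Label names are read from the checkpoint config instead of being hard-coded.
for idx, label in model.config.id2label.items():
    print(f"{label:>13}: {probs[idx].item():.3f}")
```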
NLI provides a broad framework for studying and alleviating logical inconsistencies in a language model's reasoning (Poliak, 2020; Mitchell et al., 2022), including explanation-based maieutic prompting (Jung et al., 2022), which uses NLI to evaluate individual links in a reasoning chain. Most NLI datasets (Bowman et al., 2015; Williams et al., 2018; Nie et al., 2020; Chen et al., 2020) utilize crowdsourcing to either generate NLI items or gather labels for pre-existing items. While this approach has advanced research on textual entailment, we believe that current NLI datasets, both established and recent, have overlooked important issues in their annotation design that hinder their validity as measures of textual entailment. Although the effects of different crowdsourcing schemes for NLI dataset development have been studied (Bowman and Dahl, 2021; Parrish et al., 2021), we focus on a specific issue: the operationalization of _neutral_. Neutral items usually have the lowest levels of annotator agreement (Nie et al., 2020), and we contend that this disagreement has been handled improperly in previous work, contributing to the ongoing debate about how to handle disagreement in NLI (Palomaki et al., 2018; Pavlick and Kwiatkowski, 2019; Bowman et al., 2015; Williams et al., 2018). Instructions provided to annotators for labeling items as neutral are often ambiguous and inconsistent between datasets, with phrases like "neither" (Nie et al., 2020) or "might be correct" (Bowman et al., 2015; Williams et al., 2018). We believe these problems can be addressed by reconsidering the prevailing operationalization of neutral and replacing it with one which embraces disagreement. Although we are not the first to argue for the importance of properly incorporating disagreement (Palomaki et al., 2018; Pavlick and Kwiatkowski, 2019; Basile et al., 2021; Plank, 2022; Rottger et al., 2022; Uma et al., 2022), we identify specific problems introduced by ignoring disagreement (for example, by dropping examples with low agreement entirely), and offer new evidence supporting its adoption grounded in the psychometric concept of _construct validity_.

Consider the items shown in Figure 1, sourced from the SNLI dataset (Bowman et al., 2015). A general consensus on the gold label is reached by the annotators in the first three items, but the fourth item exhibits a high degree of disagreement. While the first three items are labeled neutral in SNLI and used to train models, the fourth is labeled with a special '-' class, indicating an irresolvable level of disagreement, and hence it is removed from training data (Bowman et al., 2015). This practice (also used by Williams et al.) effectively treats disagreement as an undesirable product of NLI data collection--a _linguistic annotation artifact_ to be considered as noise rather than signal. But what is the source of this disagreement? Should item 4 in Figure 1 be ignored, or is it simply a different form of neutrality? We argue that item 4 should be considered a different sense of neutral than the one represented by item 1, because two interpretations are possible: (1) the individuals in the embrace may be facing in opposite directions, resembling a conventional embrace, and (2) one individual may be embracing the other from behind, thereby causing them to face the same direction.
This ambiguity in how to interpret such items leads to two irreconcilable types of neutrals; items can be either _true_ neutrals (item 1 in Figure 1), or they can be neutral as a result of _conflicting_ interpretations (item 4). Main contributions.In this paper, we address the aforementioned issues with neutrality in three ways: 1. We propose a new operationalization of neutral based on inter-annotator agreement, which we argue better captures two distinct senses of neutrality (true neutral and conflicting neutral) often conflated in NLI. 2. We compare our operationalization with a 4-way classification scheme based on annotator disagreement suggested by Jiang and de Marneffe (2019); Zhang and de Marneffe (2021); Jiang and de Marneffe (2022) and find that our operationalization has better construct validity, as using it to train models for NLI leads to better downstream performance. 3. We show that known limitations of at least one published NLI dataset (UNLI) are a direct consequence of its adopting an operationalization that did not embrace disagreement, instead opting to aggregate NLI annotations on a continuous scale. We analyze its methodological flaws, and make recommendations to avoid similar problems in future work. Related Work NLI is widely used for assessing language models' inferential capabilities, in part due to its generality and versatility. Many datasets, like SNLI Bowman et al. (2015), MultiNLI Williams et al. (2018), Adversarial NLI (ANLI) Nie et al. (2020), and WA-NLI Liu et al. (2022) have been developed to evaluate a model's ability to reason through entailment relationships across a wide variety of contexts. Other datasets focus on specific domain knowledge Holzenberger et al. (2020); Koreeda and Manning (2021); Yin et al. (2021); Khan et al. (2022); Yang (2022) or require knowledge of non-English languages Conneau et al. (2018); Araujo et al. (2022). In most NLI datasets, only one label per item is deemed correct, and models are tasked with determining the most plausible of three possible labels. However, there is a growing need for NLI tasks to handle a broader range of relationships and make finer-grained distinctions between them. Researchers are shifting their focus towards finer-grained annotations Chen et al. (2020); Gantt et al. (2020); Meissner et al. (2021), as classical NLI tasks are not well-equipped to handle disagreement between annotators Zhang et al. (2021); Zhang and de Marneffe (2021); Jiang and de Marneffe (2022); Wang et al. (2022). Recent research has also focused on assessing models' performance on _ambiguous_ NLI items, where humans may disagree on the correct label. ChaosNLI Nie et al. (2020) was developed to study such ambiguities by gathering 100 human annotations on items from a subset of SNLI and MultiNLI, where only 3/5 of annotators agreed on the correct label. They found that models struggled to perform above random chance on items with low inter-annotator agreement and were unable to replicate the annotator label distribution Zhou et al. (2022). Since most of the low agreement items are neutral Nie et al. (2020), we believe a possible reason for this poor performance is the conflation of true and conflicting neutrals as a single category (Section 4). Zhou et al. (2022); Meissner et al. (2021) build on ChaosNLI and test language models' ability to recover the original annotator label distribution. However, the best results are still below estimated human performance. To solve ambiguous NLI items, Wang et al. 
(2022) argue that models need to be well-calibrated (i.e., their predicted probability distribution must correctly match the annotator distribution), and they show that label smoothing or temperature scaling can achieve competitive performance without direct training on the label distribution, though it should be noted that other work has found mixed success with using either of these approaches to address ambiguity in NLI Uma et al. (2022). According to Pavlick and Kwiatkowski (2019), annotator disagreements are _irresolvable_ even when the number of annotators and context are both increased. Such items should not be ignored since the disagreement cannot be always attributed to noise. They argue that handling disagreements should be left to the ones using the models trained on these datasets. Similar to Zhou et al. (2022), Pavlick and Kwiatkowski (2019) also show that NLI models trained to predict one label cannot capture the human annotation distribution. Despite calls in the literature for annotator disagreement to be accommodated rather than ignored, how this should be done has been the subject of much study. The earliest attempts from SNLI and MultiNLI simply assigned a '-' label to cases that had sufficiently low agreement, indicating that they should not be used for training Bowman et al. (2015); Williams et al. (2018). More recent work has tried to incorporate low agreement items as a fourth _disagreement_ class, a practice that began with Jiang and de Marneffe (2019) and was later used by Zhang and de Marneffe (2021); Jiang and de Marneffe (2022). We examine this practice in Section 3 and demonstrate that simply using a _catch-all_ category for disagreement is not as effective as our operationalization for neutral items. Another line of research has explored changing the annotation schema to use a continuous scale, rather than a discrete one, in the hope that this type of scale will better capture the subtleties of reasoning over ambiguity and lead to less disagreement. Chen et al. (2020) introduce _uncertain natural language inference_ (UNLI), where annotators indicate the likelihood of a hypothesis being true given a premise. While models trained on UNLI can closely predict human estimations, later work has found that fine-tuning on UNLI can hurt downstream performance Meissner et al. (2021), suggesting a serious flaw in the UNLI dataset. We analyze further issues with UNLI in Section 5. In a recent study, Kalouli et al. (2023) propose a new interpretation of neutral based on the concept of _strictness_. They argue that, under "strict interpretation", the pair _P: The woman is cutting a tomato. H: The woman is slicing a tomato/_ would be considered neutral as she could be cutting squares, but it could be considered an entailment pair if the interpretation is not so strict. Their operationalization of neutral based on the concept of _strictness_ lacks clarity due to the absence of a precise, understandable definition of _strictness_. In effect, it simply shifts the problem of understanding what makes a pair of sentences neutral to understanding what makes their relationship "strictly logical" (a term they use to define strict interpretation, without further elaboration).3 Footnote 3: Note that the strict conditional \(\square(p\to h)\) was famously introduced by Lewis (1912) as a formalization of the indicative conditional. However, this does not appear to be the sense of “strict” meant by Kalouli et al. (2023). 
## 3 Empirical evaluation of 'disagreement' as a fourth class

The classification scheme that uses a fourth 'disagreement' label for low-agreement items Jiang and de Marneffe (2019); Zhang and de Marneffe (2021); Jiang and de Marneffe (2022) conflates all three NLI labels in doing so, raising the possibility that useful signal is discarded. To explore this possibility, we conduct an empirical study to compare this disagreement-based scheme with other 4-way classification schemes. We define the _level of agreement_ (**A**) between annotators on NLI items as: \[\textbf{A}=\frac{\textit{number of votes for the majority label}}{\textit{total number of votes}} \tag{1}\] We also explore two agreement threshold \(t\) values (\(0.8\) and \(1\)),4 where \(t\) is the cutoff value of **A** below which items are considered to have "low agreement." Note that Jiang and de Marneffe (2019) choose \(t=0.8\) but do not provide an explanation for choosing it. We train ALBERT-base Lan et al. (2019), DistilBERT-base-uncased Sanh et al. (2019), Electra-base Clark et al. (2020), DeBERTa-v3-base He et al. (2020), and RoBERTa-base Liu et al. (2019) to show that these results are not specific to just a few models. We are limited to using SNLI and MultiNLI because they are the only NLI datasets that report individual annotations in sufficient quantity to finetune transformer language models. We trained each model for 5 epochs and tested their performance on a held-out, stratified evaluation set.5 We use only the base versions of these models because our objective here is not to train the best models, but to examine and compare classification schemes. Models are being used in this experiment only to compare the _separability_ of all classes for each of these classification schemes:

* **Con:** Entailment, Neutral, \(\uparrow\) Contradiction, \(\downarrow\) Contradiction6
* **Dis:** Entailment, Neutral, Contradiction, Disagreement
* **Ent:** \(\uparrow\) Entailment, \(\downarrow\) Entailment, Neutral, Contradiction
* **Neu:** Entailment, \(\uparrow\) Neutral, \(\downarrow\) Neutral, Contradiction

Footnote 4: Because SNLI and MultiNLI have at most 5 annotations, and the majority label is always taken as the gold label, \(0.4\) is the smallest possible **A** that can be used. Since all items at that agreement are marked as - in both the datasets, \(t=0.6\) cannot be used for **Ent** and **Con**. Also, \(t=0.6\) will give us the same items for all four classes in **Dis** as well as **Neu**, making their comparison at that threshold meaningless.

Footnote 6: \(\uparrow\) and \(\downarrow\) denote high and low annotator agreement respectively.

As shown in Figure 2, the **Neu** scheme consistently yields better performance, regardless of model or threshold used, and thus has better construct validity (Bleidorn and Hopwood, 2019; Zhai et al., 2021) than the classification scheme based on disagreement.

## 4 Operationalizing Neutral

In NLI, the neutral label is used for situations where the relationship between the premise and hypothesis is ambiguous or there is insufficient information to determine the relationship. Neutral is often considered a catch-all for relationships that do not fall under entailment or contradiction. The definition of neutral is typically provided to crowd-source workers as "neither" (Nie et al., 2020) or "might be correct" (Bowman et al., 2015; Williams et al., 2018). But is a classification of neutral simply a default assumption that always means neither entailment nor contradiction can be definitively determined, or can it be a positive claim that a different type of relationship holds between the sentences?
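For concreteness, the agreement-based relabeling described in Section 3 can be sketched in a few lines; the snippet below is only an illustration (it assumes five annotations per item, as in SNLI and MultiNLI, and readable string labels of our choosing):

```python
# Sketch of Eq. (1) and the Neu scheme: Ent/Con kept as-is, Neutral split by agreement.
from collections import Counter

def agreement(votes):
    """Eq. (1): fraction of votes going to the majority label."""
    label, count = Counter(votes).most_common(1)[0]
    return label, count / len(votes)

def neu_scheme_label(votes, t=0.8):
    label, a = agreement(votes)
    if label != "neutral":
        return label
    return "high-agreement neutral" if a >= t else "low-agreement neutral"

# Illustrative vote distributions (cf. items 1 and 4 in Figure 1):
print(neu_scheme_label(["neutral"] * 5))                      # high-agreement neutral
print(neu_scheme_label(["neutral", "neutral", "neutral",
                        "entailment", "contradiction"]))      # low-agreement neutral
```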
A closer look at the data obtained from NLI datasets suggests that neutrality is more complex than it may initially seem. According to Nie et al. (2020), neutral items in many NLI datasets exhibit the lowest agreement levels. The most frequent label below an agreement level of \(\mathbf{A}=0.8\) for both the SNLI and MultiNLI subsets is neutral, while it is the least frequent label at a perfect agreement level. This lack of agreement motivates our particular focus on neutral, as it is consistently the most problematic label to annotate. The empirical study in Section 3 also shows that a neutral-based classification scheme has better separability than a disagreement-based classification scheme.

There are at least two senses in which the relationship between two sentences can be said to be neutral, which become clear if we imagine two possible justifications that an individual NLI annotator may provide for why they selected the label neutral: (1) _True Neutral:_ The annotator cannot find any sufficiently strong reasons (using whichever standard of strength they determine appropriate) to satisfy either entailment or contradiction; or (2) _Conflicting Neutral:_ The annotator finds strong reasons to support _both_ entailment and contradiction. It is a central position of this paper that these two interpretations of the neutral label are irreconcilable and should not be confused with each other. Attempting to conflate the two, e.g. by assuming that neutrality is simply the mid-point on a continuous scale between the two extremes of entailment and contradiction, will and has led to significant reductions in quality of data collections and their resulting benchmark datasets (see §5). No existing NLI dataset, to our knowledge, asks or encourages annotators to explain whether their reasons for selecting neutral are in line with true or conflicting neutral as we have defined them above. For the present work, then, we present evidence for the discriminant validity of true and conflicting neutral (i.e., that they refer to two distinct constructs that can and should be measured separately Campbell and Fiske (1959)) by assuming that they will be _approximately reflected in the distribution_ of individual annotations on a single NLI item--in other words, conflicting neutral items will tend to have annotation distributions resembling item 4 in Figure 1, whereas true neutrals will tend to match item 1. Results in Section 3 show that indeed such a classification scheme does a much better job of separating the four classes for models than a scheme that conflates all three labels.

Figure 2: Heatmaps of \(F_{1}\) scores on different 4-way classification schemes (x-axis) for different language models (y-axis). Darker boxes indicate better performance. Models consistently under-perform on the disagreement-based classification scheme (**Dis**) proposed by Jiang and de Marneffe (2019); Zhang and de Marneffe (2021); Jiang and de Marneffe (2022), indicating that a catch-all disagreement label does not provide enough information to models to reason over ambiguous items.

True vs. Conflicting Neutral: Surface-level Differences. We perform an exploratory analysis to identify potential reasons why annotators may disagree on some 'neutral' items, to better motivate our operationalization of 'neutral'.
Drawing from Pavlick and Kwiatkowski (2019), who found that disagreement increases as more context is given, we investigate whether ambiguity in NLI items arises due to increased complexity, leading to difficulties in accurately interpreting them. We measure this complexity using two metrics: mean length of the item in terms of number of characters (after the premise and hypothesis are joined with a space), and Flesch Reading Ease (Flesch, 1948), a commonly-used measure of text readability. Our findings, shown in Table 1, reveal that true neutral items are shorter and easier to read than conflicting neutral items. However, the observed difference in complexity between the two forms of neutrals is marginal and inconclusive. These results suggest that at least superficial qualitative differences exist between different types of neutrals, but more extensive research is needed to clarify the extent of these differences. ## 5 An Analysis of UNLI We have argued that a carefully grounded operationalization of the neutral label is crucial for ensuring the reliability (performance should be free from random error) and validity of NLI. To demonstrate the issues that can arise if this caution is not taken, we next analyze a recent NLI dataset -- Uncertain NLI (UNLI) (Chen et al., 2020). The UNLI dataset, when used for fine-tuning, appears to actually harm downstream performance (Meissner et al., 2021; Zhou et al., 2022; Wang et al., 2022). UNLI attempts to enhance NLI by converting the categorical labels for some SNLI items to a continuous scale. Participants were instructed to rate the likelihood of a given hypothesis being entailed by a given premise using an ungraduated slider, ranging from 0 (labeled as "impossible") to 1 (labeled as "very likely") and were shown the probability they were assigning to the _premise-hypothesis_ pair in real time. According to Chen et al. (2020), the probabilistic nature of NLI (Glickman et al., 2005) suggests that not all contradictions or entailments are equally strong.8 Thus, UNLI was developed with the intention of capturing subtler distinctions in _entailment strength_ using a continuous scale. This dataset has over 60K items from SNLI, annotated by humans. For each premise-hypothesis pair, two annotations were collected, and in cases where the first two annotators differed by \(20\%\) or more, a third annotator was consulted. However, the dataset only reports the averaged scores, which makes it impossible to assess the degree of agreement or correlation between the two annotators or even identify examples where a third annotator was needed. Thus, reported values near 0.5 (which we might take to be the equivalent of _neutral_ items) fundamentally conflate items where both annotators chose the midpoint on the slider with items where each annotator chose one of the extremities. Footnote 8: The view that NLI is inherently probabilistic, or that natural inference can be best modeled with probability, is not universally held, e.g. (Bringsjord, 2008). 
The assumption that one continuous scale can capture even the three categories in standard NLI (entailment, contradi \begin{table} \begin{tabular}{|c|c|c|c|c|} \hline Dataset & Mean Length (\(T\)) & Mean Length (\(C\)) & Reading Ease (\(T\)) & Reading Ease (\(C\)) \\ \hline \(*\) SNLI dev + test & 109.6 & 118.2 & 84.0 & 82.8 \\ SNLI train & 102.8 & 111.3 & 84.8 & 83.6 \\ \(*\) MultiNLI matched + mismatched & 117.0 & 183.0 & 67.0 & 65.2 \\ MultiNLI train & 163.8 & 186.0 & 68.7 & 64.4 \\ ANLI R3 dev & 389.0 & 372.7 & 67.9 & 65.3 \\ ANLI R3 test & 382.4 & 392.7 & 69.8 & 66.1 \\ ANLI R3 train & 369.3 & 377.3 & 66.3 & 64.6 \\ WA-NLI test & 147.3 & 147.6 & 77.4 & 77.4 \\ WA-NLI train & 147.5 & 148.6 & 77.1 & 77.0 \\ \hline \end{tabular} \end{table} Table 1: Comparison of true (\(T\)) and conflicting (\(C\)) neutrals. Smaller values for reading ease indicate harder-to-read items. We use our trained model to estimate \(\mathbf{A}\) for the datasets that do not release individual annotations and the ones that do are marked with a “\(*\)”. Cases where our hypothesis was NOT confirmed are underlined and in brown. a strong one (already shown to be problematic in Pavlick and Kwiatkowski (2019)), which is typically glossed over by presuming that entailment lies at the higher end of the spectrum, contradiction at the other end, and neutral somewhere in the middle. But no such instruction to interpret the scale this way was provided to annotators. Indeed, as we will show, annotators appeared to be confused as to whether an absence of entailment meant that the slider should be at the '0' position, or in the middle. In their attempt to obtain subjective probabilities for premise-hypothesis pairs, the authors used a scale with 10K steps with a scaled logistic transformation (\(f(x)=\sigma(\beta(x-5000))\)) to convert the values on the scale into probabilities between \(0\) and \(1\). They do not report the chosen value of \(\beta\) and do not specify whether the scores were averaged before or after applying the function, which is crucial information as both would yield different results. Because raw values of \(x\) are not provided, and we do not know whether scaling is performed before or after averaging, we are unable to recover the chosen values of \(\beta\). The scale Chen et al. (2020) used was based on EASL Sakaguchi and Van Durme (2018), an approach developed to collect scalar ratings in NLP tasks.9 They then modified the EASL scale by utilizing the aforementioned logistic transformation, which they argued would allow for more nuanced values near both extremes. Notably, the source of the anchor points used on the scale (i.e., "impossible" and "very likely") is not explicitly stated by Chen et al. (2020), although it is possible they were obtained from JOCI Zhang et al. (2017), a dataset created for studying ordinal commonsense reasoning that uses the same anchor points for opposite ends of the scale.10 Footnote 9: This is further supported by the fact that Chen et al. (2020) cite Zhang et al. (2017) as a previous attempt to model likelihood judgments in NLI, which is also the aim of UNLI. In effect, their logistic transformation compresses the extreme ends of the scale, so that the graphic they display (Figure 1 in Chen et al. (2020)), at first glance, appears as if the NLI items labeled as contradiction, neutral, and entailment occupy roughly equal space across the continuum of values. Figure 3 instead depicts the distribution of averaged human responses collected by Chen et al. 
(2020) on a linear scale.11 It is clear to observe in Figure 3 that while entailment and contradiction annotations are distinctly separated and skewed heavily towards the extreme opposite ends of the scale, annotations for neutral _span the entire range from 0 to 1_. The origin of this discrepancy is unclear, but based on the instructions given to them, it may be that annotators were unsure where to place neutral on the scale. Supporting this hypothesis is the bulge near \(0\) on the violin plot for neutral in Figure 3, which suggests that annotators chose \(0\) for both neutral and contradiction items. This information is obscured by the logistically transformed graph displayed by Chen et al. (2020). Footnote 11: Many of the properties of the scale we address here were unclear from reading the original figure in Chen et al. (2020), necessitating the redrawing. Table 2 highlights some examples from UNLI that demonstrate the poor alignment of its annotations with SNLI annotation distributions. From Figure 3, the reliability of the scale for neutral annotations is notably poor, with annotations spanning the entire range of the scale. This suggests that neutral annotations lack internal consistency, an important measure of reliability Rust and Golombok (2014), because annotators do not label the NLI items in a consistent fashion even when the label remains constant. Measurement issues are not uncommon in other fields that routinely run human studies, including psychological and educational measurement. Development of annotation schemes in these fields often involves careful consideration of the item Figure 3: Figure 1 from Chen et al. (2020) redrawn on a linear scale. Note the two distinct bulges in the violin plot for neutral items, suggesting that annotators were confused about whether neutral items should be placed near \(0\) or middle of the slider. format, including the rating scale, to ensure that it effectively measures the construct of interest (Bandalos, 2018). This can be achieved through qualitative analysis, such as cognitive interviews and focus groups, where items are administered to test takers and feedback is collected to ensure that the scale is understood and completed accurately, among other things (Miller et al., 2014). However, in the development of UNLI, Chen et al. (2020) did not report using such procedures. Moreover, common practices in measurement research were missing from UNLI, such as reporting how bad-faith responses were identified and filtered out, using attention-check items (except the qualifying test, whose results are not provided as part of the dataset), employing a sufficienlty large sample size of annotators, and providing individual annotations and relevant information about the annotators like their recruitment and compensation. These omissions make precise scientific replication impossible, and raise concerns about the validity of UNLI as a measure of (and benchmark for) NLI, while also providing a plausible explanation for why prior research yielded poor results when using UNLI for fine-tuning. ## 6 Conclusion In this paper, we examined the operationalization of neutral in NLI datasets. Our analysis revealed that previous attempts to handle ambiguity in NLI based on neutrality have significant issues with their validity as annotation strategies for NLI. We proposed a new operationalization of neutral into _true neutral_ and _conflicting neutral_. 
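One of the unreported choices above, whether raw slider scores were averaged before or after applying \(f(x)=\sigma(\beta(x-5000))\), is easy to see matters. The sketch below uses an arbitrary \(\beta\) and two hypothetical raw scores, since UNLI reports neither:

```python
# Averaging raw slider values before vs. after f(x) = sigmoid(beta * (x - 5000))
# gives different "probabilities". beta and the raw scores are arbitrary illustrations;
# UNLI does not report beta or the individual annotations.
import math

def f(x, beta=1e-3):
    return 1.0 / (1.0 + math.exp(-beta * (x - 5000)))

x1, x2 = 2000, 6000              # two hypothetical annotators on the 0..10000 slider
print(f((x1 + x2) / 2))          # average first, then transform:  ~0.269
print((f(x1) + f(x2)) / 2)       # transform first, then average:  ~0.389
```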
Although instances of these forms of neutral are present in most popular NLI datasets, they have been conflated into one neutral label, limiting our ability to measure ambiguity in NLI effectively. We showed that this approach of casting NLI to a 4-way classification task is better than the disagreement-based classification scheme used in previous work. We used UNLI as a case study to highlight measurement and annotation issues that should be avoided in the future. Of the many factors that make science successful, two of the most important are the ability to make carefully designed measurements, and replicability. The first of these cannot be met when measurements of constructs are made in ways that significantly compromise their validity and reliability. And replicability is made impossible when papers are published in reputable venues reporting unclear collection details, having important parameter choices omitted, and with datasets reporting summary statistics in place of crucially important data. A significant roadblock of the work we reported in this paper was the lack of availability of individual annotations in widely-adopted NLI benchmarks, even when there seems to be no public benefit in leaving out such information. It is our hope that the present work will encourage our fellow AI researchers to more highly value such considerations. ## Limitations We approximated the operationalization of the two senses of neutrality using annotator agreement. Perhaps a better basis for operationalizing the two senses of neutrality could be found in the reasons behind the annotators choosing the neutral label. Since no NLI datasets ask annotators to explain their choice and release those responses, we will try to analyze this in the future. We presented a surface-level syntactic analysis of the differences between the two types of neutrals, but semantic differences should also be analyzed. Intuitively, semantic differences might give us a better understanding of these two types, but further study is needed to verify this. Though we focus on UNLI as a case study to back up our claims, further analysis on a broader range of NLI datasets (and possible extensions to tasks beyond NLI) should also be conducted. \begin{table} \begin{tabular}{|p{113.8pt}|p{113.8pt}|p{113.8pt}|} \hline Premise & Hypothesis & SNLI & UNLI \\ & & Annotations & score \\ \hline A woman with a blue jacket around her waist is sitting on the ledge of some stone ruins resting. & A man sits on a ledge. & \(4C-0N-1E\) & \(0.88\) \\ A lady is standing up holding a lamp that is turned on. & She is lighting a dark room. & \(2C-2N-1E\) & \(0.78\) \\ A singer wearing a leather jacket performs on stage with dramatic lighting behind him. & A singer is on American idol. & \(1C-4N-0E\) & \(0.01\) \\ A small boy wearing a blue shirt plays in the kiddie pool. & Boy cooling off during the summer. & \(1C-4N-0E\) & \(0.89\) \\ \hline \end{tabular} \end{table} Table 2: Items from UNLI along with their individual annotations from SNLI.
2306.10291
Proximity-induced spin-orbit coupling in phosphorene on a WSe$_2$ monolayer
We investigate, using first-principles methods and effective-model simulations, the spin-orbit coupling proximity effects in a bilayer heterostructure comprising phosphorene and WSe$_2$ monolayers. We specifically analyze holes in phosphorene around the $\Gamma$ point, at which we find a significant increase of the spin-orbit coupling that can be attributed to the strong hybridization of phosphorene with the WSe$_2$ bands. We also propose an effective spin-orbit model based on the ${\bf C}_{1{\rm v}}$ symmetry of the studied heterostructure. The corresponding spin-orbit field can be divided into two parts: the in-plane field, present due to the broken nonsymmorphic horizontal glide mirror plane symmetry, and the dominant out-of-plane field triggered by breaking the out-of-plane rotational symmetry of the phosphorene monolayer. Furthermore, we also demonstrate that a heterostructure with 60$^\circ$ twist angle exhibits an opposite out-of-plane spin-orbit field, indicating that the coupling can effectively be tuned by twisting. The studied phosphorene/WSe$_2$ bilayer is a prototypical low common-symmetry heterostructure in which the proximity effect can be used to engineer the spin texture of the desired material.
Marko Milivojević, Martin Gmitra, Marcin Kurpas, Ivan Štich, Jaroslav Fabian
2023-06-17T08:23:37Z
http://arxiv.org/abs/2306.10291v2
# Proximity-induced spin-orbit coupling in phosphorene on WSe\({}_{2}\) monolayer ###### Abstract We investigate, using first-principles methods and effective-model simulations, the spin-orbit coupling proximity effects in a bilayer heterostructure comprising phosphorene and WSe\({}_{2}\) monolayers. We specifically analyze holes in phosphorene around the \(\Gamma\) point, at which we find a significant increase of the spin-orbit coupling that can be attributed to the strong hybridization of phosphorene with the WSe\({}_{2}\) bands. We also propose an effective spin-orbit model based on the \(\mathbf{C}_{1\mathrm{v}}\) symmetry of the studied heterostructure. The corresponding spin-orbit field can be divided into two parts: the in-plane field, present due to the broken nonsymmorphic horizontal glide mirror plane symmetry, and the dominant out-of-plane field triggered by breaking the out-of-plane rotational symmetry of the phosphorene monolayer. Furthermore, we also demonstrate that a heterostructure with \(60^{\circ}\) twist angle exhibits an opposite out-of-plane spin-orbit field, indicating that the coupling can effectively be tuned by twisting. The studied phosphorene/WSe\({}_{2}\) bilayer is a prototypical low common-symmetry heterostructure in which the proximity effect can be used to engineer the spin texture of the desired material. ## I Introduction Phosphorene [1; 2; 3; 4; 5; 6] is a two-dimensional (2D) material whose sizable direct semiconducting gap and high carrier mobility make it a promising alternative to gapless graphene in the field of electronics. However, weak spin-orbit coupling [7; 8; 9; 10] and zero magnetism in phosphorene, limit its use in spintronics applications. Also, phosphorene has space-inversion symmetry and thus exhibits no spin-orbit fields. The simplest way to induce such fields is via the Rashba effect [11; 12], i.e. by applying an electric field in the direction perpendicular to the monolayer plane. This approach is not very effective in phosphorene as the Rashba field ultimately depends on the atomic number [13]. It is therefore desired to find alternative ways of inducing sizeable spin-orbit fields in phosphorene. Van der Waals heterostructures offer a rich playground for modifying electronic, spin, optical, and magnetic properties of the target materials [14; 15; 16; 17; 18; 19; 20]. In the context of proximity-induced spin-orbit effects in weak SOC materials [21; 22; 23], transition-metal dichalcogenide (TMDC) monolayers (MLs) [24; 25; 26] are the obvious material of choice due to the strong spin-orbit coupling of their valence bands [27; 28; 29; 30; 31; 32]. The common three-fold symmetry of graphene and TMDC materials has enabled a simple effective description of the proximity-induced interaction between the MLs [33; 34; 35; 36; 37; 38]. Such a common symmetry is not present in phosphorene/TMDC heterostructures, in which the rotation-symmetry-broken environment can trigger different spin-orbit coupling terms and, as a consequence, induce new types of spin textures in the desired materials [39]. The goal of the present study is to obtain both a quantitative and qualitative understanding of such heterostructures. In particular, we study a heterostructure comprising phosphorene (P) and monolayer WSe\({}_{2}\) employing _ab-initio_ methods and group theory. The giant spin splitting in the valence bands of the WSe\({}_{2}\) monolayer points to the potentially interesting hole spin physics of proximitized phosphorene. 
Indeed, we find sizeable momentum-dependent spin-orbit fields at the \(\Gamma\) point (both in-plane and out-of-plane) where the strong hybridization between the phosphorene and WSe\({}_{2}\) bands takes place. From symmetry arguments, we derived an effective spin-orbit Hamiltonian that ideally captures the spin physics predicted by the density-functional theory (DFT) calculations. Finally, we show that a \(60^{\circ}\) twisted heterostructure preserves the in-plane spin-orbit fields but flips the out-of-plane component, suggesting that twist angle can be an effective tool to tailor the proximity spin physics in such heterostructures. This paper is organized as follows. After the introductory section, in Sec. II we analyze the geometry of the P/WSe\({}_{2}\) heterostructure and present the necessary computational details for the calculation of the band structure. In Sec. III band structure analysis of such a heterostructure is presented. Furthermore, based on the \(\mathbf{C}_{1\mathrm{v}}\) symmetry of the heterostructure, the effective model for the hole spins around the \(\Gamma\) point is constructed and the fitting parameters that match the DFT data with the model are given. We also analyze the effect of twist on the proximity effect, by assuming the relative twist angle of \(60^{\circ}\) between the phosphorene and WSe\({}_{2}\) monolayer. Finally, in Sec. IV, we present our conclusions and provide further outlooks of the presented study. ## II Computational and atomic structure details For lattice parameters of phosphorene ML, we consider \(a=3.2986\)A and \(b=4.6201\)A [9] (lattice vectors correspond to \(\mathbf{a}=a\mathbf{e}_{x}\), \(\mathbf{b}=b\mathbf{e}_{y}\)), while the lattice parameter of WSe\({}_{2}\) ML is equal to \(a_{\text{W}}=3.286\)A [40] (lattice vectors are \(\mathbf{a}_{1}=a_{\text{W}}\mathbf{e}_{x}\), \(\mathbf{a}_{2}=a_{\text{W}}(-\mathbf{e}_{x}+\sqrt{3}\mathbf{e}_{y})/2\)). The commensurate heterostructure was constructed using the CellMatch code [41], containing 20 P atoms and 8 WSe\({}_{2}\) chemical units. While the phosphorene layer remained unstrained, the WSe\({}_{2}\) is strained by 0.51%. In Fig. 1, we present side (a) and top (b) view of the atomic structure model of the P/WSe\({}_{2}\) heterostructure, alongside the Brillouin zone with high symmetry points of phosphorene (c) and WSe\({}_{2}\) (d) ML. The studied heterostructure has the vertical mirror plane symmetry that coincides with the \(yz\) plane, where the zigzag (armchair) direction of phosphorene corresponds to the \(x\) (\(y\)) direction of the heterostructure. We perform DFT electronic structure calculations of the P/WSe\({}_{2}\) heterostructure by means of the plane wave QUANTUM ESPRESSO package [42; 43], assuming a vacuum of 20 A in the \(z\)-direction. The Perdew-Burke-Ernzerhof exchange-correlation functional was utilized [44], for the norm-conserving method [45]. The positions of atoms were relaxed with the help of the quasi-Newton scheme and scalar-relativistic SG15 Optimized Norm-Conserving Vanderbilt (ONCV) pseudopotentials [46; 47; 48]. The force and energy convergence thresholds for ionic minimization were set to \(1\times 10^{-4}\) Ry/bohr and \(10^{-7}\) Ry/bohr, respectively, using the Monkhorst-Pack scheme with \(56\times 8\)\(k\)-points mesh. Small Methfessel-Paxton energy level smearing of 1mRy [49] was used along with the kinetic energy cut-offs for the wave function and charge density 80 Ry and 320 Ry, respectively. 
Also, the semiempirical Grimme's DFT-D2 van der Waals corrections were included [50; 51]. For the relaxed structure, the average distance between the closest phosphorene and the selenium plane (in the \(z\)-direction) is equal to 3.31A. In the case of noncollinear DFT calculations including spin-orbit coupling, fully relativistic SG15 ONCV pseudopotentials were used. Also, the dipole correction [52] was applied to properly determine energy offset due to dipole electric field effects. The energy convergence threshold was set to \(10^{-8}\) Ry/bohr, using the same \(k\)-points mesh and kinetic energy cutoffs for the wave function and charge density as in the relaxation procedure. Note that the illustration of the band structure unfolded to the Brillouin zone of both monolayers, is done using the DFT Vienna ab-initio simulation package VASP 6.2 [53; 54], using as the input the relaxed structure from QUANTUM ESPRESSO code. ## III Band structure analysis In Fig. 2 we present the band structure of the P/WSe\({}_{2}\) heterostructure unfolded to the X\(\Gamma\)Y path (a) of the phosphorene and TK\(\Gamma\)MI path (b) of the WSe\({}_{2}\) Brillouin zone. In order to have a more apparent separation between the bands having different atomic origins, we mark the bands with dominant phosphorus (a) and WSe\({}_{2}\) (b) atomic orbital character with orange and green color, respectively. First, we notice that an overall heterostructure is a semiconductor due to the semiconducting nature of both constituents. The small strain applied to the WSe\({}_{2}\) monolayer does not change its band structure significantly. The most important feature for the spin-orbit proximity study stems from the fact that the top valence band projected to the WSe\({}_{2}\) Brillouin zone has the same characteristics as in the monolayer limit; the giant spin-orbit coupling at the K point and along the \(\Gamma\)KM path is preserved [27]. On the other hand, it can be seen that within the phosphorene Brillouin zone, the valence band around the \(\Gamma\) point is mainly composed of phosphorene atomic orbitals. This is consistent with the highly anisotropic energy dispersion relation in the armchair and zigzag direction observed, resembling the well-known asymmetry of the phosphorene effective mass in the vicinity of \(k=0\) point [55]. Additionally, close to the \(\Gamma\) point, we notice strong hybridization of phosphorene bands with bands having dominant WSe\({}_{2}\) character. Since the K point of WSe\({}_{2}\) is folded to the X\(\Gamma\) line of the phosphorene Brillouin zone, it is to be expected that the proximity-induced spin-orbit coupling should be more pronounced along the X\(\Gamma\) line than in the \(\Gamma\)Y direction. The DFT calculation confirms this conjecture. As we will show below, the obtained hole spin texture of the top valence band of phosphorene can be described using a simple symmetry-adapted spin-orbit Hamiltonian with anisotropic parameters for \(\Gamma\)X and \(\Gamma\)Y directions. Figure 1: Atomic structural model of studied P/WSe\({}_{2}\) heterostructure. (a) side perspective view and (b) top view with primitive unit cells of phosphorene and WSe\({}_{2}\) are shaded in gray. In (c) and (d) the Brillouin zones with high symmetry points of phosphorene and WSe\({}_{2}\) monolayer is also given. We identify the \(x/y\) direction of the heterostructure with the zigzag/armchair direction of the phosphorene monolayer. 
### Model Hamiltonian To provide a simple description of the hole physics in phosphorene within the P/WSe\({}_{2}\) heterostructure, we derive a spin-orbit coupling model Hamiltonian based on the \(\mathbf{C}_{1\mathrm{v}}\) symmetry of the heterostructure. The symmetry group \(\mathbf{C}_{1\mathrm{v}}=\{e,\sigma_{\mathrm{v}}\}\) has two elements: \(e\) is the identity, while \(\sigma_{\mathrm{v}}\) is the vertical mirror symmetry that coincides with the \(yz\)-plane. The presence of the vertical mirror symmetry is a consequence of the zero twist angle between the MLs. This symmetry can be broken by twisting the WSe\({}_{2}\) ML by an angle different from a multiple of \(60^{\mathrm{o}}\). Thus, the effective spin-orbit model close to the \(\Gamma\) point can be derived using the constraints posed by the presence of the vertical mirror plane symmetry as well as by time-reversal symmetry. Using the transformation rules of the momentum and spin operators, \((k_{x},k_{y})\xrightarrow{\sigma_{\mathrm{v}}}(-k_{x},k_{y})\) and \((\sigma_{x},\sigma_{y},\sigma_{z})\xrightarrow{\sigma_{\mathrm{v}}}(\sigma_{x},-\sigma_{y},-\sigma_{z})\), respectively, it turns out that the effective spin-orbit coupling Hamiltonian, linear in \(k\), can be written as a sum of the terms \(k_{x}\sigma_{y}\), \(k_{y}\sigma_{x}\), and \(k_{x}\sigma_{z}\), which are invariant under the system's symmetry \(\sigma_{\mathrm{v}}\): \[H_{\mathrm{SO}}^{\mathrm{eff}}=\lambda_{1}k_{x}\sigma_{y}+\lambda_{2}k_{y}\sigma_{x}+\lambda_{3}k_{x}\sigma_{z}, \tag{1}\] with the parameters \(\lambda_{1}\), \(\lambda_{2}\), and \(\lambda_{3}\) that need to be determined. The presence of the \(k_{x}\sigma_{y}\) and \(k_{y}\sigma_{x}\) terms is a consequence of the broken nonsymmorphic horizontal glide mirror plane symmetry of the phosphorene monolayer, while the emergence of the \(k_{x}\sigma_{z}\) spin-orbit field is triggered by breaking the out-of-plane rotational symmetry. In terms of the induced spin texture, the spin-orbit Hamiltonian can be divided into two parts: the in-plane (\(\lambda_{1}k_{x}\sigma_{y}+\lambda_{2}k_{y}\sigma_{x}\)) and out-of-plane (\(\lambda_{3}k_{x}\sigma_{z}\)) spin-orbit fields. By diagonalizing the Hamiltonian (1), one can obtain the following formulas for the spin splitting and the spin expectation values of the Bloch states: \[\Delta_{\mathrm{so}}^{\mp} = \mp\sqrt{k_{x}^{2}(\lambda_{1}^{2}+\lambda_{3}^{2})+k_{y}^{2}\lambda_{2}^{2}},\] \[s_{x}^{\mp} = \mp\frac{k_{y}\lambda_{2}}{2\sqrt{k_{x}^{2}(\lambda_{1}^{2}+\lambda_{3}^{2})+k_{y}^{2}\lambda_{2}^{2}}},\] \[s_{y}^{\mp} = \mp\frac{k_{x}\lambda_{1}}{2\sqrt{k_{x}^{2}(\lambda_{1}^{2}+\lambda_{3}^{2})+k_{y}^{2}\lambda_{2}^{2}}},\] \[s_{z}^{\mp} = \mp\frac{k_{x}\lambda_{3}}{2\sqrt{k_{x}^{2}(\lambda_{1}^{2}+\lambda_{3}^{2})+k_{y}^{2}\lambda_{2}^{2}}}, \tag{2}\] and use them to determine the spin-orbit coupling parameters by fitting the DFT data. The fitted parameters (see Table 1) reproduce well the spin structure of the top valence band close to the \(\Gamma\) point. This is illustrated in FIG. 3 (a)-(c), where we plot the spin-splitting energy \(\Delta E=\Delta_{\mathrm{so}}^{+}-\Delta_{\mathrm{so}}^{-}\) and the spin expectation values close to the \(\Gamma\) point, along the X\(\Gamma\)Y path. In FIG.
3(d)-(f), the angular dependence of the spin splitting and spin expectation values is shown for a fixed \(|\mathbf{k}|=0.009\,\mathrm{\AA^{-1}}\), varying the angle \(\varphi\) between the \(\mathbf{k}\) vector and the \(x\)-direction from 0 to \(2\pi\). On the level of the effective model (see Table 1), we notice the dominant effect of the out-of-plane spin-orbit field, which is an inherent feature of group-IV monochalcogenide monolayers [56; 57; 58; 59; 60], ferroelectrics with a phosphorene-like atomic structure. However, in these systems the spin is locked in the \(z\)-direction due to symmetry, whereas in our case a more exotic spin texture is generated. Figure 2: Calculated band structure of a zero twist-angle commensurate P/WSe\({}_{2}\) heterostructure unfolded to the X\(\Gamma\)Y path of the phosphorene (a) and the K\(\Gamma\)M path of the WSe\({}_{2}\) (b) monolayer Brillouin zone. The bands with the dominant contribution of phosphorus (a) and WSe\({}_{2}\) (b) atomic orbitals are marked with orange and green color, respectively. Furthermore, one can compare the strengths of the spin-orbit coupling parameters in the \(k_{x}\) and \(k_{y}\) directions. In the \(k_{x}\)-direction, the effective strength of the spin-orbit field is \(\sqrt{\lambda_{1}^{2}+\lambda_{3}^{2}}=0.019\,\mathrm{eV\AA}\) (comparable to the intrinsic spin-orbit coupling strength in the ferroelectric SnS monolayer [61]), while in the \(k_{y}\)-direction the strength is \(0.009\,\mathrm{eV\AA}\), roughly two times smaller than in the \(k_{x}\) case. How can the proximity-enhanced spin-orbit coupling influence the electron spin dynamics in phosphorene? We propose to explore spin relaxation, which is readily experimentally accessible. Indeed, in pristine phosphorene, the spin relaxation was found from theory and experiment to be dominated by the Elliott-Yafet mechanism stemming from the intrinsic spin-orbit coupling [8, 62]. This competes with the Dyakonov-Perel mechanism, which is weaker due to the weak Rashba spin-orbit coupling, although for sufficiently large out-of-plane electric fields or a \(z\)-component of the crystal potential gradient \(\nabla V(\mathbf{r})\), it can overtake the Elliott-Yafet effect. For monolayer phosphorene, this would happen for electric fields of \(E\approx 5\,\mathrm{V\,nm^{-1}}\), corresponding to effective spin-orbit field strengths of \(\lambda_{x}\approx 1.08\,\mathrm{meV\AA}\) in the \(k_{x}\) direction and \(\lambda_{y}\approx 3.34\,\mathrm{meV\AA}\) in the \(k_{y}\) direction [8]. The values of \(\lambda_{1}\), \(\lambda_{2}\), and \(\lambda_{3}\) exceed those of \(\lambda_{x}\) and \(\lambda_{y}\). We thus predict that the Dyakonov-Perel mechanism dominates the spin relaxation in proximitized phosphorene. From the comparison of the spin-orbit coupling parameters \(\lambda_{i}\), one sees that phosphorene proximitized by \(\mathrm{WSe_{2}}\) has a pronounced anisotropy of the in-plane spin-orbit fields, which is expected to yield a marked spin-relaxation anisotropy. Assuming the Fermi level at \(2\,\mathrm{meV}\) below the valence band maximum, the corresponding crystal momenta are \(k_{x}=0.015\,\mathrm{\AA^{-1}}\) and \(k_{y}=0.0004\,\mathrm{\AA^{-1}}\), which give spin-orbit fields \(\Omega_{x}=\lambda_{2}k_{y}=3.6\,\mu\mathrm{eV}\), \(\Omega_{y}=\lambda_{1}k_{x}=0.18\,\mathrm{meV}\), and \(\Omega_{z}=\lambda_{3}k_{x}=0.22\,\mathrm{meV}\).
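As a quick consistency check, the short Python sketch below (illustrative only; it uses the Table 1 parameters and the momenta quoted in the text) builds the effective Hamiltonian of Eq. (1), diagonalizes it numerically, and compares the result with the closed-form spin splitting and spin expectation values of Eq. (2), as well as with the \(\Omega\) values above.

```python
import numpy as np

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

# Fitted parameters for the zero-twist P/WSe2 stack (Table 1), in eV*Angstrom
lam1, lam2, lam3 = 0.012, 0.009, -0.015

def h_so(kx, ky):
    """Effective spin-orbit Hamiltonian of Eq. (1)."""
    return lam1 * kx * sy + lam2 * ky * sx + lam3 * kx * sz

# Momenta quoted for a Fermi level 2 meV below the valence band maximum, in 1/Angstrom
kx, ky = 0.015, 0.0004

# Spin-orbit fields Omega = lambda * k (in eV): expect ~3.6e-6, 1.8e-4, 2.25e-4
print("Omega_x, Omega_y, Omega_z [eV]:", lam2 * ky, lam1 * kx, abs(lam3) * kx)

# Numerical spin splitting versus the closed form of Eq. (2)
evals, evecs = np.linalg.eigh(h_so(kx, ky))
delta = np.sqrt(kx**2 * (lam1**2 + lam3**2) + ky**2 * lam2**2)
print("splitting, numeric vs closed form [eV]:", evals[1] - evals[0], 2 * delta)

# Spin expectation values <s_i> = <psi|sigma_i|psi>/2 for the lower band
psi = evecs[:, 0]
numeric = [float(np.real(psi.conj() @ (s @ psi))) / 2 for s in (sx, sy, sz)]
closed = -np.array([lam2 * ky, lam1 * kx, lam3 * kx]) / (2 * delta)
print("lower-band spins, numeric:", numeric)
print("lower-band spins, closed form:", closed)
```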
It is clear that \(\Omega_{x}\) will have a minor effect on spin relaxation compared to \(\Omega_{y}\) and \(\Omega_{z}\). Neglecting \(\Omega_{x}\), and assuming an isotropic momentum lifetime \(\tau_{p}\), the spin relaxation rates for spins along the armchair (arm) and out-of-plane (\(\perp\)) directions can be estimated as \(\tau_{s,\mathrm{arm}}^{-1}\sim\tau_{p}\lambda_{3}^{2}\langle k_{x}^{2}\rangle\) and \(\tau_{s,\perp}^{-1}\sim\tau_{p}\lambda_{1}^{2}\langle k_{x}^{2}\rangle\), respectively, where \(\langle\cdot\rangle\) denotes the Fermi contour average [63]. Electron spins polarized in the zigzag (zz) direction would relax approximately twice as fast, with the rate \(\tau_{s,\mathrm{zz}}^{-1}\sim\tau_{p}\langle k_{x}^{2}\rangle(\lambda_{1}^{2}+\lambda_{3}^{2})\). Finally, one can argue that the observed spin-orbit coupling in phosphorene does not originate from the proximity-induced interaction with the strong spin-orbit coupling material, the \(\mathrm{WSe_{2}}\) ML, but is instead a consequence of the broken symmetry of the phosphorene monolayer. To test this assumption, we compare the previously calculated spin-orbit coupling parameters (Table 1) with the case of the bare phosphorene ML, obtained by removing the WSe\({}_{2}\) ML from the self-consistent calculation while keeping the phosphorene coordinates obtained within the heterostructure relaxation, which is the mechanism responsible for breaking the phosphorene symmetry. Figure 3: Calculated electronic band spin splitting and spin expectation values for the phosphorene top valence band in the P/\(\mathrm{WSe_{2}}\) heterostructure. (a) Band spin splitting along the high-symmetry lines in the first Brillouin zone; (b) spin expectation values for the lower band, and (c) for the upper band along the high-symmetry lines in the first Brillouin zone. (d) Angular dependence of the band spin splitting for momenta around the \(\Gamma\) point with radius \(k=0.009\,\mathrm{\AA^{-1}}\), (e) spin expectation values for the lower and (f) upper spin-split band. The color scale corresponds to the \(z\)-component of the spin. In this case, the fitting of the spin-orbit Hamiltonian (1) to the DFT data gives the following parameters: \(\lambda_{1}^{\rm P}=-0.00065\) eV Å, \(\lambda_{2}^{\rm P}=0.0014\) eV Å, and \(\lambda_{3}^{\rm P}\approx 0\), confirming the dominant role of the proximity-induced spin-orbit coupling effect. Note that the obtained values obey a similar trend (\(|\lambda_{1}^{\rm P}|<|\lambda_{2}^{\rm P}|\); \(\lambda_{3}^{\rm P}=0\)) and are of the same order of magnitude as the Rashba spin-orbit parameters of phosphorene in strong electric fields (of the order of V/nm) [8]. ### Twist modification of proximity-induced spin-orbit coupling: an example of \(60^{\rm o}\) twist angle The strong proximity-mediated transfer of spin-orbit coupling from WSe\({}_{2}\) to phosphorene suggests that a relative change of the WSe\({}_{2}\) band structure with respect to phosphorene by means of a twist could have a significant impact on the spin texture in phosphorene. We test this assumption by analyzing the P/WSe\({}_{2}\) heterostructure in which the WSe\({}_{2}\) monolayer is twisted by an angle of \(60^{\rm o}\) with respect to phosphorene. The WSe\({}_{2}\) ML within the new heterostructure has the same number of atoms and is strained by the same percentage as in Section II; thus, the same computational parameters as before could be used for the DFT calculations.
After fitting the model Hamiltonian (1) to the DFT data, we obtain the following spin-orbit coupling parameters: \(\lambda_{1}=0.010\) eV Å, \(\lambda_{2}=0.010\) eV Å, and \(\lambda_{3}=0.015\) eV Å. When compared to the values obtained in the zero twist-angle case, we notice that a small change in the parameters \(\lambda_{1}\) and \(\lambda_{2}\) is accompanied by a sign change of the \(\lambda_{3}\) parameter. The sign change of \(\lambda_{3}\), corresponding to the \(k_{x}\sigma_{z}\) spin-orbit coupling term, can be directly connected to the fact that, instead of the \(\Gamma\)K branch, the \(\Gamma\)K' branch of WSe\({}_{2}\) is located on the \(\Gamma\)X line of the phosphorene Brillouin zone. Since the energies at the \(K\) and \(K^{\prime}\) points are equal and connected via the time-reversal symmetry \(\Theta\), \(\Theta E_{|K+\rangle}=E_{|K^{\prime}-\rangle}\), where \(|\pm\rangle\) denotes the spin wavefunction with \(s_{z}=\pm 1/2\) spin expectation value (we remind that spins in the WSe\({}_{2}\) monolayer are locked in the out-of-plane direction), the hybridization of phosphorene bands with the WSe\({}_{2}\) spin-split branch of \(s_{z}=\pm 1/2\) character is transferred to the branch with the opposite spin, \(s_{z}=\mp 1/2\). The fact that the \(k_{x}\sigma_{z}\) term is locked to the valley of the WSe\({}_{2}\) ML suggests that this term is related to the valley-Zeeman spin-orbit coupling induced by the proximity effect in the studied heterostructure. ## IV Conclusions We analyzed the proximity-induced spin-orbit coupling effects in a heterostructure made of phosphorene and a WSe\({}_{2}\) monolayer. The giant spin splitting of the WSe\({}_{2}\) valence bands motivated us to focus on the hole spin physics in phosphorene where, due to the broken inversion symmetry, spin splitting of the bands can occur. We discovered a significant proximity-induced spin-orbit coupling in the top valence band of phosphorene, whose origin is attributed to the strong hybridization with the WSe\({}_{2}\) spin-split bands close to the \(\Gamma\) point. An effective spin-orbit coupling Hamiltonian model compatible with the \(\mathbf{C}_{1\rm v}\) symmetry of the heterostructure is derived, and the spin-orbit parameters that fit the data obtained from ab-initio calculations to the model Hamiltonian are determined. By comparing the obtained parameters with the spin-orbit coupling values of group-IV monochalcogenide monolayers, ferroelectrics with a phosphorene-like atomic structure, we conclude that proximitized phosphorene is transformed into a weak spin-orbit coupling material. Still, compared to the electric-field-induced Rashba spin-orbit coupling, the proximity-induced spin-orbit coupling is an order of magnitude larger. Finally, we showed that the twist angle can influence the spin-orbit proximity effect in the studied material. More precisely, for the twist angle of \(60^{\rm o}\), we reported a sign change of the out-of-plane spin-orbit field, accompanied by a sizable modification of the in-plane spin-orbit texture. The studied heterostructure shows that structures with incompatible symmetries can be used to generate spin textures different from those of the more commonly studied composites made of graphene and transition metal dichalcogenides, opening a playground for novel materials that can be used either as a target material or as a substrate in van der Waals heterostructures important for spintronics applications. ###### Acknowledgements. M.M.
acknowledges the financial support provided by the Ministry of Education, Science, and Technological Development of the Republic of Serbia and DAAD Research Grant 57552336. This project has received funding from the European Union's Horizon 2020 Research and Innovation Programme under the Programme SASPRO 2 COFUND Marie Sklodowska-Curie grant agreement No. 945478. M.G. acknowledges financial support provided by the Slovak Research and Development Agency provided under Contract No. APVV-SK-CZ-RD-21-0114 and by the Ministry of Education, Science, Research and Sport of the Slovak Republic provided under Grant No. VEGA 1/0105/20 and Slovak Academy of Sciences project IMPULZ IM-2021-42 and project FLAG ERA JTC 2021 2DSOTECH. M.K. acknowledges financial support provided by the National Center for Research and Development (NCBR) under the V4-Japan project BGaapEng V4-JAPAN/2/46/BGaEng/2022. I.S. acknowledges financial support by APVV-21-0272, VEGA 2/0070/21, and by H2020 TREX GA No. 952165 project. J.F. acknowledges support from Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) SFB 1277 (Project-ID 314695032, project B07), SPP 2244 (Project No. 443416183), and of the European Union Horizon 2020 Research and Innovation Program under Contract No. 881603 (Graphene Flagship) and FLAG-ERA project 2DSOTECH. The authors gratefully acknowledge the Gauss Centre for Supercomputing e.V. (www.gauss-centre.eu) for funding this project by providing computing time on the GCS Supercomputer SuperMUC-NG at Leibniz Supercomputing Centre (www.lrz.de). \begin{table} \begin{tabular}{|c|c|c|} \hline \(\lambda_{1}\) [eV Å] & \(\lambda_{2}\) [eV Å] & \(\lambda_{3}\) [eV Å] \\ \hline 0.012 & 0.009 & -0.015 \\ \hline \end{tabular} \end{table} Table 1: Spin-orbit coupling parameters \(\lambda_{1/2/3}\) obtained after fitting the model Hamiltonian (1) to the DFT data, assuming the P/WSe\({}_{2}\) heterostructure with zero twist angle.
2310.19235
A Synopsis of Stent Graft Technology Development
Coronary artery disease (CAD) is a leading cause of death worldwide. Treatments have evolved, with stenting becoming the primary approach over bypass surgery. This article reviews the evolution of coronary stent technology, starting from the first angioplasty in 1977. Pioneers like Forssmann, Dotter, and Gruentzig established the foundation. The late 1980s saw the introduction of bare metal stents (BMS) to address angioplasty limitations. However, BMS had issues, leading to the development of first-generation drug-eluting stents (DES) in the early 2000s, which reduced restenosis but had safety concerns. Subsequent innovations introduced second-generation DES with better results and the latest bioresorbable vascular scaffolds (BVS) that dissolve over time. Clinical trials have been crucial in validating each stent's effectiveness. Despite progress, challenges remain in stent selection, approval processes, and minimizing risks. The future may see personalized stenting based on patient needs, highlighting the significant advancements in stent technology and its impact on patient care.
Umme Hafsa Momy
2023-10-30T02:41:56Z
http://arxiv.org/abs/2310.19235v1
# A Synopsis of Stent Graft Technology Development + ###### Abstract Coronary artery disease (CAD) is the predominant cause of mortality and morbidity across the globe. Over the past few decades, treatments for CAD have witnessed dramatic evolution, with percutaneous coronary intervention (PCI) with stenting taking precedence over bypass surgery as the primary revascularization strategy. This paper delivers an extensive overview of the significant progress in coronary stent technology, tracing back to the inaugural coronary angioplasty in 1977. Early trailbazers like Werner Forssmann, Charles Dotter, and Andreas Gruentzig laid the groundwork for interventional cardiology. The introduction of bare metal stents (BMS) in the late 1980s offered solutions to the limitations of balloon angioplasty, such as acute vessel closure and restenosis. However, BMS had its own set of challenges. Consequently, the early 2000s saw the emergence of first-generation drug-eluting stents (DES), utilizing sirolimus and paclitaxel, offering significant reductions in restenosis compared to BMS. Despite their success, safety concerns such as very late stent thrombosis arose. Innovations continued with the second-generation DES, featuring advanced stent platforms and biocompatible polymers, ensuring enhanced long-term results. The most recent advancement has been the bioresorbable vascular scaffolds (BVS), which are designed to resorb over time, eliminating the need for a long-term metallic implant. Throughout this journey, clinical trials played a pivotal role in validating the efficacy of each stent generation. While there have been remarkable improvements in reducing restenosis and other adverse events, challenges like optimizing regulatory approval pathways, stent selection, and minimizing risks associated with thrombosis and restenosis persist. The future holds promise for more individualized stenting strategies, tailored to specific patient and lesion profiles. This review not only traces the rapid evolution of coronary stent technology but also underscores its transformative impact on patient care and outcomes. ## 1 keywords _Coronary artery disease, Percutaneous coronary intervention, Coronary stenting, Bare metal stents,Angioplasty, Revascularization Drug-eluting stents, Bioresorbable scaffolds, Restenosis, Stent thrombosis._ ## 2 Introduction Coronary artery disease (CAD) is a leading cause of mortality worldwide, responsible for over 9 million deaths in 2016 [1]. Percutaneous coronary intervention (PCI) with stenting has become the most commonly performed revascularization procedure for obstructive CAD [2]. Since the first coronary balloon angioplasty was performed in 1977 [3], there has been rapid development and evolution of stent technology over the past decades. Bare metal stents (BMS) were introduced in the late 1980s to overcome limitations of balloon angioplasty like acute vessel closure and restenosis [4, 5]. First-generation drug-eluting stents (DES) emerged in the early 2000s to further reduce restenosis rates compared to BMS [6, 7]. However, concerns emerged regarding very late stent thrombosis with first-generation DES [8]. This spurred development of second-generation DES with novel stent platforms and polymers to improve long-term safety and efficacy [9]. Most recently, bioresorbable vascular scaffolds (BVS) have been introduced as a transient scaffold to provide short-term vessel support and drug delivery without leaving a permanent metallic implant [10]. 
In this paper, we provide a comprehensive overview of the major developments in coronary stent technology over the past 30 years since the introduction of BMS. We summarize the evolution of stent designs and materials from early BMS to modern second-generation DES and BVS. We review clinical data from landmark trials comparing different stent generations and technologies. Finally, we discuss future directions for coronary stenting with a focus on optimizing patient outcomes and minimizing adverse events like restenosis and stent thrombosis. This synopsis of the rapid growth of stent technology illuminates how each new stent generation aimed to incrementally improve upon the limitations of its predecessors. ## 3 Method ### Werner Forssmann (1904-1979) Werner Forssmann was the first to venture a catheter toward the human heart, which had always fascinated him. In 1929 he conceived the idea of delivering drugs directly to the heart through a catheter, a completely unknown concept at the time. Even though no institution would allow him to attempt such a thing, he was undeterred and decided to proceed anyway without giving much thought to the likely consequences. He asked the nurse to anesthetize his left elbow, a small cut was made to expose the vein, and a urethral catheter was then advanced into his own arm. After advancing the catheter further, he had the nurse escort him to the radiology lab in the basement, where X-ray images documented the catheter positioned in his heart [11; 12; 13]. Although Forssmann was unable to continue his research on cardiac catheterization, Andre F. Cournand and Dickinson W. Richards at Columbia University in New York took his idea and developed it further in the late 1930s. They improved the method and used it to take meaningful measurements inside the heart. The catheter technique quickly replaced other techniques as the accepted method of measuring intracardiac pressure. However, it would be some time before the coronary tree and its tangled branches could be visualized. ### F. Mason Sones (1919-1985) In 1950, F. Mason Sones joined the Cleveland Clinic. He was intelligent, tenacious, and spent the majority of his time in the hospital. On October 30, 1958, in his laboratory, Sones placed a diagnostic catheter into the ascending aorta of a young patient and instructed a colleague to inject a shot of dye to opacify the aorta. Injecting angiographic dye into a coronary artery was feared to deprive the myocardium of oxygen and trigger ventricular fibrillation, but the small catheter suddenly whipped around like an uncontrollable garden hose and delivered the entire dose of dye deep into the patient's right coronary artery. Nothing like this had ever been tried before. The patient was unharmed throughout, and the procedure produced a precise image of the coronary arteries. Sones proclaimed in triumph that they had "just revolutionized cardiology." Using diagnostic catheters, he was soon able to selectively image the arteries arising from the aortic trunk in remarkably fine detail [14]. Inspired by the unexpected incident and its outcome, Sones began developing special tapered-tip, open-ended catheters designed to keep the catheter from occluding the vessel it engaged [11]. It did not take long for coronary angiography to become established as a safe and common test for CAD. A special J-shaped catheter was developed by Melvin P.
Judkins, who modified the method to make catheterization of the coronary arteries easier and less laborious. Even so, it requires extensive practice to get a catheter into the tiny openings of the coronary arteries [15]. ### Charles T. Dotter (1920-1985) Charles T. Dotter, the brilliant director of radiology at the University of Oregon in Portland, developed numerous methods for detecting and treating vascular disease [16; 17]. Sven-Ivar Seldinger, known for his method of placing a catheter inside blood vessels ("catheter over the wire: needle in, wire in, needle out"), had spent some time with him. Dotter conducted research using various materials, including piano wires, guitar strings, and other cables, and he built his own catheters because he believed that catheter technology could be used for more than just diagnosing problems. Dotter identified an opportunity and offered one patient a novel procedure that had never been done before. An 83-year-old diabetic woman with a non-healing, gangrenous toe and a foot ulcer underwent an arteriogram of her left leg on January 16, 1964. Her surgeons had insisted on amputating the foot, believing she was beyond help given her poor circulation, but the woman vehemently declined. Dotter first passed a guidewire through the plaque obstruction, then inserted a small-caliber catheter, and finally widened the channel by passing progressively larger catheters through the plaque. Almost instantly, the woman's cold leg warmed up and became hyperemic, seemingly by magic. The woman's ulcer healed after a few weeks and her pain subsided, and X-rays showed improved circulation [18, 19]. ### Andreas Gruentzig In 1969, Andreas Gruentzig joined the University Hospital of Zurich after completing his medical studies in Heidelberg. Gruentzig was fascinated by the "Dottering" technique, which he had learned about from a lecture, but he also believed it needed improvement because it carried a high potential for vascular injury, plaque dislodgement, and acute distal occlusion due to embolization. In his wild and revolutionary imagination, Gruentzig developed the concept of opening a blood vessel with a balloon attached to the catheter's tip. Except for his wife Michaela and Maria Schlumpf, hardly anyone expressed interest in or support for his idea in 1972 because it was so outlandish. Lacking a laboratory and research funding, he used his kitchen as a workspace for the following two years, working on his catheters almost every evening with his wife and Schlumpf. Fitting the catheter's tip with a balloon that could be inflated to open an occluded vessel was no easy task, and they encountered many technical problems, including air leaks and numerous balloons that expanded asymmetrically or lost their structural integrity. However, he persisted and adjusted. He experimented with different materials, shapes, and designs, and after hundreds of failures and partial successes he began to see a few encouraging outcomes. He tested his design on diseased arteries taken from cadavers and on animal models, and as his methods for making balloons improved, he felt ready to use the device on a patient. In February 1974, Gruentzig advanced his catheter and inflated the balloon in a 67-year-old man whose lower extremity pain had incapacitated him; the patient's pain resolved immediately. After that, he moved on to his next challenge.
While awaiting a chance to use his catheters in a human body, he met with numerous production companies, engineers and the layout of his device was constantly being improved. ## 4 A Simple Angioplasty is Insufficient On September 16, 1977, Bachmann didn't show any signs of pain The artery was visible as being open on radiographic images. A significant accomplishment in medicine, it was a big success. In addition to other cases, Gruentzig was successful in reproducing the outcomes, and he made the news. On February 7, 1978, the front page of a Swiss newspaper carried the headline "Medical Sensation: Balloon Treatment Against Heart Attacks [11]. Geoffrey O. Hartzler was one of the many followers Gruentzig had trained over the years [20].He shocked the medical community in 1980 by using angioplasty to try to destroy a myocardial infarction (MI) as it developed, which kind of signaled a fresh approach for the procedure. In addition, Hartzler started modifying the stiff end of the catheter to have better curves so that it would slide into place more easily as he grew impatient with Gruentzig' s invention's flaws. Hartzler has pushed the boundaries further, making the bold claim that failure of angioplasty is the only cause for bypass surgery Despite the fact that claim merely partially accurate, initial studies contrasting using surgical techniques for angioplasty, for instance the Coronary Angioplasty vs Bypass Revascularization Investigation (CABRI) [21], the Bypass Angioplasty Revascularization Investigation (BARI) [22],Emory Angioplasty vs Surgery Trial (EAST) [22], the German Angioplasty Bypass Surgery Investigation (GABI) [22], the Randomized intervention Treatment of Angina (RITA) [22], showed this in selected patients. By extruding plaque, simple balloon angioplasty can temporarily increase lumen diameter, but elastic recoil quickly eliminates this gain. Plaque dissection can produce plastic, more permanent changes, but there is a possibility of acute vascular occlusion with this method [4]. The inventors of coronary angioplasty were forced to perform these procedures in an active surgical standby mode due to an abrupt occlusion. When balloon-induced intimal denudation and medial tearing occur, the subendothelial matrix is exposed to the blood, which promotes platelet aggregation and thrombosis in the acute phase and chronic negative changes in vascular remodeling (late recoil) and neointimal hyperplasia in the chronic phase. In the first 6 to 9 months, 30 percent to 40 percent of patients experienced an almost total loss of therapeutic benefit due to insufficient initial reinforcement and restenosis (Fig. 1) [5]. ## 5 Key Preclinical Studies Prelimical studies in animal models like porcine coronary arteries provided key insights into optimal stent design features. For example, Saxon et al. compared different stent materials and configurations in a pig model. They found tantalum wire stents achieved a larger acute lumen than stainless steel, with less thrombosis risk [23]. Schwarzacher et al. tested antithrombotic stent coatings like carbon and heparin in a sheep model, finding reduced acute thrombosis compared to uncoated stents [24]. These studies guided early stent material and coating selections prior to clinical use. Later preclinical work focused on evaluating safety of stent polymer coatings and kinetic drug release profiles for developing drug-eluting stents [25]. 
## 6 Stent Development In 1986, Ulrich Siegwart of Switzerland and Jacques Puel of France implanted the first stents[26]. Coil stents and slotted tube designs were first implanted after these self-expanding mesh designs in 1987 at Emory University Hospital and So Paulo, Brazil, respectively. With the spread worldwide Angioplasty procedures revealed that the arteries of numerous patients gradually narrowed hours, days, or even months following the operation. As techniques for angioplasty spread across the globe, it became apparent that many patients' arteries gradually narrowed weeks or months after the procedure[27; 28; 29]. Early research revealed rates of restenosis ranging between twelve and forty-eight percent [30].Stents were created that would keep vessels conserve after angioplasty to address these problems. Studies have exhibited improved fast outcomes and longer period without events stenting associated to standard inflatable angioplasty. Restenosis levels were declined by about 10 percent overall [5; 4]. ## 7 Classification of Stent The most significant development within transcutaneous coronary revascularization is the development of stents. Restenosis within a stent. is no longer a significant issue with coronary intervention to make stents that can be infused with drugs. Metal stents covered with a polytetrafluoroethylene (PTFE) membrane, known as covered stent-grafts, were developed to prevent restenosis caused by the growth of tissue through the mesh of a stent. An intriguing idea for preventing intraluminal proliferation, sealing degenerated vein grafts and covering coronary artery perforations is the use of stent grafts, which integrate a membrane into a coronary stent. After balloon dilatation, arterial recoil and restenosis should be avoided by using coronary stents. Bare metal stents (BMS), drug eluting stents (DES), and bioresorbable vascular scaffolds (BRS) are the three major types of stents. Figure 1: Pathophysiology of In-Stent Restenosis and Thrombosis. ### Bare metal stents (BMS) Bare metal stents (BMS) were the first stents used. These stents can be made thinner, have great mechanical strength and poor flexibility. However, bare metal stents can cause restenosis and can lead to peripheral embolism after being implanted in old vein grafts. A lower restenosis rate, as confirmed by two historical trials issued in 1993 [4, 5].Observational studies indicate that the rate of cardiovascular events is increased when stent grafts are used voluntarily in native vessels. However, the strong mechanical support also contributes to neo-intimal hyperplasia. Intravascular ultrasound studies showed that stents required high pressures to fully expand, leading to the development of Dual antiplatelet therapy (DAPT) combining ticlopidine, clopidogrel, and aspirin. In-stent restenosis (ISR), observed in mid- and long-term follow-up, 15 percent to 30 percent of treated lesions was still significantly risky with these stents [31]. ### Drug-Eluting Stents (DES) Drug-Eluting Stents (DES) the next creation of stents. The drug eluted was an antimitotic agent that prevented the growth of SMCs. In the history of interventional cardiology, there has been a third revolutionary paradigm shift. was signaled during 1999 when the first DES was implanted in Brazil by Sousa. However, the possibility of late stent thrombosis (ST) is enhanced by impaired endothelial regeneration and vasomotion. 
#### 7.2.1 First-generation Drug-eluting Stents First-generation DESs originally used two antiproliferative medications: sirolimus and paclitaxel. When Camenzind published a meta-analysis in 2006, it was reported that very late stent thrombosis (ST) increased both myocardial infarction (MI) and mortality risk [32]. Several randomized controlled trials (RCTs) were conducted to evaluate both devices and demonstrated considerable reductions in ISR, late lumen loss, and target lesion/vessel revascularization rates compared to BMS. Each was constructed from stainless steel with thick struts of greater than 130 µm [6, 33, 34]. Very late ST, while now recognized as a potential complication of first-generation drug-eluting stents (DES), occurs rarely, and numerous registries and meta-analyses have offered reassurance regarding the use of these devices in practice [35]. #### 7.2.2 Second-generation Drug-eluting Stents With the switch to metal alloys for the platform in the second-generation DES, the struts could be made thinner and more flexible. Drugs from the -limus family, such as zotarolimus, everolimus, and novolimus, have been combined with new, more biocompatible polymers that exhibit faster drug release and consequently earlier endothelial coverage. ### Bioresorbable scaffolds (BRS) BRS stands for bioresorbable scaffold. Resorbing over a period of 6 months to 2 years, these devices reduce long-term chronic inflammation and promote endothelial regeneration [28]. ## 8 Complications of stenting Stenting complications are comparatively rare, but there is a small chance that the body will reject the stent. Discomfort and bleeding at the puncture site where the catheter was inserted are the most common side effects. Some people have metal allergies or sensitivities, and stents contain metal components; stent manufacturers do not recommend their use in people sensitive to metal. Coronary artery dissection, a small tear of the inner layer of the artery, occasionally results from the procedure. The tear is usually small and heals on its own. In certain circumstances, a stent is used to repair the tear. Immediate treatment is given if the tear is severe and causes blockage of arterial blood flow or bleeding around the heart. ## 9 Limitations of stenting In over 90 percent of patients, stenting improves blood flow and relieves symptoms, but there is a chance that symptoms will return within six months. Symptomatic restenosis can occur in the following cases: 1. In about 30 percent of patients whose blocked artery is opened without a stent (balloon angioplasty alone). 2. In approximately 15 percent of patients with bare metal stents. 3. In less than 10 percent of patients with drug-eluting stents, also known as drug-coated stents. In addition, some medical conditions, such as diabetes and continued smoking, can increase the risk of re-narrowing, as can diffusely narrowed arteries, high low-density lipoprotein (LDL) cholesterol, high blood pressure, a narrowing located at or near the origin of a side branch of a large blood vessel, and a blood vessel with numerous implanted stents. ## 10 Regulatory Approval Pathways The 1994 FDA approval of Palmaz-Schatz stents for coronary use marked an important milestone [36]. This first bare metal stent underwent prospective clinical trials to meet regulatory requirements and demonstrate safety and efficacy.
Subsequent stent approvals built on this pathway, with new generations of stents requiring comparably designed trials. First-generation drug-eluting stents gained FDA approval in 2003 to 2004 based on trials like SIRIUS [6]. Approval of these novel, higher-risk devices spurred more rigorous post-marketing surveillance mandates. Costly and lengthy regulatory processes posed challenges, limiting the pace of incremental innovation. Efforts to balance safety with faster access continue to evolve. ## 11 Care after the procedure In over 90 percent of patients, stenting improves blood flow and relieves symptoms, but there is a chance that symptoms will return within six months. Symptomatic restenosis can occur in the following cases: 1. In about 30 percent of patients whose blocked artery is opened without a stent. 2. In approximately 15 percent of patients with bare metal stents. 3. In less than 10 percent of patients with drug-eluting stents, also known as drug-coated stents. Compared to other coronary artery sites, some are more likely to re-narrow. Additionally, some medical conditions, for instance diabetes and continued smoking, may increase the likelihood of re-narrowing, as may diffusely narrowed arteries, high blood pressure, high low-density lipoprotein (LDL) cholesterol levels, a narrowing at or near the start of a side branch of a major blood vessel, and a blood vessel with numerous stents implanted. ## 12 Preventing blood clots The formation of a blood clot (thrombosis) inside the stent, also known as stent thrombosis, is one of the most serious complications that can occur after the insertion of a stent. Fortunately, thanks to the administration of aspirin and other antiplatelet drugs both before and after stenting, stent thrombosis is rare. It is believed that clotting occurs when the metal of the stent comes into contact with blood components. Stent thrombosis, which cuts off the blood supply to the heart, can lead to a heart attack or even death. Although most occurrences happen within the first thirty days of stent placement, stent thrombosis may occur as early as twenty-four hours after the procedure or as late as a year or more afterward. ## 13 When to seek help If any of the following events occur after stenting, seek medical help immediately: \(\bullet\) Fever greater than 38\({}^{\circ}\)C (100.4\({}^{\circ}\)F). \(\bullet\) Fainting or dizziness. \(\bullet\) An abnormal pulse. \(\bullet\) Onset of chest pain. \(\bullet\) The puncture site becomes extremely painful, swollen, warm, bleeds more than a few drops, or discharges pus. ## 14 Conclusion The history and development of coronary artery stenting is among the most astonishing chapters of modern medical practice. Patients requiring coronary angioplasty typically receive treatment with coronary artery stenting. Stents have eliminated acute recoil and the mechanical component of restenosis, removing the need for urgent bypass surgery. Even with BMS, ISR has declined due to significant advances in stent platform design. As a result of numerous efficacy and safety studies, coronary artery stents are currently the preferred treatment for CAD. Nevertheless, stent thrombosis and restenosis remain major challenges for modern coronary artery stents. Future interventional cardiologists may be able to practice individualized, evidence-based medicine, in which the selection of a stent is based on a patient's inherent factors, lesion characteristics, and risk profile (for thrombosis, restenosis, and bleeding).
2302.11478
Theoretical exploration of task features that facilitate student sensemaking in physics
Assessment tasks provide opportunities for students to make sense of novel contexts in light of their existing ideas. Consequently, investigations in physics education research have extensively developed and analyzed assessments that support students sensemaking of their surrounding world. In the current work, we complement contemporary efforts by theoretically exploring assessment task features that increase the likelihood of students sensemaking in physics. We identify the task features by first noting the salient characteristics of the sensemaking process as described in the science education literature. We then leverage existing theoretical ideas from cognitive psychology, education, and philosophy of science in unpacking the task features which elicit the characteristics of sensemaking. Furthermore, we leverage Conjecture Mapping -- a framework from design-based research -- to articulate how the proposed task features elicit the desired outcome of sensemaking. We argue that to promote sensemaking, tasks should cue students to unpack the underlying mechanism of a real-world phenomenon by coordinating multiple representations and by physically interpreting mathematical expressions. Major contributions of this work include: adopting an agent-based approach to explore task features; operationalizing conjecture mapping in the context of task design in physics; leveraging cross-disciplinary theoretical ideas to promote sensemaking in physics; and introducing a methodology extendable to unpack task features which can elicit other valued epistemic practices such as modeling and argumentation.
Amogh Sirnoorkar, James T. Laverty
2023-02-22T16:27:39Z
http://arxiv.org/abs/2302.11478v1
# Theoretical exploration of task features that facilitate student sensemaking in physics ###### Abstract Assessment tasks provide opportunities for students to make sense of novel contexts in light of their existing ideas. Consequently, investigations in physics education research have extensively developed and analyzed assessments that support students sensemaking of their surrounding world. In the current work, we complement contemporary efforts by theoretically exploring assessment task features that increase the likelihood of students sensemaking in physics. We identify the task features by first noting the salient characteristics of the sensemaking process as described in the science education literature. We then leverage existing theoretical ideas from cognitive psychology, education, and philosophy of science in unpacking the task features which elicit the characteristics of sensemaking. Furthermore, we leverage Conjecture Mapping - a framework from design-based research - to articulate how the proposed task features elicit the desired outcome of sensemaking. We argue that to promote sensemaking, tasks should cue students to unpack the underlying mechanism of a real-world phenomenon by coordinating multiple representations and by physically interpreting mathematical expressions. Major contributions of this work include: adopting an agent-based approach to explore task features; operationalizing conjecture mapping in the context of task design in physics; leveraging cross-disciplinary theoretical ideas to promote sensemaking in physics; and introducing a methodology extendable to unpack task features which can elicit other valued epistemic practices such as modeling and argumentation. ## I Introduction Researchers in physics education have advocated for facilitating students' content understanding through promoting "sophisticated epistemology" - leveraging different modes of reasoning while engaging with a task [1, 2]. The education research community has also emphasized promoting pedagogical practices that facilitate students in generating new knowledge by building on their existing ideas [3]. Sensemaking [4] - the process of addressing a perceived gap in one's understanding - attends to these valued objectives. Sensemaking assists students' in better comprehending the curricular content by leveraging different forms of knowledge and practices [5]. Sensemaking is also one of the many ways through which scientists and engineers generate new knowledge [6, 7, 8]. Given this significance, there has been an uptick in investigations on the discourse markers and the nature of tasks associated with sensemaking. These include (but are not limited to) construction and critique of claims [6], version questions during interactions [9], the blending of model- and evidence-based reasoning [10], computational reasoning about physics scenarios [11], addressing quantitative problems through qualitative insights and vice-versa [12], and explaining physical systems through mathematical insights [13]. We contribute to these efforts by theoretically exploring the assessment task features that increase the likelihood of students sensemaking in physics. We identify these features by initially noting characteristics of the sensemaking process as described in the science education literature [4]. Guided by the research in cognitive psychology, science education and philosophy of science, we make a theoretical argument for the task features that promote sensemaking in physics. 
We neither argue that the proposed features _necessarily_ engage students in sensemaking nor any task that elicits sensemaking _necessarily_ entails these features. We also do not claim that the proposed list is an _exhaustive_ one. Rather, we make a modest argument that tasks entailing the proposed features _together_ (as opposed to presence of one of these) increase _the likelihood_ of students sensemaking. To highlight how the proposed features bring about the desired outcome of sensemaking, we elucidate the design criteria through a conjecture map. Conjecture mapping is a framework primarily employed in design-based research to conceptualize the interactions between theoretically salient design features of a learning environment and their intended outcomes [14]. We adopt this framework to our context in elucidating how the proposed features elicit the desired outcomes of sensemaking. The current work makes four key contributions to the contemporary literature. Firstly, this study presents an agent-based approach in articulating the task features by shifting the vocabulary from "_tasks entailing a feature X_" to "_tasks that cue students about X"_ or "_tasks that cue students to do X_". Such vocabulary would better account for students' agency along with the local practices of their learning environments. Secondly, this work operationalizes the Conjecture Mapping framework in the context of task-design in physics. Thirdly, we leverage cross-disciplinary theoretical ideas particularly from cognitive psychology and philosophy of science in identifying the task features that promote sensemaking in physics. Lastly, our methodological approach in identifying task features can also be potentially extended in unpacking task features which can elicit other valued epistemic practices such as modeling and argumentation. In doing so, we address the following research questions in the rest of this paper: **RQ1:**: How can we adopt a framework-based approach in theoretically identifying task features that promote students sensemaking in physics? **RQ2:**: What set of assessment task features increase the likelihood of students sensemaking in physics? This manuscript is structured as follows: in the next section, we briefly review the literature on sensemaking before describing the theories of sensemaking and conjecture mapping in Sections III.2 and III.3. In Sections IV-VIII, we detail the arguments in support of the task features which increase the likelihood of students sensemaking in physics. We substantiate each argument by providing a theoretical background and empirical evidence from the literature. We conclude by discussing the implications and limitations of this work in Sections IX and X. ## II Literature review on sensemaking Science education literature has a rich repository of investigations on students sensemaking about their surrounding world. These explorations broadly span across three domains: (i) theoretical descriptions of the sensemaking process (ii) analytical accounts exploring approaches of sensemaking, and (iii) the outcomes of sensemaking. We present a brief overview of the studies in each domain, and encourage readers to go through references [4; 15] along with the cited literature for additional details. ### Theoretical accounts of sensemaking The first domain of the sensemaking literature has focused on theorizing the underlying process involved in'making sense' of a given context. 
These accounts have explored sensemaking through the lens of transfer [16], modeling [10; 17; 18; 19; 20; 21; 22; 23], argumentation [24; 25; 6], epistemic frames [26; 27], and epistemic games [28]. According to Nokes-Malach and Mestre [16], sensemaking forms a critical component of 'transfer' - the process of leveraging existing knowledge in solving novel problems. The authors argue sensemaking (during problem-solving) to be an iterative process involving coordination between prior knowledge and contextual information while generating an optimal solution. The process of narrowing down on the optimal solution is often achieved by'modeling' the given problem [10; 17; 18; 19; 20; 21]. Modeling as sensemaking entails an initial construction of mental models, and subsequent validation of the models' ideas through external representations [22]. One can also model the given context by employing mathematics as either a tool and/or as an object of investigation [23]. Choosing the optimal solution candidate during sensemaking is also achieved through construction and critique of claims during an argument [24; 25; 6]. Sensemaking from the argumentation perspective entails generation and evaluation of new knowledge, both at the individual, and at the community level. The idea of generating new knowledge is also resonated in other studies theorizing sensemaking as an 'epistemic frame' (a tacit understanding of 'what's going on here?' [29; 30]), or as an 'epistemic game' (a strategic approach in perceiving an inquiry [31; 32]). The sensemaking epistemic frame involves generation of novel explanations in response to a perceived gap in one's understanding about an observed phenomenon. These explanations are based on one's lived experiences, and often are aimed at unpacking the underlying mechanism that gives rise to the phenomenon [33; 34; 27; 27]. On the other hand, the Sensemaking Epistemic Game [28] conceptualizes sensemaking as a multi-stage iterative process with a goal of addressing one's knowledge gap by leveraging contextual information and existing ideas. ### Analytical accounts of sensemaking The second domain of the sensemaking literature has focused on analytically identifying reasoning approaches, or instances (mainly involving mathematics) that qualify as sensemaking [35; 36; 37; 38; 39; 40; 41; 42; 8; 12; 8; 13; 8; 14; 8; 12; 8; 13; 8; 12; 8; 13; 8; 14; 8; 12; 8; 13; 8; 12; 8; 13; 8; 14; 8; 15; 16; 17; 18; 12; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 35; 40; 41; 42]. A review of this literature reveals a variety of definitions adopted to analyze the sensemaking process. A subset of this literature has defined sensemaking as establishing coherence between multiple representations of physics knowledge such as equations, figures, tables, or linguistic phrases [35; 36; 8; 35]. While Emigh _et al._[35] define coordination between these forms of representations as sensemaking, Lenz _et al._[36] observe sensemaking as seeking coherence or meaning between them. Other studies have defined sensemaking as establishing connections between the structure of mathematical formalisms, and the physical world [37; 38; 39; 40; 41; 42]. These studies have observed'mathematical sensemaking' to entail mapping of formal mathematics with causal relations [37], conceptual understanding [38; 12], or intuitive reasoning [39] about physical systems. ### Cognitive outcomes of sensemaking The third domain of the sensemaking literature focuses on probing the cognitive outcomes of the sensemaking process. 
This literature posits three major outcomes of sensemaking: (i) generation of new knowledge, (ii) development of sophisticated epistemology, and (iii) enhanced content understanding. #### ii.1.1 Generation of new knowledge The first cognitive outcome of sensemaking is the generation of new knowledge by blending curricular ideas with lived experiences. Studies discussing episodes of sensemaking have noted students making novel claims by constructing analogies, making assumptions, designing thought experiments, and predicting outcomes [43, 44, 45, 46]. Furthermore, sensemaking constitutes a crucial component of scientists' and engineers' reasoning in knowledge construction while solving cross-disciplinary real-world problems [44, 8, 16]. #### ii.1.2 Sophisticated Epistemology Personal epistemology - perspectives about what it means 'to know', and the nature of knowledge - plays a crucial role in how one engages with a given task [47, 48, 49]. During sensemaking, students iteratively coordinate and reconcile different forms of knowledge and reasoning approaches. The knowledge forms include lived experiences, intuitive arguments, conceptual and procedural ideas, or hypotheses [43, 8, 4]. These knowledge forms are further accompanied by reasoning practices such as argumentation [6], problematization [9], or modeling [50, 22]. This virtue of leveraging different forms of knowledge and blending them with a broad spectrum of epistemic practices results in sophisticated epistemology [51, 2] - the second major cognitive outcome of sensemaking. #### ii.1.3 Enhanced content understanding One of the consequences of generating new knowledge through sophisticated epistemology is enhanced content understanding (the third cognitive outcome). Referring to multiple sources of knowledge and leveraging varying epistemic practices during sensemaking contributes towards better content understanding [52, 53, 54, 55, 56, 30] by equipping students to 'transfer' skills across multiple disciplines [44, 16, 5]. ## III Theory ### Agentic paradigm in task-design Research on task-design has traditionally involved prescribing a set of design features (often backed by analysis of students' responses) which can elicit a targeted response from students. However, studies have increasingly highlighted the role of contextual factors such as the local norms of students' communities (e.g., teachers, classrooms, and institutions) on what counts as "knowing" or "doing" science [57], students' agency in accessing knowledge sources [58, 59], and their in-the-moment framing of the task's expectations [60] as influencing students' engagement with tasks. Efforts at explicit prompting in tasks have likewise yielded mixed results. While some have noted explicit prompting to enhance students' understanding of domain principles and procedural knowledge [61, 62, 63], others have noted it to impede students' intuitive reasoning [64] by selectively emphasizing parts of the presented information [65, 66]. As Berland _et al._[67] note, _"[...]emphasizing the actions alone can result in rote performance and attainment of skills, rather than student engagement in the rich work of scientific knowledge construction, evaluation, and refinement."_ In light of these observations, we adopt an agent-based approach in arguing about task features by shifting the vocabulary from "_tasks entailing a feature X_" to "_tasks that cue students about X"_ or "_tasks that cue students to do X_".
By "cuing" we mean, conveying or setting up expectations for students about a feature in a task or about a specific way of reasoning as a solution approach to the task. Such vocabulary would better account for students' agency along with the local practices of their learning environments. In the rest of this paper, we adopt this framing in theorizing assessment features that can increase the likelihood of students sensemaking in physics. ### Conjecture Mapping Design-based research accompanies a set of epistemic commitments about design and functioning of learning environments in addition to advancing the understanding of teaching and learning processes [14]. Attending to these commitments often requires researchers to articulate conjectures about how the designed learning environment functions in an intended setting. Conjecture mapping [14] is a technique which conceptualizes these arguments by establishing relationships between the design features, processes enacted by participants engaging with these features, and the intended outcomes. This technique highlights the relationships between various aspects of educational design through six elements: (i) a high-level conjecture, (ii) embodiment, (iii) mediating processes, (iv) outcomes, (v) design conjectures, and (v) theoretical conjectures (Figure 1). A _high level conjecture_ forms the first element of a conjecture map which articulates the theoretical idea driving the design of a novel learning environment. The articulated conjecture provides the road-map of the theoretical idea's operationalization in a given setting. This conjecture is then reified in _embodiment_, the second element of a conjecture map, which crystallizes the design features into several components. These components include: tools and materials (assessments, devices, etc.), task structures (the nature and form of tasks), participant structures (roles and responsibilities of participants), and discursive practices (forms of participants' discourses). These components further contribute to the _mediating processes_, a set of interactions and artifacts produced from the participants that mediate between the designed features and the intended cognitive/meta-cognitive _outcomes_. The _embodiment_, _mediating processes_, and _outcomes_ are connected through _design_ and _theoretical conjectures_ - the last two elements of a conjecture map. Design conjectures are the arguments about how the components of embodiment (tools/materials, task/participant structures and discursive practices) lead to the mediating processes. Theoretical conjectures, on the other hand, are the arguments describing how the mediating processes will in turn result into the desired outcomes. Figure 1 schematically represents the elements of a conjecture map and their interrelationships. We adopt conjecture mapping to elucidate how a set of task features (embodiment) can nudge students to engage "sensemaking elements" (mediating process) leading to generation of new knowledge, sophisticated epistemologies, and enhanced content understanding (outcomes). The theoretical arguments in favour of these outcomes (theoretical conjectures) are discussed in Section II.3. Sections V to VIII detail the arguments about how the proposed task features elicit the features of sensemaking (design conjectures). Figure 2 represents the adoption of conjecture mapping to our study. ### Sensemaking Studies in science education have described sensemaking in diverse ways. 
In the rest of this paper, we adopt Odden and Russ' [4] synthesized account of sensemaking as: _a dynamic process of building or revising an explanation in order to 'figure something out' - to ascertain the mechanism underlying a phenomenon in order to resolve a gap or inconsistency in one's understanding. One builds this explanation out of a mix of everyday knowledge and formal knowledge by iteratively proposing and connecting up different ideas on the subject. One also simultaneously checks that those connections and ideas are coherent, both with one another and with other ideas in one's knowledge system._ Odden and Russ put forward this definition by synthesizing three approaches through which researchers have conceptualized sensemaking in science education. In the first approach - as a stance towards science learning - sensemaking has been noted to entail generation of explanations describing the underlying mechanism of a phenomenon. In the second approach - as a cognitive process - sensemaking has been noted to involve integration of prior knowledge (experiences) with formal knowledge. In the last approach - as a discourse practice - sensemaking has been conceptualized as construction and critique of claims during argumentation. The construction component of argumentation entails proposing and connecting ideas to substantiate a claim. The critique component, on the other hand, entails ensuring coherence between the various connected ideas. Based on the definition of the sensemaking process, and the conceptualizations of sensemaking across the three approaches in the science education literature, we note the following "sensemaking elements" or the set of activities crucial for engaging in sensemaking: 1. Use of everyday and formal knowledge while reasoning about a phenomenon (sensemaking as a cognitive process). 2. Ascertaining the underlying mechanism of the phenomenon (sensemaking as a stance towards science learning). 3. Generating and connecting up different ideas in one's knowledge system (sensemaking as a discourse practice). 4. Seeking coherence between the generated ideas (sensemaking as a discourse practice). It should be noted that the above elements do not take into account the crucial aspect of noticing inconsistencies in one's understanding during sensemaking. The noticing of a discrepancy in one's knowledge system is a highly contextualized activity influenced by various factors including prior knowledge, awareness, self-evaluation, and adopted strategies while reasoning about a given scenario [68]. Figure 1: Modified schematic representation highlighting relationships between the elements of a conjecture map. The original representation can be found in [14]. Our list of sensemaking elements does not include this critical feature due to its highly contextual nature. In order to address this shortcoming in our theoretical approach, we adopt a probabilistic stance ("the task features _increase the likelihood_ of students sensemaking") rather than a deterministic one ("the task features _elicit_ sensemaking") in our arguments. We blend the above-mentioned sensemaking elements with the conjecture mapping framework in identifying task features which promote sensemaking. By definition, if students engage in all of the above-mentioned sensemaking elements during an activity, they are more likely to engage in the sensemaking process. 
Along the same lines, we posit as our high level conjecture that _a set of task features which elicit the sensemaking elements increase the likelihood of students sensemaking_. Our design conjectures correspond to the arguments (articulated in Sections IV to VIII) which link the proposed task features to the sensemaking elements. ## IV Task features that facilitate sensemaking In Section III.3, we identified four "sensemaking elements" or a set of activities which together contribute to the likelihood of sensemaking. These include: blending everyday and formal knowledge while reasoning about a phenomenon, ascertaining the underlying mechanism of the phenomenon, generating and connecting diverse ideas, and seeking coherence between the generated ideas. We posit that the set of task features which elicit these sensemaking elements increases the likelihood of students sensemaking in physics. In the next four sections, we propose that tasks promote sensemaking in physics when they cue students: (i) about the presence of real-world context(s), (ii) to engage in mechanistic reasoning, (iii) to coordinate between multiple representations, and (iv) to extract physical implications from mathematical expressions. Each section consists of _conjectures_ - arguments about a task feature eliciting specific sensemaking elements, _theoretical background_ - a theoretical basis of the argument, and _empirical evidence_ - evidence in favor of the argument from the literature. Table 1 summarizes these components. ## V Tasks Cuing about the presence of real-world context(s) The first feature we argue to contribute to students sensemaking is the task cuing students about the presence of real-world context(s). In line with the contemporary discourse in the science education literature, we consider a real-world context as a scenario relevant to the learner, and which requires application of scientific principles/models to make sense of the presented scenario [69]. We argue that tasks perceived as rooted in real-world contexts facilitate two of the four sensemaking elements: use of everyday and formal knowledge; and generating and connecting up diverse ideas in students' knowledge system. These arguments, as design conjectures in our conjecture map (Figure 2), have been labeled as RW1 and RW2. ### RW1: Real-world contexts facilitate use of everyday and curricular knowledge **Conjecture RW1:**_If a task cues students about the presence of a real-world context, then it is more likely to invoke their everyday and curricular knowledge._ In other words, we posit that real-world scenarios in physics tasks appeal to students' lived experiences along with priming their formal curricular ideas. In order to substantiate this argument, we turn to studies in cognitive psychology probing how words or phrases in tasks prime specific information from one's knowledge system. **Theoretical background**: Investigations on human interactions with tasks associated with a language's vocabulary (lexical tasks) have observed the role of tasks' contexts on participants' reasoning [70]. According to these studies, the greater the relevance of the task's context to the participants, the better the task interacts with their memories [71; 72]. 'Semantic priming' [73] is one of the theoretical constructs proposed to explain how words or phrases in a lexical task cue related ideas from one's memory. 
Semantic priming is a cognitive effect in which people respond faster to targeted words (e.g., 'dolphin') when they are preceded by related words (e.g., 'whale'), as compared to the unrelated ones (e.g., 'chair'). Semantic relatedness represents the similarity in meaning or the overlap in featural description between a set of words or phrases [74]. Collins and Loftus [75] present the 'Spreading Activation Theory' to describe the mechanism through which semantic memory is accessed during lexical activities. According to this theory, semantic memory consists of a network of interconnected nodes, with each node representing a concept. A "concept" can take several forms ranging from a word to a proposition. The connections between any pair of nodes represent the information connecting the two concepts. The stronger the connections between the two nodes, the easier it is to retrieve associated concepts from memory. This memory network further embeds a semantic network, in which nodes are connected based on the words' meanings and their shared features. The strength of association between the nodes depends on the degree to which the associated nodes share common features. For instance, the semantic association of the word "red" is stronger with "rose" as compared with "elephant". When a concept is primed during a lexical activity (such as while reading the task prompt), activation spreads out from the primed node along the paths of the network. The "intensity" of activation spread is higher for a strongly associated pair of nodes. We conjecture that context-based tasks trigger semantic priming with activation spread emanating from concepts (nodes) associated with students' lived experiences as well as with their curricular knowledge. In other words, real-world scenarios in tasks are more likely to invoke arguments from everyday lives and formal knowledge. This cuing is more likely to be semantic in nature, i.e., based on shared features of the words/phrases in the task description. **Empirical evidence:** We find empirical evidence for our above conjecture from several studies in PER. Odden and Russ [9], while noting the role of vexing questions in sensemaking, discuss a pair of students' (Jake and Liam) reasoning on a task rooted in real life. The task inquires about the safety of a car's passengers when exiting the vehicle following a lightning strike during a thunderstorm (with the passengers inside the car being unaffected by the lightning). The students approach the task by blending conceptual arguments about charge distribution with their everyday experiences about the shape of the car door's handle. Similar observations reflecting amalgamation of curricular knowledge with lived experiences can be found in case studies involving context-based tasks from other studies in PER [50, 76]. More direct evidence for our conjecture comes from Enghag _et al._'s [77] study exploring students' reasoning about a context-rich physics problem. The authors note students initiating their approaches by rephrasing the given prompt based on their lived experiences before referring to the underlying physics principles. The authors highlight references to everyday knowledge as instrumental in students' meaning making, and understanding of the physics involved in the task. 
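To make the spreading-activation account above concrete, the following toy sketch implements activation spreading over a small semantic network. It is purely illustrative: the concept nodes, edge weights, decay constant, and threshold are all invented for this example and are not drawn from any of the studies cited above.

```python
# Toy illustration of spreading activation over a small semantic network.
# All nodes, edge weights, and constants are invented for illustration only.

# Weighted, undirected associations between concepts (stronger = more related).
ASSOCIATIONS = {
    ("lightning", "charge"): 0.9,
    ("lightning", "thunderstorm"): 0.8,
    ("charge", "metal"): 0.7,
    ("car", "metal"): 0.6,
    ("car", "door handle"): 0.5,
    ("thunderstorm", "rain"): 0.4,
}

def neighbors(node):
    """Yield (neighbor, weight) pairs for a node."""
    for (a, b), w in ASSOCIATIONS.items():
        if a == node:
            yield b, w
        elif b == node:
            yield a, w

def spread_activation(source, decay=0.6, threshold=0.05):
    """Propagate activation outward from a primed concept.

    Activation decays with each hop and along weaker associations,
    mimicking the idea that strongly associated concepts are easier to retrieve.
    """
    activation = {source: 1.0}
    frontier = [source]
    while frontier:
        nxt = []
        for node in frontier:
            for nb, w in neighbors(node):
                a = activation[node] * w * decay
                if a > activation.get(nb, 0.0) and a > threshold:
                    activation[nb] = a
                    nxt.append(nb)
        frontier = nxt
    return dict(sorted(activation.items(), key=lambda kv: -kv[1]))

if __name__ == "__main__":
    # Priming "lightning" (as in the thunderstorm task above) activates both
    # curricular concepts ("charge") and everyday ones ("door handle").
    print(spread_activation("lightning"))
```

On this reading, a real-world prompt acts as the primed node, and the resulting activation pattern spans both formal and everyday concepts, which is the mechanism conjecture RW1 appeals to.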
### RW2: Real-world contexts facilitate generating and connecting diverse ideas **Conjecture RW2:**_If a task cues students about the presence of a real-world context, then it is more likely to lead students to generate and connect diverse sets of ideas (conceptual, procedural and intuitive) from their knowledge system_. We again refer to the literature from cognitive psychology, particularly on search and selective retrieval of ideas from memory [78, 79, 80], in support of our argument. **Theoretical background:** Nijstad _et al._[81] propose the Search for Ideas in Associative Memory (SIAM) as a mechanism to describe how ideas get generated while engaging in an activity. According to this account, the internal process of idea generation proceeds through two distinct stages: (i) knowledge activation, and (ii) idea production. Figure 2: Contextual operationalization of the conjecture map in our study. Our high level conjecture (not represented in this figure) takes the form: “a set of task features which elicit the sensemaking elements increase the likelihood of students sensemaking”. While the design conjectures are detailed in Sections V-VIII, the theoretical conjectures are discussed in Section II.3. In the knowledge activation stage, a search in one's memory networks is triggered by a cue from the contextual features of the task. The structure and function of these memory networks are similar to those of the networks discussed under the Spreading Activation Theory in RW1. The memory search initiated by the contextual cue results in the retrieval of an image (idea), whose probability of retrieval depends on the strength of association between the cue and the image. In the second stage, i.e., the idea production stage, the initial image (produced in the previous stage) now acts as the triggering cue, leading to the production of an additional image. This chain of image production - a preceding image acting as a triggering cue for a new image - results in a "train of thought" until the information processing session is terminated. The conditions of termination depend on the nature and outcomes of the activity. We conjecture that the presence of real-world contexts in physics tasks is more likely to trigger a diverse set of ideas from one's knowledge system. From the viewpoint of SIAM, this conjecture can be rephrased as: context-based tasks are more likely to trigger knowledge activation leading to generation of diverse 'trains of thought'. This argument, as a design conjecture in our conjecture map (Figure 2), has been labeled as RW2. **Empirical evidence:** We find several references in the cognitive psychology and science education literature in support of our above argument. While discussing the SIAM account, Nijstad _et al._[81] further note that semantically diverse cues (having diverse featural association between the cues) in a task lead to the generation of diverse sets of ideas. George and Wiley [82] note people "rely too heavily on familiar or easily accessible information during idea generation". Other researchers too have made similar observations on the familiarity of contextual cues stimulating generation of novel ideas [83; 84; 85]. In science education, Rennie and Parker [86] document students' perspectives on solving physics problems based on real-life scenarios. The authors note students referring to context-rich problems as "easier to visualize" as one of the emerging themes in students' responses. 
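The two SIAM stages can likewise be illustrated with a short, purely hypothetical sketch: a contextual cue retrieves an initial idea, and each retrieved idea then serves as the cue for the next one, producing a "train of thought". The concepts and association strengths below are invented for illustration and do not come from SIAM or any cited study.

```python
# Toy illustration of SIAM's two stages: knowledge activation (cue -> first idea)
# and idea production (each idea cues the next), forming a "train of thought".
# The associations below are invented placeholders.

ASSOCIATIONS = {
    "roller coaster": {"free fall": 0.8, "safety bar": 0.6},
    "free fall": {"gravity": 0.9, "weightlessness": 0.7},
    "gravity": {"Newton's second law": 0.8},
    "safety bar": {"normal force": 0.5},
}

def train_of_thought(cue, max_ideas=5):
    """Follow the strongest unused association from each retrieved idea."""
    thought, current = [], cue
    for _ in range(max_ideas):
        candidates = {k: v for k, v in ASSOCIATIONS.get(current, {}).items()
                      if k not in thought and k != cue}
        if not candidates:
            break
        current = max(candidates, key=candidates.get)  # strongest association wins
        thought.append(current)
    return thought

if __name__ == "__main__":
    # A real-world cue ("roller coaster") chains everyday and curricular ideas.
    print(train_of_thought("roller coaster"))
```

The point of the sketch is only that a richer, more relatable cue tends to seed longer and more varied chains of retrieved ideas, which is the intuition behind conjecture RW2.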
## VI Tasks Cuing Students to Generate Mechanistic Explanation Mechanistic explanations - descriptions unpacking the underlying mechanism of a phenomenon - have been considered more sophisticated than, say, occult or teleological accounts [87; 34]. In what follows, we argue that tasks cuing students to generate a mechanistic explanation of a (real-world) phenomenon elicit three of the four sensemaking elements. These include: referring to everyday and curricular knowledge, ascertaining the underlying mechanism of a phenomenon, and proposing and connecting up different ideas in one's knowledge system. These conjectures have been respectively labelled as ME1, ME2, and ME3 in Figure 2. As the ME2 conjecture - tasks requiring students to generate a mechanistic account lead to mechanistic reasoning - is self-explanatory, we will exclude it from detailed discussions below. ### ME1: Mechanistic explanations facilitate use of everyday and curricular knowledge **Conjecture ME1:**_If a task cues students to generate mechanistic explanation(s), then it is more likely to invoke references to everyday and curricular knowledge._ We substantiate our argument by referring to studies on storage and accessibility of knowledge about mechanisms in memory. **Theoretical background:** The cognitive science literature argues for six possible formats through which knowledge about mechanisms (henceforth referred to as 'mechanism knowledge') is internally represented. These include: (i) associations, (ii) forces or powers, (iii) icons, (iv) placeholders, (v) networks, and (vi) schemas. A detailed discussion about each of the representational formats would be beyond the scope of this paper. However, we briefly describe each of these formats, and encourage readers to go through [88; 89; 90] along with the cited references for additional details. "Associations" represent the mapping between two or more distinct events from one's memory such that the knowledge about a familiar event guides the expectations about the unfamiliar one [91; 92]. In physics, this association can be observed in the way propagation of sound in a medium is explained in terms of the compressions and rarefactions occurring on a vibrating spring. The second format, "forces" [93; 94] or "powers" [95; 96], posits that mechanistic inferences are driven by the knowledge of physical laws. According to the "forces" account, interactions between entities (e.g., a collision between two objects) are mediated by forces, and these interactions are described through vectors highlighting the direction of the entities' motion under the influence of the involved forces. On the other hand, the "powers" account posits that humans comprehend mechanisms by conceptualizing entities as having inherent dispositional features. These features either take the form of "powers" (tendency to bring about effects) or "liabilities" (tendency to undergo the effects). Melting of ice in the presence of heat, for instance, would be explained in terms of the "power" of heat (causing the ice to melt) and the "liability" of ice (to melt under the influence of heat). The third candidate - "icons" - is a representation in which mechanisms are conceptualized as mental simulations or mental models consisting of a series of icons or image-like formats [97; 98; 99]. The mechanistic imagery that humans possess about the functioning of gears or pulleys is an example of this format [100]. 
On the contrary, the "placeholders" account (the fourth representational format) posits that people tend to hold a placeholder or a reference pointer for mechanisms instead of a detailed knowledge [101]. Studies arguing for this format have observed people to possess skeletal details about the functioning of familiar everyday complex systems (such as sewing machines or can openers) with a meta-representational placeholder representing an unknown existing mechanism. The penultimate representational format, "networks", has its origin in statistics and artificial intelligence. According to this account, causal relations are internally comprehended through causal networks (or "Causal Bayesian Networks") in which the nodes represent the variables involved in a mechanism, and the links between the nodes represent the causal relations between the involved variables [102, 103, 104]. As an example, the experience of drinking coffee leading to the sense of feeling energized would be represented in a typical causal network with "drinking coffee" and "feeling energized" as two nodes with an arrow pointing from the former towards the latter. The last candidate in our list - "schemas" [105] - correspond to clusters of knowledge in the long-term memory that are employed while figuring out the mechanism of a phenomenon. For instance decisions on the appropriate container to carry cold drinks during summer, are guided by the schemas about heat conductivity through various kinds of materials encountered in daily lives. One of the common themes across the six representational formats discussed above is their association with one's prior knowledge. The formats highlight that people construct mechanistic accounts by building on their existing notions about the functioning of their surrounding world. Consequently, we posit that tasks cuing students to generate a mechanistic explanation, particularly about a real-world context/phenomenon, can nudge them to invoke their everyday ideas in addition to knowledge gained from formal instruction. **Empirical evidence:** Several studies in physics education provide empirical evidence in support of our argument. For instance, diSessa [49] observes students to have a "_sense of mechanism_" through which they gauge the likelihood of various events, make "backward and forward chaining" of events [87], and provide the causal account of an observed phenomenon. This sense of mechanism is built from basic sensemaking elements called "phenomenological primitives" which are in turn derived from one's lived experiences. Resonating a similar view, Hammer [106] notes students and physicists to have "rich stores of causal intuitions", and generating mechanistic explanations to entail references to lived experiences and formal ideas. Sironokar _et al._[50, 76] too observe student-generated mechanistic account of an amusement park ride (a real-world context) to involve an amalgamation of lived experiences and curricular ideas. ### ME3: Mechanistic explanations facilitate generation and connection of different ideas **Conjecture ME3:**_If a task cues students to generate a mechanistic explanation, then it is more likely to lead them in generation and connection of diverse ideas from their knowledge system_. We support our argument by discussing the nature and features of mechanistic reasoning as described in the philosophy of science and science education literature. 
To begin with, as noted above, mechanistic reasoning entails drawing ideas from lived experiences and curricular knowledge. Thus, intuitive and formal insights contribute to the spectrum of ideas invoked in unpacking the mechanism of a phenomenon. **Theoretical background:** Furthermore, mechanistic reasoning is a complex cognitive process involving description of the behaviour of relevant entities and processes that give rise to a phenomenon [107, 108, 46, 87]. One generates mechanistic accounts by transitioning from observable features of the phenomenon at the macro level to the underlying entities or processes (often at the micro level) [87, 46]. The process of ascertaining the mechanism can further involve transitioning back from the micro to the macro features, and testing the validity of the generated explanations by varying the spatial or temporal organization of the entities or processes. This cyclic navigation across "scalar levels" - between observable features and underlying entities or processes - requires one to invoke conceptual, procedural or intuitive ideas and establish coherence between them. This argument, as our design conjecture, has been labelled 'ME3' in Figure 2. **Empirical evidence:** Several studies describing episodes of mechanistic reasoning have noted students invoking and connecting diverse sets of ideas in their explanations [109, 87, 110]. Russ _et al._[87] discuss first-grade students' mechanistic account of a scenario involving a piece of paper and a book simultaneously dropped from the same height. The students explain the mechanism of falling objects in terms of gravity (a curricular idea) and everyday experience of jumping and landing back on the ground. Similarly, de Andrade _et al._[109] discuss a pair of middle school students' collaborative exploration of how antacid pills neutralize the stomach's acidity. The students (Iris and Raul) generate an explanation by invoking the conceptual argument of the formation of salt and water upon the acid-base reaction. This argument is also accompanied by a procedural idea of the combination of elements during the reaction in determining the molecular formula of the salt and water. The students also reason by making arguments, based on everyday experiences, that molecules (or objects in general) get smaller in size after collision in a reaction. We find a similar observation in Bachtiar _et al._'s [110] study in which students invoke conceptual, procedural and intuitive ideas while generating mechanistic accounts of a soccer ball's motion while designing its animation. ## VII Tasks Cuing students to engage with multiple representations Elucidating complex ideas through multiple external representations such as equations (wave functions, equations of state), graphs (kinematic plots, isotherms), or words (laws, theorems) is a common practice in physics. By multiple representations, we mean a combination of distinct external representations that illustrate the same content but use different symbol systems [111; 112]. Representational formats of an idea complement each other by highlighting specific information about its content [113; 114; 115; 116; 117]. For instance, the kinematic equation \(v=v_{0}+at\) can better highlight the dependence of an object's final velocity (\(v\)) on initial velocity (\(v_{0}\)), duration of its motion (\(t\)), and its uniform acceleration (\(a\)). 
On the other hand, the graphical representation of the same equation (velocity vs time plot) better highlights the qualitative variation of the object's velocity for a given nature of acceleration (positive, negative or zero). We argue that tasks cuing students to engage with multiple representations - either provided or constructed - address the following sensemaking elements: proposing and connecting up different ideas; along with establishing coherence between them. These arguments, as our design conjectures, are labelled "MR1" and "MR2" in Figure 2. As a primer, note that unlike the last two sections, in the current section and in the next one, we substantiate the relevant design conjectures through a common theoretical background. **Theoretical background:** As a basis for these conjectures, we refer to Mayer's "Cognitive Theory of Multimedia Learning (CTML)" [118] describing the cognitive process involved in interacting with multiple representations. According to CTML, engaging with multiple representations (or multimedia) involves participation of sensory, working, and long-term memories. Sensory memory is a short-term memory in which information obtained through sensory inputs (such as visuals of a painting) is stored in its original perceptual form. Working memory corresponds to the cognitive faculty involved in processing and manipulating instantaneous information in active consciousness (e.g., the cognitive process invested in comprehending the meaning of this sentence). Lastly, the long-term memory corresponds to the accessible information stored across longer periods of time (e.g., information about one's childhood). With the participation of these memory forms, the cognitive process involved in interacting with multiple representations proceeds through three distinct and consecutive phases: (i) _selection_, (ii) _organization_, and (iii) _integration_ of information. As noted earlier, each representational format of an idea highlights a specific component of the information about the idea. The first phase - _selection_ - involves selective choice of this information to be expressed into, or extracted from, each representational format with the participation of one's sensory memory. In our kinematic example, the selective extraction of the information about the interdependence of the variables (\(v\), \(v_{0}\), \(a\), and \(t\)), along with their behavior in limiting conditions, marks the _selection_ phase associated with the algebraic representation. Similarly, an analogous argument can be made about the graphical representation (\(v-t\) plot), in which the qualitative information about the velocity variation is selectively comprehended. The next phase - _organization_ - involves forming mental representations of the embedded, or the selected, pieces of information in the working memory. These mental representations are constructed by establishing internal connections between the informational pieces. In the kinematic example, this can correspond to the formation of mental representations of the interdependence of the variables (extracted from the equation), and the velocity variations for a given acceleration (extracted from the graph). Lastly, these mental representations are fused with the help of prior knowledge drawn from the long-term memory, marking the _integration_ phase of the CTML. 
In the kinematic analogy, this phase can correspond to the amalgamation of the algebraic and graphical mental representations using existing knowledge about slopes, or about uniform/non-uniform motion of objects. ## MR1: Engaging with multiple representations facilitates generation and connection of ideas **Conjecture MR1:**_If a task cues students to engage with multiple representations, then it is more likely to lead students into generation and connection of ideas from their knowledge system._ Based on the CTML's three phases, particularly the _selection_ and the _organization_ phases, we note that engaging with multiple representations involves generation and connection of ideas. While the former phase involves idea generation through selective interaction with information from the representations, the latter involves connecting the ideas through formation of mental representations. **Empirical evidence:** Several studies in physics education have made observations about representations facilitating generation and connection of ideas. Researchers have observed multiple representational formats to cue students in employing and connecting diverse sets of domain-specific principles and strategies during problem solving [119; 120; 121]. De Cock [119] observes that an isomorphic task presented in varying representational formats tends to elicit different solution approaches along with physics principles. In a similar study, Podolefsky and Finkelstein [120] note that use of multiple representations can facilitate mapping of ideas during analogical reasoning. Van Heuvelen and Zou [121] note that multiple representations of work-energy processes such as verbal descriptions, bar-charts, and mathematical equations facilitate students in better visualizing the energy conservation principle in addition to production of "mental images for different energy quantities". ## MR2: Engaging with multiple representations facilitates establishing coherence between ideas **Conjecture MR2:**_If a task cues students to engage with multiple representations, then it is more likely to nudge them in seeking coherence between ideas._ Along the same lines, the CTML's last two phases - _organization_ and _integration_ - highlight that engaging with multiple representations facilitates establishing coherence between the generated ideas. While the former phase entails establishing coherence between the selected pieces of information from a representational format, the latter involves establishing coherence between ideas from different representations. Seufert [122] refers to these two phases as 'intra-representational coherence formation' (establishing interrelations _within_ a representational format), and 'inter-representational coherence formation' (establishing interrelations _between_ representational formats). **Empirical evidence:** Cox [116] argues that external representations help in better comprehending an idea as each representational format directs attention to a particular characteristic feature highlighted by the representation. Indeed, Gire and Price [113] note students reasoning in quantum mechanics by effectively coordinating between Dirac, algebraic and matrix notations while representing quantum states of a system. The authors observe students establishing coherence between their ideas by using one notation as a template while creating corresponding representations in other notations. 
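To tie together the kinematic example used throughout this section, the complementary information carried by the algebraic and graphical representations can be summarized compactly (this summary is ours, added for illustration):

\[ v(t) = v_{0} + at, \qquad \frac{dv}{dt} = a \;\; \text{(slope of the } v\text{-}t \text{ line)}, \qquad v(0) = v_{0} \;\; \text{(intercept)}. \]

The equation foregrounds how \(v\) depends on \(v_{0}\), \(a\), and \(t\), while the \(v\)-\(t\) plot foregrounds the sign and constancy of the slope; integrating the two draws on prior knowledge about slopes and uniformly accelerated motion, as described in the _integration_ phase above.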
## VIII Tasks Cuing students to extract physical implications from mathematical expressions Physics education research has an extensive corpus of discussions on the role and use of mathematics in physics [123; 124]. A major section of this work has analyzed students' interaction with mathematical formalisms during problem solving [30; 37; 55]. In the rest of this section, we argue that tasks cuing students to extract physical implications from mathematical expressions (equations, plots, etc.) lead to generation and connection of ideas, along with establishing coherence between them. These arguments have been labelled "PI1" and "PI2" in our conjecture map (Figure 2). **Theoretical background:** Discussions in the philosophy of science literature posit that extracting physical implications from mathematical expressions involves mapping structural features of mathematical formalisms to those of physical systems [125; 126; 127; 128]. This view has been identified with several theoretical perspectives such as 'mapping account' [125], 'interpretation' [126], 'inferential conception' [127] or 'inferential function' [128]. Nevertheless, the underlying theoretical view remains that interpreting mathematical relations involves bridging the structure of mathematical formalisms with the features of the target system. For instance, inferring the motion of a spring (target system) from the equation \(F=-kx\) (mathematical structure) involves mapping the algebraic symbol \(F\) with the net force on the spring, \(x\) with the spring's displacement from its mean position, \(k\) with the spring constant and the negative sign with the force's direction. Evidently, this mapping requires one to simultaneously engage with formal mathematical ideas (procedural or conceptual) along with ideas about the physical system. Consequently, we argue that the process of interpreting meaning from mathematical expressions involves generation of ideas and establishing coherence between them. ## PI1: Physical interpretations facilitate generation and connection of ideas **Conjecture PI1:**_If a task cues students to interpret mathematical expressions in light of physical systems, then it is likely to facilitate generation and connection of ideas._ Physically interpreting mathematical expressions is a common practice in physics. Whether it's determining the likelihood of an event based on the changes in entropy of involved systems, or identifying the position of an image from ray diagrams, students and physicists alike are familiar with this practice. **Empirical evidence:** Several studies have noted interpretation of mathematical results as a crucial component of reasoning in physics [32; 56; 129; 130]. Sherin [42] makes a case for the existence of knowledge structures called 'symbolic forms' which mediate the process of meaning making through mathematical formalisms. According to this view, students blend contextual ideas with mathematical insights while interpreting (or expressing) meaning from mathematical expressions. Making a similar observation, Arcavi [131] argues for 'symbol sense' in mathematics, which facilitates interpretation of mathematical expressions via intuitions. Perhaps more direct evidence in support of our argument comes from the study by Kuo _et al._[40] investigating students' blending of conceptual arguments with formal mathematics. The authors discuss one of their participants' (Pat) reasoning about the difference between final velocities of two balls dropped with differing initial velocities. 
The reasoning approach involves interpreting a kinematic equation (\(v=v_{0}+at\)) through the lens of derivatives (a mathematical idea), and linking it to the variation of the balls' parameters (ideas of the physical system). Gifford and Finkelstein [23] term this approach mathematical sensemaking involving use of mathematical 'tools' to reason about physical systems. ### PI2: Physical interpretations facilitate establishing coherence between ideas **Conjecture PI2:**_If a task cues students to interpret mathematical expressions in light of physical systems, then it is likely to nudge them in seeking coherence between ideas._ Along the same lines, we argue that the process of interpreting mathematical expressions involves establishing coherence between the generated ideas. In our above example involving the spring's motion, one can interpret the negative sign in the equation as the net force and the displacement vectors being oppositely directed at a given instant of time. **Empirical evidence:** Several studies in PER have indeed referred to the process of coherence seeking between mathematics and physical systems as "mathematical sensemaking". While Kuo _et al._ define it as "_leveraging coherence between formal mathematics and conceptual understanding_" [12], Dreyfus _et al._ define the same as "_looking for coherence between the structure of the mathematical formalism and causal or functional relations in the world_" [37]. Wilcox _et al._[129] further note this practice as 'Reflection of results' while discussing upper-division students' use of mathematics in physics. ## IX Discussion ### Operationalizing conjecture mapping in the context of task-design in physics We operationalize conjecture mapping - a framework in design-based research - in the context of identifying the assessment task features that promote sensemaking in physics (**RQ1**). Based on the literature's description of the sensemaking process, we note the sensemaking elements (the set of activities) that constitute sensemaking. The sensemaking elements correspond to the _mediating processes_ of our conjecture map - the set of interactions and artifacts produced by participants while engaging with the designed learning environment. Our _high level conjecture_ takes the form: "a set of task features which elicit the sensemaking elements increase the likelihood of students sensemaking". These task features then correspond to the _embodiment_ component of our conjecture map - the material features which elicit the sensemaking elements. The arguments substantiating the embodiment, i.e. our _design conjectures_, have been detailed in Sections V to VIII. Lastly, we note the theoretical conjectures about the _outcomes_ of engaging in the sensemaking process from the literature (Section II.3). Figure 2 highlights the contextual operationalization of conjecture mapping to our study. ### Identifying task features that increase the likelihood of students sensemaking in physics We identify the task features which increase the likelihood of students sensemaking in physics (**RQ2**) by leveraging contemporary theoretical ideas from cognitive psychology, education, and philosophy of science. These features include tasks cuing students: (i) about the presence of real-world context(s), (ii) to unpack the underlying mechanism of a phenomenon, (iii) to engage with multiple representations, and (iv) to physically interpret mathematical expressions. 
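As a compact illustration of the fourth feature, the Kuo _et al._ episode discussed in Section VIII can be rendered as a two-line derivation (this rendering is ours, with subscripts A and B labeling the two balls):

\[ v_{A} - v_{B} = (v_{0,A} + at) - (v_{0,B} + at) = v_{0,A} - v_{0,B}. \]

Reading the cancellation of the \(at\) terms as "both balls gain the same velocity in the same time interval" is exactly the kind of mapping between mathematical structure and physical behavior that conjectures PI1 and PI2 describe.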
The identified features complement the contemporary pedagogical efforts in supporting students in making sense of their surrounding world using curricular ideas. Several studies in science education have examined students' reasoning while engaging with real-world contexts (our first task feature). In addition to developing context-based pedagogical materials [132; 133; 134; 135], researchers have analyzed students' cognitive, meta-cognitive, and affective behaviors while engaging with such materials [136; 137; 138; 139; 69]. These studies have noted context-based problems to enhance students' situational interest [86], motivation [140; 141; 132; 142], along with improving attitudes towards science learning [143]. Our work adds increased chances of engaging in sensemaking to this growing list. Ogilvie [144] indeed notes context-rich, open-ended problems to be fertile grounds for students to notice inconsistencies in their knowledge systems - a crucial feature of the sensemaking process. Researchers have also explored students' engagement with multiple representations (our third task feature) in physics. Coordination between representations - referred to as "representational fluency" - has been noted to assist students in invoking conceptual ideas not specified in the problem statement [145; 113], leveraging information highlighted by representation(s) [120], and facilitating organization, prioritization, and communication of the contextual information [146; 147; 148; 116]. Our work adds to this list by noting that engaging with multiple representations leads to generating ideas and establishing coherence between the ideas, thereby facilitating sensemaking. Recent investigations have also explored the close association between sensemaking and modeling. Sirnoorkar _et al._[50] note assembling of prior knowledge during sensemaking to entail construction of mental models about the target systems. The authors also note addressing and resolving the perceived inconsistencies during sensemaking to entail coherence seeking in the models, and testing them in light of their target systems. Our identified task features complement these observations by facilitating promotion of sensemaking through modeling. Real-world systems (our first task feature) specify the nature of target systems, which, when modeled, can increase the likelihood of students sensemaking in physics. Similarly, coordinating multiple representations, and physically interpreting mathematical results (the last two task features) specify the ways of establishing coherence and testing the merit of the models. ## X Conclusion We make a theoretical argument that to promote sensemaking, tasks should cue students to unpack the underlying mechanism of a real-world phenomenon by coordinating multiple representations and by physically interpreting mathematical expressions. We make this argument by leveraging existing theoretical perspectives on the cognitive features of sensemaking, and by adopting conjecture mapping [14]. One of the primary contributions of this work involves adopting an agent-based approach in articulating task-design arguments in physics. Research on task design has traditionally focused on the valued objectives of researchers while overlooking the role of contextual features in influencing students' engagement with tasks. The current work presents an exemplar case by simultaneously attending to the valued objectives of the researchers along with accounting for the students' contextual factors. 
While our theoretical approach on deducing sensemaking elements from its definition reflects the researchers' valued objectives in task design, the vocabulary adopted in making task-related arguments reflects the consideration of students' contextual factors. Our work also makes a contribution to the contemporary literature by operationalizing conjecture mapping in the context of task design in physics. This technique has been traditionally employed in designing learning environments such as (but not limited to) vocational training [149], online or hybrid learning [150; 151], or pedagogy in informal communities [152]. The current work leverages this framework in designing physics tasks. Operationalization of this framework also brings together the broad literature on sensemaking (see Section II). While the 'embodiment' and 'mediating processes' (Section III.2) encompass the theoretical and analytical views on sensemaking, the 'outcomes' embodies the literature on the effects of sensemaking.

| Task feature | Conjecture | Theoretical basis | Empirical evidence |
|---|---|---|---|
| Real-world context (RW) | (RW1) If a task cues students about the presence of a real-world context, then it is more likely to invoke their everyday and curricular knowledge. | Semantic priming and Spreading Activation Theory | [9; 50; 76; 77] |
| | (RW2) If a task cues students about the presence of a real-world context, then it is more likely to lead students to generate and connect diverse sets of ideas (conceptual, procedural and intuitive) from their knowledge system. | Search for Ideas in Associative Memory (SIAM) | [81; 82; 83; 84; 85; 86] |
| Mechanistic explanation (ME) | (ME1) If a task cues students to generate mechanistic explanation(s), then it is more likely to invoke references to everyday and curricular knowledge. | Representational formats of mechanism knowledge | [49; 50; 76; 87; 106] |
| | (ME2) If a task cues students to generate mechanistic explanation(s), then it is more likely to elicit mechanistic accounts. | | |
| | (ME3) If a task cues students to generate mechanistic explanation(s), then it is more likely to cue generation and connection of diverse ideas from their knowledge system. | Theory of mechanistic reasoning | [87; 109; 110] |
| Multiple representations (MR) | (MR1) If a task cues students to engage with multiple representations, then it is more likely to cue generation and connection of ideas from their knowledge system. | Cognitive Theory of Multimedia Learning (CTML) | [119; 120; 121] |
| | (MR2) If a task cues students to engage with multiple representations, then it is more likely to nudge them in seeking coherence between ideas. | Cognitive Theory of Multimedia Learning (CTML) | [113; 116] |
| Physical interpretation (PI) | (PI1) If a task cues students to interpret mathematical expressions in light of physical systems, then it is likely to facilitate generation and connection of ideas. | Mapping account / Interpretation / Inferential Conception / Inferential Function | [40; 42; 131] |
| | (PI2) If a task cues students to interpret mathematical expressions in light of physical systems, then it is likely to nudge them in seeking coherence between ideas. | Mapping account / Interpretation / Inferential Conception / Inferential Function | [12; 37; 129] |

Table 1: A brief summary of the task features, design conjectures associated with each feature, theoretical background for the design conjectures, and the corresponding empirical evidence from the literature.
For instructors, this study provides a generalized framework for designing assignment/examination questions, or crafting examples for classroom discussions that can promote sensemaking. The generalized nature of these task features provides avenues for instructors to design tasks based on their local learning objectives and curricula, thereby facilitating their agency [58; 153]. For researchers, the current work describes a methodology for identifying task features, which can be extended to promote other valued epistemic practices such as argumentation or modeling [20]. Our proposed methodology - extracting salient features of a cognitive process from its definition, and back-tracking the task characteristics - can contribute to the community's efforts in promoting valued epistemic practices in our classrooms. Additionally, investigations on sensemaking in laboratories are gaining traction [154; 155]. Researchers can also extend our methodology in identifying features of activities or tasks that promote sensemaking during experimentation. Since we do not claim the identified features to be exhaustive, researchers can expand on the proposed list, or can further investigate the conditions in which the identified task features are effective. Contemporary reports on our task features do indicate certain accompanying constraints. Heckler [64] notes that explicit prompting on constructing representations may cue protocol-based approaches to learning as opposed to intuitive engagement with the content. Similarly, researchers have noted real-world contexts in tasks to elicit subjective judgements about the scenarios [137], and to initiate gender-based disparities in performance [156]. This study also opens up avenues to explore the interaction of the proposed task features with their structural features. Research on task design has noted activities such as designing experiments, or modeling complex systems to differ from solving the typical end-of-the-chapter physics problems. Unlike the former (referred to as ill-structured problems), the latter (well-structured) tasks have a well-defined protocol for initiating, proceeding, and terminating the activity [157; 158; 159; 160; 161]. The current study paves the way for researchers to probe the influence of our identified task features in the context of well- and ill-structured problems. However, a number of caveats accompany the claims made in this paper. Firstly, the sensemaking process, according to our adopted definition, is driven by a perceived gap in one's knowledge system [4]. This noticing of inconsistencies depends on prior knowledge, awareness, self-evaluation, and approaches students employ while reasoning about the given scenario [68]. Our proposed task features do not attend to this crucial element of the sensemaking process due to its contextual and meta-cognitive nature. Additionally, our objective is neither to argue that the proposed task features _necessarily_ engage students in sensemaking, nor that these features are the _only ones_ that promote sensemaking. Rather, we make a modest argument that the identified features, when present in a task _together_, enhance the likelihood of students sensemaking in physics. Future work would involve analyzing students' responses on the tasks embedding the four proposed task features, and validating (or refining) our conjecture map (Figure 2). We also intend to explore the effect of the task features in open-ended tasks as compared to the ones with scaffolds. 
## XI Acknowledgements We would like to thank Dean Zollman, Bethany Wilcox, Brandi Lohman, and Bill Bridges for their valuable insights. This material is based upon work supported by the National Science Foundation under Grant No. 1726360.
2308.04748
Fuzz4All: Universal Fuzzing with Large Language Models
Fuzzing has achieved tremendous success in discovering bugs and vulnerabilities in various software systems. Systems under test (SUTs) that take in programming or formal language as inputs, e.g., compilers, runtime engines, constraint solvers, and software libraries with accessible APIs, are especially important as they are fundamental building blocks of software development. However, existing fuzzers for such systems often target a specific language, and thus cannot be easily applied to other languages or even other versions of the same language. Moreover, the inputs generated by existing fuzzers are often limited to specific features of the input language, and thus can hardly reveal bugs related to other or new features. This paper presents Fuzz4All, the first fuzzer that is universal in the sense that it can target many different input languages and many different features of these languages. The key idea behind Fuzz4All is to leverage large language models (LLMs) as an input generation and mutation engine, which enables the approach to produce diverse and realistic inputs for any practically relevant language. To realize this potential, we present a novel autoprompting technique, which creates LLM prompts that are wellsuited for fuzzing, and a novel LLM-powered fuzzing loop, which iteratively updates the prompt to create new fuzzing inputs. We evaluate Fuzz4All on nine systems under test that take in six different languages (C, C++, Go, SMT2, Java and Python) as inputs. The evaluation shows, across all six languages, that universal fuzzing achieves higher coverage than existing, language-specific fuzzers. Furthermore, Fuzz4All has identified 98 bugs in widely used systems, such as GCC, Clang, Z3, CVC5, OpenJDK, and the Qiskit quantum computing platform, with 64 bugs already confirmed by developers as previously unknown.
Chunqiu Steven Xia, Matteo Paltenghi, Jia Le Tian, Michael Pradel, Lingming Zhang
2023-08-09T07:36:21Z
http://arxiv.org/abs/2308.04748v2
# Universal Fuzzing via Large Language Models ###### Abstract. Fuzzing has achieved tremendous success in discovering bugs and vulnerabilities in various software systems. Systems under test (SUTs) that take in programming or formal language as inputs, e.g., compilers, runtime engines, constraint solvers, and software libraries with accessible APIs, are especially important as they are fundamental building blocks of software development. However, existing fuzzers for such systems often target a specific language, and thus cannot be easily applied to other languages or even other versions of the same language. Moreover, the inputs generated by existing fuzzers are often limited to specific features of the input language, and thus can hardly reveal bugs related to other or new features. This paper presents Fuzz4All, the first fuzzer that is _universal_ in the sense that it can target many different input languages and many different features of these languages. The key idea behind Fuzz4All is to leverage large language models (LLMs) as an input generation and mutation engine, which enables the approach to produce diverse and realistic inputs for any practically relevant language. To realize this potential, we present a novel autoprompting technique, which creates LLM prompts that are well-suited for fuzzing, and a novel LLM-powered fuzzing loop, which iteratively updates the prompt to create new fuzzing inputs. We evaluate Fuzz4All on nine systems under test that take in six different languages (C, C++, Go, SMT2, Java and Python) as inputs. The evaluation shows, across all six languages, that universal fuzzing achieves higher coverage than existing, language-specific fuzzers. Furthermore, Fuzz4All has identified 76 bugs in widely used systems, such as GCC, Clang, Z3, CVC5, OpenJDK, and the Qiskit quantum computing platform, with 47 bugs already confirmed by developers as previously unknown. ## 1. Introduction Fuzz testing (Fuzz, 2016; 2017), also known as fuzzing, is an automated testing approach for generating inputs designed to expose unexpected behaviors, e.g., crashes, of a system under test (SUT). Researchers and practitioners have successfully built practical fuzzing tools, which have shown great success in finding numerous bugs and vulnerabilities in real-world systems (Brands et al., 2017). A particularly important family of SUTs are systems that take in programming or formal language inputs, e.g., compilers, runtime engines, constraint solvers, and literally any libraries with accessible APIs. Numerous fuzzers have been proposed for such systems since they are the fundamental building blocks for software development (Linging and Zhang, 2017), e.g., finding bugs in compilers and runtime engines is crucial because they can affect all corresponding downstream applications. Traditional fuzzers can be categorized as generation-based (Zhu et al., 2017; Zhang et al., 2017; Zhang et al., 2017) or mutation-based (Zhu et al., 2017; Zhang et al., 2017; Zhang et al., 2017). Generation-based fuzzers aim to directly synthesize complete code snippets, e.g., using a predefined grammar for the target language. Instead of synthesizing from scratch, mutation-based fuzzers apply mutation operators or transformation rules to a set of high quality fuzzing seeds. Unfortunately, both traditional fuzzing approaches face the following limitations and challenges: _C1: Tight coupling with target system and language._ Traditional fuzzers are often designed to target a specific language or a particular SUT. 
However, designing and implementing a fuzzer is extremely time-consuming. For example, Csmith (Zhu et al., 2017), a fuzzer for C/C++ compilers, has more than 80K lines of code, while Syzkaller (Zhu et al., 2017), a fuzzer for Linux system calls, contains tens of thousands of handcrafted rules (Brands et al., 2017) to generate and modify system calls. Because each target language is different, it is often non-trivial to reuse the effort of implementing a fuzzer from one input language to another. Furthermore, fuzzing strategies that work well for one SUT may not work at all for another one. _C2: Lack of support for evolution._ Real-world systems are constantly evolving, e.g., by adding new features to the input language. Traditional fuzzers designed for a specific version of a language or SUT may lose their effectiveness on a new version and cannot be easily used to test newly implemented features. For example, Csmith supports only a limited set of features up to C++11, while the C++ language has evolved significantly since then. In fact, recent work (Zhu et al., 2017) shows that over a six-month fuzzing period, Csmith was not able to uncover any new bugs in the latest releases of the popular GCC and Clang compilers, showing that new versions of compilers are becoming immune to existing fuzzers. _C3: Restricted generation ability._ Even within the scope of a specific target language, both generation-based and mutation-based fuzzing are often unable to cover a large part of the input space. Generation-based fuzzers rely heavily on an input grammar to synthesize valid code, and additionally are equipped with semantic rules that ensure the validity of the synthesized code. To generate a large number of valid fuzzing inputs or to side-step difficult-to-model language features, generation-based fuzzers often use a subset of the full language grammar, which limits them to test only a subset of all language features. Similarly, mutation-based fuzzers are limited by their mutation operators and require high-quality seeds that can be difficult to obtain.

**Our Work.** We present Fuzz4All, the first fuzzer that is _universal_ in the sense that it can target many different input languages and many different features of these languages. Our approach fundamentally differs from existing general-purpose fuzzers, e.g., AFL (Xu et al., 2019) and LibFuzzer (Ling et al., 2019), which use extremely simple mutations, are unaware of the target language, and therefore struggle to produce meaningful programming language fuzzing inputs. Instead, our key idea is to leverage a large language model (LLM) as an input generation and mutation engine. Because LLMs are pre-trained on large amounts of examples in various programming languages and other formal languages, they come with an implicit understanding of the syntax and semantics of these languages. Fuzz4All leverages this ability by using an LLM as a universal input generation and mutation engine. The inputs to Fuzz4All are user-provided documents describing the SUT, and optionally, specific features of the SUT to focus on, e.g., in the form of documentation, example code, or formal specifications. However, these user inputs may be too verbose to directly use as a prompt for the LLM. Instead of requiring the user to manually engineer a prompt (Ling et al., 2019), which is time-consuming, we present an _autoprompting_ step that automatically distills all user-provided inputs into a concise and effective prompt for fuzzing.
This prompt is the initial input to an LLM that generates fuzzing inputs. Since continuously sampling with the same prompt would lead to many similar fuzzing inputs, we present an _LLM-powered fuzzing loop_, which iteratively updates the prompt to generate a diverse set of fuzzing inputs. To this end, Fuzz4All combines fuzzing inputs generated in previous iterations with natural language instructions, e.g., asking to mutate these inputs. The LLM-generated fuzzing inputs are then passed to the SUT, which we validate against a user-provided test oracle, such as checking for system crashes. Fuzz4All addresses the previously discussed limitations and challenges of traditional fuzzers. Instead of meticulously designing a single-purpose fuzzer for a specific SUT (C1), Fuzz4All, by using an LLM as the generation engine, can be applied to a wide range of SUTs and input languages. Compared to existing fuzzers that target a specific version of the SUT or input language (C2), Fuzz4All can easily evolve with the target. For example, to fuzz-test a newly implemented feature, a user can simply provide documentation or example code related to that feature. To address the restricted generation ability of traditional fuzzers (C3), Fuzz4All exploits the fact that LLMs are pre-trained on billions of code snippets, enabling them to create a wide range of examples that likely obey the syntactic and semantic constraints of the target language/SUT. Finally, Fuzz4All does not require any instrumentation of the SUT, making the approach easily applicable in practice. We perform an extensive evaluation on six input languages (C, C++, SMT, Go, Java, and Python) and nine SUTs. For each of them, we compare our approach against state-of-the-art generation-based and mutation-based fuzzers. The results show that Fuzz4All achieves the highest code coverage across all languages, improving the previous state-of-the-art coverage by 36.8%, on average. Additionally, we demonstrate that Fuzz4All supports both general fuzzing and fuzzing targeted at specific features of the SUT, which a user decides upon by providing adequate input documents. Finally, Fuzz4All detects 76 bugs across our studied SUTs, with 47 already confirmed by developers as previously unknown. **Contributions:** This paper makes the following contributions: * **Universal fuzzing**. We introduce a new dimension for fuzzing that directly leverages the multi-lingual capabilities of LLMs to fuzz-test many SUTs with a wide range of meaningful inputs. * **Autoprompting for fuzzing**. We present a novel autoprompting stage to support both general and targeted fuzzing by automatically distilling user inputs into a prompt that is effective at generating inputs to the SUT. * **LLM-powered fuzzing loop**. We present an algorithm that continuously generates new fuzzing inputs by iteratively modifying the prompt with selected examples and generation strategies. * **Evidence of real-world effectiveness**. We show across six popular languages and nine real-world SUTs (e.g., GCC, CVC5, Go, javac, and Qiskit) that our approach significantly improves coverage compared to state-of-the-art fuzzers (avg. 36.8%) and detects 76 bugs, with 47 already confirmed as previously unknown. * **Continuous updating**. We plan to continue to apply Fuzz4All on additional targets and languages. Our code, dataset, and up-to-date progress can be found at: [https://fuzz4all.github.io](https://fuzz4all.github.io) ## 2. 
Background & Related Work

### Large Language Models

Recent developments in natural language processing (NLP) have led to the wide-spread adoption of large language models (LLMs) for both natural language and code-related tasks.

### Fuzzing

Generation-based fuzzers create complete code snippets using pre-defined grammars and built-in knowledge of the semantics of the target language. Csmith (Csmith, 2017) and YARPGen (Xu et al., 2018) hard-code language specifications to ensure the validity of generated code snippets to test C and C++ compilers, respectively. jsfunfuzz (Xu et al., 2018) combines a language grammar with historical bug-triggering code snippets to generate new inputs to test JavaScript engines. Generation-based fuzzers have also been used to test OpenCL (Xu et al., 2018), the JVM (JVM, 2017), CUDA (Xu et al., 2018) and deep learning compilers (Xu et al., 2019). Mutation-based fuzzers (Xu et al., 2019) iteratively perform transformations on seeds to generate new fuzzing inputs. In addition to basic mutations, researchers have developed complex transformations targeted at ensuring type consistency (JVM, 2017; JVM, 2017), adding historical bug-triggering code snippets (Xu et al., 2018; Xu et al., 2019), and coverage feedback (Xu et al., 2018; Xu et al., 2019; Xu et al., 2019). To benefit from both generation and mutation, many fuzzers use a combination of both approaches (Xu et al., 2018; JVM, 2017).

Different from the above fuzzers, which target specific SUTs or languages, another line of research is on general-purpose fuzzing. AFL (Xu et al., 2018) and libFuzzer (Xu et al., 2019) are general-purpose fuzzers that use genetic algorithms with a fitness function to prioritize fuzzing inputs for further mutations that achieve new coverage. These mutations are unaware of the SUT and focus on byte-level transformations. That is, when applied to SUTs that receive programming languages as input, general-purpose fuzzers are extremely unlikely to produce valid inputs. Recent work (Xu et al., 2018) has instead added regular expression-based mutation operators to match common programming statements (e.g., change + to -). The simplicity of these mutation operators limits the ability of such fuzzers to cover new code, especially in more complex languages, such as C (Xu et al., 2018; Xu et al., 2018). PolyGlot (Xu et al., 2018) is another language-agnostic fuzzer, which first parses the seed programs into a uniform intermediate representation using a language-specific grammar and then uses a set of mutation operators to generate new programs. While promising, PolyGlot still uses a limited set of mutations and cannot achieve the same level of coverage as fuzzers that are designed for a particular language (Xu et al., 2018).

To complement traditional fuzzing techniques and apply fuzzing to emerging domains, learning-based fuzzers have been proposed. Prior learning-based techniques mainly focus on training a neural network to generate fuzzing inputs. TreeFuzz (Xu et al., 2018) parses the training corpus into a tree structure and, through tree traversal, learns a probabilistic, generative model that synthesizes new fuzzing inputs. Deep learning models have been used to fuzz PDF parsers (Xu et al., 2018), OpenCL (Xu et al., 2018), C (Xu et al., 2018), network protocols (Xu et al., 2018), and JavaScript (Xu et al., 2018). Very recently, researchers have also directly leveraged LLMs for fuzzing specific libraries.
TitanFuzz (Xu et al., 2018) uses Codex (Xu et al., 2018) to generate seed programs and InCoder (Xu et al., 2019) to perform template-based mutation for fuzzing deep learning libraries (Xu et al., 2018; Xu et al., 2018). FuzzGPT (Xu et al., 2018) is another LLM-based deep learning library fuzzer, which leverages historical bug-triggering code snippets to either prompt or directly fine-tune LLMs towards generating more unusual code snippets for more effective fuzzing. Unlike prior learning- and LLM-based fuzzers, Fuzz4All is easily applicable across many programming languages. Prior work trains language-specific models or requires language-specific parsing. Even recent LLM-based techniques (Xu et al., 2018; Xu et al., 2018) are designed specifically for deep learning libraries with hand-crafted prompts or mutation patterns, and therefore cannot be easily extended to other SUTs. Furthermore, unlike existing techniques, which produce general fuzzing inputs in a particular language, Fuzz4All additionally supports targeted fuzzing, which can generate code snippets that focus on selected features.

In addition to fuzzing, LLMs have also been applied to the related problem of unit-test generation (Xu et al., 2018; Xu et al., 2018; Xu et al., 2019; Xu et al., 2019; Xu et al., 2019). CodaMosa (Xu et al., 2018) interleaves traditional search-based software testing with querying Codex to generate new unit tests whenever a coverage plateau is reached. TestPilot (Xu et al., 2019) prompts Codex with method source code and example usages to generate unit tests and to fix incorrectly generated tests. In contrast to these LLM-based test generators, which require a specific type of input (e.g., function source code) and only work for unit testing (Xu et al., 2019; Xu et al., 2019), by using our novel autoprompting stage, Fuzz4All can take inputs in arbitrary formats for both general and targeted fuzzing. Furthermore, such unit-test generators often require manual work to check/complete the tests, as even state-of-the-art LLMs (Xu et al., 2019; Xu et al., 2019) cannot always produce reliable oracles. Instead, Fuzz4All leverages widely-used fuzzing oracles, such as crashes, and is fully automated.

## 3. Fuzz4All Approach

We present Fuzz4All, a universal fuzzer that leverages LLMs to support both general and targeted fuzzing of any SUT that takes in programming language input. Figure 1 provides an overview of our approach. Fuzz4All first takes in arbitrary _user input_ that describes the fuzzing inputs to be generated, e.g., documentation of the SUT, example code snippets, or specifications. As the user input may be long, redundant, and partially irrelevant, the approach distills it into a concise but informative prompt for fuzzing. To this end, Fuzz4All performs an _autoprompting_ step (Section 3.1) by using a large, state-of-the-art _distillation LLM_ to sample multiple different candidate prompts. Each candidate prompt is passed on to the _generation LLM_ to generate code snippets (i.e., fuzzing inputs). Fuzz4All then selects the prompt that produces the highest quality fuzzing inputs.

Fuzz4All builds on two models, a distillation LLM that reduces the given user input and a generation LLM that creates the fuzzing inputs, to balance the trade-off between the costs and benefits different LLMs provide. Because the distillation LLM needs to understand and distill arbitrary user input, we use a high-end, large foundational model with strong natural language understanding abilities.
However, directly using such a large model for input generation would be inefficient due to the high inference cost of autoregressive generation. Instead, to perform efficient fuzzing, Fuzz4All uses a smaller model as the generation LLM. While our approach is general across any pair of distillation and generation LLMs, we implement Fuzz4All with the state-of-the-art GPT4 (Xu et al., 2019) and StarCoder (Xu et al., 2018).

Using the best prompt selected via autoprompting as the initial input prompt for the generation LLM, we then move on to the _fuzzing loop_ (Section 3.2), where Fuzz4All continuously samples the generation LLM to generate fuzzing inputs. To avoid generating many similar fuzzing inputs, Fuzz4All continuously updates the input prompt in each iteration. Specifically, the approach selects a previously generated input as an _example_, which demonstrates the kind of future inputs we want the model to generate. In addition to the example, Fuzz4All also appends a _generation instruction_ to the initial prompt, which guides the model toward generating new fuzzing inputs. This process is repeated while continuously passing the generated fuzzing inputs into the SUT and checking its behavior against a user-defined oracle, such as crashes.

### Autoprompting

The following presents the details of the first of two main steps of Fuzz4All, which distills the given user input via autoprompting into a prompt suitable for fuzzing. The user input may describe the SUT in general, or a particular feature of the SUT to be tested. As shown in Figure 1, user inputs may include technical documentation, example code, specifications, or even combinations of different modalities. Unlike traditional fuzzers that require inputs to follow a specific format, e.g., code snippets to use as seeds or well-formed specifications, Fuzz4All can directly understand the natural language descriptions or code examples in the user input. However, some information in the user input may be redundant or irrelevant, and hence, directly using the user inputs as a prompt for the generation LLM may be ineffective, as confirmed by our ablation study in Section 5.3. Therefore, the goal of autoprompting is to generate a distilled input prompt that enables effective LLM-based fuzzing.

#### 3.1.1. Autoprompting Algorithm

Algorithm 1 details Fuzz4All's autoprompting step. The inputs are the user input and the number of candidate prompts to generate. The final output is the input prompt selected to be used for the fuzzing campaign. As our goal is to use a distillation LLM to generate prompts that distill the information provided by the user, we give the following autoprompting instruction to the distillation LLM: "Please summarize the above information in a concise manner to describe the usage and functionality of the target". Let \(\mathcal{M}_{\mathcal{D}}\) be the distillation LLM, userInput be the user input and APInstruction be the autoprompting instruction. Generating a prompt can then be formalized as sampling from the conditional distribution \(\mathcal{M}_{\mathcal{D}}(\texttt{prompt}\,|\,\texttt{userInput},\texttt{APInstruction})\).

Fuzz4All first generates a candidate prompt using greedy sampling with temperature 0 (line 2). By first sampling with low temperature, the algorithm obtains a plausible solution with a high degree of confidence. This approach is commonly used in other domains, e.g., program synthesis (Grover et al., 2017), where the greedy output is evaluated first to check if it can solve the problem.
The algorithm then moves on to sampling with higher temperature to obtain more diverse prompts (line 5), as done in prior work (Grover et al., 2017; Krizza et al., 2017). Compared to greedy sampling, sampling with high temperature yields different prompts that can each provide a unique distilled summary of the user input. Each generated prompt is added to a list of candidate prompts (line 6), until the algorithm reaches the desired number of candidates.

To pick the best input prompt to be used in the fuzzing step, the algorithm evaluates each candidate prompt by performing a small-scale fuzzing experiment. Specifically, the approach uses each prompt as an input to the generation LLM to produce multiple code snippets per prompt. Fuzz4All then scores the generated code snippets for each prompt based on a scoring function. While the scoring function can be based on a variety of different metrics, e.g., coverage, bug finding, or the complexity of generated fuzzing inputs, to make the approach lightweight and general, our scoring function is the number of unique generated code snippets that are valid, i.e., accepted by the target SUT. This metric is chosen since, for fuzzing, we want inputs to be valid or close to valid to trigger logic deep inside the SUT. Let \(\mathcal{M}_{\mathcal{G}}\) be the generation LLM, p be a candidate prompt, and isValid be the function that returns 1 if a generated code snippet c is valid and 0 if it is invalid. Our default scoring function is defined as \(\sum_{c\in\mathcal{M}_{\mathcal{G}}(\texttt{p})}\texttt{isValid}(c,\texttt{SUT})\). Finally, Fuzz4All selects the input prompt with the highest score (line 7) as the initial input prompt to be used for fuzzing. In summary, our autoprompting step combines both prompt generation and scoring, which allows Fuzz4All to automatically generate/select a prompt suitable for the fuzzing target.

Figure 1. Overview of Fuzz4All.

#### 3.1.2. Example: Autoprompting

Figure 2 shows an example of an input prompt generated by our autoprompting algorithm. The example is for fuzzing C++ compilers while focusing specifically on std::expected, a new feature introduced in C++23. As the user input, we pass the original cppreference documentation (Bauer et al., 2017) to Fuzz4All, which spans multiple screen lengths with small tables and verbose descriptions (498 words, 3262 characters). In contrast, the distilled input prompt created by the autoprompting algorithm provides a more concise natural language description of the targeted feature (214 words, 1410 characters). The input prompt contains a high-level description of how std::expected is to be used. For example, the input prompt contains a concise sentence (highlighted in orange) that summarizes the situations the feature is useful in. Additionally, the input prompt contains descriptions of the inputs, as well as the different usages (i.e., member functions) of the feature. For example, the functions and_then, transform, or_else, and transform_error have very similar descriptions in the original documentation, which is repeated for each function. Instead, in the distilled input prompt, these functions are grouped together in a concise manner that still illustrates how they can be used. Using the distilled input prompt, Fuzz4All can generate fuzzing inputs that effectively target the std::expected feature of C++ compilers.
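To make the autoprompting step concrete, the following is a minimal, illustrative Python sketch of Algorithm 1. The `distill_llm`, `gen_llm`, and `sut` objects, together with their `generate`, `generate_batch`, and `is_valid` methods, are placeholder names for exposition only and do not correspond to Fuzz4All's actual code.

```python
AP_INSTRUCTION = ("Please summarize the above information in a concise manner "
                  "to describe the usage and functionality of the target")

def autoprompt(distill_llm, gen_llm, sut, user_input,
               num_candidates=4, samples_per_prompt=30):
    """Distill verbose user input into a fuzzing-friendly prompt (sketch of Algorithm 1)."""
    query = user_input + "\n" + AP_INSTRUCTION
    # Greedy candidate first (temperature 0), then diverse high-temperature samples.
    candidates = [distill_llm.generate(query, temperature=0.0)]
    while len(candidates) < num_candidates:
        candidates.append(distill_llm.generate(query, temperature=1.0))

    def score(prompt):
        # Number of unique generated snippets accepted by the SUT.
        snippets = set(gen_llm.generate_batch(prompt, n=samples_per_prompt))
        return sum(1 for snippet in snippets if sut.is_valid(snippet))

    # Keep the candidate whose small-scale fuzzing run yields the most valid inputs.
    return max(candidates, key=score)
```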
#### 3.1.3. Comparison with Existing Autoprompting Techniques

To the best of our knowledge, we are the first to automatically distill knowledge from arbitrary user inputs for a software engineering task using black-box autoprompting. Compared to prior work on autoprompting in NLP (Narayanan et al., 2017) and software engineering (Narayanan et al., 2017), which optimize the prompt by accessing model gradients, our autoprompting needs only black-box, sampling access to the distillation LLM. While the use of a scoring function to evaluate each prompt is similar to recent work in NLP (Narayanan et al., 2017), our scoring function directly evaluates the prompt on the exact downstream task of generating valid code snippets, instead of using an approximate proxy scoring function.

### Fuzzing Loop

Given the input prompt created in the first step of Fuzz4All, the goal of the fuzzing loop is to generate diverse fuzzing inputs using a generation LLM. However, due to the probabilistic nature of LLMs, sampling multiple times using the same input would produce the same or similar code snippets. For fuzzing, we aim to avoid such repeated inputs and instead want to generate a diverse set of fuzzing inputs that cover new code and discover new bugs. To accomplish this goal, we exploit the ability of LLMs to utilize both examples and natural language instructions to guide the generation.

The high-level idea of the fuzzing loop is to continuously augment the original input prompt by selecting an example fuzzing input from previous iterations and by specifying a generation strategy. The goal of using an example is to demonstrate the kind of code snippet we want the generation LLM to produce. The generation strategies are designed as instructions on what to do with the provided code example. These strategies are inspired by traditional fuzzers, mimicking their ability to synthesize new fuzzing inputs (as in generation-based fuzzers) and to produce variants of previously generated inputs (as in mutation-based fuzzers). Before each new iteration of the fuzzing loop, Fuzz4All appends both an example and a generation strategy to the input prompt, enabling the generation LLM to continuously create new fuzzing inputs.

#### 3.2.1. Fuzzing Loop Algorithm

Algorithm 2 describes the fuzzing loop. The inputs are the initial input prompt and the fuzzing budget. The final output is a set of bugs identified by the user-defined oracle. First, the algorithm initializes the generation strategies (generate-new, mutate-existing, and semantic-equiv), which will be used to modify the input prompt during the fuzzing loop (line 2). Figure 3 (top-right) lists our three generation strategies along with their corresponding instructions. For the first invocation of the generation LLM, denoted with \(\mathcal{M}_{\mathcal{G}}\), the algorithm does not yet have any examples of fuzzing inputs. Hence, it appends to the input prompt the generate-new generation instruction, which guides the model toward producing a first batch of fuzzing inputs (line 3).
```
1   Function FuzzingLoop:
        Input : inputPrompt, timeBudget
        Output: bugs
2       genStrats     <- [generate-new, mutate-existing, semantic-equiv]
3       fuzzingInputs <- M_G(inputPrompt + generate-new)
4       bugs          <- Oracle(fuzzingInputs, SUT)
5       while timeElapsed < timeBudget do
6           example       <- sample(fuzzingInputs, SUT)
7           instruction   <- sample(genStrats)
8           fuzzingInputs <- M_G(inputPrompt + example + instruction)
9           bugs          <- bugs + Oracle(fuzzingInputs, SUT)
10      return bugs
```
**Algorithm 2** Fuzzing loop

Next, the algorithm enters the main fuzzing loop (lines 5-9), which continuously updates the prompt to create new fuzzing inputs. To this end, the algorithm selects an example from the previous batch of generated fuzzing inputs, randomly picking from all those fuzzing inputs that are valid for the SUT (line 6). In addition to the example, the algorithm also randomly picks one of the three generation strategies (line 7). The generation strategy either instructs the model to mutate the selected example (mutate-existing), to produce a fuzzing input that is semantically equivalent to the example (semantic-equiv), or to come up with a new fuzzing input (generate-new). The algorithm concatenates the initial input prompt, the selected example, and the selected generation strategy into a new prompt, and then queries the generation LLM with this prompt to produce another batch of fuzzing inputs (line 8). The main fuzzing loop is repeated until the algorithm has exhausted the fuzzing budget. For each created fuzzing input, Fuzz4All passes the input to the SUT. If the user-defined oracle identifies an unexpected behavior, e.g., a crash, then the algorithm adds a report to the set of detected bugs (lines 4 and 9).

Figure 2. Autoprompting result for std::expected.

#### 3.2.2. Example: Fuzzing Loop

Figure 3 illustrates how our fuzzing loop uses input examples and the generation strategies to create different fuzzing inputs. In this case, we are fuzzing an SMT solver where the inputs are logic formulas written in the SMT2 language. Initially, there are no examples, and hence, the algorithm uses the generate-new strategy to synthesize new fuzzing inputs. Next, taking a generated, valid fuzzing input as an example, the algorithm queries the model to create a new input based on the mutate-existing strategy, which aims to mutate the selected example. We observe that the new fuzzing input subtly modifies the previous input by swapping the type of a variable as well as adding some computation. In the next fuzzing iteration, the algorithm selects the previously generated fuzzing input as the example and uses the semantic-equiv generation strategy, which aims to create an input that does not modify the semantics of the given example. This time, we observe that the new fuzzing input simply adds a syntax tag to the selected example. In fact, the combination of generation strategies shown in the example helps Fuzz4All to generate a fuzzing input that causes an unexpected crash in the SMT solver. The crash exposes one of the real-world bugs detected by Fuzz4All during our evaluation, which has been confirmed and fixed by developers.

#### 3.2.3. Oracle

The fuzzing inputs produced by Fuzz4All during the fuzzing loop can be used to check the behavior of the SUT against an oracle to detect bugs. The oracle is custom for each SUT, and it can be fully defined and customized by the user.
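The loop above can be sketched in a few lines of Python. As with the earlier sketch, `gen_llm`, `sut`, and `oracle` stand for an illustrative interface (an LLM sampler, a validity check, and a user-supplied bug oracle); the instruction wording is paraphrased, and none of this is Fuzz4All's actual implementation.

```python
import random
import time

STRATEGIES = {  # paraphrased generation instructions (illustrative wording)
    "generate-new":    "Please create a new program for the target.",
    "mutate-existing": "Please mutate the above program into a new program.",
    "semantic-equiv":  "Please create a semantically equivalent variant of the above program.",
}

def fuzzing_loop(gen_llm, sut, oracle, input_prompt, time_budget_s, batch_size=30):
    """Sketch of Algorithm 2: re-prompt the generation LLM with an example and a strategy."""
    deadline = time.time() + time_budget_s
    batch = gen_llm.generate_batch(input_prompt + STRATEGIES["generate-new"], n=batch_size)
    bugs = oracle(batch, sut)
    while time.time() < deadline:
        valid = [snippet for snippet in batch if sut.is_valid(snippet)]
        example = random.choice(valid) if valid else ""            # example from previous batch
        instruction = STRATEGIES[random.choice(list(STRATEGIES))]  # randomly chosen strategy
        batch = gen_llm.generate_batch(input_prompt + example + "\n" + instruction, n=batch_size)
        bugs += oracle(batch, sut)                                 # user-defined oracle, e.g., crashes
    return bugs
```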
For example, when fuzzing C compilers, a user could define a differential testing oracle that compares the compiler behavior under different optimization levels (FuzzAtll, 2017). In this paper, we focus on simple and easy-to-define oracles, such as crashes due to segmentation faults and internal assertion failures, with more details discussed in Section 4.2.

## 4. Experimental Design

We evaluate Fuzz4All on the following research questions:

* **RQ1:** How does Fuzz4All compare against existing fuzzers?
* **RQ2:** How effective is Fuzz4All in performing targeted fuzzing?
* **RQ3:** How do different components contribute to Fuzz4All's effectiveness?
* **RQ4:** What real-world bugs does Fuzz4All find?

### Implementation

Fuzz4All is primarily implemented in Python. The autoprompting and fuzzing loop components of Fuzz4All contain only 872 LoC. Compared to traditional fuzzers, such as Csmith (>80K LoC), which need high manual effort to implement generators, Fuzz4All has a very lightweight implementation. Fuzz4All uses GPT4 (FuzzAtll) as the distillation LLM to perform autoprompting since this model is the state-of-the-art for a wide range of NLP-based reasoning tasks (FuzzAtll, 2017). Specifically, we use the gpt-4-0613 checkpoint with max_token of 500 provided via the OpenAI API (FuzzAtll, 2017). For autoprompting, we sample four candidate prompts, generate 30 fuzzing inputs each, and evaluate using a scoring function based on validity rate (as described in Section 3.1.1). For the fuzzing loop, we use the Hugging Face implementation of the StarCoder (FuzzAtll) model as the generation LLM, which is trained on over one trillion code tokens across over 80 languages. Our default setting when generating fuzzing inputs uses a temperature of 1, a batch size of 30, and a maximum output length of 1,024 tokens, using nucleus sampling (Sutton et al., 2017) with a top-p of 1.

### Systems Under Test and Baselines

To demonstrate the generality of Fuzz4All, we evaluate it on six input languages and nine SUTs. Table 1 shows each of the languages, SUTs, and the corresponding baseline tools. Note that we compare coverage on one SUT per language, with the SUT versions used for coverage measurements shown in the last column of Table 1. Except for the coverage experiments, we perform fuzzing on the nightly release of each target. Unless otherwise mentioned, we use unexpected compiler crashes as the oracle and consider a fuzzing input as valid if it compiles successfully. Each baseline fuzzer is run with its default settings. For baseline fuzzers that require input seeds, we use the default seed corpus provided in their replication repository. We now present more evaluation details for each SUT.

#### 4.2.1. C/C++ Compilers

We target the popular GCC and Clang compilers and provide the standard C library documentation as user input to Fuzz4All by default. Our baselines include Csmith (FuzzAtll, 2017), a classic generation-based C compiler fuzzer, and GrayC (FuzzAtll, 2017), a recent mutation-based fuzzer that uses coverage feedback together with specialized mutation operators. For C++, we target new C++23 features by providing the C++23 standard documentation as input to Fuzz4All.
Our baseline is YARPGen (FuzzAtll, 2017), a generation-based fuzzer that extends Csmith with new language features in C++ and generation policies to trigger different compiler optimizations.

| **Language** | **SUT(s)** | **Baseline tool(s)** | **Version** |
| --- | --- | --- | --- |
| C | GCC, Clang | GrayC, Csmith | GCC-13.1.1 |
| C++ | G++, Clang++ | YARPGen | G++-13.1.1 |
| SMT2 | Z3, CVC5 | TypeFuzz | CVC5-1.0 |
| Go | Go | go-fuzz | go-1.20.6 |
| Java | javac | Hephaestus | OpenJDK-javac-18 |
| Python | Qiskit | MorphQ | qiskit-0.43.1 |

Table 1. SUTs and baseline tools.

Figure 3. Fuzzing strategies and example of fuzzing loop.

#### 4.2.2. SMT Solvers

We run Fuzz4All on Z3 and CVC5 with commonly enabled developer settings, such as debug and assertion, following prior work (Fuzz4All, 2017; Fuzz4All, 2018). Fuzz4All generates SMT formulas as fuzzing inputs, using an overview documentation of the SMT2 language and SMT solver as input by default. A fuzzing input is considered valid if the SMT solver returns either SAT or UNSAT without any error. Our baseline is the state-of-the-art TypeFuzz (Fuzz4All, 2018), which mutates existing SMT expressions based on newly generated expressions of the same type.

#### 4.2.3. Go Toolchain

We run Fuzz4All on the most recent version of Go. By default, we use the Go standard library documentation as input to Fuzz4All. As a baseline, we use go-fuzz (Fuzz4All, 2018), a coverage-guided, mutation-based fuzzer designed for Go, which generates inputs for various Go standard libraries using handwritten templates.

#### 4.2.4. Java Compiler

We evaluate Fuzz4All on the OpenJDK Java compiler, javac, which compiles source code into bytecode. Our default input is the latest standard Java API documentation page. We compare against Hephaestus (Hephaestus, 2018), a recent combined generation- and mutation-based fuzzer designed for JVM compilers and targeting type-related bugs.

#### 4.2.5. Quantum Computing Platform

We target Qiskit (Hephaestus, 2018), a popular quantum computing framework (Krishnan and Kwiepka, 2018). Qiskit is built on top of Python, i.e., both the input program and the compilation are defined in Python code. Thus, creating a valid input for Qiskit means using the Qiskit Python APIs in a meaningful way, e.g., to create a quantum circuit. It is challenging for traditional synthesis tools to handle dynamically typed general-purpose languages (like Python) (Zhu and Kwiepka, 2018; Kwiepka, 2018), not to mention the additional API constraints, making fuzzing Qiskit a particularly difficult challenge. Our baseline is MorphQ (Fuzz4All, 2018), a recent fuzzer that uses a template- and grammar-based approach to generate valid quantum programs and then applies metamorphic transformations. Unlike for the other SUTs, which receive fuzzing inputs in a file, to invoke Qiskit, we must run the generated Python program itself. As an oracle, we add statements at the end of the generated Python file, which collect all QuantumCircuit objects via Python's built-in introspection APIs and then apply two oracles on each circuit. The two oracles are directly borrowed from previous work for a fair comparison (Kwiepka, 2018). The first oracle compiles the circuit via a transpile call with different optimization levels and reports any crash. The second oracle converts the circuit to its lower-level QASM (Krishnan and Kwiepka, 2018) representation and then reads it back, reporting any crash.
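These two Qiskit oracles can be illustrated with a short, hedged Python sketch. The circuit-collection step is simplified here, and exact API details may differ across Qiskit versions; this is an illustration of the idea, not the code used in the evaluation.

```python
from qiskit import QuantumCircuit, transpile

def apply_qiskit_oracles(circuits):
    """Apply the two crash oracles to each collected QuantumCircuit (illustrative sketch)."""
    findings = []
    for qc in circuits:
        # Oracle 1: transpile the circuit at every optimization level; a raised
        # exception (or process crash) is reported as a potential bug.
        for level in range(4):
            try:
                transpile(qc, optimization_level=level)
            except Exception as exc:
                findings.append(("transpile", level, repr(exc)))
        # Oracle 2: export the circuit to QASM and read it back.
        try:
            QuantumCircuit.from_qasm_str(qc.qasm())
        except Exception as exc:
            findings.append(("qasm-roundtrip", repr(exc)))
    return findings
```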
### Experimental Setup and Metrics

**Fuzzing campaigns.** For RQ1, we use a fuzzing budget of 24 hours (including autoprompting), which is commonly used in prior work (Fuzz4All, 2018). To account for variance, we repeat the experiment for both Fuzz4All and the baselines five times. Due to the high cost of experiments, for later RQs, we use a fuzzing budget of 10,000 generated fuzzing inputs and repeat four times for the ablation study.

**Environment.** Experiments are conducted on a 64-core workstation with 256 GB RAM running Ubuntu 20.04.5 LTS with 4 NVIDIA RTX A6000 GPUs (only one GPU is used per fuzzing run).

**Metrics.** We use the widely adopted measure of _code coverage_ for evaluating fuzzing tools (Fuzz4All, 2018; Fuzz4All, 2018; Fuzz4All, 2018). To be uniform, we report the line coverage for each of the targets studied in the evaluation. Following prior work (Fuzz4All, 2018), we use the Mann-Whitney U-test (Mann and Whitney, 1992) to compute statistical significance and indicate significant (p < 0.05) coverage results in applicable tables (Tables 2 and 4) with \({}^{*}\). We additionally measure the _validity rate_ (% valid) of inputs as the percentage of generated fuzzing inputs that are valid and unique. As Fuzz4All supports both general and targeted fuzzing, to assess the effectiveness of targeted fuzzing, we report the _hit rate_, i.e., the percentage of fuzzing inputs that use a specific target feature (checked with simple regular expressions). Finally, we also report the most important metric and goal of fuzzing: the number of bugs detected by Fuzz4All for each of our nine SUTs.

## 5. Results

### RQ1: Comparison Against Existing Fuzzers

Recall that during each iteration of Fuzz4All's fuzzing loop, the original input prompt is updated with both a new example and a generation strategy (Section 3.2), nudging the LLM to generate new fuzzing inputs. We hypothesize that this allows Fuzz4All to effectively generate new and diverse fuzzing inputs even after a long period of fuzzing, leading to sustained coverage increase.

#### 5.1.2. Generation Validity, Number, and Coverage

We examine the number of fuzzing inputs generated and their validity rate across our studied SUTs. In Table 2, Column "# programs" represents the number of unique inputs generated, "% valid" is the percentage of fuzzing inputs that are valid, and "Coverage" shows the final coverage obtained by each fuzzer along with the relative improvement over the best baseline. We first observe that almost all traditional fuzzing tools achieve a very high validity rate, apart from Hephaestus, which purposefully generates invalid code (focused on incorrect types) to check for miscompilation bugs. In contrast, Fuzz4All has a lower percentage of valid fuzzing inputs generated (56.0% average reduction compared to baseline tools). Furthermore, the raw number of fuzzing inputs generated by baseline tools is also much higher. By using an LLM as the generation engine, Fuzz4All is bottlenecked by GPU inference, leading to 43.0% fewer fuzzing inputs compared to traditional fuzzers. In spite of the lower validity rate and number of fuzzing inputs, Fuzz4All generates much more diverse programs compared to traditional fuzzing tools, as evidenced by the high coverage obtained (+36.8% average increase).
Additionally, even invalid code snippets that are close to valid can be useful for fuzzing, as they allow for finding bugs in the validation logic of the SUT. In Section 5.4, we further describe the various types of bugs detected by Fuzz4All, with both valid and invalid code snippets, to additionally showcase the benefit of generating diverse fuzzing inputs.

We note that Fuzz4All achieves a wide range of validity rates and numbers of fuzzing inputs across different SUTs. The number of fuzzing inputs varies across targets due to the varying cost of invoking the SUT after each fuzzing iteration for bug detection. Regarding validity rate, a general-purpose programming language, such as C, has a relatively lower validity rate compared to domain-specific languages, such as the SMT2 language used for SMT solvers. A more rigorous language, e.g., Go, which does not allow any declared but unused variables, has an even lower validity rate. We also observe a low validity rate for fuzzing quantum computing platforms. As quantum computing is an emerging area with its own set of library APIs, the generation LLM may not have seen as many examples of quantum programs during its training as for more established languages. Nevertheless, Fuzz4All is still able to leverage user-provided documentation to generate interesting fuzzing inputs, which leverage quantum library APIs and achieve an impressive coverage improvement (+75.6%) compared to the state-of-the-art fuzzer.

Figure 4. Coverage trend of Fuzz4All against state-of-the-art fuzzers in a 24-hour fuzzing campaign.

### RQ2: Effectiveness of Targeted Fuzzing

We now evaluate the ability of Fuzz4All to perform targeted fuzzing, i.e., to generate fuzzing inputs that focus on a particular feature. For each target SUT and language, we test by targeting three different example features and compare them to the setup with general user input, as used for RQ1 (described in Section 4.3). These features are built-in libraries or functions/APIs (Go, C++ and Qiskit), language keywords (C and Java), and theories (SMT). The user input for the targeted fuzzing runs is documentation of the particular feature we are focusing on. Table 3 shows the results of targeted fuzzing as well as the default general fuzzing used in RQ1. Each column represents a targeted fuzzing run where we focus on one feature. The value in each cell shows the hit rate of the feature (Section 4.3) for a particular fuzzing run. We also include the coverage results obtained.

We observe that targeting a specific feature yields a high proportion of fuzzing inputs that directly use the feature, with an average hit rate of 83.0%. This result demonstrates that Fuzz4All indeed performs targeted fuzzing by prompting the generation LLM with an input prompt that describes a particular feature. Furthermore, we observe that fuzzing on features that are related can lead to a moderately high cross-feature hit rate (i.e., the hit rate of feature x on the fuzzing run for feature y). For example, the C keywords typedef and union are both related to type operations, and hence, their cross-feature hit rate is high compared to an unrelated feature, such as goto. As shown in Table 3, a general fuzzing approach, while achieving the highest overall code coverage, can be extremely inefficient in targeting a specific feature (average 96.0% reduction in hit rate compared with Fuzz4All's targeted fuzzing). For example, in Qiskit, the general fuzzing campaign has a 0% hit rate for all three target features.
This can be explained by the fact that these features were added recently to Qiskit and are not yet widely used, thus being extremely rare in the LLM training data. However, by providing suitable user input during the targeted fuzzing campaign, Fuzz4All can successfully generate fuzzing inputs that use these new features. This ability of Fuzz4All will be valuable to developers who want to test novel features or components of a SUT.

### RQ3: Ablation Study

To study how each component of Fuzz4All contributes to the overall fuzzing effectiveness, we conduct an ablation study based on the two key components of Fuzz4All: (a) Autoprompting, the type of initial input prompt provided to the generation LLM; (b) Fuzzing loop, the use of selected examples and generation strategies. We study three variants for each of the two key components. Table 4 shows the coverage and validity rate of our studied variants.

#### 5.3.1. Autoprompting

First, we examine the effect of different initial inputs provided to the generation LLM. To reduce the impact of additional factors, we fix the generation strategy to only use generate-new and study three variants: 1) no input: does not use any initial prompt, 2) raw prompt: directly uses the raw user input as the initial prompt, 3) autoprompt: applies autoprompting to generate the initial prompt. We observe that across all studied languages, the no input variant achieves the lowest coverage. In no input, we do not provide any initial prompt and thus no useful information on the features we want to generate fuzzing inputs for. As such, the LLM can only generate simple code snippets with a high validity rate but is less effective in covering the SUT. We observe a coverage boost as we use the raw prompt variant, where we provide the raw documentation as the initial prompt. However, we can further improve both the code coverage and the validity rate by using our autoprompting stage to distill the user input into a concise but informative prompt (autoprompt), instead of using the raw user input. Directly using the user-provided input may include information that is irrelevant for fuzzing, leading to both a lower validity rate (as the generation LLM may struggle to understand the raw documentation) and lower coverage (since, unlike our autoprompting-generated prompt, the raw documentation is not designed to be used for LLM generation).

#### 5.3.2. Fuzzing loop

Next, we examine the different variants of our fuzzing loop setup by keeping the initial prompt the same (using the default autoprompting): 1) w/o example: does not select an example during the fuzzing loop (i.e., it continuously samples from the same initial prompt), 2) w/ example: selects an example but only uses the generate-new instruction, 3) Fuzz4All: the full approach with all generation strategies used. We first observe that by only sampling from the same input (w/o example), LLMs will often repeatedly generate the same or similar fuzzing inputs. On average, 8.0% of the fuzzing inputs generated are repeated in w/o example compared to only 4.7% when using the full Fuzz4All approach. Adding an example to the input prompt (w/ example) avoids sampling from the same distribution and improves both coverage and validity rate. Finally, the full Fuzz4All approach achieves the highest coverage across all SUTs.
Compared to the w/ example variant (the second-best), the full Fuzz4All adds the additional generation strategies semantic-equiv and mutate-existing, which help to further provide useful instructions to the generation LLM.

**C targeted campaign (keywords)**

| Hit rate | typedef | union | goto | General |
| --- | --- | --- | --- | --- |
| typedef | **83.11%** | 47.16% | 0.48% | 4.38% |
| union | 10.80% | **80.43%** | 0.10% | 0.32% |
| goto | 0.22% | 0.11% | **77.62%** | 1.16% |
| Coverage | 123,226 | 125,041 | 120,452 | 188,148 |

**C++ targeted campaign (built-in functions)**

| Hit rate | apply | expected | variant | General |
| --- | --- | --- | --- | --- |
| apply | **70.23%** | 0.41% | 0.68% | 0.32% |
| expected | 0.26% | **79.72%** | 0.94% | 1.33% |
| variant | 1.16% | 5.98% | **93.19%** | 3.63% |
| Coverage | 182,261 | 175,963 | 182,333 | 193,254 |

**SMT targeted campaign (theories)**

| Hit rate | Array | BitVec | Real | General |
| --- | --- | --- | --- | --- |
| Array | **82.23%** | 2.08% | 1.44% | 11.07% |
| BitVec | 2.57% | **88.48%** | 0.86% | 5.46% |
| Real | 1.45% | 0.17% | **96.01%** | 17.36% |
| Coverage | 46,392 | 48,841 | 47,619 | 52,449 |

Table 3. Targeted fuzzing results: hit rate of each feature under each targeted campaign and under general fuzzing, together with the coverage achieved (C, C++, and SMT campaigns shown; a Go campaign over built-in libraries was run as well).
We note that this bug cannot be found by prior techniques since they simply do not support the noexcept feature. The developers have already confirmed and fixed this bug. Interestingly, they even added a slightly modified version of our submitted code snippet to the official test suite of GCC. Figure 4(b) shows a bug found in Clang, where the invalid code leads to a segmentation fault. Fuzz4All uses an unusual syntax for function declaration (i.e., auto x (...) -> return_type), which makes use of the decltype operation in C++. However, the bug occurs when the throw statement inside of the decltype is evaluated first, skipping the evaluation of the return type since throw exits the scope early and crashes Clang. This code, while invalid, is still useful to reveal a bug in the Clang frontend as confirmed by developers. Additionally, prior fuzzing tools can hardly find this bug since they typically focus on generating valid code only and do not handle the especially difficult-to-model decltype function. Figure 4(c) shows a bug found in Go where a nil input causes a segmentation fault instead of producing a useful failure message. This bug is found by targeting the runtime Go standard library, where we provide the documentation, which includes the description of the ReadMemStats function. The bug has been confirmed and fixed by the developers. While this bug might look simple (invoking a singular function), it cannot be found by the co-Fuzz baseline simply because go-Fuzz requires manually written templates to target specific libraries, and runtime is not a part of any such template. With Fuzz4All, users can directly target any Go standard libraries by providing relevant input information (e.g., documentation). Figure 4(d) shows a bug found in Qiskit's QASM exporter. A quantum program, represented by the qc variable, is exported to QASM, a low level representation, silently generating an invalid output file, which leads to a crash when being reimported. The problem is that the exporter represents the register in QASM using its name as identifier, i.e.,"crz", which also is the name of a well-known operation of the QASM language, thus making the generated code ambiguous. Note that prior work (Stein times and check for statistically significant results. Since the generation LLM leverages the knowledge acquired during its training done within the last year, reapplying Fuzz4All using the exact checkpoint of the LLM (StarCoder) used in this work might degrade the effectiveness in the future due to data-shift. Fuzz4All can mitigate this using the autoprompting step where more up-to-date documentation/example code allows the model to also generate up-to-date fuzzing inputs. One additional threat comes from the use of the distillation LLM to generate the initial inputs, where the LLM may "hallucinate", i.e., produce made-up or inaccurate information (Fuzz4All, 2017). This limitation is common to most pipelines that use LLMs, and we hope to address it in our future work. ## 7. Conclusion We present Fuzz4All, a universal fuzzer leveraging LLMs to support both general and targeted fuzzing of arbitrary SUTs that take in a multitude of programming languages. Fuzz4All uses a novel autoprompting stage to produce input prompts that concisely summarize the user-provided inputs. In its fuzzing loop, Fuzz4All iteratively updates the initial input prompt with both code examples and generation strategies aimed at producing diverse fuzzing inputs. 
Evaluation results on nine different SUTs across six different languages demonstrate that Fuzz4All is able to significantly improve coverage compared to state-of-the-art tools. Furthermore, Fuzz4All is able to detect 76 bugs with 47 already confirmed by developers as previously unknown.
2307.12977
On ordinary differentially large fields
We provide a characterisation of differentially large fields in arbitrary characteristic and a single derivation in the spirit of the Blum axioms for differentially closed fields. In the case of characteristic zero, we use these axioms to characterise differential largeness in terms of being existentially closed in the differential algebraic Laurent series ring, and we prove that any large field of infinite transcendence degree can be expanded to a differentially large field even under certain prescribed constant fields. As an application, we show that the theory of proper dense pairs of models of a complete and model complete theory of large fields is a complete theory. As a further consequence of the expansion result we show that there is no real closed and differential field that has a prime model extension in closed ordered differential fields, unless it is itself a closed ordered differential field.
Omar León Sánchez, Marcus Tressl
2023-07-24T17:54:47Z
http://arxiv.org/abs/2307.12977v2
# On ordinary differentially large fields ###### Abstract. We provide a characterisation of differentially large fields in arbitrary characteristic and a single derivation in the spirit of Blum axioms for differentially closed fields. In the case of characteristic zero, we use these axioms to characterise differential largeness in terms of being existentially closed in the differential algebraic Laurent series ring, and we prove that any large field of infinite transcendence can be equipped with a differentially large structure. As an application, we show that there is no real closed and differential field that has a prime model extension in closed ordered differential fields, unless it is itself a closed ordered differential field. Key words and phrases:differential fields, large fields, formal Laurent series 2010 Mathematics Subject Classification: Primary: 12H05, 12E99. Secondary: 03C60. Acknowledgments. The first author was partially supported by EPSRC grant EP/V03619X/1. The first-order characterisation of differential largeness provided in [14, 4.7] makes reference to the somewhat elaborate axiom scheme UC from [13, 4.5]. In 2.8 below we give a significant simplification of this axiom scheme in the case of a single derivation, so \(\Delta=\{\delta\}\). The new scheme resembles the Blum axioms for differentially closed fields of characteristic \(0\) (\(\mathrm{DCF}_{0}\)) and at the same time allows an extension of the notion of differential largeness to arbitrary characteristic (cf. 2.1). In subsequent sections we give applications of our new simple description of differential largeness. Henceforth we restrict to a single derivation. An immediate consequence of the new axioms is the new characterisation 2.9 of closed ordered differential fields (CODF), in the sense of Singer [15], which does not make reference to the order. A further corollary (cf. 2.12) provides geometric axioms for differentially large fields in arbitrary characteristic in terms of D-varieties, in the spirit of the Pierce-Pillay axioms for \(\mathrm{DCF}_{0}\), see [10]. In the rest of the paper we readopt the characteristic zero assumption. In Section 3, we prove that differential largeness can be characterised in terms of being existentially closed in the differential algebraic formal Laurent series, see 3.5. Our proof uses an approximation-type statement that resembles that of Denef-Lipshitz in [13]. We then use this to produce a new way (or rather an improvement of the construction in [14]) to construct differentially large fields using iterated differential algebraic Laurent series, see 3.8. In section 4, we show that for any differential field \((K,d)\) and any given large field \(L\supseteq K\) of transcendence degree over \(K\) at least the size of \(K\), there is an extension \(\delta\) of \(d\) to \(L\) such that \((L,\delta)\) is differentially large, see 4.3. This has two consequences: Firstly, large fields of infinite transcendence degree (over \(\mathds{Q}\)) are characterized in 4.5 as exactly those fields that possess a derivation \(d\) for which \((L,d)\) is differentially large (significantly generalizing an earlier result by Christian Michaux saying that \(\mathds{R}\) carries a CODF structure). Secondly, we show in 4.7 that no real closed field equipped with any derivation has a prime model extension in CODF, unless it is already a CODF; this strengthens a result from [15] stating that the theory CODF does not have a prime model. 
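For orientation before Section 2, the following recalls one common formulation of the classical Blum axioms for \(\mathrm{DCF}_{0}\) that the new axiom scheme is modelled on; this reminder is an assumption-free background statement added here for comparison, not part of the original text.

```latex
% One common formulation of the Blum axioms for DCF_0
% (single derivation, characteristic 0):
% (K,\delta) is differentially closed iff K is algebraically closed and
\forall f,g \in K\{x\}\setminus\{0\}\ \bigl(\operatorname{ord}(f)\geq 1 \ \wedge\
\operatorname{ord}(g)<\operatorname{ord}(f)\bigr)\ \longrightarrow\
\exists\, a\in K\ \bigl(f(a)=0 \ \wedge\ g(a)\neq 0\bigr).
```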
By a _differential ring_ in this paper we always mean a commutative unital ring furnished with a single derivation. ## 2. Blum-style axioms for ordinary differentially large fields In [14], differentially large fields in characteristic zero were introduced. The definition there makes sense also for differential fields of characteristic \(p>0\). We define: ### Definition A differential field \((K,\delta)\), of arbitrary characteristic, is said to be differentially large if it is large as a field and for every differential field extension \((L,\delta)/(K,\delta)\), if \(K\) is e.c. in \(L\) as a field, then \((K,\delta)\) is e.c. in \((L,\delta)\). Examples of differentially large fields in characteristic \(p>0\) are differentially closed fields (in the sense of Wood [21]) and also separably differentially closed fields (in the sense of Ino and the first author [14]). Recall that a differential field \((K,\delta)\) is said to be separably differentially closed if for every differential field extension \((L,\delta)/(K,\delta)\) with \(L/K\) separable (as fields), \((K,\delta)\) is e.c. in \((L,\delta)\). To see that this class of differential fields is differentially large one only needs to note that if \(K\) is e.c. in \(L\) as a field, then \(L/K\) is separable. Let \((K,\delta)\) be a differential field (of arbitrary characteristic). In what follows we freely (and interchangeably) view any differential polynomial \(f\in K\{x\}\) of order \(n\) as a differential polynomial in the differential variables \(x=(x_{1},\ldots,x_{m})\) and also as a polynomial in \(m(n+1)\) algebraic variables \(x,\delta x,\ldots,\delta^{n}x\). It will be clear from the context which view we are taking; for instance, if \(a\in K^{m(n+1)}\) and we write \(f(a)=0\) we certainly mean viewing \(f\) as a polynomial in \(m(n+1)\) variables. In Theorem 2.8 below we provide Blum-style axioms for differentially large fields of arbitrary characteristic. The proof relies in the following fact (and its consequences) about extending derivations. ### Fact [1, Theorem 18, SSIV.7]_Suppose \(L/K\) is a separable field extension. If \(\delta:K\to L\) is a derivation, then \(\delta\) can be extended to a derivation \(L\to L\)._ **2.3 Corollary**.: _Let \((K,\delta)\subseteq(L,\delta)\) be an extension of differential fields and let \(E\) be a subset of \(L\) with \(L/K(E)\) separable. Then there is a derivation \(\partial:K(E\cup\delta(E))\longrightarrow K(E\cup\delta(E))\) that restricts to \(\delta\) on \(K(E)\)._ _If \(E\) is finite, then for each such \(\partial\) there is some \(f\in K[E\cup\delta(E)]\) such that \(\partial\) restricts to a derivation of the localisation \(K[E\cup\delta(E)]_{f}\longrightarrow K[E\cup\delta(E)]_{f}\)._ Proof.: Since \(\delta(K(E))\subseteq K(E\cup\delta(E))\) we may apply 2.2 to the derivation \(\delta|_{K(E)}:K(E)\longrightarrow K(E\cup\delta(E))\) and get a derivation \(\partial:K(E\cup\delta(E))\longrightarrow K(E\cup\delta(E))\) that restricts to \(\delta\) on \(K(E)\). Assume then that \(E\) is finite. There is some nonzero \(f\in K[E\cup\delta(E)]\) such that \(f\cdot\partial(\delta(a))\in K[E\cup\delta(E)]\) for each \(a\in E\). Obviously \(f\) has the required property. ### Proposition _Let \(K\) be a differential field and let \(S=(S,\delta)\) be a differentially finitely generated \(K\)-algebra and a domain such that \(S/K\) is separable (i.e., \(\operatorname{Quot}(S)/K\) is a separable field extension). Let \(A\) be a finitely generated \(K\)-subalgebra of \(S\). 
Then, there are an element \(f\in S\), a finitely generated \(K\)-subalgebra \(B\) of \(S_{f}\) containing \(A\), a derivation \(\partial\) on \(B\) and a differential \(K\)-algebra homomorphism \(S\longrightarrow(B,\partial)\) that restricts to the identity map on \(A\). In particular \(\partial a=\delta a\) for all \(a\in A\)._ Proof.: Let \(b\in S^{n}\) such that \(S\) is the differential \(K\)-algebra generated by \(b\) and \(A\subseteq K[b]\). Let \(\mathfrak{p}=\{f\in K\{x\}\mid f(b)=0\}\) be the differential vanishing ideal of \(b\) over \(K\). Then, \(\mathfrak{p}\) is a separable prime differential ideal (separability is due to fact that \(K\{x\}/\mathfrak{p}\) is \(K\)-isomorphic to \(S\)). By the differential basis theorem of Kolchin [13, Corollary 4, SSIII.5], there is a finite set \(\Sigma\subseteq\mathfrak{p}\) that generates \(\mathfrak{p}\) as a radical differential ideal. Take \(d\geq 1\) such that each derivative of any \(x_{1},\ldots,x_{n}\) occurring in some polynomial from \(\Sigma\) has order \(\leq d\). Finally take \[E=\{\delta^{k}b_{i}\mid i\in\{1,\ldots,n\},\ k\leq d\}\subseteq S.\] By possibly taking a larger \(d\), a result of Kolchin's appearing in [13, Lemma 1, SSIII.2] tells us that \(S/K(E)\) is separable. By 2.3 there are \(f\in K[E\cup\delta(E)]\) and a derivation \(\partial\) of \(B:=K[E\cup\delta(E)]_{f}\) that restricts to \(\delta\) on \(K[E]\). Then \(\partial^{k}b_{i}=\delta^{k}b_{i}\) for all \(i\in\{1,\ldots,n\},\ k\leq d\) and therefore \(b\) is a solution to \(\Sigma=0\) in \((B,\partial)\). Consequently, the identity map of \(K\cup\{b_{1},\ldots,b_{n}\}\) extends to a differential \(K\)-algebra homomorphism \(\varphi:S\longrightarrow(B,\partial)\). By choice of \(b\), the map \(\varphi\) restricts to the identity map of \(A\). ### Corollary _Let \(\Sigma\) be a set of differential polynomials over \((K,\delta)\) in finitely many differential variables. Suppose \(\Sigma=0\) has a solution in some differential field extension \((L,\delta)\) with \(L/K\) separable. Then there is a finitely generated \(K\)-subalgebra \(B\) of \(L\) and a derivation \(\partial\) of \(B\) such that \((B,\partial)\) has a solution to \(\Sigma=0\). In particular, \((B,\partial)\) is differentially algebraic over \((K,\delta)\) and \(B/K\) is separable._ _Notice that if \(K\) is e.c. in \(L\) as a field then \(K\) is also e.c. in \(B\) as a field._ Proof.: By assumption, there is a solution of \(\Sigma=0\) in a differentially finitely generated \(K\)-subalgebra \(S\) of \(L\). Now apply 2.4 to \(S\) and \(A=K\). ### Remark In the case of several commuting derivations statements similar to 2.4 and 2.5 fail in general. This follows from examples produced by Johnson, Reinhart, and Rubel [20, Theorem 2]. In particular, working over \((\mathbb{C}(z_{1},z_{2}),\delta_{1}\equiv\frac{\partial}{\partial z_{1}}, \delta_{2}\equiv\frac{\partial}{\partial z_{2}})\), they prove that the PDE \[\delta_{2}(x)=\left(1-\frac{z_{1}}{z_{2}}\right)\,x+1\] has no differential algebraic solutions (equivalently, has no solution in a differential field extension of finite transcendence degree over \(\mathbb{C}\)). Recall that for a differential polynomial \(f\in K\{x\}\), where \(x\) is a single differential variable, we denote by \(s_{f}\) the separant of \(f\); namely, the formal partial derivative of \(f\) with respect to its highest order variable. 
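For instance, for the order \(2\) differential polynomial \(f=x_{0}x_{2}+x_{1}^{2}-1\in K\{x\}\), written in the algebraic variables \(x_{0}=x\), \(x_{1}=\delta x\), \(x_{2}=\delta^{2}x\), the separant is \(s_{f}=\frac{\partial f}{\partial x_{2}}=x_{0}\); so for a point \(a=(a_{0},a_{1},a_{2})\in K^{3}\) the condition \(s_{f}(a)\neq 0\) simply means \(a_{0}\neq 0\). (This toy example is only meant to fix the notation.)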
Furthermore we write \[[f]:s_{f}^{\infty}=\{g\in K\{x\}:s^{m}g\in[f]\;\;\mbox{for some $m\geq 0$}\}.\] ### Observation _Let \(K\) be a differential field and let \(f\in K\{x\}\) for \(x\) a single differential variable. Let \(n=\operatorname{ord}(f)\geq 0\) and let \(a\in K^{n+1}\) with \(f(a)=0\) and \(s_{f}(a)\neq 0\). Then there is an irreducible factor \(h\) of \(f\) with \(\operatorname{ord}(h)=n\), \(h(a)=0\) and \(s_{h}(a)\neq 0\)._ Proof.: Let \(f_{0},f_{1}\in K\{x\}\), with \(f_{0}\) irreducible, \(f=f_{0}\cdot f_{1}\) and \(\operatorname{ord}(f_{0})=n\). Then \[(*)\qquad\qquad s_{f}=\frac{\partial f}{\partial x_{n}}=\frac{\partial f_{0}} {\partial x_{n}}\cdot f_{1}+f_{0}\cdot\frac{\partial f_{1}}{\partial x_{n}}.\] If \(f_{0}(a)=0\), then \((*)\) implies \(s_{f_{0}}(a)=\frac{\partial f_{0}}{\partial x_{n}}(a)\neq 0\). If \(f_{0}(a)\neq 0\), then \(f_{1}(a)=0\) and \((*)\) shows \(s_{f_{1}}(a)=\frac{\partial f_{1}}{\partial x_{n}}(a)\neq 0\); hence also \(\operatorname{ord}(f_{1})=n\) and in this case we may replace \(f\) by \(f_{1}\) and proceed by induction. We now come to the promised axiomatisation. **2.8 Theorem**.: _Let \((K,\delta)\) be a differential field of arbitrary characteristic. The following conditions are equivalent._ 1. \((K,\delta)\) _is differentially large._ 2. \(K\) _is large as a field and for every pair_ \(f,g\in K\{x\}\)_, where_ \(x\) _is a single differential variable, with_ \(g\) _nonzero and_ \(\operatorname{ord}(f)>\operatorname{ord}(g)\)_, if the system_ \[f(x)=0\ \&\ s_{f}(x)\neq 0\] _has an algebraic solution in_ \(K\)_, then_ \(f(x)=0\ \&\ g(x)\neq 0\) _has a differential solution in_ \(K\)_._1__ Footnote 1: By 2.7 we may also assume that \(f\) is irreducible in this condition. _._ 3. _For every pair_ \(f,g\in K\{x\}\)_, where_ \(x\) _is a single differential variable, with_ \(\operatorname{ord}(f)\geq 1\) _and_ \(\operatorname{ord}(f)\geq\operatorname{ord}(g)\)_, if the system_ \[f(x)=0\ \&\ g(x)\cdot s_{f}(x)\neq 0\] _has an algebraic solution in_ \(K\)_, then it has infinitely many differential solutions in_ \(K\)_._ _Notice that each of the properties (ii) and (iii) gives an axiom scheme for a first order axiomatization of differential largeness in the language of differential rings._ Proof.: (i)\(\Rightarrow\)(iii). Let \(f,g\in K\{x\}\) with \(\operatorname{ord}(f)\geq 1\) and \(\operatorname{ord}(f)\geq\operatorname{ord}(g)\) and assume \[(\dagger)\qquad\qquad f(x)=0\ \&\ g(x)\cdot s_{f}(x)\neq 0\] has an algebraic solution in \(K\). Let \(n=\operatorname{ord}(f)\). By 2.7, we may assume that \(f\) is irreducible. Let \(\mathfrak{p}=[f]:s_{f}^{\infty}\). Since \(s_{f}\neq 0\), Theorem 3.1(2) of [12] says that \(\mathfrak{p}\) is a separable prime differential ideal of \(K\{x\}\). We write \(a=x\operatorname{mod}\mathfrak{p}\). Now, an algebraic solution of \(f(x)=0\ \&\ s_{f}(x)\neq 0\) in \(K\) is a smooth \(K\)-rational point of \[K[x_{0},\dots,x_{n}]/(f)\cong_{K}K[a,\dots,a^{(n)}].\] The largeness of \(K\) yields that \(K\) is e.c. in \(K(a,\dots,a^{(n)})\). Since the latter is equal to the differential field \(K\langle a\rangle\) generated by \(a\) over \(K\), differential largeness implies that \((K,\delta)\) is e.c. in \((K\langle a\rangle,\delta)\). Since \(\operatorname{ord}(f)\geq\operatorname{ord}(g)\) and \((\dagger)\) has an algebraic solution in \(K\), Lemma 3.6(1) of [12] implies that \(g\cdot s_{f}\notin\mathfrak{p}\). Hence \(a\) is a differential solution of \((\dagger)\) in \(K\langle a\rangle\). As \((K,\delta)\) is e.c. 
in \((K\langle a\rangle,\delta)\) also \(K\) has a differential solution \(\alpha\) of \((\dagger)\). To argue that there are infinitely many solutions, note that \(g\cdot(x-\alpha)\) has again order at most \(\operatorname{ord}(f)\). By largeness of \(K\) and the assumption \(\operatorname{ord}(f)\geq 1\), there is an algebraic solution of the new system where we replace \(g\) with \(g\cdot(x-\alpha)\). It follows, by repeating the above argument, that there are infinitely many differential solutions of \((\dagger)\) in \(K\). (iii)\(\Rightarrow\)(ii) It suffices to show that \(K\) is large as a field. By [11, Lemma 5.3.1, p. 67], a field \(K\) is large if and only if for every absolutely irreducible polynomial \(F(X,Y)\in K[X,Y]\), if there is a point \((a,b)\in K^{2}\) with \(F(a,b)=0\) and \(\frac{\partial F}{\partial Y}(a,b)\neq 0\), then there are infinitely many such points. So take an absolutely irreducible polynomial \(F(X,Y)\in K[X,Y]\) and some \((a,b)\in K^{2}\) with \(F(a,b)=0\) and \(\frac{\partial F}{\partial Y}(a,b)\neq 0\). Consider the differential polynomial \(f(x)=F(x,x^{\prime})\). Then \(f(x)=0\ \&\ s_{f}(x)\neq 0\) has an algebraic solution in \(K\), namely \((a,b)\). By (iii) there are infinitely many differential solutions in \(K\). But then there are infinitely many solutions to \(F(X,Y)=0\) and \(\frac{\partial F}{\partial Y}(X,Y)\neq 0\) in \(K\) as well. (ii)\(\Rightarrow\)(i). To prove differential largeness, let \(F\) be a differential field extension of \(K\) such that \(K\) is e.c. in \(F\) as a field. Note that then \(F/K\) is separable. We need to show that \(K\) is e.c. in \(F\) as a differential field. Let \(\Sigma\) be a system of differential polynomials in \(n\) differential variables over \(K\) and assume that \(\Sigma=0\) has a solution \(a\in F^{n}\). We may assume that \(F=K\langle a\rangle\). By 2.5 applied to \(F\), we may assume that \(F\) is differentially algebraic over \(K\) (and \(F/K\) remains separable). Condition (ii) guarantees that \([K:C_{K}]\) is infinite; hence, by the differential primitive element theorem [13, Proposition 9, SSII.8], the differential field \(F\) is differentially generated over \(K\) by a single element \(b\in F\). Let \(\mathfrak{p}\) be the prime differential ideal of \(K\{x\}\) associated to \(b\). Note that \(\mathfrak{p}\) is separable (over \(K\)). Then, by Theorem 3.1(1) of [12], \(\mathfrak{p}=[f]:s_{f}^{\infty}\) for \(f\in\mathfrak{p}\) irreducible of minimal rank. Write \(a=(a_{1},\ldots,a_{n})\) and let \(f_{i},g\in K\{x\}\) with \(a_{i}=\frac{f_{i}(b)}{g(b)}\). By the differential division algorithm [13, SSI.9] there are \(h\in K\{x\}\) reduced with respect to \(f\) and some \(r\geq 0\) with \[(i_{f}s_{f})^{r}g\equiv h\mod[f].\] Since \(f(b)=0\) and \(i_{f}(b){\cdot}s_{f}(b)\neq 0\) we get \(i_{f}^{r}(b)s_{f}^{r}(b)g(b)=h(b)\neq 0\). Hence, we may replace \(g\) by \(h\) and \(f_{i}\) by \((i_{f}s_{f})^{r}{\cdot}f_{i}\) if necessary and assume that \(g\) is reduced with respect to \(f\). Notice that \(a_{i}\in K\{b\}_{g(b)}\). Now, since \(K\) e.c. in \(F\) as a field, the system \(f(x)=0\) & \(s_{f}(x)\neq 0\) has an algebraic solutions in \(K\). By condition (ii), the set \[\{f=0\}\cup\{q\neq 0\ |\ q\in K\{x\}\text{ is nonzero and }\operatorname{ord}(q)< \operatorname{ord}(f)\}\] is finitely satisfiable in the differential field \(K\). 
Hence there is an elementary extension \(L\) of the differential field \(K\) having a differential solution \(c\) to \(f(x)=0\) such that \(q(c)\neq 0\) for all \(q\in K\{x\}\) with \(\operatorname{ord}(q)<\operatorname{ord}(f)\). Since \(f\) is irreducible, it follows that \(q(c)\neq 0\) for all \(q\in K\{x\}\) that are reduced with respect to \(f\). In particular \(f(c)=0\) & \(g(c)\neq 0\). Since \(K\prec L\) there is some \(d\in K\) with \(f(d)=0\) & \(g(d)\neq 0\). This means there is a differential \(K\)-homomorphism \((K\{x\}/\mathfrak{p})_{g\operatorname{mod}\mathfrak{p}}\longrightarrow K\). By choice of \(\mathfrak{p}\) we have \((K\{x\}/\mathfrak{p})_{g\operatorname{mod}\mathfrak{p}}\cong K\{b\}_{g(b)}\) as differential \(K\)-algebras. Since \(K\{a_{1},\ldots,a_{n}\}\subseteq K\{b\}_{g(b)}\) we obtain a differential \(K\)-algebra homomorphism \(K\{a_{1},\ldots,a_{n}\}\longrightarrow K\) and this corresponds to a differential solution of \(\Sigma=0\) in \(K^{n}\). When \(K\) is real closed, the above theorem yields a new axiomatisation of the theory CODF. A differential field \((K,\delta)\) is a model of CODF if and only if it is an existentially closed model of the theory of ordered differential fields. Axioms for CODF appear in [14]. While the axioms there make explicit reference to the order, our new axioms are purely in the differential field language, namely: **2.9 Corollary**.: _Let \((K,\delta)\) be a differential field. The following are equivalent._ 1. \((K,\delta)\models\operatorname{CODF}\)_._ 2. \(K\) _is real closed and for every pair_ \(f,g\in K\{x\}\)_, where_ \(x\) _is a single differential variable, with_ \(g\) _nonzero and_ \(\operatorname{ord}(f)>\operatorname{ord}(g)\)_, if the system_ \[f(x)=0\ \&\ s_{f}(x)\neq 0\] _has an algebraic solution in_ \(K\)_, then_ \(f(x)=0\ \&\ g(x)\neq 0\) _has a differential solution in_ \(K\)_._ Notice that every field \(K\) is algebraically closed in the large field \(K((t))\), but not every field is large. In the differential phrasing this changes: **2.10 Corollary**.: _Let \(K\subseteq L\) be an extension of differential fields. If \(K\) is differentially algebraically closed in \(L\) and \(L\) is differentially large, then \(K\) is differentially large as well._ Proof.: We verify 2.8(iii). Take \(f,g\in K\{x\}\), \(x\) a single differential variable, with \(\operatorname{ord}(f)\geq 1\) and \(\operatorname{ord}(f)\geq\operatorname{ord}(g)\), and assume that \(f(x)=0\ \&\ g(x)\cdot s_{f}(x)\neq 0\) has an algebraic solution in \(K\). Since \(L\) is differentially large, it has infinitely many differential solutions to \(f(x)=0\ \&\ g(x)\cdot s_{f}(x)\neq 0\). But then each of these solutions is differentially algebraic over \(K\). Hence all these solutions are in \(K\) _._ ### Remark We note that the condition of a differential field \((K,\delta)\) being differentially algebraically closed in some extension \((L,\delta)\) is quite strong. Arguably, being differentially algebraically closed in an extension is not quite the right differential analogue of being algebraically closed in the field sense. We do not know whether the assumption in 2.10 can be weakened to only assuming that \(K\) is constrainedly closed in \(L\) (namely, every finite tuple from \(L\) which is constrained over \(K\), in the sense of Kolchin [10, SSIII.10], is from \(K\)). We conclude this section with a geometric characterisation of being differentially large. Namely, in terms of algebraic D-varieties. 
Recall that an algebraic D-variety over \(K\) is a pair \((V,s)\) where \(V\) is an algebraic variety over \(K\) and \(s:V\to\tau V\) is a section over \(K\) of the prolongation of \(V\) (see [11, SS2], for instance). The latter is the algebraic bundle \(\pi:\tau V\to V\) with the characteristic property that for any differential field extension \((L,\delta)\) of \((K,\delta)\) we have that if \(a\in V(L)\) then \((a,\delta a)\in\tau V\). **2.12 Corollary**.: _Let \(K\) be a large field of arbitrary characteristic and let \(\delta\) be a derivation of \(K\). The following conditions are equivalent._ 1. \((K,\delta)\) _is differentially large_ 2. _Let_ \(V\) _and_ \(W\) _be_ \(K\)_-irreducible algebraic varieties with_ \(W\subseteq\tau V\)_. If_ \(\pi|_{W}:W\to V\) _is a separable morphism and_ \(W\) _has a smooth_ \(K\)_-point, then the set_ \[\{(a,\delta a)\in W:\;a\in V(K)\}\] _is Zariski dense in_ \(W\)_._ 3. _Let_ \((V,s)\) _be a_ \(K\)_-irreducible algebraic D-variety. If_ \(V\) _has a smooth_ \(K\)_-point, then the set_ \[\{a\in V(K):\;s(a)=(a,\delta(a))\}\] _is Zariski dense in_ \(V\)_._ Proof.: (i)\(\Rightarrow\)(ii) Let \((a,b)\) be a \(K\)-generic point of \(W\). Since \(\pi_{W}:W\to V\) is a separable morphism, we obtain that \(a\) is \(K\)-generic in \(V\) and \(K(a,b)/K(a)\) is a separable extension. Since \(W\subseteq\tau V\), there is a derivation \(\delta:K(a)\to K(a,b)\) extending the one on \(K\) such that \(\delta(a)=b\). As \(K(a,b)/K(a)\) is separable, by 2.2, we can extend the derivation to \(K(a,b)\to K(a,b)\). Then, for any nonempty Zariski-open \(O_{W}\subseteq W\) over \(K\), in the differential field extension \((K(a,b),\delta)\) we can find a solution to \(x\in V\) and \((x,\delta x)\in O_{W}\) (namely, the tuple \(a\)). Since \(W\) has a smooth \(K\)-point, we get that \(K\) is e.c. in \(K(W)=K(a,b)\) as a field. By differential largeness, \((K,\delta)\) is e.c. in \((K(a,b),\delta)\), and so we can find the desired solution in \(K\). (ii)\(\Rightarrow\)(iii) If we let \(W=s(V)\subseteq\tau V\), then the pair \(V\) and \(W\) satisfy the conditions of (ii) (note that if \(b\) is a smooth point of \(V\) then \((b,s(b))\) is a smooth point of \(W\)). If follows that the set of points in \(W\) of the form \((a,\delta a)\) with \(a\in V(K)\) is Zariski dense in \(W\). But then, as \(W=s(V)\), the set of points \(a\in V\) such that \(s(a)=(a,\delta a)\) must be Zariski dense in \(V\). (iii)\(\Rightarrow\)(i) We verify 2.8(ii). Let \(f,g\in K\{x\}\) with \(\operatorname{ord}(g)<\operatorname{ord}(f)\) and \(g\) nonzero. Assume the system \[f(x)=0\ \&\ s_{f}(x)\neq 0\] has an algebraic solution in \(K\). In particular, \(s_{f}\neq 0\). By Observation 2.7, we may assume that \(f\) is irreducible. By Theorem 3.1(1) of [12], \(\mathfrak{p}=[f]:s_{f}^{\infty}\) is a separable prime differential ideal of \(K\{x\}\). Let \(a=x+\mathfrak{p}\) in the fraction field of \(K\{x\}/\mathfrak{p}\). Letting \(n=\operatorname{ord}(f)\), we see that \((a,\delta a,\dots,\delta^{n-1}a)\) is algebraically independent over \(K\) and \(\delta^{n}a\) is separably algebraic over \(K(a,\dots,\delta^{n-1}a)\). It follows that \[\delta^{n+1}a=\frac{h(a,\delta a,\dots,\delta^{n}a)}{s_{f}(a)}\] for some \(h\in K[t_{0},\dots,t_{n}]\). Let \(V\) be the localisation at \(g\cdot s_{f}\) of the Zariski-locus of \((a,\delta a,\dots,\delta^{n}a)\) over \(K\). 
From the assumptions (on existence of an algebraic solution in \(K\)), we see that \(V\) has a smooth \(K\)-rational point and that morphism on \(V\) induced by \[(t_{0},t_{1},\dots,t_{n})\mapsto((t_{0},t_{1},\dots,t_{n}),(t_{1},t_{2},\dots, t_{n},\frac{h(t_{0},t_{1},\dots,t_{n})}{s_{f}(t_{0},t_{1},\dots,t_{n})})\] yields a regular algebraic map \(s:V\to\tau V\). This equips \(V\) with a D-variety structure. Then, the assumption of (iii) yields \(\alpha\in V(K)\) such that \(s(\alpha)=(\alpha,\delta\alpha)\). But then \(\alpha\) is the desired differential solution of \(f(x)=0\) & \(g(x)\neq 0\) in \(K\). ## 3. Power series in characteristic zero In this section we assume fields are of characteristic zero, and thus the results on differentially large fields from [11] may be deployed. We prove, in 3.5, two further characterisations of being differentially large. For a differential field \(K\) we endow \(K((t))\) with its natural derivation extending the given derivation on \(K\) and satisfying \(\delta(t)=1\); that is, \[\delta(\sum_{n\geq k}a_{n}t^{n})=\sum_{n\geq k}\delta(a_{n})t^{n}+\sum_{n\geq k }na_{n}t^{n-1}.\] In [11] it is shown that \((K,\delta)\) is differentially large if and only if \((K,\delta)\) is e.c. in \((K((t)),\delta)\). We do not know if this characterisation extends to positive characteristic, the proof relies on the existence of a _twisted version_ of the Taylor morphism [11, 3.4], whose construction picks up rational denominators. Below we prove that it suffices to ask for \((K,\delta)\) to be e.c. in the differential subfield of \((K((t),\delta)\) consisting of differential algebraic elements (over \(K\)). **3.1 Definition**.: Let \(K\) be a differential field and let \(S\) be a differential \(K\)-algebra. We write \(S_{\operatorname{diffalg}}\) for the differential subring of all \(a\in S\) that are differentially algebraic over \(K\). **3.2 Remark**.: Since \(K((t))\) is the localization of \(K[[t]]\) at \(t\), the fraction field of \(K[[t]]_{\operatorname{diffalg}}\) is \(K((t))_{\operatorname{diffalg}}\). **3.3 Proposition**.: _Let \((K,\delta)\) be a differential field (of characteristic zero) that is large as a field and let \(S\) be a differentially finitely generated \(K\)-algebra. If there is a \(K\)-algebra homomorphism \(S\to L\) for some field extension \(L/K\) in which \(K\) is e.c. (as a field, there are no derivations on \(L\) given), then there is a differential \(K\)-algebra homomorphism \(S\to K[[t]]_{\operatorname{diffalg}}\)._ Proof.: By [11, 3.5] there is a differential \(K\)-algebra homomorphism \(\psi:S\to K[[t]]\). Applying 2.4 to \(\psi(S)\) we may then find a finitely generated \(K\)-subalgebra \(B\) of \(K((t))\), a derivation \(\partial\) of \(B\) extending \(\delta\) on \(K\) together with a differential \(K\)-algebra homomorphism \(\varphi:\psi(S)\longrightarrow(B,\partial)\). By [11, 3.5] applied to \((B,\partial)\) and the inclusion map \(B\hookrightarrow K((t))\) there is a differential \(K\)-algebra homomorphism \(\gamma:B\to K[[t]]\). Since \(B\) is a finitely generated \(K\)-algebra, the image of \(\gamma\) is in \(K[[t]]_{\rm diffalg}\). Hence the map \(\gamma\circ\varphi\circ\psi:S\longrightarrow K[[t]]_{\rm diffalg}\) has the required property. A special case of 3.3 resembles an approximation statement over large and differential fields in the spirit of [10, Theorem 2.1]: **3.4 Corollary**.: _Let \((K,\delta)\) be a differential field of characteristic zero such that \(K\) is large as a field. 
Let \(\Sigma\) be a system of differential polynomials in finitely many differential variables over \(K\). If the differential ideal generated by \(\Sigma\) has an algebraic solution in \(K((t))\), then \(\Sigma=0\) has a differential solution in \(K[[t]]_{\rm diffalg}\)._ Proof.: Apply 3.3 to the differential coordinate ring of \(\Sigma\). **3.5 Corollary**.: _Let \(K\) be a large field of characteristic 0 and let \(\delta\) be a derivation of \(K\). The following conditions are equivalent._ 1. \((K,\delta)\) _is differentially large_ 2. \(K\) _is e.c. in_ \(K[[t]]_{\rm diffalg}\) _as a differential field._ 3. _For every_ \(K\)_-irreducible algebraic D-variety_ \((V,s)\)_, if_ \(V\) _has a_ \(K\)_-point, then there is_ \(a\in V(K)\) _such that_ \(s(a)=(a,\delta a)\)_._ Proof.: (i)\(\Rightarrow\)(ii) is a consequence of [10, 4.3(ii)], which says that \(K\) is e.c. in \(K((t))\) as a differential field. (ii)\(\Rightarrow\)(i). By 3.4 one verifies that \(K\) is e.c. in \(K((t))\) as a differential field. Hence by [10, 4.3], \((K,\delta)\) is differentially large (iii)\(\Rightarrow\)(i) We verify 2.12(iii). Let \((V,s)\) be a \(K\)-irreducible \(D\)-variety with a smooth \(K\)-point. Let \(h\in K[V]\) nonzero. Then, there is an induced D-variety structure in the localisation \(K[V]_{h}\). Denote this D-variety by \((W,t)\). As \(K\) is large and \(V\) has a smooth \(K\)-point, we get that \(K\) is Zariski dense in \(V\). Thus, \(W\) has a \(K\)-point. The assumption now yields a \(K\)-point \(b\) in \(W\) such that \(s(b)=(b,\delta b)\). As \(h\) was arbitrary, it follows that the set of points \(\{a\in V(K):s(a)=(a,\delta a)\}\) is Zariski dense. (i)\(\Rightarrow\)(iii) Let \((V,s)\) be a \(K\)-irreducible D-variety with a \(K\)-point. Applying 3.3 with \(S=K[V]\) and \(L=K\), we find a \(K((t))\)-rational point \(b\) of \(V\) such that \(s(b)=(b,\delta b)\). As \(K\) is differentially large, it is e.c. in \(K((t))\) as a differential field. Hence, we can find such a point in \(K\). We may now improve the construction of differentially large fields from [10, 5.2] in the ordinary case. A few preparations are necessary. **3.6 Proposition**.: _Let \((K_{i},f_{ij})_{i,j\in I}\) be a directed system of differential fields and differential embeddings with the following properties._ 1. _All_ \(K_{i}\) _are large as fields._ 2. _All embeddings_ \(f_{ij}:K_{i}\longrightarrow K_{j}\) _are isomorphisms onto a subfield of_ \(K_{j}\) _that is e.c. in_ \(K_{j}\) _as a field._ 3. _For all_ \(i\in I\) _there exist_ \(j\geq i\) _and a differential homomorphism_ \(K_{i}[[t]]_{\rm diffalg}\longrightarrow K_{j}\) _extending_ \(f_{ij}\)_._ _Then the direct limit \(L\) of the directed system is a differentially large field._ Proof.: The proof is identical to the proof of [10, 5.1], except we use 3.3 in that proof instead of [10, 3.5]. **3.7 Observation**.: _Let \(K\) be a differential field. Then \(K[[t]]_{\rm diffalg}\) is a Henselian valuation ring._ Proof.: We write \(S=K[[t]]_{\rm diffalg}\). Then \(S\) is a valuation ring because if \(f\in K((t))_{\rm diffalg}\) then the degree of \(f\) is \(\geq 0\), hence \(f\in S\), or the degree of \(f\) is negative and then \(f^{-1}\in S\). Clearly the maximal ideal of \(S\) is \(t\cdot S\). To verify that \(S\) is Henselian it suffices to show that for all \(\mu_{2},\ldots,\mu_{n}\in\mathfrak{m}\) there is some \(f\in S\) with \[1+f+\mu_{2}f^{2}+\ldots+\mu_{n}f^{n}=0.\] As \(K[[t]]\) is Henselian, there is such an \(f\) in \(K[[t]]\). Obviously, \(f\in S\). 
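As a simple example of an element of \(K[[t]]_{\rm diffalg}\) (assuming, for illustration only, that the derivation on \(K\) is trivial, so that the derivation on \(K((t))\) fixed at the beginning of this section is just \(\frac{\rm d}{{\rm d}t}\)): the series
\[e=\sum_{n\geq 0}\frac{t^{n}}{n!}\in K[[t]]\]
satisfies \(\delta(e)=\sum_{n\geq 1}\frac{n}{n!}\,t^{n-1}=e\), so \(e\) is a zero of the order \(1\) differential polynomial \(\delta x-x\) and therefore lies in \(K[[t]]_{\rm diffalg}\).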
**3.8 Theorem**.: _Let \((K,\delta)\) be any differential field of characteristic zero. Set \(K_{0}=K\) and let \(K_{n+1}=K_{n}((t_{n}))_{\rm diffalg}.\) Then \(\bigcup_{n\geq 0}K_{n}\) is differentially large._ Proof.: By 3.7, \(K_{n}[[t_{n}]]_{\rm diffalg}\) is a Henselian valuation ring. By [10], \(K_{n}((t_{n}))_{\rm diffalg}\) is a large field. We see that all assumptions of 3.6 are satisfied for the \(K_{n}\) and the inclusion maps \(K_{n}\hookrightarrow K_{n+k}\). Now the argument for [13, 5.2(i)] can be copied, where we use 3.6 instead of [13, 5.1]. _3.9 Remark_.: We note that when \(K\) is furnished with the trivial derivation, then the derivation on \(K((t))\) is the standard one, namely \(\frac{\rm d}{\rm d}t\), and hence by standard results on differential transcendence we know that \(K((t))_{\rm diffalg}\) is a _proper_ differential subfield of \(K((t))\). For instance, by Holder's theorem, for \(K=\mathbb{C}\), the Gamma function is in the latter while not in the former. We do not know whether this proper containment holds for an arbitrary differential field \(K\). ## 4. Expansions of large fields to differentially large fields The main goal of this section is 4.3 which implies that any large field of characteristic zero of infinite transcendence degree over \(\mathbb{Q}\) can be expanded to a differentially large field. A further consequence of 4.3 is 4.7, which says that prime model extensions in CODF only exist in the trivial case. Throughout this section fields are assumed to be of characteristic zero. _4.1 Notation_.: Let \(K\) be a field (of characteristic zero). A **differentially large problem** of \(K\) is a pair \((f,g)\) of polynomials from \(K\{x\}=K[x_{0},x_{1},\ldots]\) such that \(f\) is of order \(n\geq 0\), the order of \(g\) is strictly less than \(n\) and for which there is an element \((c_{0},\ldots,c_{n})\in K^{n+1}\) such that \[f(c_{0},\ldots,c_{n})=0\ \&\ s_{f}(c_{0},\ldots,c_{n})\neq 0.\] We call \(\bar{c}\) an **algebraic solution of the differentially large problem**. Obviously a differentially large problem over \(K\) remains a differentially large problem over every field extension of \(K\). If \(d\) is a derivation of \(K\), then a **solution of a differentially large problem** of \(K\) in a differential field \((L,\delta)\) extending \((K,d)\) is an element \(a\in L\) with \(f(a)=0\ \&\ g(a)\neq 0\), where polynomials are now evaluated as differential polynomials. **4.2 Proposition**.: _Let \(K\subseteq L\) be fields, \(n\in\mathds{N}\) and assume that \(\operatorname{tr.deg}(L/K)\geq n\). Let \((f,g)\) be a differentially large problem of \(K\) with \(\operatorname{ord}(f)=n\). Let \(d\) be a derivation of \(K\) and assume \(L\) is large._ _Then there is a subfield \(K_{1}\) of \(L\) that is finitely generated over \(K\) as a field, a derivation \(\delta\) of \(K_{1}\) extending \(d\) and a solution \(a\in K_{1}\) of the differentially large problem \((f,g)\) such that \(a,\delta a,\ldots,\delta^{n-1}a\) are algebraically independent over \(K\)._ Proof.: Let \(\bar{x}=(x_{0},\ldots,x_{n})\) and let \(Z\) be the solution set in \(L\) of the system \[f(\bar{x})=0\ \&\ s_{f}(\bar{x})\neq 0\ \&\ g(\bar{x})\neq 0.\] _Claim_.: There exists a point \((a_{0},\ldots,a_{n})\in Z\) with \(\operatorname{tr.deg}(a_{0},\ldots,a_{n}/K)=n\). Proof.: Let \(W\) be the variety defined by the two polynomials \[f(\bar{x}),\ y\cdot\!s_{f}(\bar{x})\cdot\!g(\bar{x})-1\in K[\bar{x},y].\] Write \(h(\bar{x},y)=y\cdot\!s_{f}(\bar{x})\cdot\!g(\bar{x})-1\). 
Then any common zero \((\bar{a},c)\) of \(f\) and \(h\) in the algebraic closure of \(L\) is a regular point of \(W\), because \(c\cdot\!s_{f}(\bar{a})\cdot\!g(\bar{a})-1=0\) implies \(\frac{\partial f}{\partial x_{n}}(\bar{a})\neq 0\) and obviously \(\frac{\partial h}{\partial y}=s_{f}\cdot\!g\) does not vanish at \(\bar{a}\). Hence the determinant of the matrix \(\begin{pmatrix}\frac{\partial f}{\partial x_{n}}&\frac{\partial f}{\partial y }\\ \frac{\partial h}{\partial x_{n}}&\frac{\partial h}{\partial y}\end{pmatrix}\) is not zero at \((\bar{a},c)\). This shows that \(W\) is smooth. Since \((f,g)\) is a differentially large problem of \(K\) we know that \(W\) has a \(K\)-rational point. By [11, Theorem 1], using \(\operatorname{tr.deg}(L/K)\geq n=\dim(W)\), there is a \(K\)-embedding \(K(W)\longrightarrow L\). A generic point of \(W\) in \(K(W)\) is then mapped to a point \((a_{0},\ldots,a_{n})\in Z\) with \(\operatorname{tr.deg}(a_{0},\ldots,a_{n}/K)=n\). \(\diamond\) As \(s_{f}(a_{0},\ldots,a_{n})\neq 0\), \(a_{n}\) is algebraic over \(K(a_{0},\ldots,a_{n-1})\). But now we see that \(K_{1}:=K(a_{0},\ldots,a_{n})\) is isomorphic to the quotient field of \(K\{x\}/\mathfrak{p}\), where \(\mathfrak{p}=[f]:s_{f}^{\infty}\). This induces a derivation \(\delta\) on \(K_{1}\) and this derivation has the required properties: \(a=a_{0}\) solves the given differentially large problem. ### Theorem _Let \(K\subseteq L\) be fields of characteristic \(0\) and suppose \(L\) is a large field. Let \(d\) be a derivation of \(K\). If \(\operatorname{tr.deg}(L/K)\geq\operatorname{card}(K)\), then there is a derivation \(\delta\) of \(L\) extending \(d\) such that \((L,\delta)\) is differentially large._ Proof.: Let \(\kappa=\operatorname{card}(K)\). By extending \(K\) and \(d\) we may assume that \(\operatorname{tr.deg}(L/K)=\kappa\). Let \(\{t_{i}\mid i<\kappa\}\) be a transcendence basis of \(L\) over \(K\) and let \((f_{i},g_{i})_{i\in\kappa}\) be a list of all differentially large problems of \(L\); so here \(f_{i},g_{i}\in L\{x\}\) in the terminology of 4.1. For \(i<\kappa\) we define a subfield \(K_{i}\) of \(L\) and a derivation \(d_{i}\) of \(K_{i}\) such that 1. \(K_{i}\) contains \(t_{i}\), \(\operatorname{tr.deg}(K_{i}/K)\) is finite for finite \(i\) and \(\operatorname{tr.deg}(K_{i}/K)\leq\operatorname{card}(i)\) for \(i\geq\omega\), 2. \((K_{i},d_{i})\) extends \((K_{j},d_{j})\) for \(j<i\), and 3. \((K_{i},d_{i})\) solves the differentially large problem \((f_{i},g_{i})\). Suppose \(i<\kappa\) and \((K_{j},d_{j})\) has already been defined with properties (a)-(c); this also covers the case \(i=0\). Let \(\bar{b}\subseteq L\) be finite with \(f_{i},g_{i}\in K(\bar{b})\{x\}\) such that there is an algebraic solution of the differentially large problem \((f_{i},g_{i})\) in \(K(\bar{b})\). Let \(K_{*}\) be the field generated by \(K(t_{i},\bar{b})\cup\bigcup_{j<i}K_{j}\) and extend the derivation \(\bigcup_{j<i}d_{j}\) to a derivation \(d_{*}\) of \(K_{*}\) arbitrarily. Obviously then \(\operatorname{tr.deg}(K_{*}/K)\) is finite if \(i\) is finite and \(\leq\operatorname{card}(i)\) otherwise. Consequently \(\operatorname{tr.deg}(L/K_{*})\) is infinite and we may apply 4.2 to the extension \(K_{*}\subseteq L\), the derivation \(d_{*}\) and the differentially large problem \((f_{i},d_{i})\). We obtain an extension \((K_{i},d_{i})\) of \((K_{*},d_{*})\) such that \(K_{i}\) is a subfield of \(L\) that is finitely generated over \(K_{*}\). Clearly \((K_{i},d_{i})\) satisfies (a)-(c). 
Then \(L=\bigcup_{i<\kappa}K_{i}\) and by 2.8 the differential field \((L,\partial)\) with \(\partial=\bigcup_{i<\kappa}d_{i}\) is differentially large. ### Remark In characteristic \(p>0\) the conclusion in 4.3 fails even under the assumption that \(L/K\) is separable. For example \(L\) might be perfect (as a field), and hence any derivation on \(L\) is trivial. ### Corollary _A large field \(L\) of characteristic zero is of infinite transcendence degree if and only if there is a derivation \(d\) of \(L\) such that \((L,d)\) is differentially large._ Proof.: If \(L\) has infinite transcendence degree, then by 4.3 applied with \(K=\mathds{Q}\) shows that there is a derivation \(d\) of \(L\) such that \((L,d)\) is differentially large. For the converse assume there is a derivation \(d\) of \(L\) such that \((L,d)\) is differentially large. By [12, 5.12], the algebraic closure \(\overline{L}\) of \(L\) is a DCF. We may then replace \(L\) by the differential closure of \(\mathds{Q}\). By the non-minimality of the differential closure of \(\mathds{Q}\) ([13]), there is an embedding \(L\longrightarrow L\) that is not surjective. Hence \(L\) cannot have finite transcendence degree. We now apply 4.3 to answer a question about prime model extensions for CODF. Recall that a CODF in the sense of Singer (cf. [14]) is the same as a differentially large field that is real closed as a pure field. In [14] Singer shows that CODF has no prime model, i.e. there is no CODF that embeds into all other CODFs.2 We now show that in fact no differential and formally real field (i.e. it possesses an ordering) has a prime model extension for CODF3, unless its real closure is already a CODF. In particular, no formally real field equipped with the trivial derivation has a prime model extension in CODF. The proof is essentially an application of 4.3 together with the following purely field theoretic fact. Footnote 2: Note that CODF is model complete in the language of differential rings, i.e. every embedding of CODFs is elementary. Footnote 3: A _prime model extension of \(K\) for_ CODF is a model \(\hat{K}\) of CODF having \(K\) as a differential subfield such that \(\hat{K}\) embeds over \(K\) as a differential field into any other CODF that has \(K\) as a differential subfield. ### Proposition _Let \(R\) be a real closed field and let \(\kappa\) be its cardinality. Then, there are real closed fields \(M,N\) containing \(R\) of transcendence degree \(\kappa\) over \(R\) with the following property: If \(S\supseteq R\) is a real closed field then \(S\) can be embedded over \(R\) into \(M\) and into \(N\) if and only if \(\operatorname{tr.deg}(S/R)\leq 1\) and \(R\) is Dedekind complete in \(S\)._ Proof.: We take \(M\supseteq R\) by successively adjoining infinitely large elements \(a_{\alpha}\) for \(\alpha<\kappa\). Hence \(a_{\alpha}>R(a_{\beta}\mid\beta<\alpha)\) in \(M\) and \(M\) is algebraic over \(R(a_{\alpha}\mid\alpha<\kappa)\). Then \(R\) is Dedekind complete in \(M\) and \(M\) has transcendence degree \(\kappa\) over \(R\). For \(N\) we may take any real closed subfield of \(R((t^{\mathds{R}}))\) of transcendence degree \(\kappa\) over \(R\). Such fields exist because of the following reason: Let \(\Lambda\) be a basis of the \(\mathds{Q}\)-vector space \(R\). Since \(R\) is real closed, the cardinality of \(\Lambda\) is \(\kappa\). 
Then the set \(\{\exp(\lambda\cdot t)\mid\lambda\in\Lambda\}\subseteq R[[t]]\) is an algebraically independent subset of \(R((t^{\mathds{R}}))\) over \(R\): this is a baby case of Ax's positive solution to the functional Schanuel conjecture, but is not difficult to prove directly. Hence we may take \(N\) as the real closure of \(R(\exp(\lambda\cdot t)\mid\lambda\in\Lambda)\) in \(R((t^{\mathds{R}}))\). Clearly \(N\) has transcendence degree \(\kappa\) over \(R\). Since \(R\) is Dedekind complete in \(M\) and in \(N\), any real closed field \(S\) containing \(R\) with \(\operatorname{tr.deg}(S/R)\leq 1\) in which \(R\) is Dedekind complete, can be embedded into \(M\) and into \(N\). It remains to show that any real closed subfield \(S\) of \(M\) containing \(R\) that can be embedded into \(N\) over \(R\) is of transcendence degree at most \(1\) over \(R\); note that \(R\) is Dedekind complete in \(S\) because \(R\) is Dedekind complete in \(M\) (and in \(N\)). For a contradiction, suppose \(S\) has transcendence degree \(2\) over \(R\). We furnish \(M\) with the valuation whose valuation ring is the convex hull of \(R\) in \(M\). Real closures are now taken in \(M\) throughout and this is indicated by the superscript \({}^{\mathrm{rcl}}\). Take \(\bar{a}=(a_{\alpha_{1}},\ldots,a_{\alpha_{n}})\), \(\alpha_{1}<\ldots<\alpha_{n}\) such that \(S\subseteq R(\bar{a})^{\mathrm{rcl}}\). Then by choice of the \(a_{\alpha}\) the chain \(R\subseteq R(a_{\alpha_{1}})^{\mathrm{rcl}}\subseteq\ldots\subseteq R(a_{ \alpha_{1}},\ldots,a_{\alpha_{n}})^{\mathrm{rcl}}\) witnesses that the value group of \(R(a_{\alpha_{1}},\ldots,a_{\alpha_{n}})^{\mathrm{rcl}}\) has height \(n\), where height stands for the number of convex subgroups of the value group. Since \(\mathrm{tr.\,deg}(S/R)=2\), there are \(n-2\) elements \(b_{1},\ldots,b_{n-2}\) from \(\{a_{\alpha_{1}},\ldots,a_{\alpha_{n}}\}\) that are algebraically independent over \(S\). Since \(S\) can be embedded into \(R((t^{\mathrm{R}}))\) we know that \(S\) has height \(1\): Crucially we use here that any such embedding preserves the valuations because the natural valuation on \(R((t^{\mathrm{R}}))\) again has the convex hull of \(R\) in \(R((t^{\mathrm{R}}))\) as its valuation ring. But now the chain \(R\subseteq S\subseteq S(b_{1})^{\mathrm{rcl}}\subseteq\ldots\subseteq S(b_{1 },\ldots,b_{n-2})^{\mathrm{rcl}}=R(a_{\alpha_{1}},\ldots,a_{\alpha_{n}})^{ \mathrm{rcl}}\) witnesses that the value group of \(R(a_{\alpha_{1}},\ldots,a_{\alpha_{n}})^{\mathrm{rcl}}\) has height at most \(n-1\), which gives the desired contradiction. **4.7 Theorem**.: _Let \(K\) be a differential and formally real field. If \(K\) has a prime model extension \(\hat{K}\) for \(\mathrm{CODF}\), then \(\hat{K}\) is algebraic over \(K\)._ Proof.: Suppose there is a prime model extension \(\hat{K}\) of \(K\) for \(\mathrm{CODF}\) but \(\hat{K}\) is not algebraic over \(K\). Let \(R\) be the algebraic closure of \(K\) in \(\hat{K}\). Then \(R\) is a differential subfield of \(\hat{K}\) and \(\hat{K}\) is also a prime model of \(R\) for \(\mathrm{CODF}\): If \(R\subseteq M\models\mathrm{CODF}\), then any \(K\)-embedding \(\hat{K}\longrightarrow M\) must be the identity on \(R\). Hence we may assume that \(K\) is real closed all along. Choose real closed fields \(M,N\) for \(K\) as in 4.6. By 4.3 there are extensions of the derivation of \(K\) to \(M,N\) respectively such that \(M,N\) furnished with these extensions are \(\mathrm{CODF}\)s. 
Since \(\hat{K}\) can be embedded into \(M\) and into \(N\) by assumption, 4.6 implies that \(\hat{K}\) must be of transcendence degree \(\leq 1\) over \(K\) and \(K\) is Dedekind complete in \(\hat{K}\). As \(K\neq\hat{K}\), we know that \(\mathrm{tr.\,deg}(\hat{K}/K)=1\). Since \(\hat{K}\) is a \(\mathrm{CODF}\) it follows that \(\hat{K}\) has a positive infinitesimal element \(t\) with respect to \(K\) such that \(t^{\prime}=1\) (in particular \(t\notin K\)). Then \(\hat{K}\) is a differential subfield of \(K((t^{\mathrm{Q}}))\) (endowed with the derivation extending the one on \(K\) and satisfying \(t^{\prime}=1\)). By [11, end of 5.3] we know that \(t^{-1}\) has no integral in \(K((t^{\mathrm{Q}}))\). This contradicts the fact that \(t^{-1}\) has an integral in the \(\mathrm{CODF}\)\(\hat{K}\). _4.8 Remark_.: The proofs of 4.6 and 4.7 can be adapted to get the analogous statements about differential and formally p-adic fields and the class of p-adically closed differentially large fields. One possible task for future work is to extend 4.7 (or rather 4.6) to topological differential fields in the sense of [10]. We do not know if there is a version of 4.7 outside of that context. For example, if \(K\) is a subfield of a pseudo-finite field and \(d\) is a derivation of \(K\), it is unclear whether there is a prime model over \((K,d)\) in the class of differentially large and pseudo-finite fields (all of characteristic zero).
2303.11751
Generative AI for Cyber Threat-Hunting in 6G-enabled IoT Networks
The next generation of cellular technology, 6G, is being developed to enable a wide range of new applications and services for the Internet of Things (IoT). One of 6G's main advantages for IoT applications is its ability to support much higher data rates and bandwidth as well as to support ultra-low latency. However, with this increased connectivity will come an increased risk of cyber threats, as attackers will be able to exploit the large network of connected devices. Generative Artificial Intelligence (AI) can be used to detect and prevent cyber attacks by continuously learning and adapting to new threats and vulnerabilities. In this paper, we discuss the use of generative AI for cyber threat-hunting (CTH) in 6G-enabled IoT networks. Then, we propose a new generative adversarial network (GAN) and Transformer-based model for CTH in 6G-enabled IoT Networks. The experimental analysis results with a new cyber security dataset demonstrate that the Transformer-based security model for CTH can detect IoT attacks with a high overall accuracy of 95%. We examine the challenges and opportunities and conclude by highlighting the potential of generative AI in enhancing the security of 6G-enabled IoT networks and call for further research to be conducted in this area.
Mohamed Amine Ferrag, Merouane Debbah, Muna Al-Hawawreh
2023-03-21T11:17:41Z
http://arxiv.org/abs/2303.11751v1
# Generative AI for Cyber Threat-Hunting in 6G-enabled IoT Networks ###### Abstract The next generation of cellular technology, 6G, is being developed to enable a wide range of new applications and services for the Internet of Things (IoT). One of 6G's main advantages for IoT applications is its ability to support much higher data rates and bandwidth as well as to support ultra-low latency. However, with this increased connectivity will come an increased risk of cyber threats, as attackers will be able to exploit the large network of connected devices. Generative Artificial Intelligence (AI) can be used to detect and prevent cyber attacks by continuously learning and adapting to new threats and vulnerabilities. In this paper, we discuss the use of generative AI for cyber threat-hunting (CTH) in 6G-enabled IoT networks. Then, we propose a new generative adversarial network (GAN) and Transformer-based model for CTH in 6G-enabled IoT Networks. The experimental analysis results with a new cyber security dataset demonstrate that the Transformer-based security model for CTH can detect IoT attacks with a high overall accuracy of 95%. We examine the challenges and opportunities and conclude by highlighting the potential of generative AI in enhancing the security of 6G-enabled IoT networks and call for further research to be conducted in this area. Generative AI, Security, GPT, GAN, IoT, 6G. ## I Introduction The Internet of Things (IoT) has revolutionized how people interact with the environment around them. With the emergence of 6G technology, the IoT is expected to reach new levels of connectivity and intelligence [1, 2]. As shown in Fig. 1, the 6G-enabled IoT network comprises four tiers: the perception tier, the network tier, the edge tier, and the cloud tier. The perception layer is the first layer of the 6G-enabled IoT network. This layer is in charge of sensing and collecting data from the physical world. Various sensors, such as temperature sensors, microphones, and cameras, are embedded at this layer in devices such as smartphones, smart home appliances, and industrial equipment. The network layer is the second layer of the 6G-enabled IoT network. It is responsible for connecting all the devices in the network and allowing data transfer between them. The network layer is composed of various networking technologies, such as Wi-Fi, Bluetooth, and 6G, which enable the devices to communicate with each other and with the edge layer. The edge layer is the third layer of the 6G-enabled IoT network. It is responsible for processing and analyzing data at the edge of the network, rather than in the cloud. The edge layer is composed of edge devices, such as routers, gateways, and servers, which are equipped with powerful processors and memory. The cloud layer is responsible for the storage, management, and analysis of data collected by the perception layer. This layer consists of cloud servers, located in data centers and accessible via the Internet. Generative AI refers to a class of artificial intelligence that can generate new material, such as music, images, or text [3]. These systems are constructed to learn the characteristics and features of a specific dataset and then use that knowledge to generate new, original content that follows the same pattern. Generative AI has a variety of uses, including data synthesis, algorithm invention, data augmentation, and anomaly detection.
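As a minimal illustration of how such a generative system can learn the characteristics of a dataset and then synthesize new records in the same pattern, the following PyTorch sketch trains a tiny GAN on numeric IoT telemetry. It is purely illustrative: the feature dimension, layer sizes, and the stand-in data batch are assumptions, and this is not the model proposed later in this paper.

```python
# Minimal GAN sketch: learn the distribution of numeric IoT telemetry records
# and synthesize new ones (illustrative only; all sizes are placeholder choices).
import torch
import torch.nn as nn

N_FEATURES, NOISE_DIM = 16, 32                       # assumed dimensions

generator = nn.Sequential(                           # noise -> synthetic record
    nn.Linear(NOISE_DIM, 64), nn.ReLU(),
    nn.Linear(64, N_FEATURES),
)
discriminator = nn.Sequential(                       # record -> real/fake logit
    nn.Linear(N_FEATURES, 64), nn.LeakyReLU(0.2),
    nn.Linear(64, 1),
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

real_batch = torch.randn(128, N_FEATURES)            # stand-in for real telemetry

for step in range(200):
    # Discriminator update: separate real records from generated ones.
    fake_batch = generator(torch.randn(128, NOISE_DIM)).detach()
    d_loss = bce(discriminator(real_batch), torch.ones(128, 1)) + \
             bce(discriminator(fake_batch), torch.zeros(128, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator update: produce records the discriminator labels as real.
    g_loss = bce(discriminator(generator(torch.randn(128, NOISE_DIM))),
                 torch.ones(128, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

synthetic_records = generator(torch.randn(10, NOISE_DIM))  # data synthesis/augmentation
```

In practice the random `real_batch` would be replaced by batches drawn from actual device telemetry, and the trained generator could then serve the data-synthesis and data-augmentation uses listed above.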
Table I presents the comparison between the Generative AI model and the Traditional AI. There are, at the same time, limitations on the use of generative AI for cyber threat hunting. One challenge is that such systems depend on the completeness and quality of the data on which they're trained. If the data being trained is incomplete or biased, the performance of the system can be damaged. In addition, generative AI systems can create false positives, which involves the identification of a threat that could be occurring when no threat really exists. This can result in useless testing and resource consumption Cyber threat hunting is the process of searching for proactive signs of malicious activity within an organization's networks and systems. This is key to finding and minimizing the threats before they cause serious damage. Over the past few years, however, an increasing interest has emerged in the application of generative artificial intelligence (AI) to cyber threat hunting. There are many different ways to use generative AI for cyber threat hunting, and one of them is to analyze large quantities of data to find and identify patterns and anomalies that could provide an indication of the presence of a threat. One method of using generative AI for cyber threat hunting is the report and alert generation. The traditional cyber threat hunt can often include reviewing massive quantities of data manually and then generating alerts or reports depending on the findings. This is time-consuming and can lead to errors. Generative AI systems, on the other hand, can perform real-time data analysis and produce reports or alerts directly depending on the findings. This can contribute to a significant acceleration of the threat detection process and decrease the number of missed security threats. Using generative AI for cyber threat hunting is one technique that is adopted for analyzing large amounts of data in order to find any abnormal activity patterns and anomalies that might reveal the existence of a malicious threat. A generative AI system, for example, might be trained on a network traffic log dataset and then be employed to recognize abnormal patterns of activity that may be indicative of a threat adversary. Motivated by the facts mentioned above, in this article, we review the use of generative AI for cyber threat-hunting in 6G-enabled IoT networks. Specifically, we discuss the Generative AI use cases for IoT applications as well as evaluate three generative AI models for cyber security, including, the GAN-based method, GPT-based method, and BERT-based method. Therefore, we propose a new GAN and Transformer-based model for Cyber Threat-Hunting in 6G-enabled IoT Networks. The experimental analysis results with a new cyber security dataset demonstrate that the Transformer-based security model for cyber threat-hunting can detect IoT attacks with a high overall accuracy of 95%. In addition, we provide several challenges regarding the use of generative AI for cyber threat-hunting in 6G-enabled IoT networks, including, scalability issues, decentralized training issues, data quality issues, energy challenges, privacy-preserving challenges, and tokenization challenges. This article is structured as follows: In section II, the generative AI use cases for IoT applications are presented. Section III outlines the methodology of generative AI for Cyber Security. The proposed GAN and Transformer-based model is detailed in section IV, while the results of our experiments are provided in section V. 
Section VI covers the open challenges concerning the use of generative AI for cyber threat-hunting in 6G-enabled IoT networks, and finally, section VII concludes the article. Fig. 1: A 6G-enabled IoT Network. ## II Generative AI Use Cases As GPT and GAN are two completely different models with different strengths and weaknesses, they can be combined to form a more robust system. For example, GPT can be used to generate text-based synthetic data, which can then be passed through a GAN to generate realistic images. This combination can be employed to generate more synthetic data for computer vision models, audio models, security models, and text models, which can help improve their robustness and accuracy. In this section, we discuss the Generative AI use cases for IoT applications, including visual IoT applications, audio IoT applications, text-based IoT applications, code-based IoT applications, and IoT security. ### _Visual IoT Applications_ Visual applications can be employed in various types of IoT contexts, from surveillance and real-time monitoring to diagnostics and remote maintenance. For example, video cameras and other sensors can be embedded in IoT systems to facilitate the monitoring of equipment or installations remotely, enabling users to identify and solve problems quickly before they progress to serious issues. One of the most well-known applications of generative AI is the use of generative adversarial networks (GANs) to generate realistic images. A GAN comprises a pair of neural networks: one is a generator, which learns to produce novel images, and the other is a discriminator, which learns to distinguish genuine from fabricated images. Both networks are trained jointly in a zero-sum game, where the generator tries to generate images that are indistinguishable from the real ones and the discriminator tries to accurately classify the images as real or fake. Consider a visual IoT system used to monitor the status of crop health in an agricultural field [13]. Such a system could employ cameras and sensors to capture data on the crops' development and progress. Through the use of generative AI, it is then possible to produce pictures or videos that demonstrate how crops are expected to develop over time, based on the data that has been collected. This could help farmers to better understand how to properly manage their crops and ensure that they are making the best use of their resources. As such, Generative AI will play a key role in the future development of visual IoT applications. ### _Audio IoT Applications_ An example of a voice-based device in the IoT is the Amazon Echo, which employs the Alexa voice assistant to enable consumers to monitor and control smart home equipment and access content via voice prompts. The device can be used to power lamps on and off, regulate room thermostats, play music, and perform many other tasks. Additional examples of voice-activated devices in the IoT include the Apple HomePod and Google Home. The use of voice-activated devices is one way that audio applications can be implemented in the IoT. Beyond this, Generative AI has the power to transform the way we interact with audio applications in 6G-enabled IoT networks. By employing machine learning algorithms to generate new audio content, generative AI can enable a vast number of applications that were not previously possible.
One potential application of generative AI in audio is the construction of personal audio environments for virtual and augmented reality (VR/AR) experiences. By processing the preferences of a user and the context of the VR/AR environment, a generative AI system can generate a specific audio experience that is designed to immerse the user in the virtual environment. While this could be particularly beneficial for entertainment and gaming apps, it could also be implemented in more practical environments, such as training simulations. In addition, Generative AI could also be used to enhance the availability of audio content for people with hearing difficulties. By generating descriptions of visual content, such as films or television programs, generative AI could allow people with hearing loss to access and enjoy audio-visual media that was previously inaccessible. TABLE II: Review of recent works on Generative AI for Cyber Threat-Hunting. ### _Text-based IoT Applications_ A specific application of Natural Language Processing (NLP) in the IoT is the development of automated intelligent assistants and chatbots. These systems use NLP-based algorithms to interpret and process user queries, enabling people to interact with machines and systems using natural language. To illustrate, a user may ask a smart assistant to activate the lights or regulate the temperature in their residence, and the assistant will employ NLP to interpret and process the query. A popular implementation of generative AI involves the deployment of language models to generate text in natural language. These models are trained on large text datasets and can consistently produce sentences and paragraphs that reflect the style and structure of the language. Such models can be employed for various applications, such as dialogue systems, text summarization, and machine translation. Generative AI also has potential uses in data augmentation, where it can be employed to produce more training examples for machine learning models. This may be especially relevant in situations where the quantity of available training data is small, as it can help to enhance the model's performance. However, a major concern with generative AI is the possibility that it can be exploited to generate malicious or fraudulent material, such as deepfake videos or fake news articles.
These codes can be employed to build applications, which can be downloaded to various devices and used to monitor and operate various components of the IoT. There are various approaches to employing generative AI for code generation. There is one approach that involves the use of the machine learning model to generate code based on a set of required input parameters and preferences. Specifically, for example, a user could provide the intended function of a code fragment and the machine learning model will generate the code that will accomplish the desired function. This might be especially valuable in situations where the intended function is complicated and takes time for a human developer to code manually. However, there are also potential limitations and challenges when using generative AI for code generation. One challenge is that the training data for the machine learning model needs to be high quality in order to have the model generate reliable code. An additional challenge is that generated code is not always able to be interpreted by humans, which may make it challenging for the developers to understand and debug. ### _IoT Security_ GPT and GAN are two popular machine learning models that have been widely used in various applications such as natural language processing (NLP) and image generation. Although both models are generative, they have different functions and applications in the security field. With the increase of connected devices, security has now become a major issue. These devices can be susceptible to attackers, who can use them to gain access to sensitive data or to monitor the devices themselves. One potential use of generative AI in IoT security is in detecting and preventing cyberattacks. Most cyberattacks are actually automated and employ pre-determined attack patterns to attempt to penetrate systems. Generative AI can be employed to recognize and prevent these patterns, essentially stopping the attack before it causes any serious damage. Generative AI can also be used to monitor devices continuously for abnormal patterns or activities, allowing for quick recognition and resolving potential security risks. Another potential use of generative AI in IoT security is the creation of secure communication protocols. Multiple IoT devices depend on wireless communication to communicate with the Internet and with each other. These wireless communications, however, can be easily captured and compromised by attackers. Generative AI can be employed to build secure communication protocols that encrypt data transmitted between devices, which makes it significantly more difficult for attackers to gain access to critical data. ## III Generative AI for Cyber Security Table II reviews the recent works on Generative AI for Cyber Threat-Hunting. ### _Generative AI-based method_ #### Iii-A1 GAN-based method Cui et al. [7] introduced DP-GAN, a modified version of the GAN model that enables decentralized and asynchronous Federated Learning (FL) while preserving differential privacy. The system operates by concurrently running two games: one between the generator and discriminator of a traditional GAN, and the other between the discriminator and a new component named DP identifier (DPI). Additionally, the framework incorporates blockchain technology to enhance system reliability and establish a decentralized IoT anomaly detection system. On the other hand, Block Hunter, developed by Yazdinejad et al. 
[14], utilizes a cluster-oriented design to identify irregularities in smart factories based on blockchain technology. By employing the cluster-based approach, the detection process becomes more effective, as it lowers the amount of data transmitted while also increasing throughput in IIoT networks. Block Hunter also examines various anomaly detection algorithms, including clustering-based, statistical, subspace-based, classifier-based, and tree-based techniques. The proposed approach is evaluated with respect to block generation, block size, and number of miners, and the assessment includes metrics such as accuracy, precision, recall, F1-score, and True Positive Rate (TPR). The security challenges of the Internet of Things (IoT) are addressed by Habibi et al. in their study [12], with a specific focus on the problem of IoT botnets and their impact on the reliability of IoT systems. The authors argue that current botnet detection methods are flawed because they rely on untrustworthy or unlabeled datasets, leading to a decrease in the performance of security tools. To address this, the study proposes the use of a Generative Adversarial Network model for tabular data modeling and generation. Results indicate that with data augmentation through the proposed generative AI algorithm, a Multilayer Perceptron (MLP) achieves high accuracy and F1-score, as well as high sensitivity and specificity. Similarly, the utilization of GANs in trust management for dependable and immediate communication in 6G wireless networks is investigated by Yang et al. [9]. The study introduces a novel intelligent trust management framework that blends fuzzy logic theory and adversarial learning. Moreover, a trust decision-making model based on GAN is proposed to appraise the credibility of network devices and enhance the robustness of trust management. Tabassum et al. [10] introduced FEDGAN-IDS, a Federated Deep Learning Intrusion Detection System that utilizes the GAN architecture to identify cyber threats in smart IoT systems. The purpose of FEDGAN-IDS is to resolve the disparity in data distribution for rare classes by using GANs to augment local data, improving privacy and performance in distributed training. The authors assessed the effectiveness of FEDGAN-IDS by comparing it to other federated intrusion detection models and conducting experiments on multiple datasets. The outcomes indicate that the proposed model outperforms the latest standalone IDS and reaches convergence more quickly. Zhang et al. [4] conducted a study on the vulnerabilities of federated learning in edge computing for IoT applications.
Federated learning is a technique that allows machine learning models to be trained locally instead of centrally in order to reduce privacy concerns. However, the study revealed that this method is vulnerable to poisoning attacks, where an attacker introduces malicious data to corrupt the global model. To tackle this issue, the study suggested two approaches, namely Data_Gen and PoisonGAN. Data_Gen is a technique that utilizes a GAN to generate poison data based on the global model parameters. PoisonGAN, on the other hand, is a new poisoning attack model that leverages Data_Gen to reduce attack assumptions and make the attacks more feasible. The effectiveness of these attack models was tested using two common poisoning attack strategies, label flipping and backdoor, on a federated learning prototype. The findings demonstrated that these attack models were successful against federated learning. #### Iii-A2 GPT-based method Ranade et al. [8] investigate the potential harm of incorporating false Cyber Threat Intelligence (CTI) into cyber-defense systems, which rely on semi-structured data and text to create knowledge graphs. They employ a fine-tuned GPT-2, a type of Transformer, to produce realistic CTI text descriptions and fool the cyber-defense systems. To demonstrate the destructive consequences of this attack, the authors launch a data poisoning assault on a Cybersecurity Knowledge Graph (CKG) and a cybersecurity corpus. The study also involves feedback from cybersecurity professionals and threat hunters, who acknowledge the produced fake CTI as authentic. #### Iii-A3 BERT-based method Ranade et al. [5] introduced CyBERT, a specialized version of BERT that has been adapted to the field of cybersecurity. This model utilizes a large corpus of cybersecurity data to improve its performance in processing detailed information related to threats, attacks, and vulnerabilities. The key contribution of this work is the development of a fine-tuned BERT model that can accurately and efficiently complete a range of cybersecurity-specific tasks. The model was trained on open-source, unstructured, and semi-structured Cyber Threat Intelligence (CTI) data using Masked Language Modeling (MLM) and was evaluated on various downstream tasks that have potential applications in Security Operations Centers (SOCs). Additionally, the paper presents examples of how CyBERT can be used in real-world cybersecurity tasks. Jo et al. [11] introduced Vulcan, a CTI system that extracts relevant information from unstructured text and establishes semantic connections. The system employs neural language model-based named entity recognition and relation extraction techniques. The researchers conducted experiments and found that Vulcan achieves a high level of accuracy, with an F-score of 0.972 for named entity recognition and 0.985 for relation extraction. Furthermore, the system provides a platform for security professionals to create applications for threat analysis, and the study includes two examples of such applications - identifying the evolution of threats and profiling threats - which can save time and resources in analyzing cyber threats and provide in-depth information about the threats. ### _Comparison between Generative AI-based models_ Table III presents a comparison between Generative AI models, namely GAN, GPT, and BERT. \begin{table} \begin{tabular}{|l|l|l|l|} \hline **Metric** & **GAN** & **GPT** & **BERT** \\ \hline \hline Model type & Generative & Transformer & Transformer \\ \hline Anomaly detection & Yes & Yes & Yes \\ \hline Authentication & No & No & No \\ \hline Encryption and decryption & No & No & No \\ \hline Network intrusion detection & Yes & Yes & Yes \\ \hline Access control & Yes & Yes & Yes \\ \hline Phishing detection & Yes & Yes & Yes \\ \hline Spam detection & Yes & Yes & Yes \\ \hline Malware detection & Yes & Yes & Yes \\ \hline AI attack detection & Partial & No & No \\ \hline Adversarial training & Yes & No & No \\ \hline Robustness to adversarial examples & Yes & No & No \\ \hline Cyber Threat Intelligence & No & Yes & Yes \\ \hline Vulnerability analysis & No & Yes & Yes \\ \hline \end{tabular} \end{table} TABLE III: Comparison between Generative AI models. In order to secure IoT applications, GANs have a number of benefits over GPT and BERT.
A key advantage of GANs is their ability to generate new data similar to the training data. This makes them suitable for applications such as phishing detection, spam detection, malware detection, network intrusion detection, and anomaly detection. GANs are also resilient to adversarial examples, making them suitable for attack detection and defense. GPT and BERT, however, are not as robust to adversarial examples as GANs. They are valuable for other tasks such as natural language understanding and text generation. Currently, proposing a cyber security system that relies solely on Transformer-based models for securing IoT applications remains challenging. For the other cybersecurity metrics listed in the table, GAN, GPT, and BERT lack access control, encryption and decryption capabilities, and authentication. From this analysis, we explore and propose the combination of GAN and Transformer models for Cyber Threat-Hunting in 6G-enabled IoT Networks. ## IV The proposed GAN and Transformer-based model The proposed GAN and Transformer-based model for Cyber Threat-Hunting in 6G-enabled IoT Networks is presented in Figure 2. Fig. 2: The proposed GAN and Transformer-based model for Cyber Threat-Hunting in 6G-enabled IoT Networks. ### _Generative Adversarial Networks_ Generative Adversarial Networks (GANs) are a type of deep learning algorithm that combines a generative and a discriminative model to generate new data similar to an existing dataset. The fundamental structure of a GAN comprises two neural networks: a generator and a discriminator. The generator is responsible for creating new data, while the discriminator is tasked with evaluating the authenticity of the generated data. The objective of the generator is to minimize its loss function by generating samples that the discriminator classifies as genuine, while the discriminator aims to correctly identify as many true data samples as possible and to flag as many generated samples as fake. The GAN algorithm with an IoT cybersecurity dataset can be organized into the following steps [15] (a code-level sketch is given after the list): * Step 1: Initialize the generator and discriminator networks: \[G=G(z),D=D(x)\] (1) * Step 2: Generate a random noise vector \(z\) from a noise distribution. * Step 3: Pass the noise vector through the generator network to generate a new data point \(x^{\prime}\). * Step 4: Pass the generated data point \(x^{\prime}\) and an actual data point \(x\) from the IoT cybersecurity dataset through the discriminator network. * Step 5: Calculate the loss functions of the generator and discriminator networks. The generator's loss is the negative logarithm of the discriminator's output on the generated sample \(G(z)\), while the discriminator's loss is the negative logarithm of its output on the real sample \(x\) plus the negative logarithm of one minus its output on the generated sample \(G(z)\): \[L(D)=-(log(D(x))+log(1-D(G(z))))\] (2) \[L(G)=-log(D(G(z)))\] (3) * Step 6: Update the weights of the generator and discriminator networks via backpropagation and an optimization technique such as gradient descent or the Adam optimizer. * Step 7: Repeat steps 2-6 for a fixed number of iterations or until the loss functions reach a suitable level. * Step 8: Employ the trained generator network to produce additional data points that resemble the existing IoT cybersecurity dataset.
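The eight steps above translate almost line by line into a short training loop. The sketch below is a minimal illustration in TensorFlow/Keras for tabular IoT feature vectors; the layer sizes, learning rates, and the use of 95-dimensional feature vectors are assumptions made here for concreteness, not the exact configuration of the proposed system.

```python
import tensorflow as tf
from tensorflow.keras import layers

latent_dim, feat_dim = 32, 95  # illustrative sizes; 95 matches the tabular feature count used later

# Step 1: initialize the generator G(z) and the discriminator D(x)
generator = tf.keras.Sequential([
    layers.Dense(64, activation="relu", input_shape=(latent_dim,)),
    layers.Dense(feat_dim),                    # synthetic IoT feature vector x'
])
discriminator = tf.keras.Sequential([
    layers.Dense(64, activation="relu", input_shape=(feat_dim,)),
    layers.Dense(1, activation="sigmoid"),     # estimated probability that the input is real
])

g_opt = tf.keras.optimizers.Adam(1e-4)
d_opt = tf.keras.optimizers.Adam(1e-4)
bce = tf.keras.losses.BinaryCrossentropy()

@tf.function
def train_step(real_x):
    z = tf.random.normal((tf.shape(real_x)[0], latent_dim))    # Step 2: sample a noise vector z
    with tf.GradientTape() as d_tape, tf.GradientTape() as g_tape:
        fake_x = generator(z, training=True)                   # Step 3: generate x'
        d_real = discriminator(real_x, training=True)          # Step 4: score real and generated points
        d_fake = discriminator(fake_x, training=True)
        # Step 5: L(D) = -(log D(x) + log(1 - D(G(z)))) and L(G) = -log D(G(z)), as in Eqs. (2)-(3)
        d_loss = bce(tf.ones_like(d_real), d_real) + bce(tf.zeros_like(d_fake), d_fake)
        g_loss = bce(tf.ones_like(d_fake), d_fake)
    # Step 6: update both networks with the Adam optimizer
    d_opt.apply_gradients(zip(d_tape.gradient(d_loss, discriminator.trainable_variables),
                              discriminator.trainable_variables))
    g_opt.apply_gradients(zip(g_tape.gradient(g_loss, generator.trainable_variables),
                              generator.trainable_variables))
    return d_loss, g_loss

# Steps 7-8: call train_step on mini-batches of real IoT records for a fixed number of iterations,
# then sample generator(tf.random.normal((n, latent_dim))) to augment the dataset.
```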
The overall objective of the GAN is thus to make the generated IoT data as close as possible to the distribution of the real IoT data, while the discriminator is simultaneously trained to separate generated samples from actual IoT samples. ### _Transformer model_ Generative Pre-training Transformer (GPT) is a type of Transformer-based neural network language model that is trained on a large dataset of text. It can be used, for example, for vulnerability analysis of IoT text data. The algorithm used in GPT for vulnerability analysis of IoT text data employs the Transformer architecture, which was introduced in a 2017 paper by Google researchers titled "Attention Is All You Need" [16]. The Transformer architecture is a type of neural network that uses self-attention to process sequential IoT data. Specifically, we consider a set of IoT data \(X=x_{1},x_{2},...,x_{n}\) as input and a set of labels \(Y=y_{1},y_{2},...,y_{n}\) as output, indicating whether each IoT data record corresponds to an attack or not. The GPT algorithm with an IoT cybersecurity dataset can be organized into the following steps: * Step 1: Data preprocessing. This involves the elimination of any unnecessary text or symbols from the IoT data. Then, tokenize the IoT data into individual words and eliminate the stop words (e.g., "is", "and", "the"). After that, text normalization techniques (e.g., lemmatization/stemming) are applied for further processing. * Step 2: Feature extraction. This step involves extracting features from the preprocessed dataset and representing each IoT data record as a feature vector, \(X=x_{1},x_{2},...,x_{n}\). * Step 3: Fine-tune the GPT model. This step consists of using the training set to fine-tune the GPT model, so that the model learns the patterns and features of the IoT data. * Step 4: Model testing. Once the model is fine-tuned, use the test set to evaluate its performance and classify new IoT data as an attack or not. The model predicts the label for each IoT data record, \(Y=y_{1},y_{2},...,y_{n}\). * Step 5: Evaluation. The model's performance is assessed through metrics such as accuracy, precision, recall, and F1 score. Subsequently, the model's settings are adjusted, and the feature extraction process is modified as necessary to enhance its performance. The architecture of the proposed Transformer model is presented in Figure 3. Fig. 3: The proposed Transformer model architecture. The architecture consists of two main components: the Transformer Encoder and the Transformer Model. The Transformer Encoder is a module that implements the attention mechanism and feed-forward neural network of a Transformer. It has four main sub-modules: a Layer Norm module for normalizing the input, a Multi-head Attention module for performing self-attention, a Dropout module for regularization, and a Conv1d module for implementing the feed-forward neural network. The architecture uses 95 as the feature dimension and 15 as the number of classes. The Transformer Encoder module is parameterized with head_size, num_heads, filters, and dropout, while the full Transformer Model is parameterized with head_size, num_heads, filters, num_Transformer_blocks, and dropout (a code-level sketch of this block is given after the equations below). The attention mechanism adopted by the proposed Transformer model weighs the importance of different elements in the input IoT data, and is defined as: \[Att(Q,K,V)=softmax(\frac{QK^{T}}{\sqrt{d_{k}}})V \tag{4}\] The equation above represents the Attention function, which takes in the query matrix (Q), key matrix (K), and value matrix (V).
The output of the function is the dot product of the softmax function of the quotient of the dot product of Q and the transpose of K, divided by the square root of the dimension of the keys \((d_{k})\), and the value matrix (V). By calculating the dot product between the query and key matrices, the attention mechanism obtains the attention weights, which are then subjected to the softmax function. To arrive at the weighted sum of the value matrix, the attention weights are employed. The proposed Transformer model architecture also uses multi-head attention, which is defined as: \[MultiHead(Q,K,V)=Concat(h_{1},...,h_{h})W^{O} \tag{5}\] Where \(h_{i}\) is computed as: \[h_{i}=Att(QW_{i}^{Q},KW_{i}^{K},VW_{i}^{V}) \tag{6}\] The formula computes the attention score for the i-th element in a set of query vectors, using the corresponding key and value vectors. The attention score is calculated by feeding the query vector through a linear transformation specified by the matrices Q and K, followed by a softmax operation over the dot products between the transformed query and key vectors. The resulting attention weights are then used to compute a weighted sum of the value vectors, yielding the output vector \(h_{i}\). The position-wise feed-forward network is defined as: \[FFN(x)=max(0,xW_{1}+b_{1})W_{2}+b_{2} \tag{7}\] Where \(x\) is the input, \(W_{1}\) and \(W_{2}\) are weight matrices, and \(b_{1}\) and \(b_{2}\) are biases. The proposed Transformer includes a residual connection and layer normalization. The residual connection is defined as the sum of the input x and the output of the sub-layer. Layer normalization is defined by subtracting the mean \(\mu\) from the input x and dividing by the standard deviation \(\sigma\). The residual connection is defined as: \[residual(x)=x+Sublayer(x) \tag{8}\] Where \(x\) is the input and \(Sublayer\) is the sub-layer of the Transformer. The layer normalization is defined as: \[LayerNorm(x)=\frac{x-\mu}{\sigma} \tag{9}\] ## V Experimental Evaluation The experimental evaluation of generative AI for cyber threat-hunting in 6G-enabled IoT networks is a critical step in understanding the capabilities and limitations of this technology. In this section, we will evaluate the performance of the proposed GAN and Transformer-based model for Cyber Threat-Hunting. ### _Experimental setup and pre-processing of the Dataset_ The Edge-IIoT dataset, sourced from [17]1, is our reference in this study. It is composed of 15 classes, comprising 1 Normal class and 14 attack classes. The dataset was generated from a testbed intended for IoT and IIoT applications, covering diverse devices, sensors, protocols, cloud and edge configurations. The data was sourced from over 10 types of IoT devices, including those for flame detection, heart rate monitoring, soil moisture measurement, pH level tracking, water level detection, ultrasonic detection, and environmental measurement sensors. The dataset further provides 14 types of attacks against IoT and IIoT connectivity protocols, categorized into five categories, namely Denial of Service/Distributed Denial of Service, information gathering, man-in-the-middle attacks, injection attacks, and malware attacks. The dataset also presents 61 diverse features acquired from different sources such as network traffic, logs, system resources, and alerts. 
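Before turning to the preprocessing and the evaluation metrics, the following sketch makes the encoder block defined by Eqs. (4)-(9) concrete. It is a minimal TensorFlow/Keras illustration of the structure described in Section IV (Layer Norm, Multi-head Attention, Dropout, a Conv1d feed-forward network, and residual connections); the embedding width, number of blocks, and the treatment of each of the 95 tabular features as one input token are assumptions made here for illustration, not the tuned configuration of the proposed model.

```python
from tensorflow import keras
from tensorflow.keras import layers

def transformer_encoder(inputs, head_size, num_heads, filters, dropout=0.1):
    # Layer Norm -> Multi-head Attention -> Dropout, with a residual connection (Eq. 8)
    x = layers.LayerNormalization(epsilon=1e-6)(inputs)
    x = layers.MultiHeadAttention(key_dim=head_size, num_heads=num_heads, dropout=dropout)(x, x)
    x = layers.Dropout(dropout)(x)
    res = x + inputs
    # Position-wise feed-forward network built from Conv1d layers (Eq. 7), second residual connection
    x = layers.LayerNormalization(epsilon=1e-6)(res)
    x = layers.Conv1D(filters=filters, kernel_size=1, activation="relu")(x)
    x = layers.Dropout(dropout)(x)
    x = layers.Conv1D(filters=inputs.shape[-1], kernel_size=1)(x)
    return x + res

def build_model(n_features=95, n_classes=15, d_model=32, head_size=32,
                num_heads=4, filters=64, num_transformer_blocks=2, dropout=0.1):
    inputs = keras.Input(shape=(n_features,))
    # Treat each tabular feature as one token and embed it into d_model channels (illustrative choice)
    x = layers.Reshape((n_features, 1))(inputs)
    x = layers.Conv1D(filters=d_model, kernel_size=1)(x)
    for _ in range(num_transformer_blocks):
        x = transformer_encoder(x, head_size, num_heads, filters, dropout)
    x = layers.GlobalAveragePooling1D()(x)
    outputs = layers.Dense(n_classes, activation="softmax")(x)  # 15 normal/attack classes
    model = keras.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
    return model
```

A stack of such encoder blocks followed by global average pooling and a softmax layer yields the 15-way classifier whose evaluation is reported in the experiments below.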
Footnote 1: [https://www.kaggle.com/datasets/mohamedamineferrag/edgeiotset-cyber-security-dataset-of-iot](https://www.kaggle.com/datasets/mohamedamineferrag/edgeiotset-cyber-security-dataset-of-iot) The initial step is to organize and clean the data by eliminating duplicate entries and filling in any missing values. Next, we eliminate any irrelevant features and convert categorical variables using label encoding. After that, we use a standardization technique to normalize the features and divide the data into training and testing sets, with the training set utilized for model validation and the test set reserved for the final evaluation of the model. ### _Performance Metrics_ In order to evaluate the effectiveness of the proposed model based on GAN and Transformer, the following significant performance metrics are employed: * True Positive (TP) refers to the correct identification of attack samples. * False Negative (FN) pertains to the incorrect identification of attack samples. * True Negative (TN) signifies the accurate identification of benign samples. * False Positive (FP) represents the erroneous identification of benign samples. * Accuracy measures the ratio of accurately classified entries to the total number of entries, as determined by the formula: \[\frac{TP_{Attack}+TN_{Normal}}{TP_{Attack}+TN_{Normal}+FP_{Normal}+FN_{Attack}}\] (10) * Precision indicates the ratio of correctly classified attack samples to the total number of predicted attack samples, which is determined by the equation: \[\frac{TP_{Attack}}{TP_{Attack}+FP_{Normal}}\] (11) * Recall reflects the proportion of accurately identified attack samples to the total number of actual attack samples, as given by the formula: \[\frac{TP_{Attack}}{TP_{Attack}+FN_{Attack}} \tag{12}\] * F1-Score represents the harmonic mean of Precision and Recall, as computed by the formula: \[2\cdot\frac{Precision\cdot Recall}{Precision+Recall}\] (13) Fig. 4: Confusion Matrix for multi-classification. Fig. 5: Accuracy and loss for multi-classification. \begin{table} \begin{tabular}{|c|c|c|c|c|} \hline **Class** & **Precision** & **Recall** & **F1-Score** & **Support** \\ \hline \hline Normal & 1.00 & 1.00 & 1.00 & 323129 \\ \hline Backdoor & 0.95 & 0.94 & 0.95 & 4972 \\ \hline Vulnerability scanner & 0.94 & 0.85 & 0.89 & 10022 \\ \hline DDoS\_ICMP & 1.00 & 1.00 & 1.00 & 32387 \\ \hline Password & 0.43 & 0.80 & 0.56 & 10031 \\ \hline Port Scanning & 0.54 & 0.12 & 0.20 & 4513 \\ \hline DDoS\_UDP & 1.00 & 1.00 & 1.00 & 22007 \\ \hline Uploading & 0.61 & 0.38 & 0.47 & 7527 \\ \hline DDoS\_HTTP & 0.75 & 0.94 & 0.83 & 9982 \\ \hline SQL injection & 0.52 & 0.23 & 0.32 & 10241 \\ \hline Ransomware & 1.00 & 0.78 & 0.88 & 2185 \\ \hline DDoS\_TCP & 0.71 & 1.00 & 0.83 & 10012 \\ \hline XSS & 0.53 & 0.28 & 0.37 & 3183 \\ \hline MITM & 1.00 & 1.00 & 1.00 & 80 \\ \hline Fingerprinting & 0.65 & 0.37 & 0.47 & 200 \\ \hline \hline **accuracy** & & & **0.95** & 441371 \\ \hline **macro avg** & 0.77 & 0.71 & **0.72** & 441371 \\ \hline **weighted avg** & 0.95 & 0.95 & **0.94** & 441371 \\ \hline \end{tabular} \end{table} TABLE IV: Classification report for multi-classification. ### _Experimental Results_ Figure 5 presents the evaluation of the model's performance using loss and accuracy metrics. The training loss starts at 0.139 and decreases over time, reaching 0.111 in the last epoch. The training accuracy starts at 93.755% and increases over time, reaching 94.546% in the last epoch.
This suggests that the model is learning and improving its performance on the training dataset. The testing loss starts at 0.123 and decreases over time, reaching 0.111 in the last epoch. The testing accuracy starts at 94.093% and increases over time, reaching 94.555% in the last epoch. This suggests that the model is generalizing well and performing similarly on unseen data. Overall, the model performed well and reached a stable performance of around 95% accuracy after the 7th epoch. It is worth noting that the performance on the test dataset is similar to the performance on the training dataset, which indicates that the model is not overfitting. Table IV presents the multi-classification report of the proposed Transformer model for cyber threat-hunting in 6G-enabled IoT networks, which shows a high overall accuracy of 0.95. This indicates that the model is able to accurately identify and classify different types of cyber threats in these networks. The precision, recall, and F1-score of the model also show promising results. The precision metric calculates the percentage of accurate positive predictions out of all positive predictions made, whereas recall determines the proportion of true positive predictions out of all actual positive occurrences. The F1-score, a commonly used indicator of a model's effectiveness, is the harmonic mean of precision and recall. The model shows near-perfect performance in identifying normal instances, with precision, recall, and F1-score of 1.00. It also shows high performance in identifying DDoS_ICMP and DDoS_UDP instances, with precision, recall, and F1-score of 1.00. However, the model has lower performance in identifying other types of cyber threats, such as Password, Port_Scanning, and SQL_injection, whose precision is only 0.43, 0.54, and 0.52, respectively. The support column shows the number of instances for each class, highlighting the imbalance in the dataset. The confusion matrix for the proposed Transformer model for multi-classification is shown in Figure 4. We can observe that the normal class pattern is clearly differentiated from all the attack patterns, indicating that the IoT devices' task-oriented nature and consistent data distribution could enhance real-time attack detection capability. Overall, the proposed Transformer model shows promising performance in identifying and classifying cyber threats in 6G-enabled IoT networks. However, there is room for improvement in identifying certain types of cyber threats, such as Password and Port_Scanning. To improve the performance of the model, additional data and techniques can be used to balance the dataset and further fine-tune the model. ## VI Open Challenges There are several challenges regarding the use of generative AI for cyber threat-hunting in 6G-enabled IoT networks, including scalability issues, decentralized training issues, data quality issues, energy challenges, privacy-preserving challenges, and tokenization challenges. ### _Scalability issues_ Generative AI for IoT applications faces several scalability issues, including cost, latency, memory limitations, and high computational requirements. Generative AI models are extremely computationally intensive and can be expensive to obtain and maintain, and they require significant computational resources to train and run; GPT-3, for example, has 175 billion parameters, and successor models such as GPT-4 are believed to be even larger.
In addition, the large size of the model also ensures that it requires a lot of memory to run, making it difficult to deploy on memory-limited devices. Therefore, the question we ask here is: how to optimize the scalability of a Generative AI-based system for IoT? We believe that a comparative study of the scalability of Generative AI is needed for IoT security. ### _Decentralized training issues_ Decentralized training of Generative AI raises concerns about data privacy and security, as IoT data (i.e., sensitive and personal information) may be exposed to malicious nodes. Therefore, the coordination of decentralized Generative AI model training among multiple parties can be complex and time-consuming. One potential area of research in this topic could be focused on creating secure and confidential solutions for AI-generated models within decentralized settings. ### _Data quality issues_ One of the most significant data quality issues when using Generative AI models is data bias. GPT-3 for example is trained on a massive dataset of internet text, which can introduce bias into the model. An additional problem with data quality that can occur when using GPT-3 is data noise. With the large amount of data used to train GPT-3, it is probable that there will be some noise in the data that will be of poor quality or irrelevant to that task. Reducing the impact of this noise on the model's performance is crucial, as it may result in the model learning incorrect patterns or making inaccurate predictions. Consequently, there is a need to prioritize the challenge of mitigating this issue by providing a high-quality and well-organized dataset. ### _Energy challenges_ One of the main challenges of Generative AI models is their computational power. For example, GPT-3 model has 175 billion parameters, which makes it one of the largest language models in existence. To build such a model, a large quantity of computing power is needed. An additional energy challenge of GPT-3 is its implementation. GPT-3 requires a large quantity of memory to operate, which means that it needs to be deployed on high-performance servers. At the same time, these servers use a large amount of power to run, which makes a contribution to the overall energy footprint of the model. To reduce the model's energy consumption during training and deployment, there are some solutions that can be adopted in the future, including, reducing the model's size, adopting federated learning, and employing more energy-efficient hardware (e.g., GPUs or TPUs). ### _Privacy-preserving challenges_ There are various challenges with privacy preservation that are associated with GPT-3, including model extraction, model inversion, and data leakage. GPT-3 can store vulnerable data from fine-tuning data. Attackers can use the text generated by GPT-3 to infer private information from the fine-tuning data or the data used to train the model. The GPT-3 model's parameters can be extracted by attackers and used to generate text or infer privacy from the model. The most important question that may arise is how to develop a new privacy strategy such as Differential privacy. ### _Tokenization challenges_ Transformer-based models are built to process different length input sequences, which makes them adequate for a wide range of Natural language processing (NLP) tasks. The input sequences, however, must be in a suitable format for the model to be processed. 
That is where Tokenization becomes important, which is the process of splitting the text into single words or sub-words. The main challenges in tokenization for Transformer-based models are dealing with special characters (e.g., punctuation, emoji, and numbers), out-of-vocabulary (OOV) words, and rare words. OOV words are those that are not present in the model's vocabulary, which can cause issues for the model, as it is not able to process them properly. Rare words are those that appear rarely in the training data, which can cause issues for the model to handle them correctly. One potential avenue of research regarding this subject may involve the process of tokenization for IoT traffic. ## VII Conclusions In this paper, we discussed the use of generative AI for cyber threat-hunting in 6G-enabled IoT networks, which has the potential to revolutionize the way we detect and prevent cyber-attacks. Then, we proposed a new generative adversarial network (GAN) and Transformer-based model for Cyber Threat-Hunting in 6G-enabled IoT Networks. The experimental analysis results with a new cyber security dataset demonstrate that the Transformer-based security model for cyber threat-hunting can detect IoT attacks with a high overall accuracy of 95%. However, there are also several challenges that must be addressed, including scalability, decentralized training, data quality, energy efficiency, privacy, and tokenization. Despite these challenges, the potential benefits of using generative AI for cyber threat-hunting in 6G-enabled IoT networks make it a promising area of research that is worth exploring further.
2310.09026
Operators commuting with complex symmetric weighted composition operators on $H^2$
In this paper, we initially study when an anti-linear Toeplitz operator is in the commutant of a composition operator. Primarily, we investigate weighted composition operators $W_{g,\psi}$ commuting with complex symmetric weighted composition operators $W_{f,\varphi}$ on the Hardy space $H^2(\mathbb{D})$. In particular, we give the descriptions of the symbols $g$ and $\psi$ such that the inducing weighted composition operator $W_{g,\psi}$ commutes with the complex symmetric weighted composition operator $W_{f,\varphi}$ with the conjugation $\mathcal{J}$. Furthermore, we subsequently demonstrate that these weighted composition operators are normal and complex symmetric in accordance with the properties of the fixed point of the associated symbol $\varphi$.
Sudip Ranjan Bhuia
2023-10-13T11:38:48Z
http://arxiv.org/abs/2310.09026v2
# Operators commuting with complex symmetric weighted composition operators on \(H^{2}\) ###### Abstract. In this paper we study weighted composition operators \(W_{g,\psi}\) commuting with complex symmetric weighted composition operators \(W_{f,\varphi}\) on the Hardy space \(H^{2}(\mathbb{D})\). In particular, we give the descriptions of the symbols \(g\) and \(\psi\) such that the inducing weighted composition operator \(W_{g,\psi}\) commutes with the complex symmetric weighted composition operator \(W_{f,\varphi}\) with the conjugation \(\mathcal{J}\). Furthermore, we subsequently demonstrate that these weighted composition operators are normal and complex symmetric in accordance with the properties of the fixed point of the associated symbol \(\varphi\). Key words and phrases:Weighted composition operator, complex symmetric operator, reproducing kernel, Hilbert space 2020 Mathematics Subject Classification: Primary 47B20 ; Secondary 47A05, 47B38, 47B33 ## 1. Introduction and preliminaries Let \(\mathcal{B}(H)\) be the algebra of all bounded linear operators on a separable complex Hilbert space \(H\). Given a fixed operator \(T\in\mathcal{B}(H)\), we say an operator \(S\) commutes with \(T\) if \(TS=ST\). The set of all operators which commute with \(T\), denoted \(\{T\}^{\prime}\) that is, \(\{T\}^{\prime}=\left\{S\in\mathcal{B}(H):ST-TS=0\right\}\). It is well known that the set \(\{T\}^{\prime}\) forms a weakly closed algebra which is called the commutant of \(T\). Let \(\mathbb{D}\) denote the open unit disk in the complex plane \(\mathbb{C}\). The Hardy space \(H^{2}(\mathbb{D})\) is the Hilbert space of the analytic functions \(f\) on \(\mathbb{D}\) having power series representations with square-summable complex coefficients. That is, \[H^{2}(\mathbb{D})=\left\{f:f(z)=\sum_{n=0}^{\infty}a_{n}z^{n}\quad\text{and} \quad\sum_{n=0}^{\infty}|a_{n}|^{2}<\infty\right\}.\] The space \(H^{2}(\mathbb{D})\) is a Hilbert space with the inner product given by \[\langle f,g\rangle=\sum_{n=0}^{\infty}a_{n}\bar{b}_{n},\] where \(f(z)=\sum_{n=0}^{\infty}a_{n}z^{n}\) and \(g(z)=\sum_{n=0}^{\infty}b_{n}z^{n}\) are in \(H^{2}(\mathbb{D})\). Let \(f\) be an analytic function on \(\mathbb{D}\). Then \(f\) is in \(H^{2}(\mathbb{D})\) if and only if \[\sup_{0<r<1}\frac{1}{2\pi}\int_{0}^{2\pi}|f(re^{i\theta})|^{2}d\theta<\infty.\] Moreover, the norm of such \(f\) is given by \[\|f\|^{2}=\sup_{0<r<1}\frac{1}{2\pi}\int_{0}^{2\pi}|f(re^{i\theta})|^{2}d\theta<\infty.\] We define the Hilbert space \(L^{2}(\mathbb{T})\) by the space of the square integrable functions on \(\mathbb{T}\), the unit circle in complex plane with respect to Lebesgue measure, endowed with the inner product given by \[\langle f,g\rangle=\frac{1}{2\pi}\int_{0}^{2\pi}f(e^{i\theta})\overline{g(e^{i \theta})}d\theta,\] for all \(f,g\in L^{2}(\mathbb{T})\). The space \(L^{\infty}(\mathbb{T})\) is the Banach space consisting of all essentially bounded measurable functions on \(\mathbb{T}\). The Hardy space \(H^{2}(\mathbb{D})\) on the open unit disc \(\mathbb{D}\) is identified with the closed subspace \(H^{2}(\mathbb{T})\) of \(L^{2}(\mathbb{T})\) consisting of functions \(f\) on the boundary \(\mathbb{T}\) whose negative Fourier coefficients vanish; the identification is given by the radial limit \[\tilde{f}(e^{i\theta}):=\lim_{r\to-1}f(re^{i\theta})\quad\text{for almost every }\theta\in[0,2\pi]\] for \(f\in H^{2}(\mathbb{D})\). 
Moreover, the reverse process is given by the Poisson integral formula \[f(re^{i\theta})=\frac{1}{2\pi}\int_{0}^{2\pi}\tilde{f}(e^{it})\frac{1-r^{2}}{ 1+r^{2}-2r\cos(\theta-t)},\quad re^{i\theta}\in\mathbb{D}.\] Here \(f\) is the harmonic extension of \(\tilde{f}\) into the open unit disc \(\mathbb{D}\) given by the Poisson integral formula. We usually use the same symbol \(f\) for \(\tilde{f}\) and write \(H^{2}(\mathbb{D})=H^{2}(\mathbb{T})\) under the identification. We denote by \(H^{\infty}(\mathbb{D})\) the space of all functions that are analytic and bounded on \(\mathbb{D}\). The space \(H^{\infty}(\mathbb{D})\) is a subspace of \(H^{2}(\mathbb{D})\). Let \(\varphi\) be a holomorphic self map of \(\mathbb{D}\) and let \(w\in\overline{\mathbb{D}}\). We say that \(w\) is a _fixed point_[7, page 50] of \(\varphi\) if \[\lim_{r\to 1^{-}}\varphi(rw)=w.\] By a well known result [7, page 51], if \(w\in\mathbb{T}\) is a fixed point of \(\varphi\), then \[\varphi^{\prime}(w)=\lim_{r\to 1^{-}}\varphi^{\prime}(rw),\] exists as a positive real number or \(+\infty\). Now let \(\varphi\) be an automorphism of \(\mathbb{D}\). We say that \(\varphi\) is: 1. _elliptic_ if it has exactly one fixed point situated in \(\mathbb{D}\), 2. _hyperbolic_ if it has two distinct fixed points in \(\mathbb{T}\), and 3. _parabolic_ if there is only one fixed point in \(\mathbb{T}\). The Hardy space \(H^{2}(\mathbb{D})\) is a reproducing kernel Hilbert space with the kernel function \(K_{w}\), where \(K_{w}(z)=\frac{1}{1-\bar{w}z}\) for each \(w\in\mathbb{D}\) with the property that \(\langle f,K_{w}\rangle=f(w)\) for each \(f\in H^{2}(\mathbb{D})\). The linear span of the reproducing kernels \(\{K_{w}:w\in\mathbb{D}\}\) is dense in \(H^{2}(\mathbb{D})\). For a holomorphic self map \(\varphi\) on \(\mathbb{D}\) and a holomorphic function \(f\) on \(\mathbb{D}\), the weighted composition operator \(W_{f,\varphi}\) on \(H^{2}(\mathbb{D})\) is defined by \(W_{f,\varphi}g=f\cdot(g\circ\varphi)\) for all \(g\in H^{2}(\mathbb{D})\). The composition operator \(C_{\varphi}\) is defined by \(C_{\varphi}=W_{1,\varphi}\). It is worth to note that \(W_{f,\varphi}=M_{f}C_{\varphi}\) whenever \(f\in H^{\infty}\), where \(M_{f}\) denotes the multiplication operator. The action of the adjoint of weighted composition operator \(W_{f,\varphi}\) on the kernel function is given by \[W_{f,\varphi}^{*}K_{w}=\overline{f(w)}K_{\varphi(w)}.\] For more about composition operators, we refer the book [7]. **Definition 1.1**.: A conjugation on a separable complex Hilbert space \(H\) is an anti-linear operator \(C\) on \(H\) which satisfies the following conditions 1. \(C\) is isometric: \(\langle Cx,Cy\rangle=\langle y,x\rangle\), \(x,y\in H\), 2. \(C\) is involutive: \(C^{2}=I\). where \(I\) is the identity operator on \(H\). **Definition 1.2**.: An anti-linear operator \(C\) on a separable complex Hilbert space \(H\) is a conjugation if and only if it is both unitary and self-adjoint. We say that \(T\) is \(C\)-symmetric if \(T=CT^{*}C\), and complex symmetric if there exists a conjugation \(C\) with respect to which \(T\) is \(C\)-symmetric. In the Hardy space \(H^{2}(\mathbb{D})\), the conjugation operator \(\mathcal{J}:H^{2}(\mathbb{D})\to H^{2}(\mathbb{D})\) is defined by \(\mathcal{J}f(z)=\overline{f(\bar{z})}\) for all \(z\in\mathbb{D}\) and \(f\in H^{2}(\mathbb{D})\). For more details on complex symmetric operators readers are referred to [9, 10, 11, 12]. 
The study of complex symmetric weighted composition operators on the Hardy space was initiated by Garcia and Hammond in [8]. In [13], authors gave a classification of complex symmetric weighted composition operators with respect to a special conjugation. Generally, providing information about the operators that commute with a specific operator offers insights into the operator's structure. The commutant of a particular operator is a relatively rare subject of study. C. C. Cowen's research has focused on examining the commutants of composition and specific Toeplitz operators, as indicated in references [3] and [4]. Additionally, B. Cload has contributed findings concerning the commutants of composition operators in [2]. In [6], authors have studied the self-adjoint weighted composition operators o the Hardy space of unit disc. E. Ko studied the commutant of self-adjoint weighted composition operators in [14]. It would be interesting to classify weighted composition operators commuting with complex symmetric weighted composition operators \(W_{f,\varphi}\) with certain conjugation on \(H^{2}\). In section 2, we give description of the weighted composition operators that commute with complex symmetric weighted composition operator with the conjugation \(\mathcal{J}\). **Theorem 1.3**.: _[_5_, Theorem 6]_ _Let \(\gamma\in\mathbb{N}\). Let \(f\) be in \(H^{\infty}\) and let \(\varphi\) be an analytic map of the unit disc into itself. If the weighted composition operator \(W_{f,\varphi}\) is Hermitian on \(H_{\gamma}(\mathbb{D})\), then \(f(0)\) and \(\varphi^{\prime}(0)\) are real and_ \[\varphi(z)=a_{0}+\frac{a_{1}z}{1-\bar{a}_{0}z}\quad\text{and}\quad f(z)=\frac{ c}{(1-\bar{a}_{0}z)^{\gamma}} \tag{1.1}\] _where \(a_{1}=\varphi^{\prime}(0)\), and \(c=f(0)\)._ _Conversely, let \(a_{0}\in\mathbb{D}\), and let \(c\) and \(a_{1}\) be real numbers. If \(\varphi(z)=a_{0}+\frac{a_{1}z}{1-\bar{a}_{0}z}\) maps the unit disc into itself and \(f(z)=\frac{c}{(1-\bar{a}_{0}z)^{\gamma}}\), then the weighted composition operator \(W_{f,\varphi}\) is Hermitian._ **Theorem 1.4**.: _[_14_, Theorem 3.5]_ _Let \(g\in H^{\infty}\) and let \(\psi\) be an analytic map of \(\mathbb{D}\) into itself. Assume that \(W_{f,\varphi}\) is self-adjoint on \(H^{2}(\mathbb{D})\) and \(\varphi\), not an elliptic automorphism, has a fix point \(b\in\mathbb{D}\). Then \(W_{g,\psi}\in\{W_{f,\varphi}\}^{\prime}\) if and only if \(W_{g,\psi}\) has the following symbols; for \(b\neq 0\),_ \[\psi(z)=d_{0}+\frac{d_{2}z}{1-d_{1}z}\quad\text{and}\quad g(z)=g(b)\frac{d_{3 }}{1-d_{1}z},\] _where \(\psi(0)=d_{0}=\frac{(\alpha-1)b}{|b|^{2}\alpha-1}\), \(d_{1}=\frac{(\alpha-1)\bar{b}}{|b|^{2}\alpha-1}\), \(\psi^{\prime}(0)=d_{2}=\alpha\frac{(|b|^{2}-1)^{2}}{(|b|^{2}\alpha-1)^{2}}\), \(d_{3}=\frac{|b|^{2}-1}{|b|^{2}\alpha-1}\) for some \(\alpha\in\mathbb{C}\) and for \(b=0\),_ \[\psi(z)=\alpha z\quad\text{and}\quad g(z)=g(0)\] _for some \(\alpha\in\mathbb{C}\)._ **Theorem 1.5**.: _[_13_, Theorem 3.3]_ _Let \(\varphi\) be an analytic self-map of \(\mathbb{D}\) and \(f\in H^{\infty}(\mathbb{D})\) be not identically zero. If the weighted composition operator \(W_{f,\varphi}\) is complex symmetric defined on \(H^{2}(\mathbb{D})\) with the conjugation \(\mathcal{J}\), then_ \[f(z)=\frac{b}{1-a_{0}z}\quad\varphi(z)=a_{0}+\frac{a_{1}z}{1-a_{0}z},\] _where \(a_{0}=\varphi(0),\,a_{1}=\varphi^{\prime}(0)\), and \(b=f(0)\)._ _Conversely, let \(a_{0}\in\mathbb{D}\). 
If \(\varphi(z)=a_{0}+\frac{a_{1}z}{1-a_{0}z}\) maps the unit disc into itself and \(f(z)=\frac{b}{1-a_{0}z}\), then the weighted composition operator \(W_{f,\varphi}\) is complex symmetric with the conjugation \(\mathcal{J}\)._ **Lemma 1.6**.: _[_16_]_ _A linear fractional map \(\phi\) of the form \(\phi(z)=\frac{az+b}{cz+d}\); \(ad-bc\neq 0\), maps \(\mathbb{D}\) into itself if and only if_ \[|b\bar{d}-a\bar{c}|+|ad-bc|\leqslant|d|^{2}-|c|^{2}. \tag{1.2}\] **Lemma 1.7**.: _[_13_, Lemma 4.8]_ _Let \(\varphi(z)=a_{0}+\frac{a_{1}z}{1-a_{0}z}\). Then \(\varphi\) maps the open unit disc into itself if and only if \(|a_{0}|<1\) and \(2|a_{0}+\bar{a}_{0}(a_{1}-a_{0}^{2})|\leqslant 1-|a_{1}-a_{0}^{2}|^{2}\)._ _In particular, when \(a_{1}=a_{0}^{2}=0\), \(\varphi\) maps the open unit disc into itself if and only if \(|a_{0}|\leqslant\frac{1}{2}\), and when \(a_{1}-a_{0}^{2}=\pm 1\), \(\varphi\) maps the open unit disc into itself if and only if \(a_{0}\) is either real or purely imaginary._ **Proposition 1.8**.: _[_13_, Proposition 4.4]_ _Let \(\varphi\) be an analytic self-map of \(\mathbb{D}\) and let \(f\in H^{\infty}(\mathbb{D})\) be not identically zero on \(\mathbb{D}\), where \(\varphi(0)\neq 0,\,\varphi^{\prime}(0)\neq 0\), and \(\varphi(\lambda)=\lambda\) for some \(\lambda\in\mathbb{D}\). If \(W_{f,\varphi}\) is complex symmetric with the conjugation \(\mathcal{J}\), then_ \[g_{j}(z):=\frac{1}{1-\lambda z}\left(\frac{\lambda-z}{1-\lambda z}\right)^{j}\] _is an eigenvector of \(W_{f,\varphi}\) with respect to the eigenvalue \(f(\lambda)\varphi^{\prime}(\lambda)^{j}\) for each non-negative integer \(j\)._ ## 2. Description of commutant Throughout this section, we consider \(W_{f,\varphi}\) is complex symmetric with the conjugation \(\mathcal{J}\) and \(\varphi\) is not an identity map. Because if \(\varphi\) is an identity map then \(W_{g,\psi}\in\{W_{f,\varphi}\}^{\prime}\) will always holds. That is, in view of Theorem 1.5, we will always assume that \(a_{1}\neq 1\). **Lemma 2.1**.: _Let \(g\in H^{\infty}\) and \(\psi\) be an analytic map of \(\mathbb{D}\) into itself. Assume that \(W_{g,\psi}\in\{W_{f,\varphi}\}^{\prime}\), where \(W_{f,\varphi}\) is complex symmetric with the conjugation \(\mathcal{J}\). If \(\varphi\) has a fixed point in \(\mathbb{D}\), then \(\psi\) has the same fixed point with \(\varphi\)._ Proof.: Since \(W_{f,\varphi}\) is \(\mathcal{J}\)-symmetric with the conjugation \(\mathcal{J}f(z)=\overline{f(\bar{z})}\), it follows from Theorem 1.5 that \[f(z)=\frac{b}{1-a_{0}z},\quad\varphi(z)=a_{0}+\frac{a_{1}z}{1-a_{0}z}, \tag{2.1}\] where \(a_{0}=\varphi(0)\), \(a_{1}=\varphi^{\prime}(0)\), and \(b=f(0)\). Let \(\lambda\) be a fixed point of \(\varphi\) in \(\mathbb{D}\). It is very clear that if \(a_{0}=0\), then \(\lambda=0\). When \(a_{0}\neq 0\), then the fixed point \(\lambda\) is given by \[\lambda=\frac{2a_{0}}{1+a_{0}^{2}-a_{1}\mp\sqrt{(1+a_{0}^{2}-a_{1})^{2}-4a_{0} ^{2}}}. \tag{2.2}\] **Case 1:** If \(\lambda=0\), then \(\varphi(z)=a_{1}z\). Since \(W_{g,\psi}\in\{W_{f,\varphi}\}^{\prime}\), we have \[\psi(a_{1}z)=\psi(\varphi(z))=\varphi(\psi(z))=a_{1}\psi(z).\] Now consider the power series \(\psi(z)=\sum\limits_{n=0}^{\infty}\beta_{n}z^{n}\), then we have \[\sum\limits_{n=0}^{\infty}\beta_{n}a_{1}^{n}z^{n}=\sum\limits_{n=0}^{\infty}a _{1}\beta_{n}z^{n}.\] Hence \(\beta_{0}=a_{1}\beta_{0}\). Since \(a_{1}\neq 1\), we have \(\beta_{0}=0\) and this implies that \(\psi(0)=0\). 
**Case 2:** If \(\lambda\neq 0\), then consider the kernel function \(K_{\lambda}\), that is, \(K_{\lambda}(z)=\frac{1}{1-\lambda z}\) for all \(z\in\mathbb{D}\). We know that \(W_{f,\varphi}^{*}K_{\lambda}=\overline{f(\lambda)}K_{\varphi(\lambda)}\) and since \(W_{g,\psi}\in\{W_{f,\varphi}\}^{\prime}\), we have \[W_{f,\varphi}^{*}W_{g,\psi}^{*}K_{\lambda}=W_{g,\psi}^{*}W_{f,\varphi}^{*}K_{ \lambda}=\overline{f(\lambda)}W_{g,\psi}^{*}K_{\varphi(\lambda)}=\overline{f( \lambda)}W_{g,\psi}^{*}K_{\lambda}.\] **Case 2(i):** If \(W_{g,\psi}^{*}K_{\lambda}=0\), then \(K_{\lambda}\) is the eigenvector of \(W_{g,\psi}^{*}\) corresponding to the eigenvalue \(0\). Since \(\varphi(\psi(z))=\psi(\varphi(z))\), we get \(\varphi(\psi(\lambda))=\psi(\varphi(\lambda))=\psi(\lambda)\), and this implies that \(\psi(\lambda)\) is a fixed point of \(\varphi\), and hence \(\psi(\lambda)=0\), or \(\lambda\). Now if \(\psi(\lambda)=0\), then \(a_{0}=\varphi(0)=\varphi(\psi(\lambda))=\psi(\varphi(\lambda))=\psi(\lambda)=0\), a contradiction to the fact that \(a_{0}\neq 0\). Therefore \(\psi(\lambda)=\lambda\). **Case 2(ii):** If \(W_{g,\psi}^{*}K_{\lambda}\neq 0\), then \(W_{g,\psi}^{*}K_{\lambda}\) is an eigenvector of \(W_{f,\varphi}^{*}\) with an eigenvalue \(\overline{f(\lambda)}\). Since \(W_{f,\varphi}\) is \(\mathcal{J}\)-symmetric, we have \(\mathcal{J}W_{g,\psi}^{*}K_{\lambda}\) is an eigenvector corresponding to the eigenvalue \(f(\lambda)\). Therefore, by Proposition 1.8, we get \(\mathcal{J}W_{g,\psi}^{*}K_{\lambda}=\beta K_{\bar{\lambda}}\) for some nonzero complex number \(\beta\), and this implies \(g(\lambda)K_{\overline{\psi(\lambda)}}=\beta K_{\bar{\lambda}}\). Therefore, \(\psi(\lambda)=\lambda\). Thus \(\lambda\) is a fixed point of \(\psi\). This completes the proof. **Lemma 2.2**.: _Let \(d_{0}=\frac{\lambda(\alpha-1)}{\lambda^{2}\alpha-1}=d_{1}\), \(d_{2}=\alpha\frac{(\lambda^{2}-1)^{2}}{(\lambda^{2}\alpha-1)^{2}}\), where \(\alpha\) is any complex number and \(\lambda\) as in Equation 2.2. Then the following are true_ 1. \(d_{1}+a_{0}(d_{2}-d_{0}d_{1})=a_{0}+d_{1}(a_{1}-a_{0}^{2})\)_._ 2. \(\bar{d}_{0}(d_{2}-d_{0}^{2}-1)=-\bar{\lambda}(\lambda^{2}+1)\frac{|1-\alpha|^{ 2}}{|\lambda^{2}\alpha-1|^{2}}\)_._ Proof.: **Proof of (1):** To prove the relation in (1), it is enough to prove \(a_{0}(d_{2}-d_{0}d_{1}-1)-d_{1}(a_{1}-a_{0}^{2}-1)=0\). 
Now \[a_{0}(d_{2}-d_{0}d_{1}-1) =a_{0}(d_{2}-d_{0}^{2}-1)\] \[=a_{0}\left(\frac{\lambda^{2}-\alpha}{\lambda^{2}\alpha-1}-1\right)\] \[=a_{0}\frac{(1-\alpha)(\lambda^{2}+1)}{\lambda^{2}\alpha-1}\] and \[d_{1}(a_{1}-a_{0}^{2}-1)=\frac{\lambda(\alpha-1)(a_{1}-a_{0}^{2}-1)}{\lambda^ {2}\alpha-1}.\] Therefore, \[a_{0}(d_{2}-d_{0}d_{1}-1)-d_{1}(a_{1}-a_{0}^{2}-1)\] \[=a_{0}\frac{(1-\alpha)(\lambda^{2}+1)}{\lambda^{2}\alpha-1}- \frac{\lambda(\alpha-1)(a_{1}-a_{0}^{2}-1)}{\lambda^{2}\alpha-1}\] \[=\frac{1-\alpha}{\lambda^{2}\alpha-1}\left[a_{0}(\lambda^{2}+1)+ \lambda(a_{1}-a_{0}^{2}-1)\right]\] \[=\frac{1-\alpha}{\lambda^{2}\alpha-1}\left[a_{0}(\lambda^{2}+1)- \lambda(1+a_{0}^{2}-a_{1})\right]\] \[=\frac{1-\alpha}{\lambda^{2}\alpha-1}\left[a_{0}\left\{\left( \frac{1+a_{0}^{2}-a_{1}\pm\sqrt{(1+a_{0}^{2}-a_{1})^{2}-4a_{0}^{2}}}{2a_{0}} \right)^{2}+1\right\}\right.\] \[\qquad\qquad\left.-\frac{1+a_{0}^{2}-a_{1}\pm\sqrt{(1+a_{0}^{2}-a _{1})^{2}-4a_{0}^{2}}}{2a_{0}}(1+a_{0}^{2}-a_{1})\right]\] \[=\frac{1-\alpha}{\lambda^{2}\alpha-1}\left[\frac{(1+a_{0}^{2}-a_ {1})^{2}\pm(1+a_{0}^{2}-a_{1})\sqrt{(1+a_{0}^{2}-a_{1})^{2}-4a_{0}^{2}}}{2a_{0}}\right.\] \[\qquad\qquad\left.-\frac{1+a_{0}^{2}-a_{1}\pm\sqrt{(1+a_{0}^{2}-a _{1})^{2}-4a_{0}^{2}}}{2a_{0}}(1+a_{0}^{2}-a_{1})\right]\] \[=0.\] This completes the proof of (1). **Proof of (2):** We have \[\begin{split} d_{2}-d_{0}^{2}-1&=\alpha\frac{(\lambda^{2 }-1)^{2}}{(\lambda^{2}\alpha-1)^{2}}-\frac{\lambda^{2}(\alpha-1)^{2}}{(\lambda^{ 2}\alpha-1)^{2}}-1\\ &=\frac{\alpha\lambda^{4}-2\lambda^{2}\alpha+\alpha-\lambda^{2} \alpha^{2}+2\lambda^{2}\alpha-\lambda^{2}}{(\lambda^{2}\alpha-1)^{2}}-1\\ &=\frac{\alpha\lambda^{4}+\alpha-\lambda^{2}\alpha^{2}-\lambda^{ 2}}{(\lambda^{2}\alpha-1)^{2}}-1\\ &=\frac{\lambda^{2}-\alpha}{\lambda^{2}\alpha-1}-1\\ &=\frac{\lambda^{2}-\alpha-\lambda^{2}\alpha+1}{\lambda^{2}\alpha -1}\\ &=\frac{\lambda^{2}-\alpha-\lambda^{2}\alpha+1}{\lambda^{2}\alpha -1}\\ &=\frac{(1-\alpha)(\lambda^{2}+1)}{\lambda^{2}\alpha-1}.\end{split} \tag{2.3}\] Therefore, \[\bar{d}_{0}(d_{2}-d_{0}^{2}-1)=\frac{\bar{\lambda}(\bar{\alpha}-1)}{\bar{ \lambda}^{2}\alpha-1}\times\frac{(1-\alpha)(\lambda^{2}+1)}{\lambda^{2}\alpha -1}=-\bar{\lambda}(\lambda^{2}+1)\frac{|1-\alpha|^{2}}{|\lambda^{2}\alpha-1|^ {2}}\] _Remark 2.3_.: \(\bar{d}_{0}(d_{2}-d_{0}^{2}-1)=d_{0}(\overline{d_{2}-d_{0}^{2}}-1)\) if and only if \(\lambda\) is real. **Theorem 2.4**.: _Let \(g\in H^{\infty}\) and let \(\psi\) be an analytic map of \(\mathbb{D}\) into itself. Assume that \(W_{f,\varphi}\) is complex symmetric with the conjugation \(\mathcal{J}\) on \(H^{2}(\mathbb{D})\) and \(\varphi\), not an elliptic automorphim, has a fixed point \(\lambda\) in \(\mathbb{D}\). Then \(W_{g,\psi}\in\{W_{f,\varphi}\}^{\prime}\) if and only if \(W_{g,\psi}\) has the following symbol functions; for \(\lambda\neq 0\),_ \[\psi(z)=\frac{(\lambda^{2}-\alpha)z+\lambda(\alpha-1)}{\lambda(1-\alpha)z+( \lambda^{2}\alpha-1)}\quad\text{and}\quad g(z)=g(\lambda)\frac{\lambda^{2}-1} {\lambda(1-\alpha)z+(\lambda^{2}\alpha-1)}\] _and for \(\lambda=0\),_ \[\psi(z)=\alpha z\quad\text{and}\quad g(z)=g(0)\] _for some \(\alpha\in\mathbb{C}\)._ Proof.: Let \(\lambda\neq 0\). 
Since \(W_{g,\psi}\in\{W_{f,\varphi}\}^{\prime}\), we have \[\begin{split} W_{f,\varphi}W_{g,\psi}K_{\bar{\lambda}}& =W_{g,\psi}W_{f,\varphi}K_{\bar{\lambda}}\\ &=W_{g,\psi}\mathcal{J}W_{f,\varphi}^{\ast}\mathcal{J}K_{\bar{ \lambda}}\\ &=W_{g,\psi}\mathcal{J}W_{f,\varphi}^{\ast}K_{\lambda}\\ &=f(\lambda)W_{g,\psi}K_{\bar{\lambda}}.\end{split}\] If \(W_{g,\psi}K_{\bar{\lambda}}=0\), then \(K_{\bar{\lambda}}\) is an eigenvector of \(W_{g,\psi}\) corresponding to an eigenvalue \(0\). If \(W_{g,\psi}K_{\bar{\lambda}}\neq 0\), then If \(W_{g,\psi}K_{\bar{\lambda}}\) is an eigenvector of \(W_{f,\varphi}\) corresponding to the eigenvalue \(f(\lambda)\). Then by Proposition 1.8, we have \(W_{g,\psi}K_{\bar{\lambda}}=\gamma_{1}K_{\bar{\lambda}}\) for some nonzero \(\gamma_{1}\in\mathbb{C}\). Again by Proposition 1.8, the function \(\eta=\frac{1}{1-\lambda z}\left(\frac{\lambda-z}{1-\lambda z}\right)\) is an eigenvector of \(W_{f,\varphi}\) corresponding to the eigenvalue \(f(\lambda)\varphi^{\prime}(\lambda)\). Since \(W_{g,\psi}\in\{W_{f,\varphi}\}^{\prime}\), we have \[W_{f,\varphi}W_{g,\psi}\eta=W_{g,\psi}W_{f,\varphi}\eta=f(\lambda)\varphi^{ \prime}(\lambda)W_{g,\psi}\eta.\] This shows that \(W_{g,\psi}\eta\) is an eigenvector corresponding to the eigenvalue \(f(\lambda)\varphi^{\prime}(\lambda)\). Therefore, \(W_{g,\psi}\eta=\gamma_{2}\eta\). Thus we have \[\begin{split} W_{g,\psi}\frac{1}{1-\lambda z}\left(\frac{ \lambda-z}{1-\lambda z}\right)&=\gamma_{2}\frac{1}{1-\lambda z} \left(\frac{\lambda-z}{1-\lambda z}\right)\\ g(z)\frac{1}{1-\lambda\psi(z)}\left(\frac{\lambda-\psi(z)}{1- \lambda\psi(z)}\right)&=\gamma_{2}\frac{1}{1-\lambda z}\left( \frac{\lambda-z}{1-\lambda z}\right)\\ W_{g,\psi}K_{\bar{\lambda}}(z)\left(\frac{\lambda-\psi(z)}{1- \lambda\psi(z)}\right)&=\gamma_{2}\frac{1}{1-\lambda z}\left( \frac{\lambda-z}{1-\lambda z}\right)\\ \gamma_{1}K_{\bar{\lambda}}(z)\left(\frac{\lambda-\psi(z)}{1- \lambda\psi(z)}\right)&=\gamma_{2}\frac{1}{1-\lambda z}\left( \frac{\lambda-z}{1-\lambda z}\right)\\ \gamma_{1}K_{\bar{\lambda}}(z)\left(\frac{\lambda-\psi(z)}{1- \lambda\psi(z)}\right)&=\gamma_{2}K_{\bar{\lambda}}(z)\left( \frac{\lambda-z}{1-\lambda z}\right)\\ \left(\frac{\lambda-\psi(z)}{1-\lambda\psi(z)}\right)& =\alpha\left(\frac{\lambda-z}{1-\lambda z}\right).\end{split} \tag{2.4}\] Thus we conclude that \(\psi(z)=\frac{(\lambda^{2}-\alpha)z+\lambda(\alpha-1)}{\lambda(1-\alpha)z+( \lambda^{2}\alpha-1)}\). Since \(W_{g,\psi}K_{\bar{\lambda}}=\gamma_{1}K_{\bar{\lambda}}\) for some nonzero \(\gamma_{1}\in\mathbb{C}\), for all \(z\in\mathbb{D}\), we have \[\frac{g(z)}{1-\lambda\psi(z)}=\frac{\gamma_{1}}{1-\lambda z}.\] If we put \(z=\lambda\), we have \[\frac{g(\lambda)}{1-\lambda\psi(\lambda)}=\frac{\gamma_{1}}{1-\lambda^{2}}.\] Thus by Lemma 2.1, we have \(\gamma_{1}=g(\lambda)\). Therefore, \[\begin{split} g(z)&=g(\lambda)\frac{1-\lambda\psi(z )}{1-\lambda z}\\ &=\frac{g(\lambda)}{1-\lambda z}(1-\lambda\psi(z))\\ &=\frac{g(\lambda)}{1-\lambda z}\left(1-\lambda\frac{(\lambda^{2} -\alpha)z+\lambda(\alpha-1)}{\lambda(1-\alpha)z+(\lambda^{2}\alpha-1)}\right) \\ &=g(\lambda)\frac{\lambda^{2}-1}{\lambda(1-\alpha)z+(\lambda^{2} \alpha-1)}\end{split}\] Now set \(d_{0}=\frac{\lambda(\alpha-1)}{\lambda^{2}\alpha-1}=d_{1}\), \(d_{2}=\alpha\frac{(\lambda^{2}-1)^{2}}{(\lambda^{2}\alpha-1)^{2}}\), and \(d_{3}=\frac{\lambda^{2}-1}{\lambda^{2}\alpha-1}\), then \[\psi(z)=d_{0}+\frac{d_{2}z}{1-d_{1}z}\quad\text{and}\quad g(z)=g(\lambda)\frac{ d_{3}}{1-d_{1}z}. 
\tag{2.5}\] On the other hand, consider the converse assertions to be true. Since \(W_{g,\psi}W_{f,\varphi}=M_{g(f\circ\psi)}C_{\varphi\circ\psi}\) and \(W_{f,\varphi}W_{g,\psi}=M_{f(g\circ\phi)}C_{\psi\circ\phi}\), it suffices to show that \(g(f\circ\psi)=f(g\circ\varphi)\) and \(\varphi\circ\psi=\psi\circ\varphi\). It is clear when \(\lambda=0\). Assume \(\lambda\neq 0\). \[\begin{split}(g(f\circ\psi))(z)&=\frac{g(\lambda)d_ {3}}{1-d_{1}z}\times\frac{f(0)}{1-a_{0}\psi(z)}\\ &=\frac{g(\lambda)d_{3}}{1-d_{1}z}\times\frac{f(0)}{1-a_{0} \frac{d_{0}+(d_{2}-d_{0}d_{1})z}{1-d_{1}z}}\\ &=\frac{g(\lambda)d_{3}f(0)}{(1-a_{0}d_{0})-\left[d_{1}+a_{0} \left(d_{2}-d_{0}d_{1}\right)\right]z}.\end{split} \tag{2.6}\] \[\begin{split}(f(g\circ\varphi))(z)&=\frac{f(0)}{1- a_{0}z}\times\frac{g(\lambda)d_{3}}{1-d_{1}\varphi(z)}\\ &=\frac{g(\lambda)d_{3}f(0)}{(1-a_{0}d_{1})-\left[a_{0}+d_{1} \left(a_{1}-a_{0}^{2}\right)\right]z}\end{split} \tag{2.7}\] Therefore, by using (1) of Lemma 2.2, we conclude that \((g(f\circ\psi))(z)=(f(g\circ\varphi))(z)\). \[\begin{split}(\varphi\circ\psi)(z)&=\frac{a_{0}+(a _{1}-a_{0}^{2})\psi(z)}{1-a_{0}\psi(z)}\\ &=\frac{a_{0}+\left(a_{1}-a_{0}^{2}\right)d_{0}+\left[-a_{0}d_{1 }+\left(a_{1}-a_{0}^{2}\right)\left(d_{2}-d_{0}d_{1}\right)\right]z}{1-a_{0}d _{0}+\left[-d_{1}-a_{0}\left(d_{2}-d_{0}d_{1}\right)\right]z}.\end{split} \tag{2.8}\] and \[\begin{split}(\psi\circ\varphi)(z)&=\frac{d_{0}+(d _{2}-d_{0}d_{1})\,\varphi(z)}{1-d_{1}\varphi(z)}\\ &=\frac{d_{0}+(d_{2}-d_{0}d_{1})\,\frac{a_{0}+\left(a_{1}-a_{0}^{ 2}\right)z}{1-a_{0}z}}{1-d_{1}\frac{a_{0}+\left(a_{1}-a_{0}^{2}\right)z}{1-a_{ 0}z}}\\ &=\frac{d_{0}+(d_{2}-d_{0}d_{1})\,a_{0}+\left[-a_{0}d_{0}+\left( d_{2}-d_{0}d_{1}\right)\left(a_{1}-a_{0}^{2}\right)\right]z}{1-a_{0}d_{1}+ \left[-a_{0}-d_{1}\left(a_{1}-a_{0}^{2}\right)\right]z}.\end{split} \tag{2.9}\] Again using (1) of Lemma 2.2, we obtain \((\varphi\circ\psi)(z)=(\psi\circ\varphi)(z)\). This completes the proof. Using [15, Remark 4.8] and Remark 2.3, we can easily prove the following: **Corollary 2.5**.: _Let \(g\in H^{\infty}\) and \(\psi\) be an analytic map of \(\mathbb{D}\) into itself. Assume that \(W_{g,\psi}\in\{W_{f,\varphi}\}^{\prime}\), where \(W_{f,\varphi}\) is complex symmetric with the conjugation \(\mathcal{J}\). If \(\varphi\) is not an elliptic automorphism and has a fixed point, say \(\lambda\) in \(\mathbb{D}\). Then the following holds true:_ 1. _If_ \(\lambda\) _is real, then_ \(W_{g,\psi}\) _is normal._ _._ 2. _If both_ \(\lambda\) _and_ \(\alpha\) _are real, then_ \(W_{g,\psi}\) _is self-adjoint._ 3. _If_ \(|d_{0}|<1\) _and_ \(2|d_{0}+\bar{d_{0}}\left(d_{2}-d_{0}^{2}\right)|\leq 1-|d_{2}-d_{0}^{2}|\)_, then_ \(W_{g,\psi}\) _is_ \(\mathcal{J}\)_-symmetric._ **Corollary 2.6**.: _Let \(g\in H^{\infty}\) and \(\psi\) be an analytic map of \(\mathbb{D}\) into itself. Assume that \(W_{g,\psi}\in\{W_{f,\varphi}\}^{\prime}\) where \(W_{f,\varphi}\) is complex symmetric with the conjugation \(\mathcal{J}\). If \(\varphi\) has a fixed point, say \(\lambda\) in \(\mathbb{D}\), then the space \(\mathcal{M}=\{h\in H^{2}:h(\lambda)=0\}\) is invariant subspace for \(W_{g,\psi}\)._ Proof.: By Lemma 2.1, \(\psi\) has the same fixed point \(\lambda\). Therefore, for any \(h\in\mathcal{M}\) \[W_{g,\psi}h(\lambda)=g(\lambda)h(\psi(\lambda))=g(\lambda)h(\lambda)=0\] _Remark 2.7_.: 1. \(\mathcal{M}=B_{\lambda}H^{2}\), where \(B_{\lambda}(z)=\frac{z-\lambda}{1-\lambda z}\) and \(B_{\lambda}H^{2}\) is invariant for \(C_{\psi}\). 
Therefore, by employing [1, Theorem 2.3], we can deduce that \(\frac{B_{\lambda}\circ\psi}{B_{\lambda}}\) belongs to the class \(\mathcal{S}(\mathbb{D})\). In this context, \(\mathcal{S}(\mathbb{D})\) denotes the collection of functions that are holomorphic and of modulus bounded by one on \(\mathbb{D}\), formally defined as: \[\mathcal{S}(\mathbb{D})=\{g\in H^{\infty}(\mathbb{D}):\|g\|_{\infty}:=\sup_{z \in\mathbb{D}}|g(z)|\leq 1\}.\] This set is known as the Schur class, and its elements are called Schur functions. 2. If we assume that the weight function \(f\) is identically \(1\), so that \(W_{f,\varphi}=C_{\varphi}\), and the composition operator \(C_{\varphi}\) is complex symmetric, then the symbol \(\varphi\) possesses a fixed point. Therefore, we can relax the assumption that \(\varphi\) has a fixed point in our results. **Acknowledgements** The research of the author is supported by the NBHM postdoctoral fellowship, Department of Atomic Energy (DAE), Government of India (File No: 0204/16(21)/2022/R&D-II/11995) and the INSPIRE grant of Dr. Srijan Sarkar (Ref: DST/INSPIRE/04/2019/000769), Department of Science & Technology (DST), Government of India.
2306.02695
Thermality of the zero-point length and gravitational selfduality
It has been argued that the existence of a zero-point length is the hallmark of quantum gravity. In this letter we suggest a thermal mechanism whereby this quantum of length arises in flat, Euclidean spacetime $\mathbb{R}^d$. For this we consider the infinite sequence of all flat, Euclidean spacetimes $\mathbb{R}^{d'}$ with $d'\geq d$, and postulate a probability distribution for each $d'$ to occur. The distribution considered is that of a canonical ensemble at temperature $T$, the energy levels those of a 1-dimensional harmonic oscillator. Since both the harmonic energy levels and the spacetime dimensions are evenly spaced, one can identify the canonical distribution of harmonic-oscillator eigenvalues with that of dimensions $d'$. The state describing this statistical ensemble has a mean square deviation in the position operator, that can be interpreted as a quantum of length. Thus placing an oscillator in thermal equilibrium with a bath provides a thermal mechanism whereby a zero-point length is generated. The quantum-gravitational implications of this construction are then discussed. In particular, a model is presented that realises a conjectured duality between a weakly gravitational, strongly quantum system and a weakly quantum, strongly gravitational system.
P. Fernandez de Cordoba, J. M. Isidro, Rudranil Roy
2023-06-05T08:36:39Z
http://arxiv.org/abs/2306.02695v1
# Thermality of the zero-point length ###### Abstract It has been argued that the existence of a zero-point length is the hallmark of quantum gravity. In this letter we suggest a thermal mechanism whereby this quantum of length arises in flat, Euclidean spacetime \(\mathbb{R}^{d}\). For this we consider the infinite sequence of all flat, Euclidean spacetimes \(\mathbb{R}^{d^{\prime}}\) with \(d^{\prime}\geq d\), and postulate a probability distribution for each \(d^{\prime}\) to occur. The distribution considered is that of a canonical ensemble at temperature \(T\), the energy levels those of a 1-dimensional harmonic oscillator. Since both the harmonic energy levels and the spacetime dimensions are evenly spaced, one can identify the canonical distribution of harmonic-oscillator eigenvalues with that of dimensions \(d^{\prime}\). The state describing this statistical ensemble has a mean square deviation in the position operator, that can be interpreted as a quantum of length. Thus placing an oscillator in thermal equilibrium with a bath provides a thermal mechanism whereby a zero-point length is generated. The quantum-gravitational implications of this construction are then discussed. In particular, a model is presented that realises a conjectured duality between a weakly gravitational, strongly quantum system and a weakly quantum, strongly gravitational system. ## 1 Introduction It has been argued that first-order quantum-gravity effects manifest themselves through the existence of a quantum of length \(L\)[8, 11, 22]. Under _first-order effects_ one understands such as can be observed mesoscopically, without the need for a complete theory of quantum gravity--a theory still under development from a number of different perspectives; for a sample see _e.g._[12] and refs. therein. The effects of a zero-point length have been extensively studied in the setting provided by a free, massive, relativistic particle propagating in Euclidean spacetime \(\mathbb{R}^{d}\)[14, 15, 18]. Let \(\mathcal{S}(\mathbf{x})=\int^{\mathbf{x}}\mathrm{d}s\) be the classical action functional for the particle when \(m=1\). Let \(G^{d}_{(L=0)}\) be the Feynman propagator for the particle in the _absence_ of a quantum of length \(L\), and let \(G^{d}_{(L)}\) denote the same propagator in the _presence_ of a quantum of length \(L\): \[G^{d}_{(L=0)}(\mathbf{x}) = \sum_{\mathrm{paths}}\exp\left[-m\,\mathcal{S}(\mathbf{x}) \right], \tag{1}\] \[G^{d}_{(L)}(\mathbf{x}) = \sum_{\mathrm{paths}}\exp\left\{-m\left[\mathcal{S}(\mathbf{x}) +\frac{L^{2}}{\mathcal{S}(\mathbf{x})}\right]\right\}. \tag{2}\] One can regard the propagator \(G^{d}_{(L)}\) as the UV completion of the propagator \(G^{d}_{(L=0)}\) because \(G^{d}_{(L)}\) is duality invariant, _i.e._, invariant under the exchange of \(\mathcal{S}\) and \(L^{2}/\mathcal{S}\). Now in ref. [3] the decomposition \[G^{d}_{(L)}=\sum_{n=0}^{\infty}\frac{(-\pi)^{n}}{n!}G^{d+2n}_{(L=0)} \tag{3}\] has been established. It expresses the quantum-gravity _corrected_ propagator in \(d\) dimensions as an infinite sum of quantum-gravity _free_ propagators in all virtual dimensions \(d+2n\), with \(n\in\mathbb{N}\). This fact seems to cast some doubt on the notion of a sharply defined dimension for quantum spacetimes, at least in the mesoscopic regime. Let us now Wick rotate Euclidean spacetime \(\mathbb{R}^{d}\) into Minkowski spacetime: \(t\to\mathrm{i}\tau\). Let \(\mathcal{G}_{M}\) (resp. 
\(\mathcal{G}_{R}\)) denote the Feynman propagator for a real scalar field \(\phi\) in the Minkowski vacuum (resp. Rindler vacuum). Then \(\mathcal{G}_{M}\) can be thought of as a thermalised version of \(\mathcal{G}_{R}\), in the sense that [17, 20] \[\mathcal{G}_{M}(\mathrm{i}\tau)=\sum_{n=-\infty}^{\infty}\mathcal{G}_{R}( \mathrm{i}\tau+2\mathrm{i}\pi n). \tag{4}\] Despite the fact that Eq. (3) refers to a particle while (4) refers to a field, the analogy between them is inspiring: given the thermality of the Rindler frame, it suggests a possible thermal origin for the quantum of length \(L\). It is one goal of this letter to establish that, indeed, _the quantum of length \(L\) implementing UV completeness can be interpreted as having a thermal origin_. In proving this statement we will achieve a complementary goal. Namely, we will provide a specific example of the UV/IR duality between _the weakly gravitational, strongly quantum regime of a system and the weakly quantum, strongly gravitational regime of a dual system_ conjectured in ref. [21]. In order to achieve these two goals we will first identify a canonical ensemble in equilibrium with a thermal bath at temperature \(T\), in such a way that it mimics the quantum-gravity corrected, Euclidean spacetime \(\mathbb{R}^{d}_{(L)}\). This ensemble is immediately suggested by the right-hand side of Eq. (3): an infinite collection of quantum-gravity free, Euclidean spacetimes \(\mathbb{R}^{d+2n}_{(L=0)}\), one for each value of \(n\in\mathbb{N}\). The following hints provide further clues: _i)_ associated with a harmonic oscillator there is a natural length scale, namely \[\lambda_{0}=\sqrt{\frac{\hbar}{m\omega}}; \tag{5}\] _ii)_ spacetime dimensions are evenly spaced in the same way as the energy eigenvalues of the 1-dimensional harmonic oscillator; _iii)_ the sum over virtual dimensions (3) is such that only those virtual dimensions are summed over that have the same parity as \(d\); also this is reminiscent of the well-defined parity of harmonic eigenstates. These hints suggest that the sought-for canonical ensemble might be given by the infinite collection of excited energy levels \(n=d,d+2,d+4,\ldots\) of a 1-dimensional harmonic oscillator. The oscillator groundstate \(n=0\) is mapped into the spacetime dimension \(d=0\); although the latter is meaningless as a dimension, meaningful physical quantities will be attached to the value \(d=0\). This oscillator will be placed in thermal equilibrium with an energy reservoir at temperature \(T\). Probabilities will be distributed according to the Boltzmann law: an energy \(\varepsilon_{n}=(n+1/2)\hbar\omega\) will be weighted by \(w_{n}=\exp(-\varepsilon_{n}/k_{B}T)\), and the partition function will be given by \(Z=\sum_{n}w_{n}\). The mass \(m\) of the harmonic oscillator will be taken to equal that of the particle, and the frequency \(\omega\) will be its Compton frequency. The coordinate along which this oscillator moves will be denoted by \(q\); the corresponding quantum operator will be \(Q\). As usual we will have \(H|n\rangle=(n+1/2)\hbar\omega|n\rangle\), where \(H\) is the harmonic Hamiltonian and the \(|n\rangle\), \(n=0,1,\ldots\) are normalised energy eigenstates. This completes our identification of the quantum-mechanical, 1-dimensional harmonic oscillator that is necessary for our construction. What remains is to prove that \(L\) actually arises as the thermal average of a certain quantity at a certain temperature; a point that we develop next. 
## 2 Thermality of the zero-point length ### Oscillators as a model for virtual dimensions Let us begin with the canonical partition function \(Z_{\rm ho}(\beta)\) for the quantum, \(1\)-dimensional harmonic oscillator in thermal equilibrium with an energy reservoir at temperature \(T\): \[Z_{\rm ho}(\beta)=\sum_{n=0}^{\infty}{\rm e}^{-(n+1/2)\beta\hbar\omega}=\frac{1}{2}\,{\rm csch}\left(\frac{\beta\hbar\omega}{2}\right). \tag{6}\] We will also need the partition functions with a defined parity \[Z_{\rm ho}^{\rm even/odd}(\beta)=\sum_{n=0\atop{\rm n\,even/odd}}^{\infty}{\rm e}^{-(n+1/2)\beta\hbar\omega}=\frac{1}{2}\,{\rm e}^{\pm\beta\hbar\omega/2}{\rm csch}\left(\beta\hbar\omega\right), \tag{7}\] the positive (resp. negative) sign corresponding to the even (resp. odd) parity. Moreover, we would like to construct a partition function appropriate to the sum over virtual dimensions (3). This is readily done: given a value of \(d\geq 0\), consider the object \(Z_{d}(\beta)\) defined as \[Z_{d}(\beta)=\sum_{n=0}^{\infty}{\rm e}^{-(d+2n+1/2)\beta\hbar\omega}=\frac{1}{2}\,{\rm e}^{-(d-1/2)\beta\hbar\omega}\,{\rm csch}\left(\beta\hbar\omega\right). \tag{8}\] We will refer to the ensemble of oscillator states described by the partition function (8) as _the truncated oscillator_. Truncation means that one sums only over those excited oscillator states \(|n\rangle\) such that \(n\geq d\), and then only over those carrying the same parity as \(d\) (as dictated by the sum over virtual dimensions (3)). By contrast we will refer to the ensemble of oscillator states described by the partition function (6) as _the complete oscillator_, completeness here meaning that one sums over all states \(n\geq 0\), and also regardless of parity. ### The complete oscillator #### 2.2.1 Thermal density matrices The thermal density matrix for the complete harmonic oscillator is defined by \[\varrho_{\rm ho}(\beta)=\sum_{n=0}^{\infty}|n\rangle{\rm e}^{-(n+1/2)\beta\hbar\omega}\langle n|. \tag{9}\] We will also need some related sums. One of them is the alternating sum \[\varrho_{\rm ho}^{\rm alt}(\beta)=\sum_{n=0}^{\infty}(-1)^{n}|n\rangle{\rm e}^{-(n+1/2)\beta\hbar\omega}\langle n|, \tag{10}\] that one can readily evaluate: \[\varrho_{\rm ho}^{\rm alt}(\beta)={\rm i}\,\varrho_{\rm ho}\left(\beta+\frac{{\rm i}\pi}{\hbar\omega}\right). \tag{11}\] Also necessary are the sums over all even/odd states \[\varrho_{\rm ho}^{\rm even/odd}(\beta)=\sum_{n=0\atop{\rm n\,even/odd}}^{\infty}|n\rangle{\rm e}^{-(n+1/2)\beta\hbar\omega}\langle n|. \tag{12}\] Now \(\varrho_{\rm ho}^{\rm even}(\beta)\) is best evaluated by inserting the projector \((1+(-1)^{n})/2\) and then applying Eq. (11), while \(\varrho_{\rm ho}^{\rm odd}(\beta)=\varrho_{\rm ho}(\beta)-\varrho_{\rm ho}^{\rm even}(\beta)\). We thus arrive at \[\varrho_{\rm ho}^{\rm even/odd}(\beta)=\frac{1}{2}\,\varrho_{\rm ho}(\beta)\pm\frac{{\rm i}}{2}\,\varrho_{\rm ho}\left(\beta+\frac{{\rm i}\pi}{\hbar\omega}\right), \tag{13}\] the even (resp. odd) sum corresponding to the plus (resp. minus) sign. We can now express all the above density matrices in the position representation. Let the matrix elements \(\langle q|\varrho_{\rm ho}(\beta)|q^{\prime}\rangle\) be denoted by \(\varrho_{\rm ho}(q,q^{\prime};\beta)\).
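As a quick numerical sanity check of the closed forms (6)-(8), the truncated sums can be compared against the stated \({\rm csch}\) expressions; the sketch below is illustrative only and sets \(\hbar\omega=1\), so that \(x=\beta\hbar\omega\).

```python
# Numerical sanity check of the closed-form partition functions (6)-(8).
# Units: hbar*omega = 1, so x = beta*hbar*omega. Illustrative sketch only.
import numpy as np

def Z_ho(x, nmax=2000):
    n = np.arange(nmax)
    return np.sum(np.exp(-(n + 0.5) * x))

def Z_parity(x, parity, nmax=2000):
    n = np.arange(parity, nmax, 2)          # even: n = 0,2,4,...  odd: n = 1,3,5,...
    return np.sum(np.exp(-(n + 0.5) * x))

def Z_trunc(x, d, nmax=2000):
    n = np.arange(nmax)
    return np.sum(np.exp(-(d + 2*n + 0.5) * x))

csch = lambda x: 1.0 / np.sinh(x)

x, d = 0.7, 3
print(np.isclose(Z_ho(x),        0.5 * csch(x / 2)))                       # Eq. (6)
print(np.isclose(Z_parity(x, 0), 0.5 * np.exp(+x / 2) * csch(x)))          # Eq. (7), even
print(np.isclose(Z_parity(x, 1), 0.5 * np.exp(-x / 2) * csch(x)))          # Eq. (7), odd
print(np.isclose(Z_trunc(x, d),  0.5 * np.exp(-(d - 0.5) * x) * csch(x)))  # Eq. (8)
```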
It is known that [4] \[\varrho_{\rm ho}(q,q^{\prime};\beta) \tag{14}\] \[=\frac{1}{\lambda_{0}}\frac{1}{\sqrt{2\pi\sinh(\beta\hbar\omega)}}\exp\left\{ \frac{-1}{2\lambda_{0}^{2}\sinh(\beta\hbar\omega)}\left[(q^{2}+{q^{\prime}}^{2 })\cosh(\beta\hbar\omega)-2qq^{\prime}\right]\right\}.\] Specifically we will need the diagonal matrix element \(\langle q|\varrho_{\rm ho}(\beta)|q\rangle\), also denoted by \(\varrho_{\rm ho}(q;\beta)\): \[\varrho_{\rm ho}(q;\beta)=\frac{1}{\lambda_{0}}\frac{1}{\sqrt{2\pi\sinh(\beta \hbar\omega)}}\exp\left[-\tanh\left(\frac{\beta\hbar\omega}{2}\right)\frac{q^ {2}}{\lambda_{0}^{2}}\right]. \tag{15}\] Using Eqs. (11) and (15), the diagonal of \(\varrho_{\rm ho}^{\rm alt}(\beta)\) turns out to be \[\varrho_{\rm ho}^{\rm alt}(q;\beta)=\frac{1}{\lambda_{0}}\frac{1}{\sqrt{2\pi \sinh(\beta\hbar\omega)}}\exp\left[-\coth\left(\frac{\beta\hbar\omega}{2} \right)\frac{q^{2}}{\lambda_{0}^{2}}\right], \tag{16}\] while for the density matrices (13) one finds \[\varrho_{\rm ho}^{\rm even/odd}(q;\beta)=\frac{1}{2\lambda_{0}}\frac{1}{ \sqrt{2\pi\sinh(\beta\hbar\omega)}} \tag{17}\] \[\times\left\{\exp\left[-\tanh\left(\frac{\beta\hbar\omega}{2}\right)\frac{q^{ 2}}{\lambda_{0}^{2}}\right]\pm\exp\left[-\coth\left(\frac{\beta\hbar\omega}{2 }\right)\frac{q^{2}}{\lambda_{0}^{2}}\right]\right\},\] the positive (resp. negative) sign corresponding to the even (resp. odd) parity. Finally, upon integration over \(q\in\mathbb{R}\) we obtain the partition functions \[Z_{\rm ho}^{\rm even/odd}(\beta)=\int_{-\infty}^{\infty}{\rm d}q\,\varrho_{ \rm ho}^{\rm even/odd}(q;\beta)=\frac{1}{4}\,{\rm csch}\left(\frac{\beta\hbar \omega}{2}\right)\!\pm\!\frac{1}{4}\,{\rm sech}\left(\frac{\beta\hbar\omega}{ 2}\right), \tag{18}\] in perfect agreement with their previous values in Eq. (7). #### 2.2.2 A thermally induced quantum of length It is convenient to reexpress the diagonal matrix elements (15) as \[\varrho_{\rm ho}(q;\beta)=\frac{1}{\lambda_{0}\sqrt{2\pi\sinh(\beta\hbar\omega)} }\exp\left[-\frac{q^{2}}{\lambda^{2}(\beta)}\right], \tag{19}\] where the temperature-dependent Gaussian width \(\lambda(\beta)\) is the following function: \[\lambda(\beta)=\lambda_{0}\sqrt{\coth\left(\frac{\beta\hbar\omega}{2}\right)}. \tag{20}\] Thus \(\lambda(\beta)\) is a temperature-dependent length scale analogous to (5), to which it reduces in the zero-temperature limit since \(\lim_{\beta\rightarrow\infty}\lambda(\beta)=\lambda_{0}\). Once properly normalised, the Gaussian (19) gives the probability for finding the system at \(Q=q\). One finds \[\int_{-\infty}^{\infty}{\rm d}q\,\varrho_{\rm ho}(q;\beta)=Z_{\rm ho}(\beta) \tag{21}\] in nice agreement with the partition function (6). Thus averages within this thermal ensemble will be computed with respect to the normalised distribution \(Z_{\rm ho}^{-1}(\beta)\varrho_{\rm ho}(q;\beta)\). For the position operator \(Q\) and its square \(Q^{2}\) we find1 Footnote 1: We denote thermal averages by round brackets. \[\left(Q\right)_{\rm ho}=\frac{1}{Z_{\rm ho}(\beta)}\int_{-\infty}^{\infty}{ \rm d}q\,q\varrho_{\rm ho}(q;\beta)=0 \tag{22}\] and \[\left(Q^{2}\right)_{\rm ho}=\frac{1}{Z_{\rm ho}(\beta)}\int_{-\infty}^{\infty }{\rm d}q\,q^{2}\varrho_{\rm ho}(q;\beta)=\frac{1}{2}\lambda^{2}(\beta). \tag{23}\] Then the mean square deviation \(\left(\Delta Q\right)_{\rm ho}^{2}\) reads \[\left(\Delta Q\right)_{\rm ho}^{2}=\left(Q^{2}\right)_{\rm ho}-\left(Q\right) _{\rm ho}^{2}=\frac{1}{2}\lambda^{2}(\beta). 
\tag{24}\] It is meaningful to identify the above mean square deviation with one half the square \(L^{2}\) of the quantum of length for the complete oscillator: \[\left(\Delta Q\right)_{\rm ho}^{2}=\frac{1}{2}L^{2}. \tag{25}\] This allows one to solve neatly for the inverse temperature:2 Footnote 2: The notation \(\coth^{-1}(x)\), sometimes also written \(\arccoth(x)\), stands for the function inverse to \(\coth(x)\). \[\beta_{\rm ho}=\frac{2}{\hbar\omega}\coth^{-1}\left(\frac{L^{2}}{\lambda_{0}^{2}}\right). \tag{26}\] We should bear in mind that the averages (22) and (23) are _thermal_ in nature, because they are computed with respect to the _thermal_ probability distribution function \(Z_{\rm ho}^{-1}(\beta)\varrho_{\rm ho}(q;\beta)\). This notwithstanding, it is instructive to compare them to the _quantum_ averages \[\langle n|Q|n\rangle=0,\qquad\langle n|Q^{2}|n\rangle=\left(n+\frac{1}{2}\right)\lambda_{0}^{2} \tag{27}\] corresponding to the harmonic eigenstates \(|n\rangle\): they match exactly when \(n=0\), with the sole replacement of the zero-temperature length scale \(\lambda_{0}\) with its thermal counterpart \(\lambda(\beta)\). Summarising: a zero-point length \(L\) defines the temperature of the thermal bath through Eq. (26). Conversely, one can interpret the latter equation as meaning that _a thermal bath induces a quantum of length_, as claimed. Instead of string fluctuations as in ref. [7], here we have thermal fluctuations as the origin of the zero-point length. ### The truncated oscillator So far we have only considered the complete oscillator. In this section we perform a similar analysis for the truncated oscillator. #### 2.3.1 Thermal density matrices As suggested by the sum over dimensions (3), here the relevant thermal density matrix to consider is \[\varrho_{d}(\beta)=\sum_{n=0}^{\infty}|d+2n\rangle{\rm e}^{-(d+2n+1/2)\beta\hbar\omega}\langle d+2n|, \tag{28}\] which we conveniently reexpress as \[\varrho_{d}(\beta)=\sum_{n=0\atop n=d\,{\rm mod}2}^{\infty}|n\rangle{\rm e}^{-(n+1/2)\beta\hbar\omega}\langle n|-\sum_{n=0\atop n=d\,{\rm mod}2}^{d-1}|n\rangle{\rm e}^{-(n+1/2)\beta\hbar\omega}\langle n|. \tag{29}\] Above, the parity of the \(n\)'s summed over is the same as that of \(d\). As before we will denote the diagonal matrix elements \(\langle q|\varrho_{d}(\beta)|q\rangle\) by \(\varrho_{d}(q;\beta)\). While the integral \(\int_{-\infty}^{\infty}{\rm d}q\,\varrho_{d}(q;\beta)\) must equal the partition function \(Z_{d}(\beta)\) already known by Eq. (8), the integrand \(\varrho_{d}(q;\beta)\) is so far unknown, and must be computed as the diagonal of the density operator (29). In the end, \(Z_{d}^{-1}(\beta)\varrho_{d}(q;\beta)\) will provide us with the normalised probability distribution function necessary to compute thermal averages in this ensemble. It turns out that the sought-for distribution function reads (see Eqs. (47) and (54) of the Appendix) \[\frac{1}{Z_{d}(\beta)}\varrho_{d}^{\rm even/odd}(q;\beta)=\frac{1}{Z_{d}(\beta)}\left[\varrho_{\rm ho}^{\rm even/odd}(q;\beta)-f_{d}^{\rm even/odd}(q;\beta)\right], \tag{30}\] where "even/odd" in \(\varrho_{d}^{\rm even/odd}(q;\beta)\) refers to the parity of \(d\). Above, \(Z_{d}(\beta)\) is given by Eq. (8) regardless of parity, \(\varrho_{\rm ho}^{\rm even/odd}(q;\beta)\) is known by Eq. (17), and the functions \(f_{d}^{\rm even/odd}(q;\beta)\) are given by (see Eqs.
(46) and (53) of the Appendix) \[f_{d}^{\rm even/odd}(q;\beta)=\sum_{n=0\atop n\,{\rm even/odd}}^{d-1}{\rm e}^{-(n+1/2)\beta\hbar\omega}\,|\langle n|q\rangle|^{2}. \tag{31}\] In (31), the parity of the \(n\)'s summed over is the same as that of \(d\). The thermal probability distribution functions (30) differ from their partner (19) in one important respect, namely, by the terms \(f_{d}^{\rm even/odd}(q;\beta)\). The latter arise from the fact that the lowest-lying energy eigenstate for the truncated oscillator is no longer the (Gaussian) oscillator groundstate \(|0\rangle\), but the excited eigenstate \(|d\rangle\) instead. The finite sums \(f_{d}^{\rm even/odd}(q;\beta)\) are nothing but the thermal probability distributions associated with the oscillator eigenstates lying _below_ the state \(|d\rangle\), with due care being taken of parity. #### 2.3.2 A thermally induced quantum of length We will evaluate the thermal averages \[(Q)_{d}^{\rm even/odd}=\frac{1}{Z_{d}(\beta)}\int_{-\infty}^{\infty}{\rm d}q\,q\varrho_{d}^{\rm even/odd}(q;\beta) \tag{32}\] and \[\left(Q^{2}\right)_{d}^{\rm even/odd}=\frac{1}{Z_{d}(\beta)}\int_{-\infty}^{\infty}{\rm d}q\,q^{2}\varrho_{d}^{\rm even/odd}(q;\beta). \tag{33}\] As before, a (squared) quantum of length \(L^{2}\) will be defined as (twice) the mean square deviation of the position operator \(Q\): \[\left((\Delta Q)_{d}^{\rm even/odd}\right)^{2}=\left(Q^{2}\right)_{d}^{\rm even/odd}-\left((Q)_{d}^{\rm even/odd}\right)^{2}=\frac{1}{2}L^{2}. \tag{34}\] Use of Eqs. (17) and (27) immediately yields \[(Q)_{d}^{\rm even/odd}=0. \tag{35}\] However, the evaluation of \(\left(Q^{2}\right)_{d}^{\rm even/odd}\) is lengthier, and details have been relegated to the Appendix. The final result is Eq. (62): \[\left((\Delta Q)_{d}^{\rm even/odd}\right)^{2}=\left(Q^{2}\right)_{d}^{\rm even/odd}=\left[\lambda(2\beta)\right]^{2}+\lambda_{0}^{2}\left(d-\frac{1}{2}\right). \tag{36}\] As was the case for the complete harmonic oscillator, one can interpret the quantum of length \(L\) as possessing a thermal origin. Two features of the above are worth mentioning. First, the temperature-dependent Gaussian width of Eq. (20) appears evaluated at \(2\beta\) rather than \(\beta\). Moreover, there appears a temperature-independent contribution proportional to \((d-1/2)\). These two features arise as consequences of the truncation of the oscillator. The sum over dimensions (3) starts at the value \(d\), and it contains all higher dimensions of the same parity as \(d\). This parity requirement amounts to doubling the frequency \(\omega\) or, equivalently, the inverse temperature \(\beta\). ## 3 Gravitational selfduality In ref. [21] it has been conjectured that a strongly quantum, weakly gravitational system must be dual to a weakly quantum, strongly gravitational system.3 We claim that an instance of this duality symmetry is provided by the following example. Footnote 3: A related proposal was put forward in ref. [9]. For the complete oscillator of previous sections to implement the weakly gravitational, strongly quantum regime, it suffices to impose two additional requirements: _i)_ the mass \(m\) is very small; _ii)_ the quantum number \(n\) is low enough. We will construct a dual system that is weakly quantum but strongly gravitational.
Consider the quantum system whose (dimensionless) Hamiltonian \(\tilde{H}\) is given by \[\tilde{H}=\left(\frac{H}{\hbar\omega}\right)^{-1}, \tag{37}\] where \(H\) is the harmonic Hamiltonian satisfying \(H|n\rangle=(n+1/2)\hbar\omega|n\rangle\). The (dimensionless) energy levels \(\tilde{E}_{n}\) of \(\tilde{H}\) are \[\tilde{H}|n\rangle=\tilde{E}_{n}|n\rangle,\qquad\tilde{E}_{n}=\frac{1}{n+1/2},\qquad n\in\mathbb{N}. \tag{38}\] Now \(n\) was assumed small to guarantee the strongly quantum regime of the initial oscillator; hence the dual system governed by \(\tilde{H}\) implements the weakly quantum behaviour. More precisely: small values of \(n\) correspond to eigenvalues of \(H\) that are comparable to the oscillator vacuum energy; this may be called the IR regime of the original oscillator. In the dual system governed by \(\tilde{H}\), the same small values of \(n\) correspond to high energies (as compared to the rest of the spectrum (38)); this may be called the UV regime of the dual system. In this sense, _the map (37) between these two dual systems implements UV/IR duality_. Let the mass of this dual system be \(\tilde{m}\). We need it to be large so gravitational effects will also be large. Thus requiring \[m\tilde{m}=M_{P}^{2}, \tag{39}\] where \(M_{P}\) is the Planck mass, ensures the desired behaviour. We can further elaborate on the gravitational aspects of the dual system (38). The area operator defined as \[\tilde{A}=L^{2}\tilde{H}, \tag{40}\] where \(L\) is the quantum of length, has the quantised area levels \[\tilde{A}|n\rangle=\tilde{A}_{n}|n\rangle,\qquad\tilde{A}_{n}=\frac{L^{2}}{n+ 1/2},\qquad n\in\mathbb{N}. \tag{41}\] We can place this dual system described by the area operator \(\tilde{A}\) in contact with an entropy reservoir at a constant value of the area; ideally this reservoir would be a black hole with horizon area \(A_{BH}\) and entropy \[S_{BH}=kA_{BH},\qquad k=\frac{k_{B}c^{3}}{4\hbar G}. \tag{42}\] Then the dual system can exchange quanta of entropy \(k\tilde{A}_{n}\) with the entropy reservoir. This is analogous to the oscillator exchanging energy quanta \(\hbar\omega\) with the thermal bath. Probabilities are distributed among the different area levels according to the Boltzmann law \(\exp\left(-\tilde{A}_{n}/A_{BH}\right)\). Furthermore, upon multiplication by \(k\) as in Eq. (42), the area operator \(\tilde{A}\) becomes the entropy operator \(\tilde{S}=k\tilde{A}\). Then the eigenvalue equation \[\tilde{S}|n\rangle=\tilde{S}_{n}|n\rangle,\qquad\tilde{S}_{n}=k\tilde{A}_{n} \tag{43}\] provides an instance of the entropic picture of quantum mechanics first postulated in ref. [1]. The duality presented here can also be regarded as an instance of that analysed in ref. [13]. ## 4 Discussion Gravity has been argued to be selfdual. Indeed gravity escapes the usual pattern of an effective theory (in the Wilsonian sense). Any such theory at low energy holds all the way up to a certain energy scale, beyond which it breaks down and must be replaced by a more fundamental theory. Beginning now with classical gravity at low energy, gravity turns quantum as energy is increased until a certain characteristic scale is reached. Surprisingly, however, gravity becomes once again classical beyond this scale; this is the meaning of selfduality. In relation to UV completeness [2], in ref. 
[21] it has been argued that a UV/IR duality transformation must exist such that it will map the strongly quantum, weakly gravitational regime of a given system into the strongly gravitational, weakly quantum regime of a dual system. In this article we have presented an explicit example of two systems exhibiting this UV/IR duality property. Our starting point was Eq. (3), which states that quantum-gravity properties are conferred upon the \(d\)-dimensional propagator \(G^{d}_{(L)}\) by an infinite sum of propagators \(G^{d+2n}_{(L=0)}\) in virtual dimensions; the latter are all classical in the sense that they all carry \(L=0\). This has led us to interpret the zero-point length, the hallmark of quantum gravity, as a thermal phenomenon. Novel attempts at unification [5, 6, 19, 23] also take the quantum of length into account. Moreover, thermality should not surprise us given that thermodynamics is central to a number of modern approaches to gravity [10, 16, 24, 25]. Reinterpreted from a thermal point of view, the duality invariance of the quantum-mechanical propagator (3) is perfectly natural: the infinite sum is over all propagators _with a given parity_. Definite parity implies duality invariance; undefined parity does not. We have found that, while zero-temperature dimensions are sharply defined, thermal dimensions become averages over a statistical ensemble of virtual dimensions of classical spacetimes. Transitions between different virtual dimensions are induced by thermal fluctuations. Thus virtual dimensions fluctuate, as they should in any quantum theory. We have modelled virtual dimensions on the quantum number of a 1-dimensional harmonic oscillator. However we have not altered the notion of a smooth spacetime manifold in any other way: as stated in section 1, our analysis centers around the mesoscopics of quantum gravity, where all quantum effects can be ascribed to a nonvanishing quantum of length. The theory developed here is not of quantum _spacetime_, but of the _dimension_ of quantum spacetime in the mesoscopic regime. The microscopic constituents, or _atoms_, of the dimension of a quantum-gravity corrected spacetime \(\mathbb{R}^{d}_{(L)}\) are the virtual dimensions \(d+2n\) of all quantum-gravity free spacetimes \(\mathbb{R}^{d+2n}_{(L=0)}\) for \(n\in\mathbb{N}\). A model for these virtual dimensions has been provided by the energy levels of a truncated oscillator. Altogether, we can view the thermality of the zero-point length as yet another argument in favour of an atomistic, or granular, nature of quantum spacetime. **Acknowledgments** Work funded by FEDER/MCIN under grant PID2021-128676OB-I00. ## 5 Appendix ### Computation of \(\varrho_{d}^{\rm even/odd}(q;\beta)\) Here we derive Eq. (30), analysing the cases \(d=2k\) and \(d=2k+1\) separately. _i)_\(d=2k\). Then the density matrix (29) becomes \[\varrho_{d=2k}(\beta)=\sum_{n=0\atop n\ {\rm even}}^{\infty}|n\rangle{\rm e}^{-( n+1/2)\beta\hbar\omega}\langle n|-\sum_{n=0\atop n\ {\rm even}}^{d-1}|n\rangle{\rm e}^{-(n+1/2)\beta\hbar\omega}\langle n|. \tag{44}\] The first sum equals the density operator \(\varrho_{\rm ho}^{\rm even}(\beta)\) of Eq. (12), the corresponding diagonal being given in Eq. (17). Therefore the diagonal entries of (44) read \[\varrho_{d=2k}(q;\beta)=\varrho_{\rm ho}^{\rm even}(q;\beta)-\sum_{n=0\atop n \ {\rm even}}^{d-1}{\rm e}^{-(n+1/2)\beta\hbar\omega}\,|\langle n|q\rangle|^{2}. 
\tag{45}\] Finally defining \[f_{d=2k}^{\rm even}(q;\beta)=\sum_{n=0\atop n\ {\rm even}}^{d-1}{\rm e}^{-( n+1/2)\beta\hbar\omega}\,|\langle n|q\rangle|^{2}, \tag{46}\] the sought-for diagonal matrix element is \[\varrho_{d=2k}(q;\beta)=\varrho_{\rm ho}^{\rm even}(q;\beta)-f_{d=2k}^{\rm even }(q;\beta), \tag{47}\] where the right-hand side is explicitly known by Eqs. (17) and (46). As a consistency check, integration over \(q\in\mathbb{R}\) should produce the partition function \(Z_{d=2k}(\beta)\). Indeed: \[Z_{d=2k}(\beta)=Z_{\rm ho}^{\rm even}(\beta)-\sum_{n=0\atop n\ {\rm even}}^{d-1}{ \rm e}^{-(n+1/2)\beta\hbar\omega}, \tag{48}\] on account of Eq. (18) and of the \(|n\rangle\) being normalised eigenstates. Now the finite geometric sum can be readily computed, \[\sum_{n=0\atop n\,{\rm even}}^{d-1}\,{\rm e}^{-(n+1/2)\beta\hbar\omega}=\frac{1} {2}\left[{\rm e}^{\beta\hbar\omega/2}-{\rm e}^{-(d-1/2)\beta\hbar\omega}\right] {\rm csch}(\beta\hbar\omega), \tag{49}\] and Eqs. (18) and (49) yield the partition function \(Z_{d=2k}(\beta)\) \[Z_{d=2k}(\beta)=\frac{1}{2}\,{\rm e}^{-(d-1/2)\beta\hbar\omega}{\rm csch}( \beta\hbar\omega), \tag{50}\] in happy agreement with its previous evaluation in Eq. (8). _ii)_\(d=2k+1\). Here the density matrix (29) becomes \[\varrho_{d=2k+1}(\beta)=\sum_{n=0\atop n\,{\rm odd}}^{\infty}|n\rangle{\rm e }^{-(n+1/2)\beta\hbar\omega}\langle n|-\sum_{n=0\atop n\,{\rm odd}}^{d-1}|n \rangle{\rm e}^{-(n+1/2)\beta\hbar\omega}\langle n|. \tag{51}\] The first sum equals the density operator \(\varrho_{\rm ho}^{\rm odd}(\beta)\), the corresponding diagonal being given in Eq. (17). Therefore the diagonal entries of (51) read \[\varrho_{d=2k+1}(q;\beta)=\varrho_{\rm ho}^{\rm odd}(q;\beta)-\sum_{n=0\atop n \,{\rm odd}}^{d-1}{\rm e}^{-(n+1/2)\beta\hbar\omega}\,|\langle n|q\rangle|^{ 2}, \tag{52}\] so defining \[f_{d=2k+1}^{\rm odd}(q;\beta)=\sum_{n=0\atop n\,{\rm odd}}^{d-1}{\rm e}^{-(n +1/2)\beta\hbar\omega}\,|\langle n|q\rangle|^{2}, \tag{53}\] the sought-for diagonal matrix element will be \[\varrho_{d=2k+1}(q;\beta)=\varrho_{\rm ho}^{\rm odd}(q;\beta)-f_{d=2k+1}^{ \rm odd}(q;\beta), \tag{54}\] where the right-hand side is known by Eqs. (17) and (53). As a check, integration over \(q\in\mathbb{R}\) should produce the partition function \(Z_{d=2k+1}(\beta)\). This is indeed the case: \[Z_{d=2k+1}(\beta)=Z_{\rm ho}^{\rm odd}(\beta)-\sum_{n=0\atop n\,{\rm odd}} ^{d-1}{\rm e}^{-(n+1/2)\beta\hbar\omega}, \tag{55}\] again on account of Eq. (18) and of the normalisation of the \(|n\rangle\). Moreover, \[\sum_{n=0\atop n\,{\rm odd}}^{d-1}{\rm e}^{-(n+1/2)\beta\hbar\omega}=\frac{ 1}{2}\left[{\rm e}^{-\beta\hbar\omega/2}-{\rm e}^{-(d-1/2)\beta\hbar\omega} \right]{\rm csch}(\beta\hbar\omega). \tag{56}\] Thus Eqs. (18) and (56) yield the partition function \(Z_{d=2k+1}(\beta)\): \[Z_{d=2k+1}(\beta)=\frac{1}{2}\,{\rm e}^{-(d-1/2)\beta\hbar\omega}{\rm csch}( \beta\hbar\omega), \tag{57}\] again in beautiful agreement with Eq. (8). ### Computation of \(\left(Q^{2}\right)_{d}^{\rm even/odd}\) By Eq. (30), \[\int_{-\infty}^{\infty}{\rm d}q\,q^{2}\varrho_{d}^{\rm even/odd}(q;\beta)=\int_{- \infty}^{\infty}{\rm d}q\,q^{2}\varrho_{\rm ho}^{\rm even/odd}(q;\beta)-\int_{ -\infty}^{\infty}{\rm d}q\,q^{2}f_{d}^{\rm even/odd}(q;\beta). 
\tag{58}\] Terms to evaluate are \[\int_{-\infty}^{\infty}{\rm d}q\,q^{2}\,\varrho_{\rm ho}^{\rm even/odd}(q;\beta) \tag{59}\] \[=\frac{\lambda_{0}^{2}}{8}\left[\coth\left(\frac{\beta\hbar\omega}{2}\right) \mathrm{csch}\left(\frac{\beta\hbar\omega}{2}\right)\pm\tanh\left(\frac{\beta \hbar\omega}{2}\right)\mathrm{sech}\left(\frac{\beta\hbar\omega}{2}\right)\right]\] where Eq. (17) has been applied, and \[\int_{-\infty}^{\infty}{\rm d}q\,q^{2}f_{d}^{\rm even/odd}(q;\beta)=\lambda_ {0}^{2}\sum_{n=0\atop n\ {\rm even/odd}}^{d-1}\left(n+\frac{1}{2}\right){\rm e}^{-(n+1/2)\beta \hbar\omega} \tag{60}\] after using Eqs. (27), (31). The finite geometric sums are straightforward to evaluate as the derivatives, with respect to \(\beta\hbar\omega\), of the sums (49) and (56). Then adding together Eqs. (60) and (59) produces, after some algebra, \[\int_{-\infty}^{\infty}{\rm d}q\ q^{2}\varrho_{d}^{\rm even/odd}(q;\beta)= \frac{\lambda_{0}^{2}}{4}\,{\rm e}^{-(d-1/2)\beta\hbar\omega}\mathrm{csch}( \beta\hbar\omega)\left[2d+2\coth(\beta\hbar\omega)-1\right]. \tag{61}\] Finally using Eq. (20) we conclude \[\left(Q^{2}\right)_{d}^{\rm even/odd}=Z_{d}(\beta)^{-1}\int_{-\infty}^{ \infty}{\rm d}q\ q^{2}\varrho_{d}^{\rm even/odd}(q;\beta)\] \[=\frac{\lambda_{0}^{2}}{2}\left[2d-1+2\coth(\beta\hbar\omega)\right]=\left[ \lambda(2\beta)\right]^{2}+\lambda_{0}^{2}\left(d-\frac{1}{2}\right). \tag{62}\]
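Since \(\langle n|Q^{2}|n\rangle=(n+1/2)\lambda_{0}^{2}\) by Eq. (27), the thermal averages derived above can also be checked by direct Boltzmann-weighted sums over the appropriate level sets, without any \(q\)-integration. The sketch below (with \(\hbar\omega=\lambda_{0}=1\)) verifies Eqs. (23)-(24) for the complete oscillator and Eq. (62) for the truncated one.

```python
# Numerical check of <Q^2> for the complete and truncated oscillators, using
# <n|Q^2|n> = (n + 1/2) * lambda0^2 (Eq. (27)). Sketch with hbar*omega = lambda0 = 1.
import numpy as np

def thermal_Q2(levels, x):
    """Boltzmann-weighted average of (n + 1/2) over the given levels, at x = beta*hbar*omega."""
    n = np.asarray(levels, dtype=float)
    w = np.exp(-(n + 0.5) * x)
    return np.sum((n + 0.5) * w) / np.sum(w)

x, d, nmax = 0.9, 4, 4000

# Complete oscillator: Eqs. (23)-(24) give <Q^2> = lambda(beta)^2 / 2 = coth(x/2) / 2.
print(np.isclose(thermal_Q2(np.arange(nmax), x),
                 0.5 / np.tanh(x / 2)))

# Truncated oscillator (levels d, d+2, ...): Eq. (62) gives
# <Q^2> = lambda(2*beta)^2 + (d - 1/2) = coth(x) + d - 1/2.
print(np.isclose(thermal_Q2(np.arange(d, nmax, 2), x),
                 1.0 / np.tanh(x) + d - 0.5))
```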
2301.09767
Truveta Mapper: A Zero-shot Ontology Alignment Framework
In this paper, a new perspective is suggested for unsupervised Ontology Matching (OM) or Ontology Alignment (OA) by treating it as a translation task. Ontologies are represented as graphs, and the translation is performed from a node in the source ontology graph to a path in the target ontology graph. The proposed framework, Truveta Mapper (TM), leverages a multi-task sequence-to-sequence transformer model to perform alignment across multiple ontologies in a zero-shot, unified and end-to-end manner. Multi-tasking enables the model to implicitly learn the relationship between different ontologies via transfer-learning without requiring any explicit cross-ontology manually labeled data. This also enables the formulated framework to outperform existing solutions for both runtime latency and alignment quality. The model is pre-trained and fine-tuned only on publicly available text corpus and inner-ontologies data. The proposed solution outperforms state-of-the-art approaches, Edit-Similarity, LogMap, AML, BERTMap, and the recently presented new OM frameworks in Ontology Alignment Evaluation Initiative (OAEI22), offers log-linear complexity, and overall makes the OM task efficient and more straightforward without much post-processing involving mapping extension or mapping repair. We are open sourcing our solution.
Mariyam Amir, Murchana Baruah, Mahsa Eslamialishah, Sina Ehsani, Alireza Bahramali, Sadra Naddaf-Sh, Saman Zarandioon
2023-01-24T00:32:56Z
http://arxiv.org/abs/2301.09767v3
# Truveta Mapper: A Zero-shot Ontology Alignment Framework ###### Abstract In this paper, a new perspective is suggested for unsupervised Ontology Matching (OM) or Ontology Alignment (OA) by treating it as a translation task. Ontologies are represented as graphs, and the translation is performed from a node in the source ontology graph to a path in the target ontology graph. The proposed framework, Truveta Mapper (TM), leverages a multi-task sequence-to-sequence transformer model to perform alignment across multiple ontologies in a zero-shot, unified and end-to-end manner. Multi-tasking enables the model to implicitly learn the relationship between different ontologies via transfer-learning without requiring any explicit cross-ontology manually labeled data. This also enables the formulated framework to outperform existing solutions for both runtime latency and alignment quality. The model is pre-trained and fine-tuned only on publicly available text corpus and inner-ontologies data. The proposed solution outperforms state-of-the-art approaches, Edit-Similarity, LogMap, AML, BERTMap, and the recently presented new OM frameworks in Ontology Alignment Evaluation Initiative (OAEI22), offers log-linear complexity in contrast to quadratic in the existing end-to-end methods, and overall makes the OM task efficient and more straightforward without much post-processing involving mapping extension or mapping repair. ## 1 Introduction Ontology Matching (OM) or Ontology Alignment (OA) is the process of finding correspondence between the entities of two ontologies. The purpose of this process is to unify data from different sources and reduce heterogeneity, making data more viable for research and development [21]. Classical state-of-the-art (SOTA) approaches on OM are based on non-contextual matching, where the model captures lexical similarity but fails to understand textual semantics, which results in ambiguity. On the other hand, with contextual approaches, the objective is to match complex pairs which are lexically different but semantically similar and vice-versa. For example, "Encephalopathy" and "Discorder of brain" are lexically different but are used in the same context. However, "Structure of permanent maxillary right second molar tooth" and "Structure of permanent mandibular right first molar tooth" are lexically similar but are semantically different. Recently, a transformer-based contextual framework using BERT [4], has been proposed in [14], which showed promising results compared to other OM systems. In their approach the existing pre-trained BERT model was fine-tuned to learn the similarity between different terms, and thereby achieve equivalence matching. This process involves computing the similarity of each input term with a large subset of terms in the target ontology, resulting in quadratic complexity. Additionally, the model captures textual context, however, it does not understand the ontology graph structure, which could significantly extend the capabilities of ontologies graph matching. Motivated by the potential of the transformer models for understanding textual semantic context and overcoming the limitations in the existing methods, the present work proposes Truveta Mapper (TM), a novel zero-shot sequence-to-sequence multi-task transformer-based framework for OM, with the capability of learning both the graph-structure and textual semantics of the ontologies. 
The model is first pre-trained to learn the hierarchical graph structure of ontology and semantics of each class using Masked Language Modeling (MLM), then fine-tuned using class labels and synonyms as input and class hierarchical-ID as the output, capturing the structure of the ontology. As such, we treat OM as a translation task, where the source ontology class is translated to a path in the matching target ontology class in a zero-shot and multitask manner. Proposed approach is based on zero-shot learning and prediction, where "zero-shot learning" refers to the ability of the model to make source-to-target predictions without requiring manually labeled cross-ontologies matching pairs, and "zero-shot prediction" performs end-to-end mapping from the source to the target without any similarity calculation across the entire/subset target ontology or post-processing like extension/repair. With multi-tasking, a single model is capable of matching different ontologies such as SNOMED to FMA, SNOMED to NCIT, and so on, and takes advantage of transfer learning as well. In this work, empirical comparison is made with the state-of-the-art lexical matching approaches and the recent contextual models presented in [1] on the Unified Medical Language System (UMLS) datasets as part of the New Bio-ML track for OAEI 2022. The Ontology Alignment Evaluation Initiative (OAEI) organizes yearly campaigns on ontology matching tasks. Our solution surpasses state-of-the-art LogMap, AML models, Edit-similarity, and recently proposed BERTMap, AMD, LogMap-Lite, BERTMap-Lite, LSMatch, Matcha and AT-Matcher, while offering log-linear complexity in contrast to quadratic in many existing approaches. The remainder of this paper is as follows. Section 2 reviews the recent SOTA-related works on OM/OA; Section 3 defines the problem statement, provides a high-level understanding of our proposed approach and the ontologies used; Section 4 describes TM in detail, elaborates on pre-training, fine-tuning, zero-shot learning, and predictions; Section 5 shows the evaluation criteria, results, and gives insight about the overall model performance; and lastly, Section 6 provides a detailed discussion, conclusion on the framework, and outlines our potential future works. ## 2 Related Work OM classical approaches are primarily based on non-contextual matching. Related to that, some notable works in the field of OM include Edit-Similarity [1], LSMatch [13], LogMap [14], and AgreementMakerLight (AML) [15], among others. Edit-Similarity is a naive lexical matching approach based on normalized edit similarity scores. LSMatch is another lexical matching approach based on string similarity match. LogMap and AML are two classical OM systems with leading performance in many equivalence matching tasks. These two approaches are based on lexical matching, mapping extension (adding new mappings for semantically related classes of the current mappings), and mapping repair (removing mappings that can lead to logical conflicts). However, these lexical approaches do not consider contextual semantics. Recently, several OM systems, such as OntoEmma [21], DeepAlignment [12], VeeAlign [17], leveraged dense word embeddings, in which words are projected into a vector. Word pairs with smaller Euclidean distances in the vector space will have closer semantic meanings. Different techniques are used to generate these embeddings. 
OntoEmma and [13] use word2vec [15], which is trained on Wikipedia; [20] uses FastText [1]; LogMap-ML [3] uses OWL2Vec* [3], which is a word2vec model trained on corpora extracted from the ontology with different kinds of semantics; DeepAlignment uses refined word embeddings using counter-fitting; VeeAlign proposes dual embeddings using class labels. These are primarily traditional non-contextual word embedding methods and do not consider word-level contexts. Some of these approaches, such as VeeAlign, are based on supervised training, which requires high-quality labeled mappings for training and can be challenging to obtain. Recently, transformer-based models [23], thanks to their ability to learn textual contexts, obtained SOTA for several tasks in natural language processing such as machine translation [16, 17, 18], question answering [19], among others. Similarly, in the field of OM, recent developments have also shown the potential of using transformer-based frameworks [22, 14, 20]. Neutel and de Boer (2021) employed contextual BERT embeddings to match two domain ontologies associated with occupations. Each sentence is embedded using BERT, and similarity is applied to get the scores for OM. More recently, [14] proposed the BERTMap model, which is obtained by fine-tuning the already pre-trained BERT model for the binary classification task. The BERTMap model often outperformed non-contextual approaches such as LogMap, AML, and LogMap-ML. However, it requires quadratic time complexity, which is challenging for large ontologies. AMD [21] is another recent context-based matching approach that uses a BERT-based model to generate mappings and then filters these mappings using graph embedding techniques. Other related ontology matching systems that participated in OAEI 2022 [1] are LogMap-Lite, BERTMap-Lite, Matcha, and ATMatcher. ## 3 Methodology ### Problem statement Ontology Matching (OM) or Ontology Alignment (OA) is the process of finding correspondence between the entities/classes of two ontologies [14]. In this work, a new perspective is presented by treating OM as a translation task for equivalence matching, which can be mathematically presented as \(c_{2}=f(c_{1},T)\), where the function \(f\) gives the matching target ontology class \(c_{2}\in C_{2}\), given a source class \(c_{1}\in C_{1}\), and \(T\) is the alignment task identifier. Here, \(O_{1}\) and \(O_{2}\) denote the source and target ontologies, with \(C_{1}\) and \(C_{2}\) being their respective named class sets. Since we are training a multi-task model, a unique identifier is used for each task. The present work focuses on equivalence matching, where classes having the same semantic meaning in different ontologies are matched with each other. As shown in Figure 1, each ontology is presented in the form of a hierarchical graph structure with parent-child relations, where each class corresponds to a node in the given ontology graph. In Figure 1, we illustrate our high-level solution, where we train our model to learn this hierarchical structure, and consequently, the target class \(c_{2}\in C_{2}\) is obtained as a path in the target ontology graph, for a given input node representing class \(c_{1}\in C_{1}\) in the source ontology \(O_{1}\).
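To make the translation formulation concrete, the sketch below illustrates the interface \(f(c_{1},T)\): a task identifier is prepended to the source class description and a sequence-to-sequence model emits a dash-separated hierarchical-ID, which is decoded back to a target class. The prefix string, the stub model, and the example ID are hypothetical placeholders for illustration, not the exact identifiers used by TM.

```python
# Illustrative sketch of the translation-style interface f(c1, T). The task-prefix
# format, the stub model, and the example hierarchical-ID are hypothetical.
from typing import Dict

def f(source_label: str, task_id: str, seq2seq_model, id_to_class: Dict[str, str]) -> str:
    """Map a source class label to a target class via a task-prefixed translation."""
    prompt = f"{task_id}: {source_label}"               # e.g. "SNOMED2FMA: Chest wall structure"
    hierarchical_id = seq2seq_model.generate(prompt)    # e.g. "1-2-0-7" (a dash-separated path)
    return id_to_class.get(hierarchical_id, "<no match>")

class _StubModel:                                       # stand-in for the fine-tuned seq2seq model
    def generate(self, prompt: str) -> str:
        return "1-2-0-7"

print(f("Chest wall structure", "SNOMED2FMA", _StubModel(), {"1-2-0-7": "fma50060 (Chest wall)"}))
```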
### Ontologies In this work, as a part of the New Bio-ML track [1], we focus on three UMLS equivalence matching tasks, SNOMED to FMA (Body), SNOMED to NCIT (Neoplas), and SNOMED to NCIT (Pharm), in an unsupervised setting from [1], where the matching pairs between these ontologies are only divided into validation (10%) and testing (90%) sets, without any training data. Pharm, Neoplas, and Body are associated with the semantic types of "Pharmacologic Substance", "Neoplastic Process", and "Body Part, Organ, or Organ Components" in UMLS, respectively. Based on these semantics types, subset ontologies are provided in [1], and are given as SNOMED (Body), SNOMED (Neoplas), SNOMED (Pharm), FMA (Body), NCIT (Neoplas) and NCIT (Pharm), where the first three are the source and last three are the target ontologies in our matching task (Table 1). For each of the classes present in the given ontologies, class ID is provided along with its associated label and possible synonyms (class descriptions). For example, in Figure 1, for Snomed ID 78904004, the class label is "Chest Wall Structure," and its synonyms are "Thoracic Wall" and "Chest Wall". ## 4 Truveta Mapper (TM): Proposed approach for OM Figure 2 demonstrates training architecture, with two main steps of pre-training and fine-tuning. Starting from a language model pre-trained on the C4 dataset, the model is further pre-trained on the full ontologies, learning each ontology's semantics and hierarchical structure. Afterward, the model is trained on the downstream task using the subset ontology data during the fine-tuning stage. The pre-training and fine-tuning steps are done in a multi-task manner on inner-ontologies, which enables the model in extensive transferring (Figure 2). In the prediction step, given a source ontology, the output is predicted in a zero-shot manner. More \begin{table} \begin{tabular}{l l l l} \hline \hline Ontologies & \#Classes & Subsets & \#Classes \\ \hline \multirow{2}{*}{SNOMED} & \multirow{2}{*}{358,222} & Body & 24,182 \\ & & Pharm & 16,045 \\ & & Neoplas & 11,271 \\ \hline FMA & 104,523 & Body & 64,726 \\ \hline NCIT & 163,842 & Pharm & 15,250 \\ & & Neoplas & 13,956 \\ \hline \hline \end{tabular} \end{table} Table 1: Ontologies and their subsets [1], same version as [1]. SNOMED subsets are the source ontologies, while FMA and NCIT are the target ontologies. Figure 1: The equivalence matching between the SNOMED class ID 78904004 – “Chest Wall Structure” and two FMA concepts, “Wall of thorax” with ID of fma10428 and “Chest wall” with ID of fma50060, is illustrated in this figure. TM translates from the source node encoding “Chest Wall Structure” in the SNOMED graph to the highlighted path “A...C...F” (presenting Chest Wall) and “A...B...E” (Thoracic Wall) in FMA ontology. While the SNOMED graph ’s “Chest Wall Structure” node and the FMA graph’s “Chest Wall” node have children, the FMA ontology’s “Thoracic Wall” is considered a leaf in this graph (no children). details are provided for each step in the subsequent subsections. ### Pre-training Hierarchical-ID generation.An ontology is represented in the form of a graph where each node represents a class, and the parent and child relations of the ontology serve as connections between classes. Based on this graph structure of each full ontology, hierarchical-IDs are generated for all the classes. These are constructed by starting from the root node, separated by "-" at each hierarchy level, and traversing through each node in that level as shown in Figure 3. 
Following this method, a unique ID is generated for each path traversed. As such, for ontologies like SNOMED, where there are multiple paths between the root and any given class, there could be multiple IDs for that node. In such cases, the shortest ID is considered the hierarchical-ID of that node (highlighted in yellow in Figure 3), while the other path IDs are considered its synonymIDs. Each node ID inherently captures the information of all its ancestors. This enables the model to trace from a broader class, starting from the root and getting more granular at each level, thus simplifying the translation task. Training.After generating the hierarchical-IDs, multi-task pre-training is done on full ontologies using MLM by randomly masking the nodes, enabling the model to learn the hierarchy and semantics. For instance, "Structure of Forel's H2 bundle" is represented as "1-1-0-0-0-0-4-1-1-0-0-0-7" and is masked as "1-1-0-0-0-0-[MASK]-1-0-0-0-7". Furthermore, additional tasks are included in order for the model to learn the semantics of each class in the form of class-level synonyms, labels, and descriptions; class-level relations between child and parent nodes; and the relation between synonym-ID and hierarchical-ID, using separate identifiers for each task in the pre-training step (Figure 2). The pre-training dataset has 2,406,456 instances constituting SNOMED, NCIT, and FMA ontologies. The model is trained for 3 epochs, with an increasing masking percentage linearly over time, starting at 10% and increasing to 35% in the final batch. The pre-training is done on 8 V100 32GB Nvidia GPUs with a batch size of 20, using a learning rate of 1e-3 with linear decay scheduler and AdamW optimizer. In this work, ByT5 [22], which is a token-free variation of mT5 [22] and supports multi-task training, is used as the model structure for pre-training, fine-tuning and zero-shot predictions. ### Fine-tuning The fine-tuning step aims to train the model on the downstream OM tasks. Only target subset ontologies, i.e., NCIT (Pharm), NCIT (Neoplas), and FMA (Body), are used for fine-tuning. The training data of each target sub-ontologies is augmented using the exact matches present in the labels and synonyms of other subset ontologies. We are also taking advantage of older ontology versions to add more synonyms to each target label. This expands the training corpus, enriches the data with minimal processing, and helps to perform more comprehensive learning. After the data augmentation for all the target sub-ontologies, fine-tuning is performed only on these target sub-ontologies corpora, i.e., NCIT (Pharm), NCIT (Neoplas), and FMA (Body). Training data is generated for each class in the target ontologies, where the input is the class label, synonyms, and descriptions, and output is the corresponding node hierarchical-ID, using a separate identifier for each task. Figure 2: Training Architecture. Starting from a language model pre-trained on the C4 dataset, further pre-training is done using MLM on the full ontology graphs. The pre-trained model is then fine-tuned on downstream tasks, translating from the class descriptions (label and synonyms) to the target node path (hierarchical-IDs). The pre-training and fine-tuning are done in a multi-task manner. The pre-training is performed on both source and target inner-ontologies, and fine-tuning is done on task specific target subset ontologies. The 462,789 samples that made up the fine-tuning data included Pharm, Neoplas, and Body subsets. 
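Returning to the hierarchical-ID construction described at the start of this section (and illustrated in Figure 3), it can be sketched as follows. This is a reading of the description rather than the authors' implementation; in particular, the child-ordering convention, the encoding of the root, and the exhaustive path enumeration are assumptions made for illustration.

```python
# Sketch of hierarchical-ID generation: every root-to-node path is encoded as the
# dash-joined sequence of child indices taken at each level; the shortest encoding is
# the node's hierarchical-ID and the remaining ones become its synonym-IDs.
# Exhaustive DFS is used here for clarity; a real implementation over large DAGs
# would need memoization or BFS.
from typing import Dict, List, Tuple

def hierarchical_ids(children: Dict[str, List[str]], root: str) -> Dict[str, Tuple[str, List[str]]]:
    paths: Dict[str, List[str]] = {}                    # node -> all path encodings found so far

    def walk(node: str, encoding: List[str]) -> None:
        paths.setdefault(node, []).append("-".join(encoding) if encoding else "0")
        for idx, child in enumerate(children.get(node, [])):
            walk(child, encoding + [str(idx)])

    walk(root, [])
    out = {}
    for node, ids in paths.items():
        ids = sorted(ids, key=len)                      # shortest encoding first
        out[node] = (ids[0], ids[1:])                   # (hierarchical-ID, synonym-IDs)
    return out

# Toy DAG: "enzyme" is reachable through two parents, so it gets one hierarchical-ID
# and one synonym-ID (cf. the Enzyme example of Figure 3).
toy = {"root": ["substance", "protein"],
       "substance": ["enzyme"],
       "protein": ["enzyme", "receptor"]}
print(hierarchical_ids(toy, "root")["enzyme"])
```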
Using 8 Nvidia V100 32GB GPUs with a batch size of 20, the fine-tuning took around 21 epochs. For the fine-tuning, a learning rate of 1e-3 with linear decay scheduler and warm-up of 1.5 epoch using AdamW optimizer with eps of 1e-8 and weight decay of 1e-2 is used. ### Zero-shot Predictions TM is a multi-task model with the capability to translate between multiple ontologies from the input source class labels/synonyms to target hierarchical-IDs. For the inference, in contrast to BERTMap which leverages similarity scores between input and multiple potential matches, TM performs zero-shot predictions based on the input terms in source ontology. One of the main advantages of our proposed TM is that given an input term with a specified task identifier, it is able to predict the best possible match from the target ontology with \(O(log(n))\) complexity, where \(n\) corresponds to the size of the target ontology. As such, without even considering the confidence score of the predictions into account, TM offers high accuracy with lower time complexity as compared to the existing methods. For confidence scoring, typically, two techniques of greedy and beam search are used. However, to make the TM predictions more robust and improve model precision, we leverage semantic similarity using embeddings of source terms and predicted target candidates. As such, the output is generated in two steps: (i) Prediction step: Given a source term, the model predicts the potential candidate in the target ontology graph, and (ii) Validation step: Using the same model, the embeddings are also generated for the target candidate and the similarity score is obtained between the source term and predicted target term embeddings (Figure 4). Scores are generated across all the source and predicted class labels and synonyms, all of which are also augmented by singularization. The maximum generated score is considered as the similarity score. The source and the target candidates are considered valid mapping pairs if their similarity score exceeds a selected threshold. As such, the proposed model takes advantage of both graph search and semantic matching. Mathematically, similarity score \(S\) is given as: \[S=\begin{cases}1.0,&\text{if }\Omega(c_{1})\cap\Omega(c_{2})\neq\emptyset\\ max(Sim(\Omega(c_{1}),\Omega(c_{2})),&\text{otherwise}\end{cases} \tag{1}\] where \(c_{2}\) is the predicted class for \(c_{1}\), \(\Omega(c_{1})\) and \(\Omega(c_{2})\) are sets of labels and synonyms for \(c_{1}\) and \(c_{2}\), respectively, and \(max(Sim(\Omega(c_{1}),\Omega(c_{2}))\) selects the maximum cosine similarity score across all the labels and synonyms of \(c_{1}\) (source) and \(c_{2}\) (predicted). If an exact match is available between the labels and synonyms of source and target classes, we assign a maximum similarity score, since embedding similarity will also give a similar result. ## 5 Results ### Evaluation criteria Commonly used metrics for evaluating OM systems [14]: Precision (P), Recall (R), and F-score are used as the global evaluation metrics. Mathematically, \[\begin{split}& P=\frac{|M_{out}\cap M_{ref}|}{|M_{out}|}\,\ \ \ \ R=\frac{|M_{out}\cap M_{ref}|}{|M_{ref}|}\\ & F_{\beta}=(1+\beta^{2})\frac{P.R}{\beta^{2}.P+R}\end{split} \tag{2}\] where, \(M_{ref}\) are the reference mappings, consisting of matching pairs, \(m=(c,c^{\prime})\), such that \(c\) and \(c^{\prime}\) are two classes from the to-be-aligned ontologies, and \(M_{out}\) are the mappings computed by OM systems and \(\beta=1\). 
Local evaluation metrics, \(Hits@K\) and Mean Reciprocal Rank (\(MRR\)), introduced in [14] are also used for current evaluation and can be represented as: \[\begin{split}& Hits@K=\frac{|\{m\in M_{ref}|Rank(m)\leq K\}|}{|M_{ ref}|}\\ & MRR=\frac{\sum_{m\in M_{ref}}Rank(m)^{-1}}{|M_{ref}|}\end{split} \tag{3}\] where \(Rank(m)\) returns the ranking position of \(m\) among \(M_{m}\cup\{m\}\) according to their scores, \(M_{m}\) represents a set of negative mappings pairs for each of the source term \(c\) in \(M_{ref}\), such that \((c,c^{\prime\prime}_{i})\in M_{m}\) with \(i\in\{1,2,...,100\}\) and Figure 3: Hierarchical-IDs generation. This diagram illustrates hierarchical-IDs generation for the Enzyme concept in the SNOMED ontology. The enzyme has four paths because this node has multiple parents. The shortest ID (highlighted) is chosen as a Hierarchical-ID, and others are SynonymIDs for this concept. \(c^{\prime\prime}_{i}\) are the 100 negative output candidates from target ontologies for each of the source terms \(c\) in \(M_{ref}\). As such, the Hits and MRR would be different for different selected 100 samples. We have published the results of our model based on the provided \(M_{m}\) set in [11] for a fair comparison. To provide a more robust measure of local metrics, we are reporting overall accuracy as well, although this is not provided for any of the other models. Accuracy here can be mathematically presented as: \[Accuracy=\frac{|\{m\in M_{ref}|f(c,T)=c^{\prime}\}|}{|M_{ref}|} \tag{4}\] where \(m=(c,c^{\prime})\) represents matching pairs in the \(M_{ref}\) set, and \(f(c,T)\) refers to the target candidate predicted by the model, given an input term \(c\) and appropriate task identifier \(T\). **Baselines.** Results are compared with the SOTA approaches: Edit-Similarity, LogMap, AML, BERTMap [11], and recently published results in [1]. To be consistent, evaluation for P, R, F-score, Hit@1, and MRR is done using [1] library. ### Prediction Results Prediction results are shown in Tables 2-4, for the three equivalence OM tasks, from SNOMED to FMA (Body), SNOMED to NCIT (Pharm), and SNOMED to NCIT (Neoplas). The results demonstrate the precision, recall, F-score, Hit@1, MRR, and accuracy for TM and the baseline approaches presented in [11] and [1] on the test data for the unsupervised setting. The highest numbers for each of these metrics are highlighted in the tables to emphasize which model is outperforming others in each category. The overall results illustrate that TM is outperforming all the baselines for all three OM tasks in F-score, Hit@1, and MRR. A high threshold is selected to generate the most confident cross-ontology matching pairs. Note that a single unified model is trained and leveraged here to predict all the results in the form of a source class to target hierarchical-IDs, using appropriate task identifiers. There are two TM results presented in the given tables, and both are based on different scoring schemes. TM\({}^{2}\) is based on greedy search scores with softmax probabilities using temperature scaling. TM\({}^{1}\) is based on a new and more robust prediction scheme described in Subsection 4.3, taking advantage of both graph search and semantic similarity. It can be seen that both of our methods surpass SOTA for all the tasks, but TM\({}^{1}\) is more robust and has significant improvements as compared to any of the existing methods. 
To be precise, TM\({}^{1}\) improves the F-score by 2.3% over the second best result (AML) on Body, by 11.0% on Pharm (compared to AMD), and by 4.3% on Neoplas compared to BERTMap-Lite and Edit-Similarity. It should also be noted that, leaving TM aside, none of these baselines is the best performer across all the tasks. For generating the local metrics Hit@1 and MRR, TM is used to generate the embedding similarity score of input terms in the test set and their corresponding candidates in the \(M_{m}\cup\{m\}\) set. We also outperform all existing SOTA methods on MRR and Hit@1. Additionally, we report the accuracy metric, which is consistent and more representative of the model performance. For this metric, the TM predictions are obtained across the entire target ontology without using any smaller subset of negative samples from the test set, while reducing the time complexity from quadratic to log-linear. ## 6 Conclusions and Discussions This work presents a new approach to OM by treating the OM process as a translation task and performing multi-task pre-training, fine-tuning, and predictions in a zero-shot, unified and end-to-end manner. The proposed approach takes advantage of transfer learning across different ontologies and does not require manual annotations for training. Additionally, the Figure 4: Zero-shot predictions. Given a source term and the assigned translation task (e.g., SNOMED to FMA), the output is generated in two steps: Prediction step and Validation step. In the Prediction step, a potential target candidate is generated along with the embeddings associated with the source term. In the Validation step, the target candidate class is again passed through our translation model to generate embeddings. Based on the source and target term embeddings, a similarity score between the source and target candidate is obtained. This is done in a zero-shot manner with time complexity of \(O(log(n))\). \begin{table} \begin{tabular}{l l l l l l l} \hline Task & Precision & Recall & F-score & MRR & Hit@1 & Accuracy \\ \hline TM(Ours)\({}^{1}\) & 0.972 & 0.929 & **0.950** & **0.987** & **0.982** & **0.946** \\ TM(Ours)\({}^{2}\) & 0.977 & 0.872 & 0.922 & **0.987** & **0.982** & **0.946** \\ \hline Edit-Similarity\({}^{*}\) & 0.979 & 0.432 & 0.600 & 0.836 & 0.760 & NA \\ LogMap\({}^{*}\) & 0.915 & 0.612 & 0.733 & 0.820 & 0.695 & NA \\ AML\({}^{*}\) & 0.940 & 0.615 & 0.743 & NA & NA & NA \\ BERTMap\({}^{*}\) & 0.966 & 0.606 & 0.745 & 0.919 & 0.876 & NA \\ LogMap-Lite\({}^{**}\) & 0.995 & 0.598 & 0.747 & NA & NA & NA \\ AMD \({}^{**}\) & 0.962 & 0.745 & 0.840 & NA & NA & NA \\ BERTMap-Lite\({}^{**}\) & 0.979 & 0.432 & 0.600 & 0.836 & 0.760 & NA \\ Matcha\({}^{**}\) & 0.941 & 0.613 & 0.742 & NA & NA & NA \\ ATMatcher \({}^{**}\) & 0.937 & 0.566 & 0.706 & NA & NA & NA \\ LSMatch\({}^{**}\) & 0.982 & 0.551 & 0.706 & NA & NA & NA \\ \hline \end{tabular} \({}^{1,2}\) are based on our proposed TM model, where the former uses the similarity score and the latter the greedy search score \({}^{*}\) These numbers are based on the recently published results in [1]. \end{table} Table 2: Result for equivalence matching – SNOMED (Body) to FMA (Body).
\begin{table} \begin{tabular}{l l l l l l l} \hline Task & Precision & Recall & F-score & MRR & Hit@1 & Accuracy \\ \hline TM(Ours)\({}^{1}\) & 0.972 & 0.929 & **0.950** & **0.987** & **0.982** & **0.946** \\ TM(Ours)\({}^{2}\) & 0.977 & 0.872 & 0.922 & & **0.960** & NA \\ \hline Edit-Similarity\({}^{*}\) & 0.979 & 0.432 & 0.600 & 0.836 & 0.760 & NA \\ LogMap\({}^{*}\) & 0.915 & 0.612 & 0.733 & 0.820 & 0.695 & NA \\ AML\({}^{*}\) & 0.940 & 0.615 & 0.743 & NA & NA & NA \\ BERTMap\({}^{*}\) & 0.966 & 0.606 & 0.745 & 0.919 & 0.876 & NA \\ LogMap-Lite\({}^{**}\) & 0.995 & 0.598 & 0.747 & NA & NA & NA \\ AMD \({}^{**}\) & 0.962 & 0.745 & 0.840 & NA & NA & NA \\ BERTMap-Lite\({}^{**}\) & 0.979 & 0.432 & 0.600 & 0.836 & 0.760 & NA \\ Matcha\({}^{**}\) & 0.941 & 0.613 & 0.742 & NA & NA & NA \\ ATMatcher \({}^{**}\) & 0.937 & 0.566 & 0.706 & NA & NA & NA \\ LSMatch\({}^{**}\) & 0.982 & 0.551 & 0.706 & NA & NA & NA \\ \hline \end{tabular} \({}^{1,2}\) are based on our proposed TM model, where the former is based on similarity score and later is based on greedy search score \({}^{*}\) These numbers are based on recent [1] published results. \end{table} Table 3: Results for equivalence matching – SNOMED (Pharm) to NCIT (Pharm). \begin{table} \begin{tabular}{l l l l l l l} \hline Task & Precision & Recall & F-score & MRR & Hit@1 & Accuracy \\ \hline TM(Ours)\({}^{1}\) & 0.972 & 0.929 & **0.950** & **0.987** & **0.982** & **0.946** \\ TM(Ours)\({}^{2}\) & 0.977 & 0.872 & 0.922 & & **0.960** & NA \\ \hline Edit-Similarity\({}^{*}\) & 0.979 & 0.432 & 0.600 & 0.836 & 0.760 & NA \\ LogMap\({}^{*}\) & 0.915 & 0.612 & 0.733 & 0.820 & 0.695 & NA \\ AML\({}^{*}\) & 0.940 & 0.615 & 0.743 & NA & NA & NA \\ BERTMap\({}^{*}\) & 0.966 & 0.606 & 0.745 & 0.919 & 0.876 & NA \\ LogMap-Lite\({}^{**}\) & 0.995 & 0.598 & 0.747 & NA & NA & NA \\ AMD \({}^{**}\) & 0.962 & 0.745 & 0.840 & NA & NA & NA \\ BERTMap-Lite\({}^{**}\) & 0.979 & 0.432 & 0.600 & 0.836 & 0.760 & NA \\ Matcha\({}^{**}\) & 0.941 & 0.613 & 0.742 & NA & NA & NA \\ ATMatcher \({}^{**}\) & 0.937 & 0.566 & 0.706 & NA & NA & NA \\ LSMatch\({}^{**}\) & 0.982 & 0.551 & 0.706 & NA & NA & NA \\ \hline \end{tabular} \({}^{1,2}\) are based on our proposed TM model, where the former is based on similarity score and later is based on greedy search score \({}^{*}\) These numbers are based on recent [1] published results. \end{table} Table 2: Result for equivalence matching – SNOMED (Body) to FMA (Body). trained model understands the semantics of the text as well as the structure of the ontologies. We show that our proposed method outperforms Edit-Similarity, LogMap, AML, BERTMap, and the recently proposed OM frameworks in the OM22 conference [1] in all the tasks. 
Our approach provides several advantages: (1) It reduces the time complexity to log-linear as opposed to quadratic in the existing approaches\({}^{2}\), (2) It is based on zero-shot prediction, requiring little post-processing and, in contrast to the other methods, employing no mapping extension or mapping repair, (3) It does not require any manually labeled cross-ontology matching pairs due to zero-shot learning, (4) One unified framework is used as a result of multi-tasking, which makes it easier to productionize these large transformer-based models, (5) It is robust toward different tokenization schemes as it uses byte-level tokenization, (6) It learns the complete ontology graphs through the hierarchical-IDs, which provide a more natural path for translation and should be significantly helpful for subsumption mappings. Footnote 2: Note that BERTMap reduces the time complexity from \(O(n^{2})\) in traditional approaches to \(O(kn)\), where \(k\ll n\), with an additional preprocessing step that considers only a small subset of target ontology classes having at least one subword token in common with the source class candidate; this adds a dependency on the tokenization hyperparameters and could be error-prone, since some semantically matching cases with lexical variations may get filtered out in this process. In contrast, such a limitation does not exist in TM, since it performs matching from source to target without reducing the target corpus size. The time complexity of TM is \(O(nlog(n))\), where \(n\) represents the number of nodes in the target ontology graph (the same as the number of classes), noting that a single search in a tree structure with \(n\) nodes can be performed in \(O(log(n))\) time. In the future, we will pre-train the starting checkpoint with more domain-related corpora (e.g., PubMed, MIMIC-III, clinical notes) instead of the C4 dataset. Another interesting direction is ensemble learning of existing SOTA models with TM.
2301.11128
A Cloud-Edge Continuum Experimental Methodology Applied to a 5G Core Study
There is an increasing interest in extending traditional cloud-native technologies, such as Kubernetes, outside the data center to build a continuum towards the edge and between. However, traditional resource orchestration algorithms do not work well in this case, and it is also difficult to test applications for a heterogeneous cloud infrastructure without actually building it. To address these challenges, we propose a new methodology to aid in deploying, testing, and analyzing the effects of microservice placement and scheduling in a heterogeneous Cloud environment. With this methodology, we can investigate any combination of deployment scenarios and monitor metrics in accordance with the placement of microservices in the cloud-edge continuum. Edge devices may be simulated, but as we use Kubernetes, any device which can be attached to a Kubernetes cluster could be used. In order to demonstrate our methodology, we have applied it to the problem of network function placement of an open-source 5G core implementation.
Samuel Rac, Rajarshi Sanyal, Mats Brorsson
2023-01-26T14:26:36Z
http://arxiv.org/abs/2301.11128v1
# A Cloud-Edge Continuum Experimental Methodology applied to a 5G Core Study ###### Abstract There is an increasing interest in extending traditional cloud-native technologies, such as Kubernetes, outside the data center to build a continuum towards the edge and between. However, traditional resource orchestration algorithms do not work well in this case, and it is also difficult to test applications for a heterogeneous cloud infrastructure without actually building it. To address these challenges, we propose a new methodology to aid in deploying, testing, and analyzing the effects of microservice placement and scheduling in a heterogeneous Cloud environment. With this methodology, we can investigate any combination of deployment scenarios and monitor metrics in accordance with the placement of microservices in the cloud-edge continuum. Edge devices may be simulated, but as we use Kubernetes, any device which can be attached to a Kubernetes cluster could be used. In order to demonstrate our methodology, we have applied it to the problem of _network function placement_ of an open-source 5G core implementation. ## 1 Introduction Cloud-native technologies, such as Kubernetes [4], have significantly improved the way to allocate infrastructure resources to applications. For developers of distributed applications, deployment is greatly simplified as the individual components, typically embodied as Docker containers, are automatically mapped to nodes in the cluster that make up the infrastructure. This technology also has the potential to improve resource utilization and reduce over-provisioning, which is otherwise common. Overall it leads to shorter deployment times and reduced costs for infrastructure. Edge and fog computing [12] have been introduced to enable the deployment of (parts of) applications closer to the end-user in order to lower end-to-end latency, reduce data sent over the network, or improve privacy by keeping the data local. While this has obvious benefits, it also introduces new challenges. A software component can no longer execute anywhere in the compute infrastructure as the Edge nodes typically require specific formats and explicit placement. While data center nodes display limited kinds of heterogeneity, edge nodes come in many different forms and architectures. We need to extend the cloud-native paradigm from the data center to the edge. From the developer's perspective, deploying an application taking advantage of the edge should be as easy as deploying it in a cloud data center. However, the best placement of software components is often not clear and extensive experimentation is needed, both for the placement and for finding the right system architecture. Currently, there is no established methodology to test performance of cloud-native applications that span from the data-center to the edge. Currently used methods, see section 2, use either simulation, meaning that real distributed applications cannot be tested, or they do not allow the testing of geographical distribution or heterogeneous architectures. We present a novel methodology to build testbeds for real distributed applications deployed in a cluster where nodes might be of different types and we model geographic distribution by controlling bandwidth, and latency between the nodes. The methodology leverages the power of public cloud infrastructures and Kubernetes so that any application which can be deployed using Kubernetes can be used as a workload. 
We can simulate different geographical localities of subsets of nodes by controlling the latency and bandwidth available in communication links between nodes. Thanks to that, the application loading the system can be deployed unchanged from one experiment to another. With this methodology, we can avoid the tedious and time-consuming process of building large physical testbeds while the software development process can be kept the same as for a real environment and the experiments are easily reproducible. We demonstrate this methodology with a study of the placement of 5G core network functions in either i) the central cloud, ii) at the network edge, or iii) in an intermediate local data center. The methodology is, however, general and can be used in any other setting involving the edge to data center continuum. One example is the deployment of a cloud multiplayer gaming system. Since the methodology leverages Kubernetes, it can run every containerized application. In that manner, the testbeds generated by this methodology are application agnostic. Our main contributions are: 1. a methodology to study the impacts of deploying applications in a heterogeneous cloud environment [11] that i) allows for real distributed applications to be executed and ii) which does not need expensive physical infrastructure developed, and 2. a performance analysis of a 5G core installation while studying three 5G use cases deployed using different system architectures on a testbed generated by our methodology. ## 2 Related work Goshi et al. describe a testbed that highlights Inter-NF dependencies [9]. Kube5G is a cloud-native 5G testbed designed to handle the whole 5G stack [3]. COPA is an orchestration framework for networking running above the Kubernetes layer [16]. However, these three testbeds (and the others referred to in their study) are not meant for the evaluation of placement and performance of the applications with respect to heterogeneous system architectures. It is not possible to simulate the impact of geographical distances between nodes on networking (e.g., latency) or the bandwidth restrictions. In contrast, our methodology enables the deployment of reproducible experiments in a public or private cloud without the costs and constraints of handling a country-sized network. The _AccessOpt_ architecture detailed in section 4.2 is based on previous studies, e.g. [13, 14, 15, 10]. These studies describe multi-layered 5G architectures and are based on geographic areas and topologies as well as on logical layers. We do not claim to "invent" this architecture rather using a well-known architecture to demonstrate the capability of the methodology. Sarrigiannis et al. describe a two-tier architecture (Cloud and Edge) for virtual NF placement with a VNF orchestrator [17]. Contrary to their approach, we leverage Kubernetes, the state-of-the-art orchestration framework. It renders the flexibility to scale up and down on-demand or automatically. Exploiting Kubernetes, intricate architectures requiring complex interactions between nodes either at the control or user plane can be set up and tested without affecting the application. Ejaz et al. present a three-tier architecture (Cloud IoT, Edge IoT, and Local Edge IoT) to improve reliability for mission-critical processes, based on _iFogSim_ simulator [7]. This study helped us to define our system architectures. However, the iFogSim simulator does not allow deploying a real containerized application. Edgenet, as described by Senel et al. 
[6], provides a global distributed Kubernetes cluster, but it is not suitable as a testbed for 5G core or other edge-based applications as it cannot be configured, and there is no access to the Edge nodes. Enoslib [5] is another suggestion to facilitate experimentation with distributed systems. It is a general tool to facilitate reproducible experimentation and is thus orthogonal to our methodology, which could be used as the backend in an Enoslib experiment. We have so far not seen it beneficial to use Enoslib. ## 3 Methodology Our methodology relies on two main components: _cloud-native technologies_ and _tools to simulate many architectural options_ in a cloud environment. Testbeds according to this methodology can easily be deployed in public clouds. The tools and scripts needed for this are publicly available on github [11]. ### Cloud-native technologies A testbed in our methodology is a distributed computer cluster that can simulate heterogeneous architectures and relies on well-known cloud-native technologies. **Containers** We use Docker containers which greatly simplify application deployment [2]. With a very lightweight virtualization layer, this technology has become a standard to package applications for deployment. To quickly deploy, scale up/down, and manage _microservices_ in a cloud environment, we use _Kupernetes_, the state-of-the-art container orchestration tool, as mentioned earlier. Kubernetes manages _pods_ composed of at least one container. **Monitoring** Cloud-native technologies contain a large set of tools for monitoring vast infrastructures. _Prometheus_ collects and exposes many metrics (CPU usage, memory, networking, and other metrics). Automatically, logs, network traces, and other metrics are effortlessly recorded and stored to be able to collect experimental metrics. In addition, a custom scheduler can use all the metrics collected to make better decisions. **Kubernetes limitations** The Kubernetes scheduler is a powerful tool. It can find a proper _microservices_ placement when looking at available resources or node taints. However, network performance is not taken into account. It is not an issue while working within a traditional data center with homogeneous nodes, but it becomes a limitation when some nodes are outside the data center. It is, for instance, challenging to achieve ultra-low latency without considering at which geographical position a microservice is deployed. ### Architecture simulation In this section, we explain how we can simulate different system architectures on top of Kubernetes. This is a key feature for designing new infrastructures or developing new microservice placement strategies in the edge-cloud continuum. **Node architecture** A crucial part of our methodology is the ability to run production-ready applications on top of the testbeds that we create. This means a testbed must consist of real compute nodes. These nodes should represent the nodes in the cloud-edge continuum we want to investigate. In our evaluation, we have been using a public cloud provider and are thus limited to the node types available at this provider. Currently, the choice of node types includes a range of ARM, Intel, and AMD processors with varying core counts. We can thus choose an ARM node with a small core count and (relatively) low amount of memory to represent an edge node, and larger Intel/AMD nodes can represent data center nodes. 
Obviously, this is not fully representing the range of possible architectures you might see in a real edge deployment, but for purposes of evaluating placement or scheduling options, this will be sufficient. The nodes in the testbed are labeled according to their properties (resources, location, hardware accelerator) and follow a naming convention. These labels are used to select where to deploy an application's microservices according to system requirements using the Kubernetes scheduler. **Configurability of Network Capacity and Latency** We also need to be able to represent the anticipated latencies and available network bandwidths in a geographically distributed cluster. Such configurable latency and bandwidth is a key feature of our methodology. It enables the simulation of distance and link capacity between nodes, e.g., between a data center and an edge node. Theoretically, the more the distance, the higher will be the latency. The control of these parameters is achieved by means of _traffic control_(tc). This is a utility program that can reconfigure the Linux kernel packet scheduler. It can add latency on received packets, change maximum bandwidth and other networking parameters. We run tc inside pods as a side-car, modifying the pod properties one by one. **Microservice placement** The Kubernetes scheduler can use the above-described labels. Associating a microservice to a node can be done manually or automatically (implementing a custom scheduling policy). Manual microservice placement is based on Kubernetes _taints_ and _tolerations_, i.e., checking node labels and service permissions to know the candidate nodes where a service is authorized to be deployed. We can define a rule to force service deployment on a specific node using _Pod affinity_. Figure 1 gives an overview of the architecture simulation in a testbed setup. Nodes are labeled according to their kind, and networking between pods is configurable. ## 4 Deploying a 5G system In order to demonstrate the usefulness of our methodology, we have used it to define a sequence of testbeds that can run edge computing experiments. The following sections describe how we conduct a study on a complete 5G core system implementation, studying the effects of 5G network function placement in different system architectures and for different use cases. The \(5^{th}\) generation (5G) of the cellular telecommunication network is amenable to being deployed in an edge-to-data-center continuum (in contrast to previous generations, which needed much more specialized equipment). The main talked-about benefits of the 5G technology are enhanced Mobile Broadband (eMBB), Ultra-Reliable Low Latency Communication (URLLC), and massive Machine Type Communication (mMTC). ### 5G Network Functions on the testbed A complete description of a 5G System (access network, devices, and core) is out of the scope of this paper, but some familiarity with the core components is Figure 1: Edge-to-cloud environment can be simulated on the public cloud. necessary to understand the study. The 5G core consists of a number of _Network Functions_ (NFs). The gNodeB (gNB) represents the radio access network (RAN) to which user equipment (UE, e.g., phones) is connected over cellular radio. Most of the details about NFs are not important for this study, but we detail three of them: AMF, SMF, and UPF. These are essential to understanding how the system architectures are defined. 
**AMF**: (Access and Mobility management Functions) handles incoming connections and session requests of UEs and manages mobility (handover between two cells). **UPF**: (User Plane Function) handles user data traffic. The UPF is directly connected to a Data Network (Internet or Application Server). **SMF**: (Session Management Function) establishes PDU sessions (Protocol Data Unit) for the UEs. A PDU session is a data tunnel that links a UE to a data network (DN) through a UPF. ### System architectures In this study, we use three different kinds of nodes: _Data center_, _Edge_ and _Cloudlet_ nodes, and define three system architectures comprising of different node types and topology. One architecture is used as a reference reproducing the traditional approach where all NFs are deployed in a data center, while the others use an edge node close to the gNB and a cloudlet node in-between the edge and the data center. We then experiment with different placement of the UPF, SMF, and AMF network functions in the different architectures and study the effect on system performance. Figure 2 shows different architectures. For each architecture, the RAN elements (gNB and UEs) are deployed on separate nodes to not interfere with the NF placement study. Links between nodes are called N2 to N6 as defined in 5G standard architecture [1]. The **Baseline** architecture, shown in Figure 1(a), is a reference architecture where all network functions are placed in the same data center. This architecture Figure 2: Three different system architectures: a) Baseline, b) optimized for end-user latency and bandwidth, c) optimized for session throughput. cannot support eMBB and URLLC use cases well (e.g., cloud gaming applications or AR/VR both need ultra-low latency and high bandwidth). The User Plane Function needs to be placed at the edge to achieve ultra-low latency. The **LatOpt** system architecture, shown in Figure 2b, is a well-known architecture. It should enable eMBB and URLLC use cases, significantly improving link N3 latency and throughput. With this architecture, the UPF is deployed on an edge node close to the gNB. Other NFs are running on data center nodes. The **AccessOpt** system architecture is similar to LatOpt architecture but includes Cloudlet nodes. This architecture wants to be a simplified implementation of the multi-layered 5G architectures mentioned in section 2. Investigating the effects of this architecture could provide valuable information for implementing more complex ones. Cloudlet nodes are closer to the data center nodes than edge nodes. Several gNBs may be connected to one Cloudlet node. AMF and SMF are deployed on Cloudlet nodes because they handle UE connection, session management, and mobility procedures. Thus, Cloudlet nodes can handle UE's massive mobility (many UEs moving from one gNB to another) while keeping reasonable latency with other NFs located in the data center. Figure 2c shows AccessOpt architecture. Deploying AMF and SMF on Cloudlet nodes should improve UE registration, mobility, and PDU session establishment procedures performances. ### Use cases In order to experiment with major 5G features (eMBB, URLLC, and mMTC), we introduce three use cases related to 5G. We investigate different NF placement, as discussed, on the above-described system architectures using these use cases: Augmented Reality (AR), Industrial IoT (IIoT, e.g., sensors in a smart factory), and Massive IoT (MIoT). 
Studying these three use cases will bring valuable knowledge for i) building new infrastructures including slices at the 5G edge, and ii) developing new scheduling methodologies for placing NFs. The AR and IIoT use cases are detailed in-depth by Siriwardhana et al. in [18]. We adapt the workload and the experiment duration to the capabilities of the testbed. However, note that in our study, the 5G core and its NFs are not simulated but are real operation-grade elements. In the AR use case, a UE should receive a high-quality video with low latency. We look at the UE end-to-end latency to evaluate different system architectures. The LatOpt architecture should improve this metric with respect to the baseline architecture by reducing the distance between the UPF and the gNB (manipulating latency). For the Industrial IoT use case, we consider the UEs as sensors in a smart factory. In industry 4.0, we consider that an IIoT UE will not change of network cell and that the network is acquired at UE power up. Periodically, these devices will establish a data session and send their data to a processing server. Before sharing data, a session has to be established. Power constraints are not considered in this use case, the factory environment should provide energy to devices. To evaluate the performance of this use case, we measure the end-to-end latency. The IIoT workload can be decomposed as follow: establishing a PDU session (to contact a processing server via the DN), sending data to the server, and getting the server's response. The IIoT end-to-end latency comprises two main parts: network acquisition time and data throughput (data transfer and server processing time). The AccessOpt architecture should have an impact on E2E latency for this use case. AMF and SMF located on a Cloudlet node should reduce the PDU session establishment time, while a UPF closer to gNB should reduce the data session's latency. For the Massive IoT (MIoT) use case, we are evaluating the control plane's performances when connecting many UEs. These devices will generate traffic on the control plane when switching on/off (to save battery) or moving from one cell to another. In order to reduce the time to complete registration and session establishment procedures and to limit traffic toward datacenters, we deploy AMF and SMF on cloudlet nodes (according to AccessOpt architecture). Cloudlets should provide many benefits: i) being closer to the UEs than datacentres, ii) having more resources than edge nodes (to be able to scale up NFs if necessary), and iii) being close to many gNB at the same time to handle user mobility. Looking at the time to complete a procedure is an important KPI to assure QoS and avoid procedure time out. ## 5 Experimental methodology In this section, we outline the experimental setup and parameters of the experiments, such as additional latency and use case workload. ### Experimental setup To test all use cases, we run all the experiments in a public cloud environment. We use a self-managed Kubernetes cluster with one master node and seven worker nodes. All of these machines have 2 CPUs and 4 GB of RAM. On this cluster, we run the open-source 5G core free5G [8]. Every Network Function (NF) runs inside its own pod. User Equipment (UE) and gNodeB are simulated using an open-source RAN simulator [19]. 
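As a concrete illustration of how the additional per-link latencies of Section 3.2 (cf. Table 2) could be injected from a tc side-car, a minimal sketch follows; the interface name `eth0`, the dictionary of link delays, and the mapping of links to pods are illustrative assumptions rather than the actual tooling used in the experiments.

```python
# Minimal sketch: add a fixed netem delay on a pod's interface from a
# side-car container (requires NET_ADMIN). The values mimic the AccessOpt
# row of Table 2; "eth0" and the link-to-pod mapping are assumptions.
import subprocess

ACCESSOPT_DELAY_MS = {"N2": 3.5, "N3": 1.0, "N4": 3.5, "N6": 0.0, "DC-Cloudlet": 9.0}

def apply_netem_delay(delay_ms, dev="eth0"):
    """Attach or update a netem qdisc adding `delay_ms` of latency on `dev`."""
    cmd = ["tc", "qdisc", "replace", "dev", dev, "root",
           "netem", "delay", f"{delay_ms}ms"]
    subprocess.run(cmd, check=True)

if __name__ == "__main__":
    # Example: this side-car sits next to the pod terminating the N2 link.
    apply_netem_delay(ACCESSOPT_DELAY_MS["N2"])
```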
\begin{table} \begin{tabular}{|l|c|c|c|} \hline **Use cases** & **Favoured System** & **Type of workload** & **KPIs** \\ \hline \hline AR (Smart Factory) & Baseline or LatOpt & High data rate on the UP & E2E latency \\ \hline IIoT (Smart Factory) & Baseline or AccessOpt & PDU session establishment process + Low data rate & E2E latency \\ \hline MIoT (Massive IoT) & Baseline or AccessOpt (load balancer) & \begin{tabular}{c} the UP \\ \end{tabular} & \begin{tabular}{c} the UP \\ \end{tabular} & \begin{tabular}{c} Time to register + \\ establish a PDU session \\ \end{tabular} \\ \hline \end{tabular} \end{table} Table 1: Use cases and their characteristics. ### Additional latency As described above, we can set an additional latency between two nodes to reflect the physical latency in the target system architecture. Table 2 summarizes the additional latencies used in experiments. These are the additional latencies to what is already experienced in the physical cloud infrastructure. ### Workload parameters Table 3 summarizes the use cases' workload parameters of the different experiments. The IIoT and MIoT use cases workload mainly be managed by the Control Plane (respectively on SMF and AMF). In contrast, the User Plane (UPF) should support the AR use case workload. ## 6 Results In this section, we compare KPI values obtained using different architectures. These results provide insights into which architecture provides the best performance per use case. Figures 2(a), 2(b) and 4 shows the mean KPI values for each use case according to the chosen architecture. Figure 2(a) shows a significant difference in end-to-end latency for the AR use case. This KPI value is four times lower when using the LatOpt architecture. This improvement can be explained by positioning the UPF closer to the gNB. Latency on the link N3 is lower with the LatOpt architecture as well as end-to-end latency. This demonstrates the ability of the testbed by replicating well-known use cases. The end-to-end latency for the IIoT use case is shown in figure 2(b). The AccessOpt architecture provides E2E latency almost four times lower than the Baseline architecture. E2E latency is divided into the time to achieve the PDU session establishment procedure and data traffic duration (transport and processing time). Both are significantly improved with AccessOpt. Establishing a new data session takes more time than transmitting data. Almost half of the requests go from AMF or SMF to the data center during the session establishment \begin{table} \begin{tabular}{|l|c|c|c|c|c|} \hline **System** & **N2 (ms)** & **N3 (ms)** & **N4 (ms)** & **N6 (ms)** & **DC-Cloudlet (ms)** \\ \hline Baseline & 12.5 & 12.5 & 0 & 0 & 0 \\ LatOpt & 12.5 & 1 & 12.5 & 0 & 0 \\ AccessOpt & 3.5 & 1 & 3.5 & 0 & 9 \\ \hline \end{tabular} \end{table} Table 2: Additional latency used in different system architectures \begin{table} \begin{tabular}{|l|c|c|c|} \hline **Use Case** & \multicolumn{2}{c|}{**Workload size**} & \multicolumn{1}{c|}{**\#UEs**} \\ \hline AR & 460 Mbit of video stream sent to the UE & 3 \\ \hline IIoT & PDU session establishment request + 640 LB of data & 20 \\ \hline MIoT & UE registration requests + & 50 \\ \hline & PDU session establishment request + 100 LB of data & \\ \hline \end{tabular} \end{table} Table 3: Use case Workloads. procedure, and the others go to the edge. Then having less latency to the edge improves the time to complete the procedure. 
With the AccessOpt architecture, the AMF is close to the edge nodes and on the same machine as SMF. This proximity explains the better KPI value for the AccessOpt architecture. Like in the AR use case, the UPF lowers the latency of the data session. The placement of AMF, SMF, and UPF in the AccessOpt architecture reduces the E2E latency significantly. Figure 4 shows a significant difference in KPI values when using the Baseline and the AccessOpt architectures. The total procedure with the baseline architecture is 13 times faster than AccessOpt. UE registration procedure is 14 times faster on the Baseline architecture than on AccessOpt. However, the PDU session establishment procedure is ten times faster on the AccessOpt architectures. However, session establishment represents only 0.06% of total time for AccessOpt and 8% for Baseline. Therefore PDU session establishment time has a limited impact on AccessOpt architecture's total performance. Registration procedures have to be complete before a session can be established. During the registration procedure, AMF mainly addresses NFs located in the data center (close to the database). This procedure will take more time to achieve with AccessOpt architecture, where AMF is far from the data center. It is contrary to the data session establishment procedure. Traffic is balanced between NFs in the Datacentre and at the Edge. Then, placing the AMF on a cloudlet node gives lesser performance for this procedure. When the latency between the cloudlet and data center nodes becomes too high, it causes a systematic registration time-out. Only a few UEs can register before all registration timers are triggered when latency becomes high. In that case, UEs will try two more times to register without success. The UEs' procedure retries impact the CPU consumption of the control plane because a UE will initialize many procedures. Our methodology helps to choose the best architecture for each 5G use case. Placing the UPF at the edge reduces the latency on the link N3 in every configuration tested. The optimal position of the AMF depends on the use case's Figure 3: End-to-end latency: a) AR use case, b) IIoT use case. procedures. AMF improves KPIs for the session establishment procedure when placed at the edge (or nearby), while results are better for the UE registration procedure when it stays in the data center. ## 7 Conclusion Studying new scenarios in the edge-cloud continuum raises new experimental issues. Experimenters need testbeds that can reproduce every aspect of this heterogeneous environment. Our methodology aims to help deploy edge-cloud experience in a traditional cloud environment. We aim in the future to investigate custom Kubernetes schedulers, using this methodology to evaluate their performances. ## Acknowledgments This work has been partly funded by the Luxembourg National Research Fund (FNR) under contract number 16327771 and has been supported by Proximus Luxembourg SA.
2303.08200
Small time delay approximation in replicator dynamics
We present a microscopic model of replicator dynamics with strategy-dependent time delays. In such a model, new players are born from parents who interacted and received payoffs in the past. In the case of small delays, we use Taylor expansion to get ordinary differential equations for frequencies of strategies with time delays as parameters. We apply our technique to get analytic expressions for interior stationary states in two games: Snowdrift and Stag-hunt. We show that interior stationary states depend continuously upon time delays. Our analytic formulas for stationary states approximate well exact numerical results for small time delays.
Jacek Miȩkisz, Javad Mohamadichamgavi, Raffi Vardanyan
2023-03-14T19:39:37Z
http://arxiv.org/abs/2303.08200v1
# Small time delay approximation in replicator dynamics ###### Abstract We present a microscopic model of replicator dynamics with strategy-dependent time delays. In such a model, new players are born from parents who interacted and received payoffs in the past. In the case of small delays, we use Taylor expansion to get ordinary differential equations for frequencies of strategies with time delays as parameters. We apply our technique to get analytic expressions for interior stationary states in two games: Snowdrift and Stag-hunt. We show that interior stationary states depend continuously upon time delays. Our analytic formulas for stationary states approximate well exact numerical results for small time delays. **Keywords**: evolutionary game theory, replicator dynamics, time delays, small-delay approximation, social dilemmas, Stug-hunt game, Snowdrift game. ## I Introduction Many biological and socio-economic processes can be modeled by systems of interacting entities such as animals in ecology and evolutionary biology, and people in social systems. It was usually assumed that interactions take place instantaneously and their effects are immediate. In reality, results of biological interactions between individuals may appear in the future, and in social models, individuals or players may act, that is choose appropriate strategies, on the basis of the information concerning events in the past. It is well known that time delays may cause oscillations in dynamical systems [1; 2; 3; 4]. One usually expects that equilibrium of evolving populations, describing coexisting strategies or behaviors, is asymptotically stable for small-time delays and for big ones it becomes unstable. Here we discuss replicator dynamics of populations of interacting individuals playing two-player games with two strategies and unique interior stationary states. It describes the time evolution of frequencies of strategies played in the population [5; 6; 7]. Effects of time delays in replicator dynamics were discussed in [8; 9; 10; 11; 12; 13; 14; 15; 16; 17]. In [9], the authors constructed a model, where individuals are born some time after their parents played and received payoffs. There were constructed two coupled equations: one for the frequency of the first strategy and the other one for the size of the population. It was shown that in such a model, oscillations are not possible, the original stationary state is globally asymptotically stable for any time delays. Here we modify the above model by allowing time delays to depend on strategies played by individuals (strategy-dependent delays were discussed in [14; 15; 16; 17]). We consider small delays and use Taylor expansion to get ordinary differential equations for frequencies of strategies with time delays as parameters. We apply our technique to get analytic expressions for interior stationary states in two games: Snowdrift and Stag-hunt. We show that interior stationary states depend continuously upon time delays. Our analytic formulas for stationary states approximate well exact numerical results obtained in [13] for small time delays. In Section 2, we construct replicator dynamics with strategy-dependent time delays. Section 3 contains a derivation of replicator dynamics for small delays. In Section 4, we apply our technique in two examples: Snowdrift and Stag-hunt games. Discussion follows in Section 5. ## II replicator dynamics with strategy-dependent time delays We consider games with two pure strategies; denote them by C and D. 
In discrete moments of time, individuals compete in pairwise contests and the outcome is given by the following payoff matrix: \[U=\begin{array}{ccc}&\text{C}&\text{D}\\ \text{C}&a&b\\ \text{D}&c&d\end{array}\] where the \(i,j\) entry, \(i,j=C,D\), is the payoff of the first (row) player when he plays the strategy \(i\) and the second (column) player plays the strategy \(j\). We assume that both players are the same and hence the payoffs of the column player are given by the matrix transposed to \(U\); such games are called symmetric. From now on we will assume that \(a<c\) and \(d<b\), or \(c<a\) and \(b<d\), so that there is a unique mixed Nash equilibrium, \(\bar{x}=\frac{b-d}{b-d+c-a}\), an interior stationary state of the replicator dynamics, which is respectively asymptotically stable or unstable. We assume that each player interacts with all other ones and receives an average payoff with respect to the structure of the population, i.e. the proportion of the population playing each strategy. We interpret payoffs as the number of offspring that an individual has after a contest; the offspring inherits the strategy of its ancestor. We assume that during a very small time interval of length \(\varepsilon\), only an \(\varepsilon\)-fraction of the population can manage to pair with partners and play the game. We assume that players do not get payoffs immediately - new players are born \(\tau\) units of time after their parents interacted and received payoffs. We also assume that time delays are strategy dependent; we denote them by \(\tau_{C}\) and \(\tau_{D}\). Let \(p_{i}(t)\), \(i=C,D\), be the number of individuals who play strategies \(C\) and \(D\) respectively at the time \(t\). Then the total number of players is \(p(t)=p_{C}(t)+p_{D}(t)\) and the fraction of the population playing strategy \(C\) is \(x(t)=\frac{p_{C}(t)}{p(t)}\). The average payoffs which players get when playing the strategies \(C\) and \(D\) are given by \(U_{C}(t)=ax(t)+b(1-x(t))\) and \(U_{D}(t)=cx(t)+d(1-x(t))\) respectively. With all these notations and assumptions in mind, we propose the following equations to describe our model: \[p_{i}(t+\varepsilon)=(1-\varepsilon)p_{i}(t)+\varepsilon p_{i}(t-\tau_{i})U_{i}(t-\tau_{i}), \tag{1}\] \(i=C,D\). Then for the size of the whole population we get \[p(t+\varepsilon)=(1-\varepsilon)p(t)+\varepsilon\Big{(}p_{C}(t-\tau_{C})U_{C}(t-\tau_{C})+p_{D}(t-\tau_{D})U_{D}(t-\tau_{D})\Big{)}. \tag{2}\] We divide (1) by (2) for \(i=C\), obtain the equation for \(x(t+\varepsilon)\), subtract \(x(t)\), divide the difference by \(\varepsilon\), take the limit \(\varepsilon\to 0\), and get an equation for the frequency of the first strategy, \[\frac{dx}{dt}=\frac{p_{C}(t-\tau_{C})U_{C}(t-\tau_{C})(1-x(t))-p_{D}(t-\tau_{D})U_{D}(t-\tau_{D})x(t)}{p(t)}. \tag{3}\] Let us notice that, unlike in the standard replicator dynamics, the above equation for the frequency of the first strategy is not closed; one needs equations for the population sizes. From (1) and (2) we get \[\frac{dp_{i}(t)}{dt}=-p_{i}(t)+p_{i}(t-\tau_{i})U_{i}(t-\tau_{i}),\quad i=C,D \tag{4}\] \[\frac{dp(t)}{dt}=-p(t)+p_{C}(t-\tau_{C})U_{C}(t-\tau_{C})+p_{D}(t-\tau_{D})U_{D}(t-\tau_{D}). \tag{5}\] In [13], a transcendental equation for the stationary states of (3) was obtained and then solved numerically for various games. Here we derive an approximate replicator equation for small time delays which enables us to get an analytical expression for the interior stationary state.
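Before doing so, we note that the delayed model (1)-(5) can also be explored numerically. The following minimal sketch (an illustration, not the authors' code) integrates (4) with a fixed-step Euler scheme and a constant initial history and reports the long-run frequency of the first strategy; the payoff entries (the Snowdrift parameterization of Section IV with benefit 6 and cost 4), the delays, the initial frequency, and the step size are example values only.

```python
# Minimal sketch: Euler integration of the delayed dynamics (4) with a
# constant initial history; reports the long-run frequency x = p_C / p.
# Payoffs, delays, initial frequency and step size are example values.
import numpy as np

def delayed_replicator(a, b, c, d, tau_C, tau_D, x0=0.4, t_max=40.0, dt=1e-3):
    n = int(t_max / dt)
    dC, dD = int(round(tau_C / dt)), int(round(tau_D / dt))
    lag = max(dC, dD)
    pC = np.empty(n + lag + 1)
    pD = np.empty(n + lag + 1)
    pC[:lag + 1] = x0            # constant history, total population 1
    pD[:lag + 1] = 1.0 - x0
    for k in range(lag, n + lag):
        xC = pC[k - dC] / (pC[k - dC] + pD[k - dC])   # frequency tau_C ago
        xD = pC[k - dD] / (pC[k - dD] + pD[k - dD])   # frequency tau_D ago
        UC = a * xC + b * (1.0 - xC)                  # delayed payoff of C
        UD = c * xD + d * (1.0 - xD)                  # delayed payoff of D
        pC[k + 1] = pC[k] + dt * (-pC[k] + pC[k - dC] * UC)
        pD[k + 1] = pD[k] + dt * (-pD[k] + pD[k - dD] * UD)
    return pC[-1] / (pC[-1] + pD[-1])

if __name__ == "__main__":
    # Snowdrift payoff entries for benefit 6 and cost 4: a=4, b=2, c=6, d=0.
    for tau_D in (0.0, 0.05, 0.1):
        x_bar = delayed_replicator(4.0, 2.0, 6.0, 0.0, tau_C=0.0, tau_D=tau_D)
        print(f"tau_C = 0.00, tau_D = {tau_D:.2f}: x_bar ~ {x_bar:.3f}")
```

The long-run frequency obtained in this way can then be compared with the analytic small-delay approximation derived below.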
## III Small time delays approximation We begin by presenting (3) in a different form. We insert (4) into (3) and after some simplifications we get the following equation: \[\frac{dx}{dt}=\frac{1}{p(t)}[\frac{dp_{C}(t)}{dt}(1-x(t))-\frac{dp_{D}(t)}{dt} x(t)]. \tag{6}\] Now we Taylor expand the right part of (4), keep first powers of \(\tau_{C}\) and \(\tau_{D}\), and get the following equation: \[\frac{dp_{i}(t)}{dt}=-p_{i}(t)+p_{i}(t)U_{i}(t)-\tau_{i}[\frac{dp_{i}(t)}{dt} U_{i}(t)+p_{i}(t)\frac{dU_{i}(t)}{dt}], \tag{7}\] and hence \[\frac{dp_{C}(t)}{dt}=-p_{C}(t)+(a-b)x(t)p_{C}(t)+bp_{C}(t)-\tau_{C}[\frac{dp_{ C}(t)}{dt}((a-b)x(t)+b)+(a-b)p_{C}(t)\frac{dx(t)}{dt}], \tag{8}\] \[\frac{dp_{D}(t)}{dt}=-p_{D}(t)+(c-d)x(t)p_{D}(t)+dp_{D}(t)-\tau_{D}[\frac{dp_{ D}(t)}{dt}((c-d)x(t)+d)+(c-d)p_{D}(t)\frac{dx(t)}{dt}]. \tag{9}\] We solve (8) and (9) for derivatives of \(p_{C}(t)\) and \(p_{D}(t)\), \[\frac{dp_{C}(t)}{dt}=px\frac{-1+b+(a-b)x-(a-b)\tau_{C}\frac{dx}{dt}}{1+\tau_{C }(a-b)x+b\tau_{C}}, \tag{10}\] \[\frac{dp_{D}(t)}{dt}=p(1-x)\frac{-1+d+(c-d)x-(c-d)\tau_{D}\frac{dx}{dt}}{1+ \tau_{D}(c-d)x+d\tau_{D}}. \tag{11}\] Now we insert (10) and (11) into (6), solve it for \(\frac{dx}{dt}\) and get one ordinary differential equation for the frequency of the first strategy with \(\tau_{C}\) and \(\tau_{D}\) as parameters, \[\frac{dx}{dt}=\frac{x(1-x)}{K}\left[\frac{-1+b+(a-b)x}{1+\tau_{C}(a-b)x+b \tau_{C}}-\frac{-1+d+(c-d)x}{1+\tau_{D}(c-d)x+d\tau_{D}}\right], \tag{12}\] where \[K=1-x(1-x)\left[\frac{-(a-b)\tau_{C}}{1+\tau_{C}(a-b)x+b\tau_{C}}+\frac{(c-d)\tau _{D}}{1+\tau_{D}(c-d)x+d\tau_{D}}\right].\] Now it is easy to get an equation for the stationary state, \[\alpha x^{2}+\beta x+\gamma=0, \tag{13}\] where \[\begin{cases}\alpha=(\tau_{D}-\tau_{C})(a-b)(c-d),\\ \beta=\tau_{D}[(a-b)d+(b-1)(c-d)]+\tau_{C}[(d-c)b-(d-1)(a-b)]+a-b-c+d,\\ \gamma=\tau_{D}d(b-1)+\tau_{C}b(1-d)+b-d\end{cases}\] We solve (13) on the interval \([0,1]\) and get a formula for the stationary state, \[\bar{x}=\frac{-\beta\pm\sqrt{\beta^{2}-4\alpha\gamma}}{2\alpha}. \tag{14}\] We see that frequencies of strategies in the stationary state depend continuously on time delays, \(\tau_{C}\) and \(\tau_{D}\), not only on their difference. It is easy to see that when \(\tau_{C}\) is equal to \(\tau_{D}\), then there is only one solution of (13), namely \(\bar{x}=\frac{b-d}{b-d+c-a}\) as in the replicator dynamics without time delays. ## IV Examples Here we will analyze dependence of stationary states on time delays for Snowdrift and Stag-hunt game. We will also compare our approximate analytical results with numerical solutions for stationary states obtained in [13]. ### Snowdrift game Snowdrift game describes interactions between two car drives caught in a snow blizzard. They can cooperate and clear together the road from snow. However one of the drivers can defect and wait until the second one will do the job. However if both of them defect they will never go home. We see that it is profitable for a driver to choose the strategy different from that of the other one. That leads to a replicator dynamics with a stable interior state - a stable coexistence of both strategies. Snowdrift game is usually parameterized as follows: \[U_{1}=\begin{array}{ccc}\text{C}&\text{D}\\ \text{D}&\text{b-c/2}&\text{b-c}\\ \text{D}&\text{b}&0\end{array}\] In this matrix strategy \(C\) stands for cooperation and \(D\) for defection, \(b\) stands for the benefit and \(c\) is the cost that a cooperator pays, we of course assume that \(b>c\). 
For such a game we have: \[\begin{cases}\alpha=(\tau_{D}-\tau_{C})bc/2,\\ \beta=\tau_{D}b(b-c-1)-\tau_{C}(b(b-c)-c/2)+c/2-b,\\ \gamma=\tau_{C}(b-c)+b-c\end{cases}\] and for \(b=6\) and \(c=4\) \[\bar{x}=\frac{-(6\tau_{D}-10\tau_{C}-4)-\sqrt{(6\tau_{D}-10\tau_{C}-4)^{2}-48(\tau_{D}-\tau_{C})(2\tau_{C}+2)}}{24(\tau_{D}-\tau_{C})} \tag{15}\] In Fig. 1 we show how the stationary state, that is, the frequency of cooperation, depends on time delays. We compare our small-delay approximation with the numerical results obtained in [13]; the agreement is quite good. We see that the bigger the time delay of a given strategy, the smaller its proportion in the population. Let us note that a delay in the strategy \(D\) affects the stationary state much more dramatically than a delay in the strategy \(C\). We see that for \(\tau_{C}=0\), the stable interior stationary state increases as \(\tau_{D}\) increases and it disappears for \(\tau_{D}>\tau_{D}^{*}=1/9\). In general, it follows from (15) that a population consisting of just C-players becomes globally asymptotically stable for \(\tau_{D}\geq\tau_{D}^{*}=(10\tau_{C}+1)/9\). ### Stag-hunt game The Stag-hunt game describes the competition between two hunters: if they coordinate their actions, they get a stag, but if one of them defects, he will get a hare and the other one will stay with nothing. It follows that it is profitable for hunters to choose the same strategy. The game belongs to a class of games with two stable Nash equilibria and an unstable interior point (a mixed Nash equilibrium) in the replicator dynamics. Here we will consider the following payoff matrix: \[U_{2}=\begin{array}{ccc}&\text{C}&\text{D}\\ \text{C}&s&0\\ \text{D}&h&h\end{array}\] Now we get \[\begin{cases}\alpha=0,\\ \beta=\tau_{D}sh-\tau_{C}s(h-1)+s,\\ \gamma=-\tau_{D}h-h.\end{cases}\] Thus the stationary state is given by \[\bar{x}=\frac{h(\tau_{D}+1)}{s+hs(\tau_{D}-\tau_{C})+s\tau_{C}}. \tag{16}\] Figure 2: Dependence of the unstable interior stationary state on time delays in the Stag-hunt game with \(s=5,h=3\); we present the small-delay approximation \(\bar{x}\) and the numerical solution from [13]. (a) stationary state as a function of \(\tau_{C}\), when \(\tau_{D}=0\), (b) stationary state as a function of \(\tau_{D}\), when \(\tau_{C}=0\), (c) stationary state as a function of \(\tau\), when \(\tau_{D}=2\tau\) and \(\tau_{C}=\tau\), and (d) stationary state as a function of \(\tau\) when \(\tau_{C}=2\tau\) and \(\tau_{D}=\tau\). Now we choose \(s=5\) and \(h=3\). In Fig. 2, we show the dependence of the stationary state on time delays. Again we see that our small-delay approximation is quite good. Now, the bigger the time delay of a given strategy, the smaller its basin of attraction. We see that for \(\tau_{D}=0\), the unstable interior stationary state increases as \(\tau_{C}\) increases and it disappears for \(\tau_{C}>\tau_{C}^{*}=0.2\). In general, it follows from (16) that a population consisting of just D-players becomes globally asymptotically stable for \(\tau_{C}\geq\tau_{C}^{*}=\frac{12\tau_{D}+2}{10}\). Delays in the strategy \(C\) affect the stationary state more significantly than delays in the strategy \(D\). ## V Discussion Replicator dynamics with strategy-dependent time delays was studied in [13]. The authors derived a transcendental equation for a stationary state and solved it numerically for various games. Here we introduced a small-delay approximation for time-delayed differential equations.
This enabled us to approximate the system of replicator equations for the strategy frequency and the population size by an ordinary differential equation for the strategy frequency with time delays as parameters. In this way we obtained an analytic expression for the stationary state. We applied our technique to the Snowdrift and Stag-hunt game. Our analytic formulas approximate well exact numerical solutions for small time delays. We showed that stationary frequencies of strategies depend continuously on time delays. Moreover, for the Snowdrift game, the frequency of a strategy in the stationary state is a decreasing function of its delay; for the Stag-hunt game, a basin of attraction of a strategy is a decreasing function of its delay. Such results were of course already presented in [13]. It would be interesting to analyze time-delayed replicator dynamics taking into account stochastic effects resulting from mutations and a random character of interactions. In the latter case, individuals play with particular opponents and not against the average strategy like in the standard replicator dynamics. We would also like to study time-delay effects in evolutionary games in finite populations. Some results were already presented in [18]. **Acknowledgments**: This project has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No 955708. J. Miekisz and R. Vardanyan thank the National Science Centre, Poland, for a financial support under Grant No. 2015/17/B/ST1/00693.
2310.10577
Nondegeneracy properties and uniqueness of positive solutions to a class of fractional semilinear equations
We prove that positive solutions $u\in H^s(\mathbb{R}^N)$ to the equation $(-\Delta )^s u+ u=u^p$ in $\mathbb{R}^N$ are nonradially nondegenerate, for all $s\in (0,1)$, $N\geq 1$ and $p>1$ strictly smaller than the critical Sobolev exponent. By this we mean that the linearized equation $(-\Delta )^s w+ w-pu^{p-1}w = 0$ does not admit nonradial solutions beside the directional derivatives of $u$. Letting $B$ be the unit centered ball and $\lambda_1(B)$ the first Dirichlet eigenvalue of the fractional Laplacian $(-\Delta )^s$, we also prove that positive solutions to $(-\Delta )^s u+\lambda u=u^p$ in ${B}$ with $u=0$ on $\mathbb{R}^N\setminus B$, are nonradially nondegenerate for any $\lambda> -\lambda_1(B)$ in the sense that the linearized equation does not admit nonradial solutions. From these results, we then deduce uniqueness and full nondegeneracy of positive solutions in some special cases. In particular, in the case $N=1$, we prove that the equation $(-\Delta )^s u+ u=u^2$ in $\mathbb{R}$ or in $B$, with zero exterior data, admits a unique even solution which is fully nondegenerate in the optimal range $s \in (\frac{1}{6},1)$, thus extending the classical uniqueness result of Amick and Toland on the Benjamin-Ono equation. Moreover, in the case $N=1$, $\lambda=0$, we also prove the uniqueness and full nondegeneracy of positive solutions for the Dirichlet problem in $B$ with arbitrary subcritical exponent $p$. Finally, we determine the unique positive ground state solution of $(-\Delta )^{\frac{1}{2}} u+ u=u^{p}$ in $\mathbb{R}^N$, $N \ge 1$ with $p=1+\frac{2}{N+1}$ and compute the sharp constant in the associated Gagliardo-Nirenberg inequality $$ \|u\|_{L^{p+1}(\mathbb{R}^N)} \le C \|(-\Delta )^{\frac{1}{4}} u\|_{L^2(\mathbb{R}^N)}^{\frac{N}{N+2}} \|u\|_{L^2(\mathbb{R}^N)}^{\frac{2}{N+2}}. $$
Mouhamed Moustapha Fall, Tobias Weth
2023-10-16T16:53:37Z
http://arxiv.org/abs/2310.10577v5
Uniqueness and nondegeneracy of solutions to \((-\Delta)^{s}u+u=u^{p}\) in \(\mathbb{R}^{N}\) and in balls ###### Abstract. We prove that positive solutions \(u\in H^{s}(\mathbb{R}^{N})\) to the equation \((-\Delta)^{s}u+u=u^{p}\) in \(\mathbb{R}^{N}\) are unique up to translations and nondegenerate, for all \(s\in(0,1)\), \(N\geq 1\) and \(p>1\) is strictly smaller than the critical Sobolev exponent. This generalizes a result of Frank, Lenzmann and Silvestre [15], where the same uniqueness and nondegeneracy is proven for solutions with Morse index \(1\). Letting \(B\) be the unit centered ball and \(\lambda_{1}(B)\) the first Dirichlet eigenvalue of the fractional Laplacian \((-\Delta)^{s}\), we also prove that positive solutions to \((-\Delta)^{s}u+\lambda u=u^{p}\) in \(B\) with \(u=0\) on \(\mathbb{R}^{N}\setminus B\), are unique and nondegenerate for any \(\lambda>-\lambda_{1}(B)\). This extends the very recent results in [6, 7] which provide uniqueness and nondegeneracy of least energy solutions. ## 1. Introduction Let \(s\in(0,1)\) and \(N\geq 1\). The present paper is devoted to the uniqueness (up to translations) and nondegeneracy of solutions to the problem \[(-\Delta)^{s}u+u=u^{p}\quad\text{ in }\mathbb{R}^{N},\qquad u>0\quad\text{ in }\mathbb{R}^{N},\qquad u\in H^{s}(\mathbb{R}^{N}) \tag{1.1}\] for \(p\in(1,2^{*}_{s}-1)\) with \(2^{*}_{s}\) being the critical fractional Sobolev exponent is given by \[2^{*}_{s}=\frac{2N}{N-2s}\ \text{ if }2s<N\qquad\text{ and }\qquad 2^{*}_{s}\in(2,\infty)\ \text{ if }2s\geq 1=N.\] For the following, we denote by \(u\mapsto\|u\|_{s}^{2}=\sqrt{[u]_{s}^{2}+\|u\|_{L^{2}(\mathbb{R}^{N})}^{2}}\) the norm in \(H^{s}(\mathbb{R}^{N})\), where \[[u]_{s}^{2}=\frac{c_{N,s}}{2}\int_{\mathbb{R}^{N}}\!\!\int_{\mathbb{R}^{N}}\! \frac{(u(x)-u(y))^{2}}{|x-y|^{N+2s}}dxdy,\quad c_{N,s}=\pi^{-\frac{N}{2}}s4^{ s}\frac{\Gamma(\frac{N}{2}+s)}{\Gamma(1-s)}. \tag{1.2}\] It is well known (see e.g. [15, Proposition 3.1]) that positive solutions to (1.1) belong to \(L^{\infty}(\mathbb{R}^{N})\cap H^{s+1}(\mathbb{R}^{N})\cap C^{\infty}( \mathbb{R}^{N})\), are radially symmetric about a point \(x_{0}\in\mathbb{R}^{N}\) and are strictly decreasing in \(|x-x_{0}|\). In the local case \(s=1\), the uniqueness of solutions to problems (1.3) and (1.1) (up to translations) is a classical result and has been established by Kwong in his seminal paper [20], see also [5, 21]. In contrast, for semilinear fractional equations, the uniqueness and nondegeneracy of specific classes of solutions is a very challenging problem, and few results are available up to now. One of the main reasons for this is the lack of ODE techniques which are highly useful in the case \(s=1\). It was first proven by Amick and Toland in [1] that (1.1) has a unique solution in the special case \(s=1/2\), \(p=2\) and \(N=1\). Much later, Frank, Lenzmann and Silvestre [15] proved uniqueness and nondegeneracy of Morse index one solutions to (1.1) in the general case, extending an earlier result of Frank and Lenzmann for the case \(N=1\) to all dimensions, and extending also results of [19, 12] to all \(s\in(0,1)\). However, it remained open up to now whether any solution to (1.1) has Morse index one. The present paper gives an affirmative answer to this question, as we shall prove in fact that solutions to (1.1) are unique up to translations. Let \(B:=\{x\in\mathbb{R}^{N}\,:\,|x|<1\}\), \(\lambda_{1}(B)\) be the first Dirichlet eigenvalue of \((-\Delta)^{s}\) for \(s\in(0,1)\). 
For \(\lambda>-\lambda_{1}(B)\) we will also prove uniqueness and nondegeneracy of solutions \(u\in H^{s}(\mathbb{R}^{N})\) to the Dirichlet problem \[(-\Delta)^{s}u+\lambda u=u^{p}\quad\text{ in }B,\qquad u>0\quad\text{ in }B,\qquad u=0\quad\text{ in }\mathbb{R}^{N}\setminus B. \tag{1.3}\] From [22], we see that every (weak) solution \(u\in H^{s}(\mathbb{R}^{N})\) is contained in \(L^{\infty}(\mathbb{R}^{N})\), and therefore [23] and [22] imply that \(u\in C^{s}(\mathbb{R}^{N})\cap C^{\infty}(B)\). Moreover, by [17], we find that \(u\) is a radial function which is strictly decreasing in its radial variable (see also [3] for a different argument but with \(\lambda=0\)). With regard to the question of uniqueness for (1.3), we note that the values allowed for \(p\) are not restrictive since for \(p\in[0,1]\), uniqueness is known (see e.g. [9, Lemma A.1]). Moreover, for \(p\geq 2^{*}_{s}-1\), with \(N>2s\), the nonexistence results in [14, 24] imply that (1.3) does not admit a solution. The first uniqueness and nondegeneracy result for (1.3) is contained in the fairly recent paper [8], which establishes uniqueness and nondegeneracy of solutions to (1.3) when \(s\) or \(p\) are close to \(1\) in dimension \(N\geq 2\). Very recently, in the highly interesting works [6, 7], this uniqueness and nondegeneracy is proved for least energy positive solutions of (1.3) in the full range of parameters \(s\) or \(p\). We shall see in the present paper that some intermediate results in [6, 7], which are based on polarization and a variational principle from [2], can be combined with additional new tools to yield uniqueness and nondegeneracy for _any_ solution to (1.3). More precisely, we introduce a new Picone type identity and apply it to antisymmetric eigenfunctions of the linear eigenvalue problem associated with (1.3). In the case of (1.1), we combine this identity with similar polarization arguments as in [6, 7] and a geometric lemma to deduce the uniqueness and nondegeneracy of positive solutions (up to translations). The following is our first main result regarding (1.1). **Theorem 1.1**.: _Let \(s\in(0,1)\), \(N\geq 1\) and \(1<p<2^{*}_{s}-1\). Let \(u\in H^{s}(\mathbb{R}^{N})\) satisfy (1.1). Then there exists \(\overline{\Lambda}>0\) such that_ \[[w]_{s}^{2}+\int_{\mathbb{R}^{N}}w^{2}dx-p\int_{\mathbb{R}^{N}}u^{p-1}w^{2}dx \geq\overline{\Lambda}\int_{\mathbb{R}^{N}}w^{2}u^{p-1}dx\qquad\text{ for all }w\in\mathcal{M}_{u}\,\] _where_ \[\mathcal{M}_{u}:=\Big{\{}w\in H^{s}(\mathbb{R}^{N})\,:\int_{\mathbb{R}^{N}}u^{ p}wdx=\int_{\mathbb{R}^{N}}u^{p-1}(\partial_{x_{i}}u)w\,dx=0\quad\text{for all }i=1,\ldots,N\Big{\}}.\] Theorem 1.1 allows us to derive uniqueness via the continuation argument developed in [15], which is based on the implicit function theorem. More precisely, we have the following result: **Theorem 1.2**.: _Let \(s\in(0,1)\), \(N\geq 1\) and \(1<p<2^{*}_{s}-1\). Then (1.1) possesses a unique, up to a translation, solution in \(H^{s}(\mathbb{R}^{N})\)._ Our next theorem deals with (1.3), for which we have the following nondegeneracy property. **Theorem 1.3**.: _Let \(s\in(0,1)\), \(N\geq 1\), \(\lambda>-\lambda_{1}(B)\) and \(1<p<2^{*}_{s}-1\). Let \(u\in\mathcal{H}^{s}(B)\) satisfy (1.3). 
Then, there exists \(\overline{\Lambda}>0\) such that_ \[[w]_{s}^{2}+\lambda\int_{B}w^{2}dx-p\int_{B}u^{p-1}w^{2}dx\geq\overline{ \Lambda}\int_{B}w^{2}u^{p-1}dx\qquad\text{ for all }w\in\mathcal{H}^{s}(B)\ \text{with}\ \int_{B}wu^{p}dx=0,\] _where \(\mathcal{H}^{s}(B)=\{u\in H^{s}(\mathbb{R}^{N})\,:\,u=0\ \text{on}\ \mathbb{R}^{N}\setminus B\}\)._ Similarly, we shall combine Theorem 1.3 with [8] and a continuation argument to obtain the following uniqueness result. **Theorem 1.4**.: _Let \(s\in(0,1)\), \(N\geq 1\), \(\lambda>-\lambda_{1}(B)\) and \(1<p<2^{*}_{s}-1\). Then (1.3) possesses a unique solution in \(H^{s}(\mathbb{R}^{N})\)._ The paper is organized as follows. In Section 2, we first derive a Picone type identity for antisymmetric functions, which is a key tool in the analysis of nonradial solutions of the linearized problems associated with (1.1) and (1.3). We then prove the nondegeneracy results given in Theorems 1.1 and 1.3. Section 3 is then devoted to the continuation argument which yields the proof of Theorem 1.4. In Section 4, we complete the proof of Theorem 1.2 by pointing to the continuation argument in [15]. Finally, in the appendix, we first provide a geometric lemma which is also used in the analysis of nonradial eigenfunctions. Moreover, we provide a variant of the fractional Hopf type lemma in [25, Prop. 2.2] with somewhat weaker assumptions, which is essential in our context. ## 2. Nondegeneracy The following Picone-type result will be crucial in the following. **Lemma 2.1**.: _Let \(\Omega\) be an open set of \(\mathbb{R}^{N}\) and \(\alpha>0\). Let \(V\in L^{1}_{loc}(\Omega)\) and \(v\in C^{2s+\alpha}(\Omega)\cap L^{1}(\mathbb{R}^{N};(1+|x|)^{-N-2s})\) satisfy_ \[(-\Delta)^{s}v=Vv\qquad\text{ in }\Omega. \tag{2.1}\] _Let \(e\) be a unit vector of \(\mathbb{R}^{N}\) and let \(\sigma_{e}\) denote the reflection with respect to \(\{x\in\mathbb{R}^{N}\,:\,x\cdot e=0\}\). Suppose that \(v>0\) on \(\{x\in\mathbb{R}^{N}\,:\,x\cdot e>0\}\) and \(v=-v\circ\sigma_{e}\) on \(\mathbb{R}^{N}\). Then for any \(w\in H^{s}(\mathbb{R}^{N})\cap C_{c}(\Omega)\) satisfying \(w=-w\circ\sigma_{e}\) on \(\mathbb{R}^{N}\) and \(\frac{w}{v}\in C_{c}(\Omega)\), we have_ \[[w]_{s}^{2}-\int_{\mathbb{R}^{N}}Vw^{2}dx=\int_{\{x\cdot e>0\}}\int_{\{y\cdot e >0\}}H^{e}_{w,v}(x,y)dxdy, \tag{2.2}\] _where \(H^{e}_{w,v}\geq 0\) on \(\{x\cdot e>0\}\cap\{y\cdot e>0\}\) is given by_ \[H^{e}_{w,v}(x,y):=c_{N,s}v(x)v(y)\left[w(x)/v(x)-w(y)/v(y)\right]^{2}\times \left(\frac{1}{|x-y|^{N+2s}}-\frac{1}{|\sigma_{e}(x)-y|^{N+2s}}\right). \tag{2.3}\] Proof.: We assume without loss of generality that \(e=e_{1}\). We first note that for every \(x,y\in\mathbb{R}^{N}\) with \(x_{1}\neq 0\) and \(y_{1}\neq 0\), we have \[(w(x)-w(y))^{2}- w^{2}(x)\frac{(v(x)-v(y))}{v(x)}+w^{2}(y)\frac{(v(x)-v(y))}{v(y)}\] \[=v(x)v(y)\left(w(x)/v(x)-w(y)/v(y)\right)^{2}. \tag{2.4}\] Moreover, by assumption, \(w/v\in C_{c}(\Omega)\), and thus \[c_{N,s}\int_{\mathbb{R}^{N}\times\mathbb{R}^{N}}\frac{w^{2}(x)}{v(x)}\frac{v( x)-v(y)}{|x-y|^{N+2s}}dxdy=\int_{\Omega}\frac{w^{2}(x)}{v(x)}\,(-\Delta)^{s}v(x) dx=\int_{\Omega}V(x)w^{2}(x)dx<\infty\] by (2.1). 
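We also note that (2.4) is verified by direct expansion (a routine computation recorded here for the reader's convenience): both sides are equal to \(w^{2}(x)\frac{v(y)}{v(x)}-2w(x)w(y)+w^{2}(y)\frac{v(x)}{v(y)}\). 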
From this and a change of variable, we get \[\int_{\mathbb{R}^{N}\times\mathbb{R}^{N}}\frac{(w(x)-w(y))^{2}}{|x-y |^{N+2s}}dydx-\frac{2}{c_{N,s}}\int_{\Omega}V(x)w^{2}(x)dx\] \[=\int_{\mathbb{R}^{N}\times\mathbb{R}^{N}}\frac{v(x)v(y)}{|x-y|^{ N+2s}}\left(w(x)/v(x)-w(y)/v(y)\right)^{2}dxdy\] \[=2\int_{\{x_{1}>0\}\cap\{y_{1}>0\}}\frac{v(x)v(y)}{|x-y|^{N+2s}} \left(w(x)/v(x)-w(y)/v(y)\right)^{2}dxdy\] \[-2\int_{\{x_{1}>0\}\cap\{y_{1}>0\}}\frac{v(x)v(y)}{((x_{1}+y_{1}) ^{2}+|\widetilde{x}-\widetilde{y}|^{2})^{(N+2s)/2}}\left(w(x)/v(x)-w(y)/v(y) \right)^{2}dxdy.\] Here, we used that \(x_{1}\mapsto v(x_{1},\cdot)\) is odd and \(x_{1}\mapsto\frac{w}{v}(x_{1},\cdot)\) is even. It follows, from this, that \[\int_{\mathbb{R}^{N}\times\mathbb{R}^{N}}\frac{(w(x)-w(y))^{2}}{|x-y|^{N+2s}} dydx-\frac{2}{c_{N,s}}\int_{\Omega}Vw^{2}dx=\frac{2}{c_{N,s}}\int_{\{x_{1}>0\} \cap\{y_{1}>0\}}H^{e}_{w,v}(x,y)dxdy, \tag{2.5}\] where, for all \((x,y)\in\{x_{1}>0\}\times\{y_{1}>0\}\), \[H^{e}_{w,v}(x,y):=c_{N,s}v(x)v(y) \left[w(x)/v(x)-w(y)/v(y)\right]^{2}\] \[\times\left(|x-y|^{-N-2s}-((x_{1}+y_{1})^{2}+|\widetilde{x}- \widetilde{y}|^{2})^{-(N+2s)/2}\right).\] By the triangular inequality and the fact that \(v(x)v(y)>0\) for all \(x,y\in\{x_{1}>0\}\cap\{y_{1}>0\}\), we deduce that \(H^{e}_{w,v}\geq 0\) on \(\{x_{1}>0\}\cap\{y_{1}>0\}\). Hence (2.2) follows. ### Nondegeneracy in the case of \(\mathbb{R}^{N}\) This section is devoted to the proof of Theorem 1.1, and we need to collect some preliminary information first. We start with the following result from [15]. **Lemma 2.2**.: _Let \(u\in H^{s}(\mathbb{R}^{N})\) satisfy (1.1). Then \(u\in C^{\infty}(\mathbb{R}^{N})\cap H^{1+2s}(\mathbb{R}^{N})\). Moreover \(u\) is radially symmetric about a point \(x_{0}\in\mathbb{R}^{N}\), strictly decreasing as a function of \(|x-x_{0}|\) and satisfies \(u(x)\to 0\) as \(|x|\to\infty\)._ By translation, it follows that it suffices to prove Theorem 1.1 for radial positive solutions \(u\). Hence we assume from now on that \(u=u(|x|)\) is a fixed radial solution of (1.1). We then consider the weighted eigenvalue problem \[(-\Delta)^{s}w+w=\Lambda u^{p-1}w\qquad\text{ in }\mathbb{R}^{N},\qquad\qquad w \in H^{s}(\mathbb{R}^{N}) \tag{2.6}\] with the positive radial and bounded weight function \(u^{p-1}\). Since \(u^{p-1}(x)\to 0\) as \(|x|\to\infty\), the embedding \(H^{s}(\mathbb{R}^{N})\hookrightarrow L^{2}(\mathbb{R}^{N},u^{p-1}dx)\) is compact. Consequently, by standard arguments, problem (2.6) admits a sequence of eigenvalues \(0<\Lambda_{1}<\Lambda_{2}\leq\dots\), where \(\Lambda_{1}\) is a simple eigenvalue with an eigenspace spanned by a positive eigenfunction. Moreover, all other eigenfunctions of (2.6) are orthogonal to the first eigenfunction with respect to the scalar product in \(L^{2}(\mathbb{R}^{N},u^{p-1}dx)\) and therefore have to change sign. Since \(u\) solves (2.6) with \(\Lambda=1\) and is positive, we conclude a posteriori that \(\Lambda_{1}=1\) is the first eigenvalue of (2.6). Moreover, the second eigenvalue admits the variational characterization \[\Lambda_{2}:=\inf\Bigl{\{}\|w\|_{s}^{2}\::\:w\in H^{s}(\mathbb{R}^{N}),\:\int_ {\mathbb{R}^{N}}w^{2}u^{p-1}dx=1,\:\int_{\mathbb{R}^{N}}wu^{p}dx=0\Bigr{\}}. \tag{2.7}\] We note the following key observation, which is a variant of [2, Lemma 2.1]. 
For this, we let \[(v_{1},v_{2})\mapsto\langle v_{1},v_{2}\rangle_{s}=c_{N,s}\int_{\mathbb{R}^{N} \times\mathbb{R}^{N}}\frac{(v_{1}(x)-v_{1}(y))(v_{2}(x)-v_{2}(y))}{|x-y|^{N+2s} }dxdy+\int_{\mathbb{R}^{N}}v_{1}v_{2}dx\] denote the standard scalar product on \(H^{s}(\mathbb{R}^{N})\). **Lemma 2.3**.: _Let \(w\in H^{s}(\mathbb{R}^{N})\) be a sign changing function. Then the following are equivalent:_ * \(w\) _is an eigenfunction of (_2.6_) corresponding to_ \(\Lambda=\Lambda_{2}\)_._ * _The inequality_ (2.8) \[\langle w,\tilde{w}\rangle_{s}\leq\Lambda_{2}\int_{\mathbb{R}^{N}}(\tilde{w} )^{2}u^{p-1}dx\] _holds for both_ \(\tilde{w}=w^{+}\) _and_ \(\tilde{w}=-w^{-}\)_._ * _Equality holds for both_ \(\tilde{w}=w^{+}\) _and_ \(\tilde{w}=-w^{-}\) _in (_2.8_)._1__ Footnote 1: Here and in the following, we let \(w^{+}=\max\{w,0\}\) and \(w^{-}=\max\{-w,0\}\) denote the positive and negative part of \(w\). Proof.: By testing the eigenvalue equation with the functions \(w^{+}\) and \(w^{-}\), we see directly that (i) implies (iii). Moreover, (iii) trivially implies (ii). It remains to prove that (ii) implies (i). The implication and its proof is strongly inspired by [2, Lemma 2.1]. For the convenience of the reader, we give the details here. We first note that \[\int_{\mathbb{R}^{N}}w^{\pm}u^{p}dx>0\] since \(u\) is positive and \(w\) is sign-changing by assumption, and hence there exists \(\alpha_{0}>0\) with \[\int_{\mathbb{R}^{N}}(w^{+}-\alpha_{0}w^{-})u^{p}dx=\int_{\mathbb{R}^{N}}w^{+ }u^{p}dx-\alpha_{0}\int_{\mathbb{R}^{N}}w^{-}u^{p}dx=0.\] The variational characterization (2.7) and (2.8) therefore imply that \[\|w^{+}-\alpha_{0}w^{-}\|_{s}^{2}\geq\lambda_{2}\int_{\mathbb{R}^ {N}}u^{p-1}(w^{+}-\alpha_{0}w^{-})^{2}\,dx\] \[=\lambda_{2}\Bigl{(}\int_{\mathbb{R}^{N}}u^{p-1}(w^{+})^{2}\,dx+ \alpha_{0}^{2}\int_{\mathbb{R}^{N}}u^{p-1}(w^{-})^{2}\,dx\Bigr{)}\geq\langle w,w^{+}\rangle_{s}+\alpha_{0}^{2}\langle w,-w^{-}\rangle_{s} \tag{2.9}\] \[=\|w^{+}\|_{s}^{2}+\alpha_{0}^{2}\|w^{-}\|_{s}^{2}-(1+\alpha_{0}^ {2})\langle w^{+},w^{-}\rangle_{s}.\] On the other hand, we have \[\|w^{+}-\alpha_{0}w^{-}\|_{s}^{2}=\|w^{+}\|_{s}^{2}+\alpha_{0}^{2}\|w^{-}\|_{s }^{2}-2\alpha_{0}\langle w^{+},w^{-}\rangle_{s}\] and thus, by (2.9), \[(1-\alpha_{0})^{2}\langle w^{+},w^{-}\rangle_{s}=[(1+\alpha_{0}^{2})-2\alpha_ {0}]\langle w^{+},w^{-}\rangle_{s}\geq 0, \tag{2.10}\] while \[\langle w^{+},w^{-}\rangle_{s} =c_{N,s}\int_{\mathbb{R}^{N}\times\mathbb{R}^{N}}\frac{(w^{+}(x)- w^{+}(y))(w^{-}(x)-w^{-}(y))}{|x-y|^{N+2s}}dxdy\] \[=-2c_{N,s}\int_{\mathbb{R}^{N}\times\mathbb{R}^{N}}\frac{w^{+}(x) w^{-}(y)}{|x-y|^{N+2s}}dxdy<0\] since \(w\) is sign changing. Hence equality must hold in (2.10) with \(\alpha_{0}=1\), and thus equality must also hold in (2.9) with \(\alpha_{0}=1\). Consequently, after normalization, \(w\) attains equality in the variational characterization of \(\Lambda_{2}\), and thus \(w\) is an eigenfunction of (2.6) corresponding to \(\Lambda=\Lambda_{2}\). This completes the proof of the fact that (ii) implies (i). We may now complete the Proof of Theorem 1.1.: Our aim is to prove that \[\inf\Bigl{\{}\|w\|_{s}^{2}\,:\,w\in\mathcal{M}_{u},\,\int_{\mathbb{R}^{N}}w^{2} u^{p-1}dx=1\Bigr{\}}>p=\Lambda_{2}, \tag{2.11}\] where \(\mathcal{M}_{u}\) is defined in Theorem 1.1. For this, we prove first, using the argument from [2], that \(\Lambda_{2}\) cannot admit a radial eigenfunction. In the second step, we use Lemma 2.1 to obtain \(p=\Lambda_{2}\) and the desired strict inequality. 
For this latter fact we need to show a decomposition of all eigenfunctions into a sum of an element of \(\operatorname{span}\{\partial_{x_{1}}u,\dots,\partial_{x_{N}}u\}\) and a radial function. **Claim 1**.: _Let \(w\) be an eigenfunction of (2.6) corresponding to \(\Lambda=\Lambda_{2}\). Then \(w\) is not radial._ We already know that \(w\) changes sign. Suppose on the contrary that \(w\) is radial, and let \(w_{e}:[0,\infty)\to\mathbb{R}\) be defined by \(w_{e}(|x|)=w(x)\) for all \(x\in\mathbb{R}^{N}\). By the oscillation estimate given in [15, Proposition 5.3], \(w_{e}\) changes sign at most twice, which implies that \[r=\inf\{\rho\in(0,\infty)\,:\,w_{e}(t)\neq 0\quad\text{ for all }t>\rho\},\] is a well-defined positive number with \[w_{e}(r)=0,\] so \(r\) is the largest zero of \(w_{e}\) on the half line \((0,\infty)\). We may also assume, without loss of generality, that \(w_{e}(t)>0\) for \(t>r\). We now fix \(a>0\) and let \(P_{a}w\) denote the polarization of \(w\) with respect to the half space \(H_{a}:=\{x_{1}<a\}\), i.e. \[P_{a}w(x):=\begin{cases}\max\{w(x),w(\sigma_{a}(x))\},&\text{ for }x\in H_{a},\\ \min\{w(x),w(\sigma_{a}(x))\},&\text{ for }x\not\in H_{a}.\end{cases}\] Here \(x\mapsto\sigma_{a}(x)=(2a-x_{1},x_{2},\dots,x_{N})\) denotes the reflection at \(\partial H_{a}\). Since \(w\) changes sign, the function \(P_{a}w\) changes sign as well. Moreover, by the same argument as in [6, Lemma 3.5] (see also [7, Lemma 4.2] and [2, Lemma 2.3]), the inequalities \[\langle P_{a}w,\tilde{w}\rangle_{s}\leq\Lambda_{2}\int_{\mathbb{R}^{N}}(\tilde{w})^{2}u^{p-1}dx \tag{2.12}\] hold for both \(\tilde{w}=(P_{a}w)^{+}\) and \(\tilde{w}=-(P_{a}w)^{-}\). For the convenience of the reader, we include the proof of (2.12), and we restrict our attention to \(\tilde{w}=(P_{a}w)^{+}\) (the corresponding inequality for \(\tilde{w}=-(P_{a}w)^{-}\) can then be deduced by considering \(-w\) in place of \(w\)). First, we note that \((P_{a}w)^{+}=P_{a}w^{+}\) on \(\mathbb{R}^{N}\) since \(t\mapsto t^{+}\) is a monotone function on \(\mathbb{R}\). Moreover, by combining [2, Lemma 2.3] with the fact that \[\int_{\mathbb{R}^{N}}(P_{a}w)^{+}P_{a}w\,dx=\int_{\mathbb{R}^{N}}|(P_{a}w)^{+}|^{2}dx=\int_{\mathbb{R}^{N}}|(P_{a}w^{+})|^{2}dx=\int_{\mathbb{R}^{N}}|w^{+}|^{2}dx=\int_{\mathbb{R}^{N}}w^{+}w\,dx,\] we have \[\langle P_{a}w,(P_{a}w)^{+}\rangle_{s}\leq\langle w,w^{+}\rangle_{s}. \tag{2.13}\] Moreover, since \(u^{p-1}\) is a radially decreasing function and therefore invariant under the polarization \(P_{a}\), it follows from the general Hardy-Littlewood inequality for polarization (see e.g. [4]) that \[\int_{\mathbb{R}^{N}}[(P_{a}w)^{+}]^{2}u^{p-1}dx=\int_{\mathbb{R}^{N}}(P_{a}w^{+})^{2}u^{p-1}dx\geq\int_{\mathbb{R}^{N}}(w^{+})^{2}u^{p-1}dx. \tag{2.14}\] Combining (2.13), (2.14) and the fact that \[\langle w,w^{+}\rangle_{s}\leq\Lambda_{2}\int_{\mathbb{R}^{N}}(w^{+})^{2}u^{p-1}dx\] by Lemma 2.3, we obtain (2.12). Applying now Lemma 2.3 to the function \(P_{a}w\) and using (2.12), we deduce that \(P_{a}w\) is an eigenfunction of (2.6) corresponding to the eigenvalue \(\Lambda=\Lambda_{2}\). We also note that \(P_{a}w\) is not radial. This follows from the fact that at the point \(\overline{x}=(r+2a,0)\not\in H_{a}\), we have \(\sigma_{a}(\overline{x})=(-r,0)\) and \(P_{a}w(\overline{x})\leq w_{e}(|\sigma_{a}(\overline{x})|)\leq 0\) by definition of \(r\). 
On the other hand \(-\overline{x}\in H_{a}\) and thus \[P_{a}w(-\overline{x})=\max(w_{e}(|-\overline{x}|),w_{e}(|\sigma_{a}(- \overline{x}|)|)\geq w_{e}(|-\overline{x}|)=w_{e}(r+2a)>0.\] As a consequence, since \(P_{a}w\) is continuous, \(P_{a}w\) is not radial, so in particular \(P_{a}w\not\equiv w\). We now consider the function \(v_{a}=w-P_{a}w\in H^{s}(\mathbb{R}^{N})\), which is also an eigenfunction of (2.6). Moreover, \(v_{a}\) is odd with respect to the reflection \(\sigma_{a}\) and, since \(P_{a}w\geq w\) on \(H_{a}\), we have \(v_{a}\leq 0\), \(v_{a}\not\equiv 0\) on \(H_{a}\). Then Lemma 5.3 from the appendix implies that \(v_{a}<0\) on \(H_{a}\). Consequently, we have \[w<w\circ\sigma_{a}\quad\text{on }H_{a}. \tag{2.15}\] Therefore, since (2.15) holds for all \(a>0\), we conclude that the function \(w\) is strictly increasing in the \(x_{1}\)-variable in the half space \(H_{0}:=\{x_{1}<0\}\). This contradicts the fact that \(w\in L^{2}(\mathbb{R}^{N})\) is radial and sign changing, and the proof of **Claim 1** is thus finished. **Claim 2.**_We have \(\Lambda_{2}=p\), and all eigenfunctions of (2.6) corresponding to \(\Lambda_{2}\) are of the form_ \[w(x)=\sum_{i=1}^{N}d_{i}\partial_{x_{i}}u(x)\qquad\text{with coefficients }d_{i}\in\mathbb{R}. \tag{2.16}\] To prove the claim, we note first that \(\Lambda_{2}\leq p\), which follows by simply using the functions \(\partial_{x_{i}}u\), after normalization, as test functions in the variational characterization (2.7). We then consider \(w\) an eigenfunction corresponding to \(\Lambda_{2}\) with \(\|w\|_{L^{2}(\mathbb{R}^{N})}=1\). By elliptic regularity we have \(w\in C^{2}(\mathbb{R}^{N})\). We fix a unit vector \(\nu\) and define the hyperplane \(T_{\nu}:=\{x\cdot\nu=0\}\) and we denote by \(\sigma_{\nu}\) the reflection with respect to \(T_{\nu}\). Moreover, we define \[w^{\nu}:=\frac{w-w\circ\sigma_{\nu}}{2}\in H^{s}(\mathbb{R}^{N})\cap C^{2}( \mathbb{R}^{N}).\] Clearly since \(u\) is radial, then \(w^{\nu}\) solves \((-\Delta)^{s}w^{\nu}+w^{\nu}=\Lambda_{2}u^{p-1}w^{\nu}\) in \(\mathbb{R}^{N}\). By Lemma 2.2, we have \(v:=-\nabla u\cdot\nu\in C^{\infty}(\mathbb{R}^{N})\cap H^{s}(\mathbb{R}^{N})\), and \(v\) is a pointwise solution of the equation \((-\Delta)^{s}v+v=pu^{p-1}v\) in \(\mathbb{R}^{N}\) which is odd with respect to the reflection at \(\nu\) and \(v\gtrapprox 0\) on \(\{x\cdot\nu>0\}\). Applying Lemma 5.3 from the appendix, we have \(v>0\) in \(\{x\cdot\nu>0\}\) and \(\nabla v\cdot\nu>0\) on \(\{x\cdot\nu=0\}\). Therefore, \[x\mapsto\frac{v(x)}{x\cdot\nu}\qquad\text{extends to a positive $C^{\infty}$-function on $\mathbb{R}^{N}$.} \tag{2.17}\] We let \[\chi\in C^{\infty}_{c}(-2,2),\quad\text{ with }0\leq\chi\leq 1\text{ on } \mathbb{R}\text{ and }\chi\equiv 1\text{ on }(-1,1). \tag{2.18}\] For \(R\in\mathbb{N}\), we define \(w_{R}^{\nu}(x)=w^{\nu}(x)\chi(|x|/R)\) for all \(x\in\mathbb{R}^{N}\), and we note that \(w_{R}/v\in C_{c}(\mathbb{R}^{N})\) thanks to (2.17). Applying Lemma 2.1, we get \[[w_{R}^{\nu}]_{s}^{2}+\int_{\mathbb{R}^{N}}(w_{R}^{\nu})^{2}dx-p\int_{\mathbb{ R}^{N}}u^{p-1}(w_{R}^{\nu})^{2}dx=\int_{\{x\cdot\nu>0\}\cap\{y\cdot\nu>0\}}H_{w_{R} ^{\nu},v}^{\nu}(x,y)dxdy. \tag{2.19}\] It is not difficult to see that \(\|w_{R}^{\nu}\|_{H^{s}(\mathbb{R}^{N})}\to\|w^{\nu}\|_{H^{s}(\mathbb{R}^{N})}\) as \(R\to\infty\). 
On the other hand by the dominated convergence theorem, as \(R\to\infty\), \[\int_{\mathbb{R}^{N}}(w_{R}^{\nu})^{2}dx\to\int_{\mathbb{R}^{N}}(w^{\nu})^{2} dx\qquad\text{and}\qquad\int_{\mathbb{R}^{N}}u^{p-1}(w_{R}^{\nu})^{2}dx\to\int_{ \mathbb{R}^{N}}u^{p-1}(w^{\nu})^{2}dx.\] We thus apply Fatou's lemma, recalling (2.3), and obtain that \[(\Lambda_{2}-p)\int_{\mathbb{R}^{N}}u^{p-1}(w^{\nu})^{2}dx\geq\int_{\{x\cdot \nu>0\}\cap\{y\cdot\nu>0\}}H_{w^{\nu},v}^{\nu}(x,y)dxdy\geq 0. \tag{2.20}\] Since \(\Lambda_{2}\leq p\) we obtain \(H_{w^{\nu},v}^{\nu}\equiv 0\) on \(\{x\cdot\nu>0\}\cap\{y\cdot\nu>0\}\). In view of (2.3), this implies that there exists \(\kappa_{\nu}\in\mathbb{R}\) such that \[\frac{w(x)-w\circ\sigma_{\nu}(x)}{2}=\kappa_{\nu}\nabla u(x)\cdot\nu=\kappa_{ \nu}\partial_{r}u(x)\frac{\nu\cdot x}{|x|}\qquad\text{ for all }x\in\mathbb{R}^{N}.\] We recall that this holds for an arbitrary unit vector \(\nu\in S^{N-1}\). Moreover, a posteriori we find that \[\kappa_{\nu}=\|\nabla u\cdot\nu\|_{L^{2}(\mathbb{R}^{N})}^{-2}\int_{\mathbb{ R}^{N}}w\nabla u\cdot\nu\,dx\qquad\text{for }\nu\in S^{N-1},\] so the function \(\nu\mapsto h(\nu):=\kappa_{\nu}\) is smooth on \(S^{N-1}\). Moreover, the function \(\partial_{r}u\) is radial and nonpositive, so we may write \(\partial_{r}u(x)=-U(|x|)\) with a function \(U:(0,\infty)\to[0,\infty)\). We may then apply Corollary 5.2 in the appendix to obtain the representation \[w(x)=w_{*}(x)+h_{*}\partial_{r}u(x)\frac{\nu_{*}\cdot x}{|x|}=w_{*}(x)+h_{*} \nabla u(x)\cdot\nu_{*}\qquad\text{for }x\in\mathbb{R}^{N} \tag{2.21}\] with a fixed vector \(\nu_{*}\in S^{N-1}\), a constant \(h_{*}\in\mathbb{R}\) and a radial function \(w_{*}:\mathbb{R}^{N}\to\mathbb{R}\). Moreover, \(w\) is nonradial by **Claim 1**, which implies that there exists a unit vector \(\overline{\nu}\) such that \(\frac{w-w\circ\sigma_{\overline{\nu}}}{2}\not\equiv 0\). Using this in (2.20), we find that \(\Lambda_{2}=p\). But then both \(w\) and the function \(x\mapsto\nabla u(x)\cdot\nu_{*}\) are eigenfunctions of (2.6) corresponding to \(\Lambda=\Lambda_{2}\), and thus, by (2.21), the function \(w_{*}\) also solves (2.6) with \(\Lambda=\Lambda_{2}\). Invoking **Claim 1** again yields \(w_{*}\equiv 0\), and therefore the representation (2.16) follows from (2.21). Thus **Claim 2** is proved. To conclude, we note that, by compactness of the embedding \(H^{s}(\mathbb{R}^{N})\hookrightarrow L^{2}(\mathbb{R}^{N},u^{p-1}dx)\), the infimum in (2.11) is attained. Moreover, assuming by contradiction that we have equality in (2.11), we see that every minimizer \(w\) is an eigenfunction of (2.6) corresponding to the eigenvalue \(\Lambda=\Lambda_{2}=p\), and thus it is of the form (2.16) by **Claim 2**. This contradicts the fact that \(w\in\mathcal{M}_{u}\), and thus the strict inequality in (2.11) is true. The proof of Theorem 1.1 is thus finished. **Remark 2.4**.: _We note that the above argument to prove Theorem 1.1 is flexible enough to be applied to positive solutions \(u\in H^{s}(\mathbb{R}^{N})\) to the corresponding relativistic problem \((-\Delta+m^{2})^{s}u=u^{p}\) in \(\mathbb{R}^{N}\) with \(m>0\). This follows from the fact that the variational principle in Lemma 2.3 and the inequality (2.13) still hold if we replace \(|x-y|^{-N-2s}\) with \(K(|x-y|)\) for some decreasing function \(K\). 
More precisely, \(K\) should be chosen such that \(u\mapsto\int_{\mathbb{R}^{N}}\int_{\mathbb{R}^{N}}(u(x)-u(y))^{2}K(|x-y|)dxdy+\|u\|_{L^{2}(\mathbb{R}^{N})}^{2}\) is equivalent to \(\left\langle u,u\right\rangle_{s}\) on \(H^{s}(\mathbb{R}^{N})\). Moreover, the oscillation estimates given in [15, Proposition 5.3] for second eigenfunctions apply in the context of the relativistic problem thanks to the extension theorem in [13, Theorem 7.1]. We also recall from [13] that, for all \(u\in H^{s}(\mathbb{R}^{N})\),_ \[\int_{\mathbb{R}^{N}}u(x)[(-\Delta+m^{2})^{s}-m^{2s}]u(x)dx=c^{\prime}_{N,s}\int_{\mathbb{R}^{N}}\!\int_{\mathbb{R}^{N}}\!(u(x)-u(y))^{2}\mathcal{K}^{m}_{\frac{N+2s}{2}}(m|x-y|)dxdy,\] _where \(c^{\prime}_{N,s}:=\frac{s2^{\frac{2s+2-N}{2}}}{\pi^{\frac{N}{2}}\,\Gamma(1-s)}\) and \(\mathcal{K}^{m}_{\nu}:(0,\infty)\to(0,\infty)\) is given by_ \[\mathcal{K}^{m}_{\nu}(r)=m^{2\nu}r^{-\nu}K_{\nu}(r),\] _and \(K_{\nu}\) is the modified Bessel function of the second kind. From the identity \(K^{\prime}_{\nu}(r)=-\frac{\nu}{r}K_{\nu}-K_{\nu-1}\) and the fact that \(K_{-\nu}=K_{\nu}\) for \(\nu>0\), we see that \(\mathcal{K}^{m}_{\nu}\) is strictly decreasing on \((0,\infty)\)._ ### Nondegeneracy in the ball case We define \[\mathcal{H}^{s}(B):=\{u\in H^{s}(\mathbb{R}^{N})\,:\,u=0\text{ on }\mathbb{R}^{N}\setminus B\}\] which is a Hilbert space endowed with the norm \(u\mapsto[u]_{s}\), recalling (1.2). We recall the energy functional associated to (1.3) given by \[E_{p,B}\in C^{2}(\mathcal{H}^{s}(B),\mathbb{R}),\qquad E_{p,B}(u):=\frac{1}{2}[u]_{s}^{2}+\frac{\lambda}{2}\int_{B}u^{2}dx-\frac{1}{p+1}\int_{B}|u|^{p+1}dx. \tag{2.22}\] We recall that (weak) solutions of (1.3) are precisely the positive functions in \(\mathcal{H}^{s}(B)\) which are critical points of \(E_{p,B}\). For \(u\in\mathcal{H}^{s}(B)\) a solution to (1.3), we consider the weighted eigenvalue problem \[(-\Delta)^{s}w+\lambda w=\Lambda u^{p-1}w\qquad\text{ in }B,\qquad\text{ where }\Lambda\in\mathbb{R}\text{ and }w\in\mathcal{H}^{s}(B). \tag{2.23}\] The compact embedding \(\mathcal{H}^{s}(B)\hookrightarrow L^{2}(B;u^{p-1}dx)\) implies the existence of a sequence of discrete eigenvalues \(0<\Lambda_{1}<\Lambda_{2}\leq\ldots.\) Since \(u\) is positive and satisfies (2.23) with \(\Lambda=1\), we see that \(\Lambda_{1}=1\) is the first eigenvalue. We observe that the linear operator associated to the quadratic form \(E^{\prime\prime}_{p,B}(u)\) is given by \((-\Delta)^{s}+\lambda-pu^{p-1}\). Moreover, similarly as in the case of \(\mathbb{R}^{N}\), we have the variational characterization \[\Lambda_{2}:=\inf\Bigl\{[w]_{s}^{2}+\lambda\int_{B}w^{2}dx\,:\,w\in\mathcal{H}^{s}(B),\,\int_{B}w^{2}u^{p-1}dx=1,\,\int_{B}wu^{p}dx=0\Bigr\}. \tag{2.24}\] Therefore, Theorem 1.3 will follow once we have established that \(\Lambda_{2}>p\). We start by collecting some properties of solutions to (1.3). **Lemma 2.5**.: _Let \(u\in\mathcal{H}^{s}(B)\) satisfy (1.3). Then the following statements hold._ * \(u\in C^{\infty}(B)\cap C^{s}(\mathbb{R}^{N})\) _and_ \(u\) _is radially symmetric and strictly decreasing._ * \(\min_{x\in\overline{B}}\frac{u(x)}{(1-|x|)^{s}}>0\)_._ * \(\lim_{|x|\nearrow 1}(1-|x|)^{1-s}\nabla u(x)\cdot x<0\)_._ Proof.: From [22, Proposition 3.1] we have that \(u\in L^{\infty}(B)\). Hence by a classical bootstrap argument combined with the interior and boundary regularity results in [23], we find that \(u\in C^{s}(\mathbb{R}^{N})\cap C^{\infty}(B)\). 
Thanks to [17, Corollary 1.2] we deduce that \(u\) is radially symmetric and strictly decreasing in \(B\). This proves \((i)\). Now \((ii)\) follows from the fractional Hopf lemma, see e.g. [11, Proposition 3.3]. Finally by [10] we have \(\lim_{|x|\nearrow 1}(1-|x|)^{1-s}\nabla u(x)\cdot x=-s\lim_{|x|\nearrow 1}\frac{u(x)} {(1-|x|)^{s}}\) and \((iii)\) follows. Proof of Theorem 1.3.: Let \(v_{2}\) be an eigenfunction of (2.23) corresponding to the second eigenvalue \(\Lambda=\Lambda_{2}\). It satisfies \[(-\Delta)^{s}v_{2}+\lambda v_{2}=\Lambda_{2}u^{p-1}v_{2}\qquad\text{ in }B\qquad\text{ and }\int_{B}v_{2}u^{p}dx=0.\] From this, we deduce that \[E_{p,B}^{\prime\prime}(u)[v_{2},v_{2}]=[v_{2}]_{s}^{2}+\lambda\int_{B}v_{2}^{2 }dx-p\int_{B}u^{p-1}v_{2}^{2}dx=(\Lambda_{2}-p)\int_{B}u^{p-1}v_{2}^{2}dx. \tag{2.25}\] Assume by contradiction that \(\Lambda_{2}\leq p\). Then (2.25) implies that the second eigenvalue of \((-\Delta)^{s}+\lambda-pu^{p-1}\) in \(\mathcal{H}^{s}(B)\) satisfies \[\mu:=\inf_{\begin{subarray}{c}w\in\mathcal{H}^{s}(B)\\ \int_{B}wu^{p}dx=0\end{subarray}}\frac{E_{p,B}^{\prime\prime}(u)[w,w]}{\int_{B }w^{2}dx}\leq 0. \tag{2.26}\] By [6, Proposition 3.9]2, there exists \(w\in\mathcal{H}^{s}(B)\) with \(w\not\equiv 0\) satisfying Footnote 2: It was proved in [6] for \(N\geq 2\) but by carefully looking at the proof, we see that it works also for \(N=1\), see also the proof of Theorem 1.1. \[(-\Delta)^{s}w+\lambda w=pu^{p-1}w+\mu w\qquad\text{ in }B\qquad\text{ and }\qquad x_{1}\mapsto w(x_{1},\cdot)\text{ is odd}. \tag{2.27}\] By Lemma 2.5, we have \(v:=-\partial_{x_{1}}u\in C^{\infty}(B)\cap L^{1}(\mathbb{R}^{N})\), and \(v\) is a pointwise solution of the equation \((-\Delta)^{s}v+\lambda v=pu^{p-1}v\) in \(B\) which is odd with respect to the reflection at \(\{x_{1}>0\}\) and satisfies \[v\geq 0\quad\text{in }\{x_{1}>0\}\qquad\text{and}\qquad v\not\equiv 0\quad \text{in }\{x_{1}>0\}\cap B.\] Indeed, these inequalities follow since \(v(x)=-x_{1}\frac{\partial_{r}u(x)}{|x|}\) and \(\partial_{r}u\leq 0\), \(\partial_{r}u\not\equiv 0\) in \(B\setminus\{0\}\). Now, by Lemma 5.3 from the appendix, we have \(v>0\) in \(\{x_{1}>0\}\cap B\) and \(\partial_{x_{1}}v>0\) on \(\{x_{1}=0\}\cap B\). Consequently, \[x\mapsto\frac{v(x)}{x_{1}}\qquad\text{extends to a positive }C^{\infty}\text{-function on }B. \tag{2.28}\] Next we note that, by fractional elliptic regularity, \(w\in C^{\infty}(B)\cap L^{\infty}(B)\). For \(k\in\mathbb{N}\), we define the functions \[\zeta_{k}(x)=1-\chi(k(1-|x|)), \tag{2.29}\] where \(\chi\) is given by (2.18), and we put \(w_{k}:=\zeta_{k}w\in C^{1}_{c}(B)\). Therefore \(w_{k}/v\in C_{c}(B)\) by (2.28). We can thus apply Lemma 2.1 to obtain \[[w_{k}]^{2}+\lambda\int_{B}w_{k}^{2}dx-p\int_{B}u^{p-1}w_{k}^{2}dx=\int_{\{x_{ 1}>0\}\cap\{y_{1}>0\}}H_{w_{k},v}^{e_{1}}(x,y)dxdy. \tag{2.30}\] By [9, Lemma 2.2], we have \[\int_{\mathbb{R}^{N}\times\mathbb{R}^{N}}\frac{(w_{k}(x)-w_{k}(y))^{2}}{|x-y |^{N+2s}}dydx\to\int_{\mathbb{R}^{N}\times\mathbb{R}^{N}}\frac{(w(x)-w(y))^{2} }{|x-y|^{N+2s}}dydx\quad\text{as }k\to\infty.\] In addition by the dominated convergence theorem \(\int_{B}u^{p-1}w_{k}^{2}dx\to\int_{B}u^{p-1}w^{2}dx\) and \(\int_{B}w_{k}^{2}dx\to\int_{B}w^{2}dx\) as \(k\to\infty\). 
We can thus apply Fatou's lemma in (2.30) to deduce that \[[w]_{s}^{2}+\lambda\int_{B}w^{2}dx-p\int_{B}u^{p-1}w^{2}dx\geq\int_{\{x_{1}>0 \}\cap\{y_{1}>0\}}H_{w,v}^{e_{1}}(x,y)dxdy,\] where, recalling (2.3), \[H^{e_{1}}_{w,v}(x,y):=c_{N,s}v(x)v(y)\left[w(x)/v(x)-w(y)/v(y)\right]^ {2}\] \[\times\left(|x-y|^{-N-2s}-\left((x_{1}+y_{1})^{2}+|\widetilde{x}- \widetilde{y}|^{2}\right)^{-(N+2s)/2}\right)\geq 0\quad\text{for all }x,y\in\{x_{1}>0\} \cap\{y_{1}>0\}\.\] From this, (2.27) and (2.26), we see that \[0\geq\mu\int_{B}w^{2}dx=[w]_{s}^{2}+\lambda\int_{B}w^{2}dx-p\int_{B}u^{p-1}w^{2 }dx\geq\int_{\{x_{1}>0\}\cap\{y_{1}>0\}}H^{e_{1}}_{w,v}(x,y)dxdy\geq 0.\] This implies that \(H^{e_{1}}_{w,v}\equiv 0\) on \(\{x_{1}>0\}\cap\{y_{1}>0\}\) and thus \(w\not\equiv 0\) is proportional to \(v=\partial_{x_{1}}u\) on \(\{x_{1}>0\}\). This is impossible because \(\partial_{x_{1}}u\) is unbounded as \(|x|\to 1\) on \(\{x_{1}>0\}\) by Lemma 2.5\((iii)\). We thus conclude that \(\Lambda_{2}>p\). ## 3. Uniqueness in the case of the ball In this section, we shall prove that for all \(p\in(1,2_{*}^{s}-1)\) and \(\lambda>-\lambda_{1}(B)\), the solution \(u_{p}\in\mathcal{H}^{s}(B)\) of the problem \[(\mathcal{P}_{p})\qquad\qquad(-\Delta)^{s}u_{p}+\lambda u_{p}=u_{p}^{p}\quad \text{ in }B,\qquad u_{p}>0\quad\text{ in }B\] is unique. We start with the following uniform estimates. **Lemma 3.1**.: _Let \(1<p_{0}<2_{s}^{*}-1\). Then there exists \(\delta>0\) and \(C>0\) such that for all \(p\in(p_{0}-\delta,p_{0}+\delta)\) and any \(u\in\mathcal{H}^{s}(B)\cap C^{s}(\mathbb{R}^{N})\) solving \((\mathcal{P}_{p})\) we have_ * \(\|u\|_{L^{\infty}(B)}\leq C\)_;_ * \([u]_{s}+\|u\|_{C^{s}(\mathbb{R}^{N})}\leq C\)_._ _Moreover, let \((p_{n})_{n\in\mathbb{N}}\) be a sequence converging to some \(\overline{p}\in(1,2_{s}^{*}-1)\). For \(n\in\mathbb{N}\), we let \(u_{n}\in\mathcal{H}^{s}(B)\) be a solution to \((\mathcal{P}_{p_{n}})\). Then, \((u_{n})_{n\in\mathbb{N}}\) possesses a subsequence that weakly converges in \(\mathcal{H}^{s}(B)\) to a solution of \((\mathcal{P}_{\overline{p}})\)._ Proof.: Arguing by contradiction, we first suppose that there exist a sequence \(p_{n}\to p_{0}\) and a sequence of positive solutions \(u_{n}\in\mathcal{H}^{s}(B)\cap C^{s}(\mathbb{R}^{N})\) of \((\mathcal{P}_{p_{n}})\) such that \(b_{n}:=\|u_{n}\|_{L^{\infty}(B)}\to\infty\) as \(n\to\infty\). We define \[v_{n}(x):=\frac{1}{b_{n}}u_{n}(x/b_{n}^{\frac{p_{n}-1}{2s}}).\] By Lemma 2.5\((i)\), \(\|v_{n}\|_{L^{\infty}(\mathbb{R}^{N})}=v_{n}(0)=1\). By direct computations, \[(-\Delta)^{s}v_{n}+\lambda b_{n}^{1-p_{n}}v_{n}=v_{n}^{p_{n}}\qquad\text{in }B_{r_{n}}(0)\quad\text{ with }\quad r_{n}:=b_{n}^{\frac{p_{n}-1}{2s}}.\] By fractional elliptic regularity theory (see [23, Theorem 1.1]), there exists \(\alpha>0\) such that the functions \(v_{n}\) are uniformly bounded in \(C^{2s+\alpha}(K)\) for any compact set \(K\subset\mathbb{R}^{N}\). After passing to a subsequence, we may thus assume that \(v_{n}\to\overline{v}\) in \(C^{2s+\beta}_{loc}(\mathbb{R}^{N})\) for \(0<\beta<\alpha\), where \(\overline{v}\) satisfies \((-\Delta)^{s}\overline{v}=\overline{v}^{p_{0}}\) in \(\mathbb{R}^{N}\) and \(\overline{v}(0)=1\). Hence by [18, Remark 1.9] we have that \(v\equiv 0\) which is not possible. 
We thus conclude that there exists \(\delta>0\) and \(C>0\) such that \[\|u\|_{L^{\infty}(B)}\leq C\quad\text{for all }p\in(p_{0}-\delta,p_{0}+\delta) \text{ and all }u\in\mathcal{H}^{s}(B)\cap C^{s}(\mathbb{R}^{N})\text{ solving }(\mathcal{P}_{p}).\] This proves (i). Now (ii) follows from boundary regularity estimate in [23, Theorem 1.2] and the uniform \(L^{\infty}\) bound in (i), again after making \(C\) larger if necessary. Next, we note that, since \(\lambda>-\lambda_{1}(B)\), by the Poincare inequality for all \(u\in\mathcal{H}^{s}(B)\) solving \((\mathcal{P}_{p})\), we have \[\min\Bigl{\{}1,\Bigl{(}1+\frac{\lambda}{\lambda_{1}(B)}\Bigr{)}\Bigr{\}}[u]_{s }^{2}\leq[u]_{s}^{2}+\lambda\int_{B}u^{2}dx\leq\int_{B}u^{p+1}dx.\] From this, the Sobolev and Holder inequalities, there exists \(\overline{C}=\overline{C}(N,s,\lambda)>0\) such that \[\overline{C}\left(\int_{B}u^{2^{*}_{s}}dx\right)^{\frac{2}{2^{*}_{s}}}\leq[u]_{s }^{2}+\lambda\int_{B}u^{2}dx\leq\int_{B}u^{p+1}dx\leq\left|B\right|^{1-\frac{p+ 1}{2^{*}_{s}}}\left(\int_{B}u^{2^{*}_{s}}dx\right)^{\frac{p+1}{2^{*}_{s}}}.\] We thus conclude that for every \(u\in\mathcal{H}^{s}(B)\) solving \((\mathcal{P}_{p})\), we have \[\left(\int_{B}u^{2^{*}_{s}}dx\right)^{\frac{p-1}{2^{*}_{s}}}\geq\overline{C} \left|B\right|^{-1+\frac{p+1}{2^{*}_{s}}}. \tag{3.1}\] Let \((p_{n})_{n\in\mathbb{N}}\) be a sequence converging to \(\overline{p}\in(1,2^{*}_{s}-1)\) and let \(u_{n}\in\mathcal{H}^{s}(B)\) be a solution of \((\mathcal{P}_{p_{n}})\) for all \(n\in\mathbb{N}\). By (ii) we have that, up to a subsequence, the sequence \((u_{n})_{n\in\mathbb{N}}\) converges weakly in \(\mathcal{H}^{s}(B)\) and strongly in \(C(\overline{B})\) to some \(v\geq 0\). Moreover \(v\in\mathcal{H}^{s}(B)\cap C(\overline{B})\) weakly solves \((-\Delta)^{s}v+\lambda v=v^{\overline{p}}\) in \(B\). Now (3.1) implies that, for all \(n\in\mathbb{N}\) \[\left(\int_{B}u_{n}^{2^{*}_{s}}dx\right)^{\frac{p_{n}-1}{2^{*}_{s}}}\geq \overline{C}\left|B\right|^{-1+\frac{p_{n}+1}{2^{*}_{s}}}.\] Hence letting \(n\to\infty\), we see that \(\left\|v\right\|_{L^{2^{*}_{s}}(B)}>0\), so that \(v\gtrapprox 0\) in \(B\). We then write \((-\Delta)^{s}v+V_{v}\,v=0\) with the (frozen) potentials \(V_{v}=\lambda-v^{\overline{p}-1}\in L^{\infty}(B)\) and thus by the strong maximum principle, see [11, Proposition 3.3 and Remark 3.5], we have \(v>0\) in \(B\) and the lemma follows. Proof of Theorem 1.4.: From3[8] there exists \(p_{0}>1\) such that for all \(p\in(1,p_{0})\) there exists a unique positive function \(u_{p}\in\mathcal{H}^{s}(B)\) satisfying \((-\Delta)^{s}u_{p}+\lambda u_{p}=u_{p}^{p}\) in \(B\). We let \(p_{*}>1\) be the largest number such that \((1,p_{*})\) has this uniqueness property. Footnote 3: This fact was proven in [8] for \(N\geq 2\) however the same proof works also for \(N=1\) **Claim.** We have that \(p_{*}=2^{*}_{s}-1\). Suppose by contradiction that \(p_{*}<2^{*}_{s}-1\). We fix \(\overline{p}\in(1,p_{*})\) and \(\beta\in(0,\min\{(\overline{p}-1)s,s\})\). We introduce the Banach space4 Footnote 4: by \((-\Delta)^{s}u\in C^{\beta}(\overline{B})\), we mean there exists \(f\in C^{\beta}(\overline{B})\) such that \((-\Delta)^{s}u=f\) in \(\mathcal{D}^{\prime}(B)\). We note in this case that \((-\Delta)^{s}u(x)=f(x)\) for all \(x\in B\) because \(u\in C^{2s+\beta}_{loc}(B)\cap C^{s}(\overline{B})\) by regularity theory. 
\[\mathcal{C}_{0}^{2s+\beta}:=\left\{u\in C^{\beta}(\mathbb{R}^{N}),\qquad u=0\text{ in }\mathbb{R}^{N}\setminus B,\ \ (-\Delta)^{s}u\in C^{\beta}(\overline{B})\right\},\] endowed with the norm \(\left\|u\right\|_{C^{\beta}(\mathbb{R}^{N})}+\left\|(-\Delta)^{s}u\right\|_{C^{\beta}(\overline{B})}\). Note that, by Lemma 2.5, all solutions to (1.3) belong to \(\mathcal{C}_{0}^{2s+\beta}\). We finally define \[F:(1,\infty)\times\mathcal{C}_{0}^{2s+\beta}\to C^{\beta}(\overline{B}),\qquad F(p,u)=(-\Delta)^{s}u+\lambda u-|u|^{p}.\] It is easy to see that \(F\) is of class \(C^{1}\) on \((1,\infty)\times\mathcal{C}_{0}^{2s+\beta}\). We have that \(F(p_{*},u_{p_{*}})=0\) and \(\partial_{u}F(p_{*},u_{p_{*}})=(-\Delta)^{s}+\lambda-p_{*}u_{p_{*}}^{p_{*}-1}\), which has empty kernel by Theorem 1.3. It is easily seen that \((-\Delta)^{s}+\lambda:\mathcal{C}_{0}^{2s+\beta}\to C^{\beta}(\overline{B})\) is a Fredholm map of index zero. In addition, since \(u_{p_{*}}\in C^{s}(\overline{B})\) we have that \(u_{p_{*}}^{p_{*}-1}\in C^{\beta}(\overline{B})\) by our choice of \(\beta\), and also, by the Arzelà-Ascoli theorem, the map \(v\mapsto u_{p_{*}}^{p_{*}-1}v:\mathcal{C}_{0}^{2s+\beta}\to C^{\beta}(\overline{B})\) is compact5. As a consequence, \(\partial_{u}F(p_{*},u_{p_{*}}):\mathcal{C}_{0}^{2s+\beta}\to C^{\beta}(\overline{B})\) is an isomorphism. It then follows from the implicit function theorem that there exists \(\delta>0\) such that for all \(p\in(p_{*}-\delta,p_{*}+\delta)\), there exists a unique \(u_{p}\in B_{\mathcal{C}_{0}^{2s+\beta}}(u_{p_{*}},\delta)\) satisfying \(F(p,u_{p})=0\). Suppose that there exists another solution \(\widetilde{u}_{p_{*}}\) of \((\mathcal{P}_{p_{*}})\); then \(F(p_{*},\widetilde{u}_{p_{*}})=0\). Similarly, by the nondegeneracy from Theorem 1.3, decreasing \(\delta\) if necessary, we have that for all \(p\in(p_{*}-\delta,p_{*}+\delta)\), there exists a unique \(\widetilde{u}_{p}\in B_{\mathcal{C}^{2s+\beta}_{0}}(\widetilde{u}_{p_{*}},\delta)\) satisfying \(F(p,\widetilde{u}_{p})=0\). Note that, taking \(\delta\) smaller if necessary, \(u_{p}(0)>0\) and \(\widetilde{u}_{p}(0)>0\) for all \(p\in(p_{*}-\delta,p_{*}+\delta)\), thanks to the continuity of the curves \(p\mapsto u_{p}\) and \(p\mapsto\widetilde{u}_{p}\) as maps \((p_{*}-\delta,p_{*}+\delta)\to\mathcal{C}^{2s+\beta}_{0}\), which follows from the implicit function theorem. Since \(\lambda>-\lambda_{1}(B)\), the maximum principle implies that \(u_{p}>0\) in \(B\) and \(\widetilde{u}_{p}>0\) in \(B\), so that they satisfy \((\mathcal{P}_{p})\) for all \(p\in(p_{*}-\delta,p_{*}+\delta)\). From the definition of \(p_{*}\), we have that \(u_{p}=\widetilde{u}_{p}\) for all \(p\in(p_{*}-\delta,p_{*})\). We can let \(p\nearrow p_{*}\) and we obtain \(u_{p_{*}}=\widetilde{u}_{p_{*}}\). This implies that \(u_{p_{*}}\) is the unique solution of \((\mathcal{P}_{p_{*}})\). To obtain a contradiction with the assumption on \(p_{*}\), we prove that there exists \(\varepsilon_{*}\in(0,\delta)\) such that for all \(p\in(p_{*},p_{*}+\varepsilon_{*})\), \((\mathcal{P}_{p})\) possesses a unique solution. If such an \(\varepsilon_{*}\) does not exist, then we can find a sequence \((p_{j})_{j\in\mathbb{N}}\) with \(p_{j}\searrow p_{*}\) such that for all \(j\in\mathbb{N}\), there exist two solutions \(u_{p_{j}}\) and \(\widetilde{u}_{p_{j}}\) of \((\mathcal{P}_{p_{j}})\) with the property that \(u_{p_{j}}\neq\widetilde{u}_{p_{j}}\). 
By Lemma 3.1, both \(u_{p_{j}}\) and \(\widetilde{u}_{p_{j}}\) converge to \(u_{p_{*}}\) in \(\mathcal{H}^{s}(B)\cap C(\mathbb{R}^{N})\) by the uniqueness of \(u_{p_{*}}\). We next note that \(\theta_{j}:=\frac{u_{p_{j}}-\widetilde{u}_{p_{j}}}{\|u_{p_{j}}-\widetilde{u}_{p_{j}}\|_{L^{2}(B)}}\) satisfies \((-\Delta)^{s}\theta_{j}+\lambda\theta_{j}=g_{j}\theta_{j}\) in \(B\), with \[g_{j}=p_{j}\int_{0}^{1}(tu_{p_{j}}+(1-t)\widetilde{u}_{p_{j}})^{p_{j}-1}dt,\] and \(\|\theta_{j}\|_{L^{2}(B)}=1\). By Lemma 3.1, \([\theta_{j}]_{s}^{2}\leq|\lambda|+\|g_{j}\|_{L^{\infty}(B)}\) is uniformly bounded for all \(j\). Hence, up to a subsequence, we see that \(\theta_{j}\) weakly converges in \(\mathcal{H}^{s}(B)\) and strongly in \(L^{2}(B)\) to some \(\overline{\theta}\in\mathcal{H}^{s}(B)\) satisfying \(\|\overline{\theta}\|_{L^{2}(B)}=1\) and \[(-\Delta)^{s}\overline{\theta}+\lambda\overline{\theta}-p_{*}u_{p_{*}}^{p_{*}-1}\overline{\theta}=0\quad\text{ in }B.\] This is impossible by Theorem 1.3 and thus \(\varepsilon_{*}\) exists, contradicting the definition of \(p_{*}\). We thus get \(p_{*}=2_{s}^{*}-1\) as claimed. ## 4. Uniqueness in the case of \(\mathbb{R}^{N}\) Proof of Theorem 1.2.: Here, one can follow the argument of [15, Section 8], replacing "ground state" with "solution". Note that Theorem 1.1 implies that the hypotheses of [15, Proposition 8.1] are verified. One can therefore repeat the proof of [15, Proposition 8.4], replacing the ground state solution with an arbitrary radial solution to (1.1), to conclude uniqueness of radial solutions to (1.1) in \(H^{s}(\mathbb{R}^{N})\). ## 5. Appendix In the following, we consider, as before, for \(\nu\in S^{N-1}\), the reflection \[\sigma_{\nu}:\mathbb{R}^{N}\to\mathbb{R}^{N},\qquad\sigma_{\nu}(x)=x-2(x\cdot\nu)\nu\] with respect to the hyperplane \(T_{\nu}:=\{x\in\mathbb{R}^{N}\,:\,x\cdot\nu=0\}\). **Lemma 5.1**.: _Let \(w,h\in C^{2}(S^{N-1})\) be functions with the property that_ \[\frac{w(z)-w(\sigma_{\nu}(z))}{2}=h(\nu)\,z\cdot\nu\qquad\text{for every }\nu,z\in S^{N-1}. \tag{5.1}\] _Let, moreover, \(\nu_{max}\in S^{N-1}\) be a point with \(h_{max}:=h(\nu_{max})=\max\limits_{S^{N-1}}h\). Then we have_ \[w(z)=[w(\nu_{max})-h_{max}]+h_{max}\,z\cdot\nu_{max}\qquad\text{for }z\in S^{N-1}. \tag{5.2}\] _In particular, \(w\) is a sum of a constant and an odd function with respect to the reflection \(\sigma_{\nu_{max}}\)._ Proof.: Since the problem is rotationally invariant, we may assume, without loss of generality, that \(h\) takes its maximum at \(\nu_{max}=e_{N}:=(0,\ldots,0,1)\). We first consider the case \(N=2\). We then write \(x_{\theta}=(\cos\theta,\sin\theta)\in S^{1}\) for \(\theta\in\mathbb{R}\) and regard all functions as \(2\pi\)-periodic functions of the angle \(\theta\). For a fixed angle \(\theta\), we consider \(\nu_{\theta}:=(-\sin\theta,\cos\theta)\). 
Then the reflection at the hyperplane \(T=T_{\nu_{\theta}}:=\{x\in S^{1}\,:\,x\cdot\nu_{\theta}=0\}\) corresponds in the angle coordinate to the map \[\vartheta\mapsto 2\theta-\vartheta\] Setting \[\tilde{w}(\theta)=w(x_{\theta})=w(\cos\theta,\sin\theta)\qquad\text{and} \qquad\tilde{h}(\theta)=h(\nu_{\theta})=h(-\sin\theta,\cos\theta),\] we may then reformulate assumption (5.1) as \[\frac{w(\theta+s)-w(\theta-s)}{2}=\tilde{h}(\theta)\,x_{\theta} \cdot\nu_{\theta}=\tilde{h}(\theta)\,(\cos(\theta+s),\sin(\theta+s))\cdot(- \sin\theta,\cos\theta)\] \[=\tilde{h}(\theta)\Big{(}\sin(\theta+s)\cos\theta-\cos(\theta+s) \sin\theta\Big{)}=\tilde{h}(\theta)\sin s\quad\text{for }\theta,s\in\mathbb{R}.\] Differentiating in \(\theta\) at \(s=0\) gives \[\partial_{\theta}w(\theta)=\tilde{h}(\theta)\qquad\text{for all }\theta\in \mathbb{R}. \tag{5.3}\] Consequently, we may reformulate the assumption again as \[\frac{1}{2}\int_{\theta-s}^{\theta+s}\tilde{h}(\tau)d\tau=\tilde{h}(\theta) \sin s\quad\text{for }\theta,s\in\mathbb{R}.\] Differentiating this identity three times in \(s\) gives \[\frac{\tilde{h}^{\prime\prime}(\theta+s)+\tilde{h}^{\prime\prime}(\theta-s)}{2 }=-\tilde{h}(\theta)\cos s\quad\text{for }\theta,s\in\mathbb{R}.\] Evaluating at \(s=0\) gives \(\tilde{h}^{\prime\prime}(\theta)=-\tilde{h}(\theta)\), from which we deduce that \[\tilde{h}(\theta)=h_{max}\cos\theta\qquad\text{for }\theta\in\mathbb{R}. \tag{5.4}\] Here we used the fact that \(\tilde{h}\) takes its maximum at zero, since \(h\) takes its maximum at \(\nu_{max}=(0,1)\) by assumption. Combining (5.3) and (5.4) gives \[w(x_{\theta}) =\tilde{w}(\theta)=\tilde{w}(\frac{\pi}{2})+h_{max}\int_{\frac{ \pi}{2}}^{\theta}\cos\vartheta d\vartheta=\tilde{w}(\frac{\pi}{2})+h_{max}( \sin\theta-1)\] \[=w(\nu_{max})+h_{max}(x_{\theta}\cdot\nu_{max}-1)\qquad\text{ for } \theta\in\mathbb{R},\] which gives (5.2) in the case \(N=2\). In the general case \(N\geq 2\), we may repeat the above argument in the \(2\)-dimensional subspace \(\operatorname{span}\{e_{*},e_{N}\}\), where \(e_{*}\in S^{N-2}\times\{0\}\) is chosen arbitrarily. This then yields (5.2) for arbitrary \(z\in S^{N-1}\). **Corollary 5.2**.: _Let \(w\in C^{2}(\mathbb{R}^{N})\), \(h\in C^{2}(S^{N-1})\) and \(U:(0,\infty)\to[0,\infty)\) be functions with the property that_ \[\frac{w(x)-w(\sigma_{\nu}(x))}{2}=h(\nu)U(|x|)\,\frac{x}{|x|}\cdot\nu\qquad \text{for every }\nu\in S^{N-1},x\in\mathbb{R}^{N}\setminus\{0\}.\] _Let, moreover, \(\nu_{max}\in S^{N-1}\) be a point with \(h_{max}:=h(\nu_{max})=\max\limits_{S^{N-1}}h\). Then we have_ \[w(x)=[w(|x|\nu_{max})-h_{max}\,U(|x|)]+h_{max}\,U(|x|)\,\frac{x}{|x|}\cdot\nu_{ max}\qquad\text{for }x\in\mathbb{R}^{N}\setminus\{0\}.\] _In particular, \(w\) is a sum of a radial function and an odd function with respect to the reflection \(\sigma_{\nu_{max}}\)._ Proof.: It suffices to apply Lemma 5.1, for fixed \(r>0\), to the functions \[S^{N-1}\to\mathbb{R},\qquad z\mapsto w(rz)\] in place of \(w\) and \[S^{N-1}\to\mathbb{R},\qquad\nu\mapsto U(r)h(\nu)\] in place of \(h\). **Lemma 5.3**.: _Let \(H_{+}:=\{x\in\mathbb{R}^{N}\,:\,x_{1}>0\}\), \(T:=\{x\in\mathbb{R}^{N}\,:\,x_{1}=0\}\), and let \(\Omega\subset\mathbb{R}^{N}\) be a bounded set of class \(C^{2}\) which is symmetric with respect to reflection of the \(x_{1}\)-coordinate, and let \(\Omega_{+}:=\Omega\cap H_{+}\). 
Moreover, let \(c\in L^{\infty}(\Omega)\), \(\alpha>0\) and \(v\in L^{1}(\mathbb{R}^{N};(1+|x|)^{-N-2s})\cap C^{2s+\alpha}(\Omega)\) satisfy_ \[\left\{\begin{aligned} (-\Delta)^{s}v+c(x)v& \geq 0\qquad\text{in }\Omega_{+}\text{,}\\ v&\geq 0\qquad\text{in }H_{+}\text{,}\\ v&\not\equiv 0\qquad\text{in }\Omega_{+}\text{,}\\ v(-x_{1},x^{\prime})=-v(x)\qquad\text{for }x=(x_{1},x^{\prime}) \in\mathbb{R}^{N}\text{.}\end{aligned}\right. \tag{5.5}\] _Then we have_ \[v>0\quad\text{in }\Omega_{+}\qquad\text{and}\qquad\liminf_{t\to 0^{+}}\frac{ v(t,x^{\prime})}{t}>0\qquad\text{for every }(0,x^{\prime})\in T\cap\Omega.\] Proof.: In the case where, in addition, \(v\in H^{s}(\mathbb{R}^{N})\), the conclusion is contained in [11, Cor. 3.3] and [25, Prop. 2.2]. In fact, [25, Prop. 2.2] also assumes the equality \((-\Delta)^{s}v+c(x)=0\) in \(\Omega^{+}\), but the proof given there only requires that \((-\Delta)^{s}v+c(x)\geq 0\) in \(\Omega^{+}\). To get rid of the additional assumption \(v\in H^{s}(\mathbb{R}^{N})\), we use a cut-off argument. So we let \(v\in L^{1}(\mathbb{R}^{N};(1+|x|)^{-N-2s})\cap C^{2s+\alpha}(\Omega)\) satisfy (5.5), and we let \(\tilde{x}\in\Omega\) with \(\tilde{x}_{1}\geq 0\). We choose a radial cut-off function \(\psi\in C_{c}^{\infty}(\Omega)\) with \(0\leq\psi\leq 1\) in such a way that there exists a (sufficiently large) neighborhood \(U\subset\Omega\) of \(\tilde{x}\) with \(\psi\equiv 1\) in \(U\) and \(v\not\equiv 0\) in \(U_{+}:=U\cap H_{+}\). We may assume also that \(U\) is symmetric with respect to the \(x_{1}\)-coordinate. For \(x\in U_{+}\) we then have \[(-\Delta)^{s}(\psi v)(x)=(-\Delta)^{s}v(x)+(-\Delta)^{s}((\psi-1) v)(x) \geq-c(x)v(x)+c_{N,s}f(x)\] \[=-c(x)(\psi v)(x)+c_{N,s}f(x),\] where \[f(x)=\int_{\mathbb{R}^{N}\setminus U}\frac{(1-\psi)(y)v(y)}{|x-y|^{N+2s}}\,dy =\int_{(\mathbb{R}^{N}\setminus U)\cap H_{+}}(1-\psi)(y)v(y)\Big{(}\frac{1}{| x-y|^{N+2s}}-\frac{1}{|x-\tilde{y}|^{N+2s}}\Big{)}\,dy\geq 0\] and we write \(\bar{y}=(-y_{1},y^{\prime})\) for \(y=(y_{1},y^{\prime})\in\mathbb{R}^{N}\). Here we have used the oddness of the function \(y\mapsto(1-\psi)(y)v(y)\). Consequently, we have \[(-\Delta)^{s}(\psi v)+c(x)(\psi v) \geq 0\qquad\text{in }U_{+}\text{,}\] \[\psi v \geq 0\qquad\text{in }H_{+}\text{,}\] \[\psi v \not\equiv 0\qquad\text{in }U_{+}\text{,}\] \[(\psi v)(-x_{1},x^{\prime})=-(\psi v)(x)\qquad\text{for }x=(x_{1},x^{ \prime})\in\mathbb{R}^{N}\text{.}\] Since moreover \(\psi v\in H^{s}(\mathbb{R}^{N})\), [11, Cor. 3.3] gives \(v(\tilde{x})=\psi v(\tilde{x})>0\) if \(\tilde{x}\in\Omega_{+}\), and [25, Prop. 2.2] gives \[\liminf_{t\to 0^{+}}\frac{v(t,x^{\prime})}{t}=\liminf_{t\to 0^{+}}\frac{(\psi v)(t,x^{ \prime})}{t}>0\quad\text{if }\tilde{x}=(0,x^{\prime})\in\Omega\cap T.\] The claim thus follows.
2310.17218
Prototypical Contrastive Learning-based CLIP Fine-tuning for Object Re-identification
This work aims to adapt large-scale pre-trained vision-language models, such as contrastive language-image pre-training (CLIP), to enhance the performance of object re-identification (Re-ID) across various supervision settings. Although prompt learning has enabled a recent work named CLIP-ReID to achieve promising performance, the underlying mechanisms and the necessity of prompt learning remain unclear due to the absence of semantic labels in Re-ID tasks. In this work, we first analyze the role of prompt learning in CLIP-ReID and identify its limitations. Based on our investigations, we propose a simple yet effective approach to adapt CLIP for supervised object Re-ID. Our approach directly fine-tunes the image encoder of CLIP using a prototypical contrastive learning (PCL) loss, eliminating the need for prompt learning. Experimental results on both person and vehicle Re-ID datasets demonstrate the competitiveness of our method compared to CLIP-ReID. Furthermore, we extend our PCL-based CLIP fine-tuning approach to unsupervised scenarios, where we achieve state-of-the-art performance.
Jiachen Li, Xiaojin Gong
2023-10-26T08:12:53Z
http://arxiv.org/abs/2310.17218v1
# Prototypical Contrastive Learning-based CLIP Fine-tuning for Object Re-identification ###### Abstract This work aims to adapt large-scale pre-trained vision-language models, such as contrastive language-image pre-training (CLIP), to enhance the performance of object re-identification (Re-ID) across various supervision settings. Although prompt learning has enabled a recent work named CLIP-ReID to achieve promising performance, the underlying mechanisms and the necessity of prompt learning remain unclear due to the absence of semantic labels in Re-ID tasks. In this work, we first analyze the role of prompt learning in CLIP-ReID and identify its limitations. Based on our investigations, we propose a simple yet effective approach to adapt CLIP for supervised object Re-ID. Our approach directly fine-tunes the image encoder of CLIP using a prototypical contrastive learning (PCL) loss, eliminating the need for prompt learning. Experimental results on both person and vehicle Re-ID datasets demonstrate the competitiveness of our method compared to CLIP-ReID. Furthermore, we extend our PCL-based CLIP fine-tuning approach to unsupervised scenarios, where we achieve state-of-the-art performance. ## 1 Introduction Recently, contrastive language-image pre-training (CLIP) [34] and other pre-trained vision-language models [8, 17] have attracted great attention in the vision community. CLIP is trained on a dataset of 400 million text-image pairs collected from the internet. Through unsupervised cross-modal contrastive learning from the large-scale dataset, it is capable of learning diverse visual and language semantic concepts and acquiring remarkable transfer abilities. As a result, CLIP has been successfully adapted to various downstream vision tasks, including zero-shot/few-shot recognition [53, 54], object detection [46, 51], semantic segmentation [29], and more. Various techniques, such as learning additional adaptation layers [10] and learning textual or visual prompts [53, 20, 54, 18], have been developed to facilitate the adaptation of CLIP to specific downstream tasks. Among these adaptation techniques, prompt learning has gained popularity due to its superior performance and low computational cost. Both textual [53, 54] and multi-modal [20] prompt learning use class names from ground-truth labels to form input text descriptions for recognition or classification tasks. The inclusion of fixed class names helps the CLIP model transfer textual semantics to visual concepts, resulting in great robustness to noise [45] and remarkable generalization ability to unseen data [54]. Unfortunately, in person/vehicle re-identification, class names do not exist as the class labels represent ID indexes and lack semantic meaning. This poses a challenge when employing prompt learning techniques to adapt pre-trained vision-language models.

Figure 1: t-SNE [40] visualization of 7 randomly selected IDs from MSMT17. (a) shows that the text centroids learned in CLIP-ReID stage-1 are quite close to the image centroids, which reveals their implicit equivalence. (b) shows that PCL is also able to learn a high-quality feature space with image centroids only. Best viewed in color.

A recent work called CLIP-ReID [25] addresses this issue by introducing a two-stage strategy. In the first stage, it learns a set of textual prompts for each ID while keeping the CLIP model fixed. Then, in the second stage, the learned prompts are utilized to fine-tune the image encoder of CLIP using a proposed 
image-to-text cross-entropy loss, in conjunction with the commonly used ID loss and triplet loss [30]. This approach has demonstrated impressive performance in supervised Re-ID, seemingly highlighting the potential of prompt learning in adapting CLIP to the Re-ID task. However, upon investigating the mechanisms that enable effective textual prompt learning in CLIP-ReID, we have made the following speculations: 1) Unlike CoOp [54], which utilizes fixed class names to transfer textual semantics, the prompt learning in CLIP-ReID essentially learns a textual feature centroid for each ID, as shown in Figure 1. 2) The image-to-text cross-entropy loss introduced in CLIP-ReID serves as a centroid-based loss [32, 44], attracting images of the same ID towards their respective textual centroids. Inspired by these findings, we propose to utilize a prototypical contrastive learning (PCL) loss [24] to directly fine-tune CLIP's image encoder without the need for prompt learning. Our PCL loss leverages the ID centroids of up-to-date visual features for fine-tuning, avoiding the disturbance caused by outdated textual centroids and potential misalignment between textual and visual features. Experimental results demonstrate that simply fine-tuning CLIP with a single PCL loss performs competitively with CLIP-ReID [25], achieving significantly higher performance compared to fine-tuning CLIP with ID loss and triplet loss. The aforementioned fine-tuning of CLIP is performed under full supervision, where ID labels are available. In this work, we also aim to adapt CLIP to unsupervised object Re-ID. The dominant approach for unsupervised Re-ID is clustering-based, which generates pseudo labels through clustering and then uses the labels to learn a Re-ID model iteratively. The iterative learning scheme and the varying number of pseudo labels at different iterations make it challenging to learn prompts as is done in CLIP-ReID [25]. Fortunately, our PCL loss-based CLIP fine-tuning fits well within this framework. Considering that state-of-the-art (SoTA) methods such as ClusterContrast [7], CAP [41], and O2CAP [42] already adopt PCL-wise losses, we can simply replace their backbone with the image encoder of CLIP and fine-tune the models using their own PCL-wise losses. However, directly replacing and fine-tuning the CLIP image encoder leads to a divergence issue due to the instability of training vision transformers under the unsupervised setting. To address this issue, we adopt the trick of freezing the patch projection layer [5], which allows our model to converge effectively and achieve significantly higher performance compared to SoTA methods. In summary, our work makes the following contributions: * By investigating the mechanisms of prompt learning in CLIP-ReID, we propose a prototypical contrastive learning (PCL) loss-based CLIP fine-tuning method. Our approach is simple yet effective in adapting CLIP to object Re-ID tasks. * We employ the PCL-based fine-tuning approach to adapt CLIP for various Re-ID settings, including fully supervised and unsupervised Re-ID tasks. Extensive experiments show that our fine-tuning method achieves competitive performance in both of these settings. ## 2 Related Work ### Object Re-identification In this work, we investigate the adaptation of CLIP to the Re-ID task under different settings, including fully supervised and purely unsupervised Re-ID. Therefore, we provide a concise review of these two tasks. 
In the past decade, fully supervised object Re-ID has witnessed remarkable advancements, primarily driven by the utilization of deep convolutional neural networks (CNNs) like ResNet-50 [14], in combination with the use of ID loss and triplet loss [16, 30]. Numerous CNN-based methods have been developed, leveraging multi-granularity features [38], human semantics [48], attention mechanisms [2], etc. More recently, transformer architectures [15, 36, 56] and pre-trained vision-language models [25, 34] have also been employed to boost Re-ID performance.

Purely unsupervised Re-ID has also witnessed significant progress in recent years. A majority of research is clustering-based, which involves generating pseudo labels through clustering and iteratively learning a Re-ID model using these labels. Previous methods have made efforts to refine noisy pseudo labels [42, 6, 47], leverage contrastive learning losses [7, 41, 42], and design network architectures [23, 31]. To the best of our knowledge, the utilization of pre-trained vision-language models has not been explored in unsupervised Re-ID.

### CLIP and Prompt Learning

Contrastive language-image pre-training (CLIP) [34] is a vision-language model that undergoes pre-training on millions of text-image pairs available on the Internet. It jointly trains a text encoder and an image encoder using two directional InfoNCE losses, which are extensively used in contrastive learning [39, 4, 13]. Benefiting from the knowledge learned from the large-scale dataset, CLIP has demonstrated remarkable generalization ability and has been effectively transferred to various downstream vision tasks, including zero-shot recognition [53, 54], object detection [51, 46], and semantic segmentation [29], among others.

Prompt learning [18, 20, 53, 54] has gained popularity in effectively transferring CLIP to specific downstream tasks. As a pioneering work, CoOp [54] introduces the concept of learning text prompts, which significantly outperformed hand-crafted prompts in CLIP adaptation. Building upon this, CoCoOp [53] further enhances generalizability by allowing text prompts to be conditioned on inputs, while VPT [18] and MaPLe [20] extend the learning to visual and multi-modal prompts. In both textual [53, 54] and multi-modal [20] prompt learning, the class names of ground-truth labels are utilized to form input text descriptions, contributing remarkable generalization ability to unseen data [54] and enhancing robustness to noise [45]. However, in object re-identification, class names do not exist, as the labels are represented by ID indexes. In order to adapt CLIP to Re-ID tasks, CLIP-ReID [25] introduces a two-stage strategy that first learns a set of textual prompts for each ID and then employs a prompt-based loss to fine-tune CLIP, resulting in impressive performance. Nevertheless, when using our prototypical contrastive learning loss to fine-tune the CLIP image encoder, this prompt learning strategy brings almost no improvement for either fully supervised or unsupervised Re-ID tasks. These results suggest that prompt learning may not be necessary for achieving strong performance in Re-ID tasks and that our PCL loss-based fine-tuning approach can be a promising alternative.

### Prototypical Contrastive Learning

Prototypical contrastive learning (PCL) [24] is a type of contrastive learning method that operates at the cluster level. In this approach, each class or cluster is represented by a prototype, which is a central representation of the instances within that cluster.
The primary objective of PCL is to attract each instance towards its own cluster's prototype while simultaneously pushing it away from the prototypes of other clusters. By leveraging the cluster-level information, prototypical contrastive learning can effectively capture local semantic structures, resulting in more effective learning compared to conventional instance-level contrastive learning methods commonly used in uni-modal and multi-modal unsupervised representation learning tasks [4, 13, 34]. In recent years, prototypical contrastive learning has been successfully applied to unsupervised Re-ID tasks. Various PCL-wise losses, such as leveraging camera-aware prototypes [41] or online associated prototypes [42], have been developed, leading to significant improvements in performance. In this work, we explore the use of the PCL loss for fine-tuning CLIP in both fully supervised and unsupervised Re-ID tasks. ## 3 Revisit CLIP-ReID We first revisit the CLIP-ReID method [25]. It proposes a two-stage strategy to adapt CLIP [34] to supervised object Re-ID. Let us denote the pre-trained text and image encoders of CLIP as \(\mathcal{T}(\cdot)\) and \(\mathcal{I}(\cdot)\), respectively. These two encoders map a sequence of text prompt tokens and an image into features \(\mathbf{f}^{T}\) and \(\mathbf{f}^{I}\) in a joint embedding space. In the first stage, CLIP-ReID learns a set of ID-specific text tokens while freezing both the text and image encoders like CoOP [54]. For an image \(i\) with an ID label \(y_{i}=c\left(c\in\{1,\cdots,C\}\right)\), the paired text sequence input into \(\mathcal{T}(\cdot)\) is designed as "A photo of \([X]^{c}_{1}[X]^{c}_{2}\cdots[X]^{c}_{M}\) person/vehicle". Here, \([X]^{c}_{m}\) (\(m\in\{1,\cdots,M\}\)) is a learnable token with the same dimension as word embedding, \(C\) denotes the total number of IDs and \(M\) is the number of learnable tokens. The text tokens are learned by optimizing the sum of the following image-to-text and text-to-image losses: \[\mathcal{L}_{i2t}=-\frac{1}{|K_{i}|}\sum_{k\in K_{i}}\log\frac{\exp(s(\mathbf{ f}^{T}_{k},\mathbf{f}^{I}_{i})/\tau)}{\sum_{j=1}^{B}\exp(s(\mathbf{f}^{T}_{j}, \mathbf{f}^{I}_{i})/\tau)} \tag{1}\] \[\mathcal{L}_{t2i}=-\frac{1}{|K_{i}|}\sum_{k\in K_{i}}\log\frac{\exp(s(\mathbf{ f}^{T}_{i},\mathbf{f}^{I}_{k})/\tau)}{\sum_{j=1}^{B}\exp(s(\mathbf{f}^{T}_{i}, \mathbf{f}^{I}_{j})/\tau)} \tag{2}\] where \(s(\cdot,\cdot)\) represents the cosine similarity, \(B\) is the batch size and \(\tau\) is a temperature factor learned by CLIP. \(K_{i}=\{k|y_{k}=y_{i},k\in\{1,2,...,B\}\}\) denotes the index set of positive image samples and \(|\cdot|\) is the cardinality. In the second stage, CLIP-ReID fine-tunes the image encoder of CLIP while fixing both the learned prompts and the text encoder. The image encoder is optimized with respect to a ID loss \(\mathcal{L}_{id}\) and a triplet loss \(\mathcal{L}_{tri}\) that are widely used in supervised Re-ID [30], together with an image-to-text cross-entropy loss defined as below: \[\mathcal{L}_{i2tce}=-\sum_{k=1}^{C}q_{k}\log\frac{\exp(s(\mathbf{f}^{T}_{k}, \mathbf{f}^{I}_{i})/\tau)}{\sum_{j=1}^{C}\exp(s(\mathbf{f}^{T}_{j},\mathbf{f} ^{I}_{i})/\tau)}, \tag{3}\] where \(q_{k}\) is a smoothed ID label. **Discussion.** In the CLIP-ReID framework, each ID class \(c\) is associated with a textual feature \(\mathbf{f}^{T}_{c}\), which is obtained by feeding a sequence containing the learned ID-specific prompts into the text encoder. 
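As a concrete reference for this discussion, the stage-2 loss of Eq. (3) is simply a cross-entropy over the \(C\) fixed textual features with a smoothed label distribution. The following PyTorch-style sketch is our illustration only (the variable names, the label-smoothing details, and the temperature value are assumptions, not CLIP-ReID's actual code); the stage-1 losses of Eqs. (1)-(2) follow the same pattern with batch-wise positives.

```python
import torch
import torch.nn.functional as F

def i2tce_loss(img_feat, text_feats, target_id, eps=0.1, tau=0.07):
    """Image-to-text cross-entropy of Eq. (3) for a single image.

    img_feat:   (d,) L2-normalized visual feature f_i^I of the image.
    text_feats: (C, d) L2-normalized textual features f_k^T, one per ID,
                produced from the learned prompts and kept fixed in stage 2.
    target_id:  ground-truth ID label y_i of the image.
    """
    num_ids = text_feats.shape[0]
    logits = text_feats @ img_feat / tau        # similarity to every ID's textual feature
    q = torch.full((num_ids,), eps / num_ids)   # smoothed ID label distribution q_k
    q[target_id] += 1.0 - eps
    return -(q * F.log_softmax(logits, dim=0)).sum()
```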
The aforementioned loss \(\mathcal{L}_{i2tce}\) serves to attract the visual feature of an image towards its corresponding textual feature while pushing it away from the textual features of other classes. This implies that each class's textual feature is well-aligned with its visual feature centroid. However, achieving such alignment is not guaranteed for the following reasons. Firstly, although the training of CLIP encourages the alignment of textual and visual features in a joint embedding space, there may still be slight separation due to the modality gap [26, 37]. Secondly, even if the two encoders are initially well-aligned in CLIP, fine-tuning the image encoder while keeping the text encoder fixed may introduce misalignment issues. Therefore, while the loss \(\mathcal{L}_{i2tce}\) significantly improves performance compared to using only the ID loss and triplet loss for fine-tuning CLIP, the presence of misalignment issues can have a negative impact on the fine-tuning process, resulting in sub-optimal performance.

## 4 PCL-based CLIP Fine-tuning

Based on our understanding of the mechanisms behind the effectiveness and potential limitations of prompt learning in CLIP-ReID [25], we propose a direct fine-tuning approach for CLIP using prototypical contrastive learning (PCL). In this work, we first apply PCL-based CLIP fine-tuning to the supervised Re-ID task and then extend our approach to unsupervised settings. By leveraging the benefits of PCL, we aim to improve the performance of CLIP in adapting to Re-ID tasks while bypassing the need for prompt learning.

### Fine-tuning for Supervised Re-ID

**Prototypical contrastive learning.** Prototypical contrastive learning has been widely used in unsupervised representation learning [24] and unsupervised Re-ID [41, 7, 42] tasks. The objective of PCL is to bring an instance closer to its cluster centroid while pushing it away from the centroids of other clusters. In this way, PCL learns to distinguish between different clusters and capture the underlying structure of the data in an unsupervised manner. When applying prototypical contrastive learning to supervised Re-ID, the clusters obtained through unsupervised clustering are replaced with the ground-truth ID classes. Consequently, for an image \(i\) with its corresponding visual feature \(\mathbf{f}_{i}^{I}\) obtained from the image encoder of CLIP, and its ID label \(y_{i}\), the PCL loss is defined as follows:

\[\mathcal{L}_{pcl}=-\log\frac{\exp(s(\mathcal{K}[y_{i}],\mathbf{f}_{i}^{I})/ \tau)}{\sum_{j=1}^{C}\exp(s(\mathcal{K}[j],\mathbf{f}_{i}^{I})/\tau)}, \tag{4}\]

in which \(\mathcal{K}[j]\) represents the visual feature centroid of class \(j\) stored in a memory bank \(\mathcal{K}\), and the remaining symbols have the same definitions as previously mentioned.

**Memory bank.** In PCL, an external memory bank \(\mathcal{K}\in R^{d\times C}\) is constructed to store the feature centroids of all ID classes. Each centroid is initialized by averaging the visual features of all images belonging to that class. During the fine-tuning of the CLIP image encoder, the centroid is updated using momentum as follows:

\[\mathcal{K}[y_{i}]\leftarrow\mu\mathcal{K}[y_{i}]+(1-\mu)\mathbf{f}_{i}^{I} \tag{5}\]

where \(\mu\) is a momentum factor.

**Discussion.** In contrast to CLIP-ReID [25], which uses a loss that attracts an image towards its textual feature centroid and repels it from the others, our PCL loss operates directly on the visual features, eliminating the need for alignment between textual and visual features.
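To make Eqs. (4)-(5) concrete, the following PyTorch-style sketch shows one possible implementation of the PCL loss and the momentum update of the memory bank; it is an illustration under our own naming and omits the initialization of the bank from averaged features as well as all other training plumbing.

```python
import torch
import torch.nn.functional as F

def pcl_loss(feats, labels, bank, tau=0.05):
    """PCL loss of Eq. (4), averaged over a batch.

    feats:  (B, d) L2-normalized visual features from the CLIP image encoder.
    labels: (B,)   ground-truth ID labels y_i (long tensor).
    bank:   (d, C) memory bank K of L2-normalized class centroids.
    """
    logits = feats @ bank / tau          # (B, C) cosine similarity to every centroid
    return F.cross_entropy(logits, labels)

@torch.no_grad()
def update_bank(feats, labels, bank, mu=0.2):
    """Momentum update of the centroids, Eq. (5)."""
    for f, y in zip(feats, labels):
        bank[:, y] = mu * bank[:, y] + (1 - mu) * f
        # (Many implementations re-normalize bank[:, y] here; Eq. (5) itself does not.)
```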
Experimental results show that fine-tuning CLIP with a single PCL loss achieves competitive performance compared to CLIP-ReID, which employs three losses for fine-tuning. Furthermore, while prototypical contrastive learning has also been successfully applied to supervised image classification [21], the ID loss and triplet loss remain dominant in supervised Re-ID methods. Our study indicates that the PCL loss is more effective than these two losses for adapting CLIP to supervised Re-ID tasks.

Figure 2: The framework of our PCL-CLIP model for supervised Re-ID. Different from CLIP-ReID, which consists of a prompt learning stage and a fine-tuning stage, our approach directly fine-tunes CLIP with a single prototypical contrastive learning (PCL) loss. In our framework, a memory bank is built to store the up-to-date visual feature centroid of each ID.

### Fine-tuning for Unsupervised Re-ID

**PCL in unsupervised Re-ID.** As mentioned earlier, prototypical contrastive learning has been successfully applied to unsupervised Re-ID tasks. Recent methods such as ClusterContrast [7], CAP [41], and O2CAP [42] have designed various PCL variant losses within a clustering-based framework, leading to impressive performance. For instance, ClusterContrast [7] utilizes the PCL loss defined in Eq. (4) based on pseudo labels generated through clustering. CAP [41] additionally introduces an intra-camera PCL loss that divides each cluster into sub-clusters based on camera views. O2CAP [42] incorporates an online PCL loss that rectifies noisy clusters through online association. Considering the similar mechanisms of prototypical contrastive learning employed in these unsupervised methods, we directly replace their feature extraction backbones with the CLIP image encoder and utilize their respective losses for fine-tuning. By building on these established techniques, we aim to leverage the strengths of PCL in unsupervised Re-ID tasks while benefiting from the powerful feature extraction capabilities of the CLIP image encoder.

**Divergence issue.** However, directly fine-tuning the vision transformer (ViT)-based image encoder of CLIP for unsupervised Re-ID leads to divergence. This divergence issue is similar to the instability observed by Chen et al. [5] during their self-supervised learning of vision transformers. To mitigate this problem, we adopt the trick proposed by them [5] and freeze the patch projection layer. The parameters of this layer remain unchanged throughout the entire fine-tuning process, as they were pre-trained in CLIP. By employing this strategy, the fine-tuning process is stabilized and converges better.

**Discussion.** The clustering-based framework for unsupervised Re-ID involves conducting clustering and learning Re-ID models iteratively. This iterative process poses challenges when employing CLIP-ReID [25] to adapt CLIP, as the number of pseudo ID labels varies at each iteration, making it cumbersome to perform prompt learning. In contrast, our single-stage approach fine-tunes CLIP without prompt learning, providing a more convenient solution for adapting CLIP to unsupervised Re-ID tasks.

## 5 Experiments

### Datasets & Evaluation Metrics

We evaluate the proposed method on two person Re-ID datasets, Market-1501 [49] and MSMT17 [43], as well as one vehicle Re-ID dataset, VeRi-776 [27]. Consistent with common practices, we utilize the mean Average Precision (mAP) and Cumulative Matching Characteristic (CMC) at Rank-1, Rank-5, and Rank-10 as the evaluation metrics.
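As a brief supplement to the divergence fix described in Section 4.2, freezing the patch projection layer amounts to excluding its parameters from gradient updates. The sketch below is our illustration only; the attribute path `model.visual.conv1` for the patch embedding follows the open-source OpenAI CLIP implementation and is an assumption about the specific codebase, and the optimizer settings mirror those reported in the next subsection.

```python
import clip   # OpenAI CLIP package
import torch

model, _ = clip.load("ViT-B/16", device="cpu")

# Freeze the patch projection (patch embedding) layer, following the
# stabilization trick of Chen et al. [5]; in OpenAI's CLIP the visual
# patch embedding is the `conv1` convolution of the ViT (our assumption).
for param in model.visual.conv1.parameters():
    param.requires_grad = False

# Fine-tune only the remaining parameters of the image encoder.
trainable = [p for p in model.visual.parameters() if p.requires_grad]
optimizer = torch.optim.SGD(trainable, lr=3.5e-4, weight_decay=5e-4)
```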
### Implementation Details

We utilize the ViT-based image encoder of CLIP [34] for fine-tuning. Specifically, we select the ViT-B/16 backbone, which consists of \(12\) transformer layers, with each layer employing \(6\) attention heads. Following CLIP-ReID [25], we pass the output of the encoder through a linear projection layer to reduce the feature dimension from \(768\) to \(512\). Additionally, we attach a BNNeck [30] to both the output layer and the linear projection layer. The features generated from the two BNNecks are concatenated and L2-normalized to produce the final visual feature of an image. In addition, each input image is resized to \(256\times 128\) and augmented with random horizontal flipping, cropping, and random erasing [52]. For the fine-tuning process, we utilize the SGD optimizer with a learning rate of \(3.5\times 10^{-4}\) and a weight decay of \(5\times 10^{-4}\). In supervised Re-ID, we fine-tune the PCL-based CLIP model for \(50\) epochs, with \(200\) iterations per epoch. Other settings are kept the same as in CLIP-ReID. When applying our PCL-based fine-tuning technique to unsupervised Re-ID methods, such as ClusterContrast [7], CAP [41], and O2CAP [42], we maintain most of the original settings of these methods. Moreover, all experiments are conducted on a single RTX A6000 GPU using the PyTorch toolkit [33].

### Ablation Studies

We first conduct a series of experiments to validate the effectiveness of our proposed method. These experiments are carried out on the MSMT17 dataset.

**Is prompt learning necessary for adapting CLIP to supervised Re-ID?** We begin by investigating the necessity of prompt learning when adapting CLIP to supervised Re-ID. In Table 1, from the results of Baseline2 vs. CLIP-ReID3, we observe that when fine-tuning CLIP with the ID loss \(\mathcal{L}_{id}\) and the triplet loss \(\mathcal{L}_{tri}\), prompt learning (_i.e._, using the loss \(\mathcal{L}_{i2tce}\) that is based on learned prompts) significantly improves the model's performance, as demonstrated in the work of CLIP-ReID [25]. However, when the prototypical contrastive learning loss \(\mathcal{L}_{pcl}\) is employed for CLIP fine-tuning, the inclusion of the loss \(\mathcal{L}_{i2tce}\) actually hampers the performance, as shown by the results of PCL-CLIP1 vs. PCL-CLIP2. This suggests that prompt learning is not necessary when an appropriate training loss is chosen.

**How do different losses affect the fine-tuning of CLIP?** The aforementioned experiments highlight the critical role of the loss function in shaping the learned representations and influencing the performance of the fine-tuned CLIP model. To further explore this, we conduct experiments using different loss functions for fine-tuning and report the results in Table 1. As demonstrated by Baseline1-2 and PCL-CLIP2-5, fine-tuning with the ID loss alone yields inferior performance. However, utilizing a single PCL loss allows us to achieve highly competitive results. Furthermore, the combination of the PCL loss with the ID loss further enhances the performance. Figure 3 depicts the mAP and Rank-1 performance of PCL-CLIP2, PCL-CLIP4, and CLIP-ReID as they vary across iterations. It demonstrates that both PCL-CLIP2 and PCL-CLIP4 also converge at a faster rate.

**Is prompt learning necessary for adapting CLIP to unsupervised Re-ID?** We further conduct an experiment to investigate whether prompt learning can improve performance in unsupervised scenarios.
However, due to the iterative learning mechanism and the varying number of clusters at different epochs, it is not straightforward, but rather cumbersome, to adopt the prompt learning strategy of CLIP-ReID for unsupervised Re-ID. Therefore, we conduct two experiments: one with prompt learning applied every 10 epochs and another with prompt learning applied only at the last epoch. The results are presented in Table 2. The results indicate that prompt learning, even when applied every 10 epochs, only leads to a marginal improvement in performance compared to our PCL-based direct fine-tuning approach. This suggests that in unsupervised scenarios, the benefits of prompt learning may not be significant either.

### Comparison to State-of-the-Art

Finally, we evaluate the performance of our proposed method, referred to as PCL-CLIP, against state-of-the-art methods on person and vehicle Re-ID datasets. The comparative results are presented in Table 3 and Table 4, respectively.

**Comparison on supervised person Re-ID.** In the supervised person Re-ID task, we compare our approach with eight recent methods. Among these methods, five [3, 19, 22, 35, 55] utilize CNN-based backbones, while three [55, 25, 56] employ ViT-based backbones. Notably, both CLIP-ReID [25] and our approach adapt the pre-trained CLIP model, while the remaining methods are pre-trained on ImageNet. It is evident that the use of large-scale pre-trained models yields significant performance improvements compared to the other methods, particularly on the MSMT17 dataset. Our method achieves competitive performance with CLIP-ReID when a single PCL loss is employed. However, when incorporating an ID loss, our method exhibits a substantial improvement on MSMT17.

**Comparison on unsupervised person Re-ID.** In unsupervised person Re-ID, we compare our approach with eight recent methods as well. Among these methods, six [1, 6, 7, 28, 41, 42] utilize CNN-based backbones and two [23, 31] employ ViT-based backbones. To evaluate the performance of our CLIP fine-tuning approach, we apply it to ClusterContrast (CC) [7], CAP [41], and O2CAP [42] by replacing their original backbones with the CLIP image encoder. Comparing our CLIP fine-tuning approach with these methods [7, 41, 42], we observe significant performance improvements on both the Market-1501 and MSMT17 datasets. Additionally, TransReID-SSL [31] and TMGF [23] utilize ViT backbones that are pre-trained on the large-scale unlabeled dataset LUPerson [9]. However, our CLIP fine-tuning approach with O2CAP outperforms them by a considerable margin on the MSMT17 dataset. These results demonstrate the effectiveness of the CLIP fine-tuning approach in unsupervised scenarios, surpassing existing methods and achieving notable performance improvements on challenging datasets.
**Comparison on vehicle Re-ID.** Table 4 presents the comparison results on a vehicle Re-ID dataset. The table includes five fully supervised methods [15, 19, 25, 50] and seven unsupervised methods [6, 7, 11, 42, 47] for reference. The proposed method achieves comparable results with state-of-the-art methods in the fully supervised setting. In unsupervised scenarios, our CLIP fine-tuning approach, specifically with the O2CAP loss, outperforms previous methods by a considerable margin.

| Methods | \(\mathcal{L}_{id}\) | \(\mathcal{L}_{tri}\) | \(\mathcal{L}_{i2tce}\) | \(\mathcal{L}_{pcl}\) | mAP | Rank-1 |
| --- | --- | --- | --- | --- | --- | --- |
| Baseline1 | ✓ | | | | 44.5 | 70.0 |
| Baseline2 | ✓ | ✓ | | | 66.2 | 84.3 |
| CLIP-ReID1 | | | ✓ | | 49.5 | 76.8 |
| CLIP-ReID2 | | ✓ | ✓ | | 71.4 | 87.8 |
| CLIP-ReID3 | ✓ | ✓ | ✓ | | 73.4 | 88.7 |
| PCL-CLIP1 | | | ✓ | ✓ | 71.2 | 87.4 |
| PCL-CLIP2 | | | | ✓ | 73.8 | 89.2 |
| PCL-CLIP3 | | ✓ | | ✓ | 73.9 | 88.7 |
| PCL-CLIP4 | ✓ | | | ✓ | 76.1 | 89.8 |
| PCL-CLIP5 | ✓ | ✓ | | ✓ | 76.1 | 89.6 |

Table 1: Ablation study on loss functions used for fine-tuning CLIP for supervised Re-ID.

| Methods | Prompt Learning | \(\mathcal{L}_{i2tce}\) | \(\mathcal{L}_{pcl}\) | mAP | Rank-1 |
| --- | --- | --- | --- | --- | --- |
| PCL-CLIP\({}_{\text{O2CAP}}\)1 | every 10 ep | ✓ | ✓ | 66.2 | 85.8 |
| PCL-CLIP\({}_{\text{O2CAP}}\)2 | last ep | ✓ | ✓ | 65.7 | 85.0 |
| PCL-CLIP\({}_{\text{O2CAP}}\)3 | None | | ✓ | 65.5 | 84.9 |

Table 2: Ablation study on the use of prompt learning for adapting CLIP to unsupervised Re-ID.

Figure 3: The performance of CLIP-ReID, PCL-CLIP2, and PCL-CLIP4 during the fine-tuning process. The solid lines denote the mean average precision (mAP) and the dashed lines denote the Rank-1 accuracy. Best viewed in color.

Table 4: Comparison results on VeRi-776 (Method, mAP, Rank-1, Rank-5, Rank-10) for fully supervised and unsupervised methods; the table body is truncated in the source after its first entry, VehicleNet.

## 6 Conclusion

In this work we have presented a simple yet effective approach to adapt CLIP to both supervised and unsupervised Re-ID tasks. Our approach involves directly fine-tuning the image encoder of CLIP using a single prototypical contrastive learning (PCL) loss, eliminating the need for prompt learning. Remarkably, our method achieves competitive performance compared to CLIP-ReID, which requires both prompt learning and fine-tuning. Moreover, by incorporating the PCL loss alongside the ID loss during fine-tuning, we observe a significant improvement over CLIP-ReID on MSMT17. Our findings highlight the potential of our approach to simplify the adaptation of CLIP for Re-ID tasks while achieving comparable or even superior performance to existing methods.
2308.15645
AskIt: Unified Programming Interface for Programming with Large Language Models
Large Language Models (LLMs) exhibit a unique phenomenon known as emergent abilities, demonstrating adeptness across numerous tasks, from text summarization to code generation. While these abilities open up novel avenues in software design and crafting, their incorporation presents substantial challenges. Developers face decisions regarding the use of LLMs for directly performing tasks within applications as well as for generating and executing code to accomplish these tasks. Moreover, effective prompt design becomes a critical concern, given the necessity of extracting data from natural language outputs. To address these complexities, this paper introduces AskIt, a domain-specific language (DSL) specifically designed for LLMs. AskIt simplifies LLM integration by providing a unified interface that not only allows for direct task execution using LLMs but also supports the entire cycle of code generation and execution. This dual capability is achieved through (1) type-guided output control, (2) template-based function definitions, and (3) prompt generation for both usage modes. Our evaluations underscore AskIt's effectiveness. Across 50 tasks, AskIt generated concise prompts, achieving a 16.14 % reduction in prompt length compared to benchmarks. Additionally, by enabling a seamless transition between using LLMs directly in applications and for generating code, AskIt achieved significant efficiency improvements, as observed in our GSM8K benchmark experiments. The implementations of AskIt in TypeScript and Python are available at https://github.com/katsumiok/ts-askit and https://github.com/katsumiok/pyaskit, respectively.
Katsumi Okuda, Saman Amarasinghe
2023-08-29T21:44:27Z
http://arxiv.org/abs/2308.15645v2
# _Asklt_: Unified Programming Interface for Programming with Large Language Models ###### Abstract. In the evolving landscape of software development, Large Language Models (LLMs) exhibit a unique phenomenon known as _emergent abilities_, demonstrating adeptness across numerous tasks, from text summarization to code generation. While these abilities open up novel avenues in software design and crafting, their incorporation presents substantial challenges. Developers grapple with decisions surrounding the direct embedding of LLMs within applications versus employing them for code generation. Moreover, effective prompt design becomes a critical concern, given the necessity of data extraction from natural language outputs. To address these intricacies, this paper introduces _Asklt_, a domain-specific language (DSL) specifically designed for LLMs. Asklt simplifies LLM integration, offering type-guided output control, template-based function definitions, and a unified interface that diminishes the distinction between LLM-based code generation and application integration. Furthermore, through Programming by Example (PBE), Asklt harnesses the power of few-shot learning at the programming language level. Our evaluations underscore Asklt's potency. Across 50 tasks, Asklt generated concise prompts for the given tasks, achieving a 16.14% reduction in prompt length relative to benchmarks. Additionally, by enabling the transition from direct LLM application usage to function generation, Asklt achieved significant speedups, as observed in our GSM8K benchmark experiments. Through these advancements, Asklt streamlines the integration of LLMs in software development, offering a more efficient, versatile approach for leveraging emergent abilities. The implementations of Asklt in TypeScript and Python are available at [https://github.com/katsumiok/ts-askit](https://github.com/katsumiok/ts-askit) and [https://github.com/katsumiok/pyaskit](https://github.com/katsumiok/pyaskit), respectively. ## 1. Introduction Recent studies (Wei et al., 2022) have unveiled the remarkable abilities of LLMs, which become increasingly pronounced with model scaling. These abilities span a wide range of tasks, including arithmetic operations, question answering, text summarization, language translation, code generation, and creative text composition. Intriguingly, these capabilities are not imparted explicitly but are organically cultivated through vast exposure to natural language data during training. This phenomenon, termed _emergent abilities_, distinguishes LLMs. The notion of _emergent abilities_ is captivating, hinting that with further advancements in language models, even more sophisticated capabilities may emerge. The rise of these emergent abilities holds significant implications for software development, potentially altering the very methods by which software is crafted. Developers can incorporate LLMs within applications to handle tasks such as question answering, text summarization, or language translation. Another application of LLMs is in code generation. Tools like Jigsaw (Jain et al., 2022) and Codex/Copilot (Chen et al., 2021) harness LLMs to convert natural language descriptions into code. Even without these specific tools, developers can leverage LLM-based chatbots, like ChatGPT based on GPT-4 (OpenAI, 2023), BingAI, and Bard, for the same purpose. However, integrating LLMs into software development is not without challenges. 
One primary decision developers face is whether to embed the LLM directly into the application or employ it for code generation. The distinction between these two applications is stark, making it laborious to transition between them later. For instance, while one could incorporate an LLM directly into an application to sort a list of numbers, another approach would be to utilize the LLM to generate the code for sorting. Choosing between these methodologies post-decision can be laborious. These methodologies differ significantly, and altering the chosen approach subsequently demands considerable effort. Moreover, regardless of the approach, developers must devise effective prompts, extract pertinent data from the LLM's output, and then process it. If the application integrates an LLM for its functionality, code must be written to parse the LLM's response -- a non-trivial task given the natural language format. Hence, specifying the desired data format within the prompt is often adopted to ease response parsing. Yet, this necessitates precise, task-specific prompt design. When LLMs are used for code generation, the resultant code must be manually integrated into the application. In response to these challenges, we present _Asklt_: a domain-specific language (DSL) tailored for LLMs. Asklt offers a harmonized programming interface across varied tasks, featuring (1) type-guided output control, (2) template-based function definitions, (3) code generation capabilities, and (4) Programming by Example (PBE). The type-guided output control obviates the need for data format specification within natural language prompts, eliminating the intricate prompt engineering previously essential for response extraction. Template-based function definitions allow developers to craft functions leveraging an LLM, using prompts tailored to specific tasks. Such templates can accept input parameters that seamlessly map to the defined function's parameters. With code generation, there's no demarcation between integrating an LLM into an application and using it for code generation, allowing effortless transitions between the two without adjusting the prompt template. The programming interface also accepts the examples of input and output to define a function for employing few-shot learning [14] in a programming language level. This can be considered as a form of general-purpose PBE. We demonstrate Asklt's applicability across a wide range of LLM tasks. By using Asklt to implement 50 common tasks, we show that Asklt can generate 7.56 lines of TypeScript code and 6.52 lines of Python code on average. We also confirmed that Asklt can reduce the length of prompt by 16.14% on average compared to the original prompts used in the OpenAI Evals 1 benchmark. Footnote 1: [https://github.com/openai/evals](https://github.com/openai/evals) Additionally, we measured the speedup of functions defined with Asklt when we transitioned from using an LLM as part of the application to executing equivalent functions generated by the LLM. An experiment with the GSM8K benchmark [10] revealed that Asklt-generated functions using GPT-4 achieved a speedup of 275,092.55x in TypeScript and 6,969,904.73x in Python, respectively, compared to the same functions using GPT-4 as part of the application. The contributions of this paper are summarized as follows: 1. **Design and Implementation of Asklt for LLMs:** * We introduce a unified programming interface tailored for LLMs to accommodate various tasks. 
* Our type-guided output control eradicates the need for intricate prompt engineering, simplifying user interactions with LLMs.
* We design template-based function definitions, which ease the reuse of LLM tasks.

2. **Simplifying Integration and Code Generation:**
* We eliminate the boundary between the direct application integration of an LLM and its use for code generation.
* Our approach facilitates smooth transitioning between the two methodologies, significantly reducing the developmental overhead and effort.
* Our interface enables programming by example at the programming language level, which is based on few-shot learning with the underlying LLM.

3. **Extensive Experimental Validation:**
* We implement AskIt in TypeScript and Python and evaluate them across a diverse set of tasks, showcasing their potency in code generation and efficiency in prompt reduction.
* Through benchmarking, we demonstrate considerable speedups in tasks when leveraging AskIt, underscoring its operational efficiency and efficacy.

4. **Advancing the Broader Understanding of LLMs:**
* We categorize tasks that can be performed by LLMs and identify the challenges faced when integrating LLMs into software development.
* Our exploration unravels the challenges faced during LLM integration and elucidates how AskIt addresses these challenges.

The remainder of this paper is structured as follows. Section 2 provides background information on LLMs and their applications in software development. Section 3 introduces AskIt, a DSL for LLMs. Section 4 presents the evaluation of AskIt. Section 5 discusses related work. Section 6 concludes the paper.

## 2 Motivating Examples

To underscore the need for a unified and streamlined approach to incorporating LLMs into programming tasks, this section explores two distinct applications that could benefit from LLMs. The first demonstrates the potential of LLMs in sentiment analysis of product reviews, while the second discusses a file access task that stores the results of the sentiment analysis in a local file system. In both cases, the software developer must craft a prompt and either interpret the response from the LLM or integrate the generated code into their source code.

### Examples

#### 2.1.1. Using an LLM as Part of an Application

Consider a scenario in which a developer is writing a program to analyze the sentiment of product reviews. Although this task traditionally relies on complex natural language processing pipelines or machine learning models, an LLM like GPT-4 can significantly simplify the process. With an appropriately crafted prompt, the LLM can interpret and deduce the sentiment behind a given review. Below is a simplified pseudo-code representation:

```
1 review = "The product is fantastic. It exceeds all my expectations."
2 prompt = "Determine the sentiment of this review: " + review + ". The final sentiment should be enclosed in [ and ] like [negative]."
3 response = LLM.predict(prompt)  # response: "The sentiment of the review is [positive]."
4 sentiment = parse_sentiment(response)  # sentiment: "positive"
```

where # denotes a comment. Line 1 initializes the review. In practice, this would typically be sourced from a database or another data source, but it's hardcoded here for illustrative purposes. Line 2 crafts the prompt by integrating the review with a templated structure. Line 3 engages an LLM, processing the prompt to generate a response. Finally, Line 4 extracts the sentiment from the response.
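As a glimpse of the glue code this requires, `parse_sentiment` could simply pull the bracketed token out of the response, following the `[ and ]` convention requested in the prompt above. The sketch below is our illustration (the regular-expression approach is an assumption, not part of the pseudo-code):

```python
import re

def parse_sentiment(response: str) -> str:
    """Extract the sentiment enclosed in [ and ] from the LLM's reply,
    e.g. "The sentiment of the review is [positive]." -> "positive"."""
    match = re.search(r"\[(.*?)\]", response)
    if match is None:
        raise ValueError(f"no bracketed sentiment found in: {response!r}")
    return match.group(1).strip().lower()
```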
This scenario introduces two major challenges: * **Parsing the LLM's response:** Developers must write code to extract the sentiment from the LLM's natural language output. Due to potential variability in the LLM's responses, based on the prompt and its inherent behavior, this extraction is far from trivial. * **Crafting the prompt:** This task requires a deep understanding of natural language processing and familiarity with potential LLM responses. By specifying the desired response format with [ and ], the developer can make the subsequent parsing easy. While the provided example is straightforward, with relatively simple prompt construction and response parsing, more complex problems can introduce challenges. In these scenarios, techniques like Chain of Thought (CoT) (Kojima et al., 2022) become essential, guiding the LLM to produce better responses. If the expected output involves multiple facets, such as a list or several values, crafting the prompt and response parsing can become more complex, demanding additional effort from the developer. Thus, while LLMs like GPT-4 offer powerful capabilities, developers must skillfully craft their prompts and parsing methods to ensure accuracy and reliability. #### 2.1.2. Using an LLM to Generate Code Expanding upon the sentiment analysis task, let's delve into a scenario where the results of the sentiment analysis need to be saved to a local CSV (comman-separated values) file. Although this is not an inherent function of LLMs, their ability to generate relevant code exemplifies their adaptability. Typically, LLMs generate code snippets in response to a high-level description provided by the user. For instance, a developer might employ an LLM, seeking to generate Python code that saves sentiment analysis results into a local file. This interaction often takes place within platforms like ChatGPT, Bard, or BingAI. Here, the developer would pose a task description, such as "Generate a Python function to log a product review and its associated sentiment into a specific CSV file." Figure 1 illustrates an interaction with ChatGPT. After the developer inputs the task description, ChatGPT responds with a pertinent code snippet. The developer can then manually copy and incorporate this snippet into their existing codebase. This generated function opens the specified CSV file in append mode and saves the'review' and'sentiment' as a new row. However, it's important to note that ChatGPT cannot execute this generated code directly since it doesn't have access to the local file system. Thus, developers need to manually integrate this snippet into their software environment. Sometimes, this incorporation requires tweaks to make the newly added code align with the existing codebase. While this code generation approach is useful, it still demands manual intervention which could be made more efficient. Figure 1. Code generation by LLM (ChatGPT) ### Classification and Examples of Problem Types for LLMs In the previous examples, we presented two distinct types of tasks that can be addressed or facilitated by LLMs. One involves using LLMs as part of an application, while the other entails using LLMs to generate code. To optimize the use of LLMs, we categorize tasks based on the following two dimensions: * **Directly Answerable or Not:** Determines whether the LLMs can directly answer the task. * **Codable or Non-Codable:** Indicates whether LLMs can generate code to do the task. These dimensions are orthogonal to each other. 
We can leverage LLMs if either of the dimensions holds true. In other words, we can integrate LLMs into an application if the task is directly answerable, and we can employ LLMs to implement code if the task is codable. Given these dimensions, we can group tasks into three categories: non-codable but directly answerable tasks, intersecting tasks, and codable but not directly answerable tasks, as illustrated in Figure 2. Table 1 lists examples for each category. The prior two examples fall under the categories of non-codable but directly answerable tasks and codable but not directly answerable tasks, respectively. The sentiment analysis task represents a non-codable but directly answerable task because LLMs can immediately address it, whereas traditional programming methods might not achieve comparable accuracy. Conversely, the \begin{table} \begin{tabular}{l l} \hline \hline **Problem Category** & **Example Tasks and Prompts** \\ \hline \multirow{8}{*}{Non-codable but Directly Answerable Tasks} & Question Answering [Lewis et al., 2019]: “Who won the Nobel Prize in Literature in 2020?” \\ & Text Summarization [Narayan et al., 2018]: “Can you summarize the key points of the ‘Getrysburg \\ & Address?” \\ & Language Translation [Brown et al., 2020]: “Translate the phrase ‘Artificial Intelligence’ into \\ & French.” \\ & Sentiment Analysis [Socher et al., 2013]: “Is the sentiment in this review positive or negative?” \\ & Explanation of Concepts [Yih et al., 2016]: “Can you explain quantum physics in simple terms?” \\ & Predicting Text [Srivastava et al., 2018]: “Contine the story: ’Once upon a time in a land far, far away.” \\ & Paraphrasing [Prakash et al., 2016]: “Can you paraphrase the sentence ”The quick brown fox \\ & jumps over the lazy dog?” \\ \hline \multirow{4}{*}{Intersecting Tasks (Codable and Directly Answerable)} & Math Problems: “What is 7 times 8?” “Calculate the determinant of the matrix [[1, 2], [3, 4]].” \\ & String Manipulation: “What is the reverse of the string ‘hello?” \\ & Sorting and Searching Algorithms: “What is the smallest number in the list [1, 3, 7, 0]?” \\ & Regex Pattern Matching: “Check if the string ‘abc123’ matches the regex pattern ’‘wdudududs’. ” \\ \hline \multirow{4}{*}{Codable but Not Directly Answerable Tasks} & Real-time Data: “Fetch the current weather in New York City.” \\ & User-specific Data: “Count the most frequent word in a text file.” \\ \cline{1-1} & Database Interaction: “Fetch the latest order from an online store database.” \\ \cline{1-1} & Specific Algorithms: “Calculate the shortest paths with the Bellman-Ford algorithm.” \\ \cline{1-1} & Specific Libraries: “Find the mean of a set of numbers using the NumPy library.” \\ \hline \hline \end{tabular} \end{table} Table 1. Classification and examples of problem types for LLMs Figure 2. Classification of tasks file access task is codable but not directly answerable because traditional programming techniques can handle it, but it isn't straightforwardly answered by LLMs. It's worth noting that some tasks can be addressed both directly by LLMs or through LLM-generated code. For such tasks, either solution may be chosen. Take, for example, the mathematical query "What is 7 times 8?". This could be resolved either by employing LLMs within an application or by using LLMs to generate code. Generally, intersecting tasks exhibit superior performance when tackled by generated code than when directly addressed by LLMs. However, the delineation of tasks isn't always evident and can be ambiguous. 
The boundaries separating the three categories are often blurred. As we will illustrate with our experimental results, certain mathematical problems are answerable by LLMs but resist coding by LLMs. The challenge arises from the distinct implementation needs of the two approaches. When LLMs are incorporated into an application, we must craft code to parse the LLM responses. Conversely, when LLMs are used for code generation, the resulting code must be manually integrated into our source code. A unified interface for these strategies is absent, complicating the transition between the two methods. Should such a unified interface exist, transitioning between the two techniques would be more straightforward. This adaptability is essential, especially given the inherent ambiguity in task classification. Furthermore, as LLMs continue to evolve, the borders defining the three categories are bound to shift. ## 3. Design and Implementation ### Overview Our Domain Specific Language (DSL), _Asklt_, offers two APIs: ask and define. They serve as a unified interface by borrowing the syntax of function calls from the host programming language. Hence, they can be used wherever function calls are permitted. These APIs address a wide array of tasks, such as non-codable yet directly answerable tasks, intersecting tasks, and codable but not directly answerable tasks, as detailed in the previous section. The features of the APIs are as follows: 1. Type-guided output control: Asklt's type system allows developers to specify the expected output type of a task. This specification is reflected in synthesized prompts, negating the need for manual prompt engineering for output control and simplifying response parsing. 2. Template-based function definitions: Asklt's template-based function definitions let developers craft functions that leverage LLMs to execute specific computational and linguistic tasks. These templates can accept input parameters that correspond effortlessly to the defined function's parameters. 3. Code generation: Asklt's code generation features bridge the gap between integrating an LLM into an application and using it for code generation. This ensures seamless transitions between the two methodologies, significantly cutting down the developmental overhead and effort. As a proof of concept, we implemented TypeScript and Python versions of Asklt. The DSL compiler is fashioned as a TypeScript compiler plugin for TypeScript and as a Python library for Python. Asklt compiler and runtime synthesize the prompt for the LLMs and the parser for the response based on the type information of the function and variables embedded in the template expression. It also generates the function that implements codable tasks. In the following, we illustrate these features using the same examples provided in the previous section. Although we use TypeScript for the examples, a similar syntax can be adopted for Python. The implementation in Python is discussed later in 3.7. _Type-Guided Output Control._ A typical example of a non-codable yet directly answerable task is determining the sentiment of a review. We assume the sentiment can be either positive or negative. Instead of detailing the expected output format in the prompt, we can specify the expected output type in the DSL. 
For instance, the following code is valid AskIt code for the sentiment analysis task:

```
let sentiment = await ask<'positive' | 'negative'>('What is the sentiment of the following review: "The product is fantastic. It exceeds all my expectations."');
```

Here, ask is an API that accepts a prompt and returns a response. The response's type is indicated in the type parameter of ask. In this instance, 'positive' | 'negative' is a union type, which consists of two string literal types and signifies that the response is either 'positive' or 'negative'. This type information aids in generating the prompt for the LLMs. After executing the code, the variable sentiment will be assigned the value 'positive'. await is a keyword that indicates the asynchronous execution of the ask API. The ask API returns a promise, and the await keyword is used to wait for the promise to be resolved. Moreover, a prompt can be parameterized by using a prompt template as an argument for ask. Using a prompt template, the example above can be rewritten as:

```
let sentiment = await ask<'positive' | 'negative'>('What is the sentiment of {{review}}?');
```

Here, review is a string type variable. {{ and }} mark the start and end of a variable in the prompt template, respectively. The variable review captures the symbol declared in the same scope.

_Template-based Function Definitions._ In practical software development, the same task often needs replication. AskIt introduces a mechanism to formulate a function to repeatedly perform the same task. For instance, a function can be designed to return the sentiment of a review:

```
let getSentiment = define<'positive' | 'negative'>('What is the sentiment of {{review}}?');
```

Here, define is an API that accepts a prompt template and returns a function. The type parameter of define determines the return type of the function. The function's parameter is defined in the prompt template. In this example, the function receives a variable named review. The parameter in the template prompt corresponds to the parameter of the function defined with the same name. By giving an actual argument, the defined function can be called as follows:

```
let sentiment = await getSentiment({review: 'The product is fantastic. It exceeds all my expectations.'});
```

Upon execution, sentiment will hold the value 'positive'.

_Code Generation._ As discussed previously, LLMs can be employed for code generation. For example, LLMs can be used to implement a function that appends a review and its sentiment to a CSV file. There's no need to use different APIs for code generation. Our cohesive interface enables function generation:

```
let appendReviewToCsv = define<void>('Append {{review}} and {{sentiment}} as a new row in the CSV file named {{filename}}');
```

Here, filename, review, and sentiment are variables. The above code can be invoked anywhere in the source code.

```
1 askit_api ::= ask | define
2 ask ::= "ask" "<" TYPE ">" "(" prompt_template examples? ")"
3 define ::= "define" "<" TYPE param_types? ">" "(" prompt_template examples? examples? ")"
4 prompt_template ::= STRING_LITERAL
5 param_types ::= "," "{" IDENTIFIER ":" TYPE ("," IDENTIFIER ":" TYPE)* "}"
6 examples ::= "," "[" example ("," example)* "]"
7 example ::= "{" "input" ":" input "," "output" ":" CONSTANT_EXPRESSION "}"
8 input ::= "{" IDENTIFIER ":" CONSTANT_EXPRESSION ("," IDENTIFIER ":" CONSTANT_EXPRESSION)* "}"
```

Listing 1: Syntax of AskIt for TypeScript

Before the code's execution, the DSL compiler, with the assistance of an LLM, will generate a function that appends a review and its sentiment to a CSV file.
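To give a rough idea of what such a generated function could look like, the following Python sketch is one plausible output (AskIt also targets Python; the exact code depends on the LLM, and the use of the standard csv module is our assumption):

```python
import csv

def append_review_to_csv(filename: str, review: str, sentiment: str) -> None:
    """Append a review and its sentiment as a new row of the given CSV file."""
    with open(filename, "a", newline="") as f:
        csv.writer(f).writerow([review, sentiment])
```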
### Syntax

The AskIt syntax builds upon the function call structure of the host programming language. Listing 1 presents the AskIt syntax tailored for TypeScript. For this grammar, we assume that TYPE, STRING_LITERAL, IDENTIFIER, and CONSTANT_EXPRESSION are non-terminal symbols. They denote the type, string literal, identifier, and constant expression in the host programming language, respectively. Upper-case letters represent non-terminal symbols defined by the host language.

The primary APIs provided by AskIt are ask and define (Line 1). The ask API takes the response type of an LLM as a type parameter and takes a prompt template as a function parameter. Optionally, it can also take examples of the task's input-output pairs (Line 2). These examples facilitate few-shot learning (Brown et al., 2020) and provide a way of Programming by Example (PBE) (Gulwani, 2011). In contrast, the define API takes the LLM's response type and optional parameter types as type parameters (Line 3). Like ask, define can incorporate a prompt template and examples. Moreover, define can accept two sets of input-output examples. While the first set is used for few-shot learning, the second set is utilized for validating the generated code. The prompt template is essentially a string literal (Line 4), but it can have placeholders for variables. These placeholders are identifiers enclosed between {{ and }}. The variable name within a placeholder should be a valid identifier of the host programming language. Parameter types are key-value pairs listed within { and }, separated by commas (Line 5). Here, the key signifies the variable name, and the value represents its type in the host language. Examples consist of input-output pairs. They are enclosed within [ and ] and separated by commas (Line 6). Each example, bounded by { and }, has an input key, which links to a task input, and an output key, pointing to the task output (Line 7). An input is a collection of key-value pairs, where each key is a variable name and each value is a constant expression defined by the host language. The output is a standalone constant expression.

### Computation Flow

As a proof of concept, we implemented AskIt for TypeScript and a DSL compiler as a TypeScript compiler plugin. The DSL compiler is triggered when the TypeScript compiler compiles the source code. The source code is written in TypeScript extended with our DSL, and the output of the DSL compiler is TypeScript code. The computational flow of the DSL compiler is illustrated in Figure 3. The left side of the figure shows the computational flow at compilation time, and the right side shows the computational flow at runtime.

When the DSL compiler is triggered, it traverses the Abstract Syntax Tree (AST) of the source code and converts the AskIt APIs to specific functions written in TypeScript. If a call to define is detected and it is a codable task, the DSL compiler generates a function. First, the DSL compiler generates a prompt to ask an LLM to code the task. Then, the prompt is passed to the LLM. The LLM generates a response with code for the task, and the DSL compiler receives the response and parses it to extract the generated code. Finally, the DSL compiler validates the code and stores it. At the same time, the call to define is replaced with a call to the generated function. The DSL compiler also updates calls to ask and define even if they are not codable tasks. The DSL compiler extracts the type information from the type parameter of ask and define and encodes it into data to be used to generate a prompt at runtime.

The user program at runtime consists of the generated functions, the updated user program, and the DSL runtime. When the user program is executed, the updated user program calls the generated functions if the task is codable. If a call to ask or to a function defined by define is detected, the DSL runtime generates a prompt based on the type information extracted at compilation time. Then, the prompt is passed to the LLM. The LLM generates a response that contains the data in the specified type, and the DSL runtime receives the response and parses it to extract the answer.
The DSL compiler extracts the type information from the type parameter of ask and define and encodes them into data to be used to generate a prompt at runtime ( ). The user program at the runtime consists of the generated functions, the updated user program and the DSL runtime. When the user program is executed, the updated user program calls the generated functions if the task is codable. If the call to ask or functions defined by define is detected, the DSL runtime generates a prompt based on the type information extracted at the compilation time ( ). Then, the prompt is passed to the LLM ( ). The LLM generates a response that contains the data in the specified type ( ), and the DSL runtime receives the response and parses it to extract the answer( ). ### Code Generation for Codable Tasks Our DSL compiler generates a function that implements the task specified by the prompt template passed to the define call. This generation occurs at the compilation time of the user program, as shown in Figure 3, where the host language is TypeScript. All calls to define are examined to determine if the task is codable. If it is, the DSL compiler generates a function and replaces the call to define with the generated function. As a result, calls to generated functions are executed without invoking the LLM at runtime. We provide two ways to specify the task for codability by LLMs. The first method allows users to specify the name of a source file containing the call to define. In this case, the DSL compiler generates functions for all the calls to define in the specified source file. The second, Figure 3. Computational flow of _AskIt_ DSL for statically type language more granular method lets the user specify the name of the function to be generated. This function name corresponds to the variable name to which the result of the define call is assigned. For all calls to define designated as codable, the DSL compiler follows these steps to generate a function with an LLM: **Step 1:**: The DSL compiler creates a prompt for the LLM based on the prompt template given to the define call. **Step 2:**: The DSL compiler sends this prompt to the LLM and receives the response from the LLM. **Step 3:**: The DSL compiler parses this response to extract the task's code and validates it. Step 2 and Step 3 are executed multiple times until a generated code passes the validation in Step 3. The validation includes a syntactic check and a semantic check using execution with test examples. The validation using test examples is explained later in 3.6. In Step 1, the DSL compiler formulates a prompt to request the LLM to implement the task. This prompt instructs the LLM to complete the body of the function. The function signature is derived from the type information of the type parameter from the define call. Both the return type and parameter types are obtained from the define call's type parameter. The DSL compiler assigns a unique name to the function and outlines the empty function body for the LLM to fill in. We adopt a one-shot learning approach for function generation. In the generated prompt, we first provide a sample code generation to elucidate the code generation process. Then, we direct the LLM to generate a function implementing the specified task. 
For instance, consider the scenario where the DSL compiler creates a function for the subsequent call to define: ``` letcalculatefactorial=define<number,{n:number}>("Calculatefactorialof{(n)}") ``` The first and second type parameters specify the return type and parameter type of the defined function. From this call, the DSL compiler generates a function whose signature is as follows: Figure 4: Prompt for asking the LLM to code the task functioncalculateFactorial({n}:{n: number}): number ``` We can call this function with a named argument, like calculateFactorial({n: 10}). We adopt named parameters instead of positional parameters since they are more robust for the modification of the prompt. Named parameters are not affected by the appearance order in a template prompt. The return type and parameter types originate from the type parameter of the define call. The DSL compiler assigns a unique name to the function and delineates the empty function for the LLM to complete. The prompt that instructs the LLM to implement the function body is displayed in Figure 4. This prompt comprises three segments. The initial two segments are always the same regardless of the task. They provide the LLM with an example of an input and output. This example entails constructing a function that accepts two numbers and outputs their sum. The initial segment requests the LLM to implement the function. While the function body is empty, the prompt details the task to be done as a comment inside the body. The third segment is the task-specific part and instructs the LLM to implement the given task. This expected response is code that implements the function. The structure of the instruction to the LLM is the same as the instruction in the first segment. In Step 2, the DSL compiler sends the created prompt to the LLM and receives the response from the LLM. This step is executed using a low-level API provided by the LLM. In our implementation, we use OpenAI API for this step. In Step 3, the DSL compiler extracts the code from the response. The LLM's reply is expected to contain the generated function in markdown's code block format: "typescript...". As such, our DSL compiler can extract the function by finding the code block. The DSL compiler checks the code syntactically and, optionally, checks it semantically by executing the function with test examples. ### Interaction with an LLM for Directly Answerable Tasks For each non-codable define and ask call, the DSL compiler just extracts the type information from the type parameter and encodes them into data to be used to generate a prompt at runtime. Calls to functions defined by define and calls to ask are replaced with a call to our DSL runtime that takes the type information as a parameter in addition to the original parameters. Our DSL runtime interacts with an LLM to execute the task specified by the prompt template passed to the define or ask call. The steps of interaction between the DSL runtime and the LLM are as follows: **Step 1:**: The DSL runtime creates a prompt for the LLM based on the prompt template given to the define call. **Step 2:**: The DSL runtime sends this prompt to the LLM and receives the response from the LLM. **Step 3:**: The DSL runtime parses this response to extract the answer and validates it using the type information. 
Step 2 and Step 3 are repeated until an answer of the valid type is available. In Step 1, the AskIt runtime generates a prompt to ask the LLM to perform the task specified by the prompt template passed to the define or ask call. It also uses the return type specified in the type parameter of the define or ask call to generate a prompt that constrains the LLM's response. The difficulty of interacting with the LLM lies in extracting the answer from the LLM's response. LLM responses are typically in natural language, making answer extraction challenging. To address this issue, we constrain the LLM's response to be in JSON (JavaScript Object Notation) format. The core idea of our prompt generation is to leverage the LLM's understanding of the grammar and semantics of programming languages. For instance, an LLM like GPT can comprehend the grammar of JSON. By requesting the LLM to answer in JSON format, we simplify the task of extracting the answer from its response. However, merely specifying the JSON format does not guarantee easy answer extraction, since the JSON structure may vary. This issue can be resolved by constraining the LLM's response to a specific JSON format. Fortunately, LLMs can grasp the semantics of types in programming languages. For instance, GPT can understand the semantics of TypeScript types. Furthermore, TypeScript types are ideal for constraining the JSON structure as they can be viewed as a JSON schema. For instance, the type {x: number; y: number} can be perceived as a JSON schema: it accepts the JSON object {"x": 1, "y": -1} but rejects the JSON object [1, -1]. This approach is retained even when the host language is not TypeScript; our AskIt implementation for Python uses TypeScript types to constrain the LLM's JSON response, even though Python is the host language. As an example, consider a scenario where the LLM's response is expected to be a list of dictionaries. A function might be defined as follows:

```
type Book = { title: string; author: string; year: number }
let getBooks = define<Book[]>("List {{n}} classic books on {{subject}}.")
```

Here, Line 1 defines a type Book and Line 2 defines a function getBooks that returns a list of Book. This function can be invoked as:

```
let csBooks = getBooks({ n: 5, subject: "computer science" })
```

When this function is called during runtime, the DSL runtime creates a prompt as displayed in Listing 2. In Listing 2, the initial line indicates that the response should be in JSON format enclosed in a fenced json code block. Lines 2-4 provide an example of the expected JSON format: we illustrate that the response should contain both an answer and a reason, exemplified by the provided response. Lines 1-4 are a standard statement, always generated regardless of the function's parameters. Lines 5-8 are produced based on the function's type information. The 'reason' is always designated as string, regardless of the function's type information. Conversely, the 'answer' is task-specific. In this instance, the type of 'answer' is delineated as { title: string; author: string; year: number }[] since the function's type information is Book[]. Line 9 is another fixed statement, always generated irrespective of the task description. We instruct the LLM to elucidate its answer in the 'reason' field. This promotes the Chain of Thought (CoT) [Wei et al. 2022b]. Lines 11-12 are constructed based on the prompt template passed to the define call and arguments passed to the function.
{{ and }} in the prompt template are replaced with single quotes (Line 11), and the values of each parameter are appended to the prompt template (Line 12). In Step 2, the DSL runtime sends the prompt to the LLM and receives the response from the LLM. This step uses the low-level API provided by the LLM provider; we use the OpenAI API in our implementation. In Step 3, the DSL runtime parses the response and extracts the answer from it. The response is expected to contain a JSON object. The DSL runtime extracts the JSON object and then checks whether it matches the expected type given to the AskIt API.

### Programming by Example

AskIt introduces support for few-shot learning in accomplishing specific tasks. Few-shot learning is a machine learning technique where a model is trained to perform a task based on a limited number of examples. Incorporating this technique into a programming language effectively transforms it into a form of Programming by Example (PBE) (Gulwani, 2011). PBE allows users to specify tasks by providing input-output examples of that task. Given that AskIt's unified interface caters to both directly answerable tasks and codable tasks, it naturally facilitates PBE for both categories. The define and ask functions in AskIt can optionally accept examples for PBE, which are structured as arrays of input-output pairs. For codable tasks, these examples influence the prompt produced by the DSL compiler, while for directly answerable tasks, they affect the DSL runtime prompt. AskIt's define function further allows another type of example: this secondary category is intended to test the resultant code. These examples are similarly provided as input-output pairs. Drawing parallels from conventional machine learning, the former examples serve as training data, and the latter as test data. If all test examples are successfully passed by the generated code, it is deemed correct. However, if any test example fails, the DSL compiler attempts to regenerate the code until it reaches its maximum retry count. Listing 3 illustrates how to utilize the define function to specify a task by example, specifically adding two numbers in base 2. The developer provides examples of the task rather than explicitly stating that the base is 2. Lines 1-6 offer the training examples, whereas lines 7-10 present the test examples. These training examples guide the prompt creation for the LLM, while the test examples evaluate the generated code's accuracy.

### Implementation for a Dynamically Typed Language

Our DSL compiler can be implemented in a dynamically typed language. In a dynamically typed language, type information is provided at runtime. Hence, the code generation for codable tasks should be done at runtime instead of at compilation time. Our implementation of AskIt for Python is fully realized as a library. The API of AskIt for Python is almost identical to the API of AskIt for TypeScript, except for the following two points:

1. The return type of ask and define is specified as a parameter of the function rather than as a type parameter.

2. Compilation is invoked explicitly by calling the compile method on the function returned by define.

The first point concerns how the type information is provided to define. In the case of the Python implementation, the type is specified by a type object provided as the first argument of the function rather than a type parameter. AskIt for Python offers APIs to create a type object for the return type of ask and define. The provided APIs are listed in Table 2.
The first column is the name of the API, and the second column describes the type created by the API. The third column provides usage examples, and the fourth column indicates the equivalent type in TypeScript. For instance, the same task introduced in Section 3.5 can be implemented in Python as follows:

```
Book = dict({"title": str, "author": str, "year": int})
getBooks = define(list(Book), "List {{n}} classic books on {{subject}}.")
```

The first line defines a type object using the provided APIs. The second line defines a function that returns a list of Book. The second point of difference concerns how code generation is conducted. In Python's case, users must explicitly specify when code generation occurs. For this purpose, functions defined by define return a function object that implements the compile method. When the compile method is invoked, code generation proceeds in the same manner as at the compilation time of AskIt for TypeScript. For instance, the task described in Section 3.4 can be implemented in Python as follows:

```
calculateFactorial = define(int, "Calculate the factorial of {{n}}").compile()
```

When the compile method is invoked, code generation takes place, resulting in the return of a function object that implements the task. The generated code is cached in a file upon its initial creation, ensuring that code generation happens only once, regardless of how many times the compile method is called.

\begin{table} \begin{tabular}{l l l l} \hline \hline API & Description & Usage Example & Equivalent Type in TypeScript \\ \hline int & Integer & int & number \\ float & Floating Point Number & float & number \\ bool & Boolean & bool & boolean \\ str & String & str & string \\ literal & Literal & literal(123) & 123 \\ list & List & list(int) & number[] \\ dict & Dictionary & dict({ ’x’:int, ’y’:int}) & {x: number, y: number} \\ union & Union & union(literal(’yes’),literal(’no’)) & ’yes’ | ’no’ \\ \hline \hline \end{tabular} \end{table} Table 2. Types and their examples

## 4. Experimental Evaluation

To evaluate the effectiveness of AskIt, we conducted a series of experiments. Each experiment was designed to answer distinct questions about our DSL, specifically targeting different task types:

* Codable tasks:
  * RQ1: How does AskIt reduce the LOC required to implement codable tasks?
  * RQ2: Are examples of tasks effective for improving the accuracy of generated code?
* Directly answerable tasks:
  * RQ3: How does AskIt reduce the LOC of prompt generation for directly answerable tasks?
* Intersecting tasks:
  * RQ4: How do the speed and performance of functions generated by AskIt compare to those of the same function before code generation?

To address these questions, we carried out three different experiments, one for each task category.

### Codable Tasks

To address RQ1 and RQ2, we designed an experiment that involved implementing a set of 50 tasks using AskIt. To ensure these tasks were both relevant and realistic, we enlisted the help of ChatGPT. Specifically, we inquired about the 50 most commonly requested TypeScript coding tasks. These 50 tasks subsequently served as the foundation for our implementation in TypeScript and Python using AskIt. To verify the correctness of the generated code, we supplied AskIt with example tests for each task. If a test failed, AskIt would attempt code regeneration up to a predefined maximum retry limit, which was set to 9. In this experiment, we specified "gpt-3.5-turbo-16k" as the backend LLM for AskIt. Our results are presented in Table 3. The first column enumerates the 50 tasks.
The table's second column displays the template prompt used in both the TypeScript and Python implementations. The third column indicates the return type utilized in the define call for each task. The fourth column delineates the parameter types utilized in the define call for each task; we only list parameter types for TypeScript since the Python implementation does not use parameter types. Columns five and six enumerate the lines of code (LOC) in the generated TypeScript code and the associated retries. LOC counts only substantive lines, omitting empty lines or comment-only lines. The next two columns present analogous details for Python. On average, AskIt produced 7.56 lines for TypeScript and 6.52 lines for Python. Considering that each AskIt function definition resulted in a single line, an effective reduction of 6.56 and 5.52 lines was achieved for TypeScript and Python, respectively. Although all tasks were successfully rendered in TypeScript, tasks #11 and #21-24 encountered issues in Python. This stems from the Python variant of AskIt not leveraging parameter types for prompt generation for the LLM. For instance, in Task #11 for Python, we presumed the parameter type of xs was Array, whereas the generated code assumed it was a set. The retry count for successful code generation varied between 0 and 7 for Python. In a few instances, the initially generated code did not pass the example test. Even though the retry count seems negligible, it is important to recognize that it is not consistently zero. This indicates that the LLM can occasionally produce erroneous code. As an example, the code for Task #14 in Python failed its initial run, computing the Fibonacci numbers up to n + 1 rather than n, necessitating seven retries. Thus, supplying AskIt with task examples is vital for assuring the precision of the generated code.

### Directly Answerable Tasks

To answer RQ3, we transformed existing prompts for LLMs into AskIt prompts tailored for directly answerable tasks. We then compared the lengths of the original prompts with those of the AskIt prompts. Our source of these prompts was the OpenAI Evals2. The OpenAI Evals repository contains over 300 benchmarks, representing real-world use cases of LLMs. Notably, a majority of these benchmarks originate from real-world LLM users. Footnote 2: [https://github.com/openai/evals](https://github.com/openai/evals) Each benchmark in the repository consists of multiple test cases. In turn, each test case includes a prompt and the anticipated LLM response. For this experiment, we restricted our focus to the first 50 benchmarks from OpenAI Evals. Additionally, we selected only the first test case from each benchmark, given that all the test cases within a particular benchmark share a similar type but with varying inputs. Our modification process for the AskIt prompts involved eliminating superfluous information. This includes phrases dictating the LLM's response format or prompting the LLM to elucidate its answer's rationale; the AskIt prompt inherently incorporates such information. For instance, the original prompt for the benchmark _2d_movement.dev.v0_ contains explicit instructions about the format of the LLM's output generation.
We remove such statements. Instead, we detail the expected LLM response type within the AskIt prompt. In the aforementioned case, the response type is designated as { x: number, y: number }. Because most of the benchmarks cannot be solved by GPT-3.5 and GPT-4, we only ensured that our modified prompt yielded an output format congruent with the LLM's expected response, as delineated in the test case. Table 4 presents the results. The first column lists the benchmark names, the second indicates the return type utilized with the AskIt interface, the third specifies the original prompt's length, and the fourth provides the length of the AskIt prompt. The fifth and sixth columns enumerate the reduction in the number of characters and the percentage reduction in the number of characters, respectively. On average, we observed a reduction of 16.14% in character count from the original prompt. The second column exclusively displays the top-level type of the benchmark's return type. For instance, a return type of number[] would be represented simply as list in the table. All the types deployed in the benchmarks are presented in Figure 5. Notably, literal types are absent from Table 4 since they predominantly appear in conjunction with the union type. For instance, in a choice quiz-solving benchmark, choices are represented as a union of literal types, as evidenced by the return type "A" | "B" | "C" | "D" for such benchmarks.

### Intersecting Tasks

One of the benefits of the unified interface provided by AskIt is that it improves the performance of intersecting tasks by using the LLM to generate code for the task, without rewriting the prompt template. In this section, to answer RQ4, we compare the performance of the function that uses the LLM to generate code for the task with that of the function that uses the LLM to answer the task directly. We used the GSM8K (Cobbe et al., 2021) benchmark, a dataset of high-quality grade school math word problems. This benchmark, developed by OpenAI, allows for the evaluation of language model performance on multi-step mathematical reasoning tasks. An example of a problem in the GSM8K benchmark is as follows: James decides to run 3 sprints 3 times a week. He runs 60 meters each sprint. How many total meters does he run a week? We converted numerical values surrounded by spaces in the problem description into variables, since the generated programs are often reused with different values in a practical setting. For example, the above problem is converted into a TypeScript program in which the numbers 3, 3, and 60 become parameters.
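As an illustration only, a minimal sketch of what such a converted program could look like using AskIt's define interface is shown below; the variable names, the 'ts-askit' import path, and the exact template wording are our assumptions rather than the paper's original listing.

```
// Sketch only: import path and identifiers are assumed, not taken from the paper.
import { define } from 'ts-askit';

// The numbers 3, 3 and 60 in the problem statement become the parameters a, b, c.
let solveSprints = define<number, { a: number; b: number; c: number }>(
  "James decides to run {{a}} sprints {{b}} times a week. He runs {{c}} meters each sprint. How many total meters does he run a week?"
);

// The original values serve as a test example for the generated code:
// 3 sprints * 3 times a week * 60 meters = 540 meters.
let total = solveSprints({ a: 3, b: 3, c: 60 });
```

When such a call is marked as codable, the DSL compiler replaces it with generated arithmetic code, which is what the timing comparison in this section measures; otherwise the call queries the LLM directly at runtime.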
Figure 5. Number of uses for each type

\begin{table} \begin{tabular}{l l l r r r} \hline \hline Prompt ID & Return Type & Original Length & Reduced Length & Absolute Reduction & Percentage Reduction \\ \hline 2d\_movement\_dev.v0 & dict & 506 & 387 & 119 & 23.52 \\ 3d\_globe\_movement\_dev.v0 & str & 633 & 549 & 84 & 13.27 \\ Unfamiliar-Chinese-Charaterderdev.v0 & dict & 136 & 136 & 0 & 0.00 \\ aba\_mpre\_time\_fake\_dev.v0 & bool & 233 & 222 & 11 & 4.72 \\ abstraet-causal-reasoning-symbolic\_dev.v0 & union & 383 & 270 & 113 & 29.50 \\ abstraetTitle\_test\_v1 & union & 1531 & 1523 & 8 & 0.52 \\ actors\_seqare\_dev.v0 & list & 403 & 304 & 99 & 24.57 \\ abtr\_stage\_laws\_dev.v0 & list & 193 & 142 & 51 & 26.42 \\ afrikans-lexicon\_dev.v0 & bool & 130 & 90 & 40 & 30.77 \\ alme\_evaluation\_dev.v0 & number & 208 & 152 & 56 & 26.92 \\ allegrue-word-problem\_s1.simple-v0 & number & 211 & 136 & 75 & 35.55 \\ allegrue-information\_dev.v0 & union & 246 & 246 & 0 & 0.00 \\ allegrue\_numerical\_systems\_dev.v0 & str & 411 & 211 & 220 & 48.66 \\ ambiguous-sentences\_dev.v0 & str & 134 & 134 & 0 & 0.00 \\ anagrams\_test\_v1 & str & 74 & 74 & 0 & 0.00 \\ arc\_dev.v0 & str & 1070 & 1070 & 0 & 0.00 \\ arithmetic-expression-meta\_dev.v0 & str & 601 & 441 & 160 & 26.62 \\ arithmetic puzzles\_dev.v0 & number & 992 & 850 & 142 & 14.31 \\ acsl-digit-recognition\_dev.v0 & number & 496 & 409 & 87 & 17.54 \\ acsl-wordart\_dev.v0 & str & 815 & 807 & 8 & 0.98 \\ acsl-classifiers\_dev.v0 & bool & 403 & 242 & 161 & 39.95 \\ arpl\_examples\_dev.v0 & union & 693 & 625 & 68 & 9.31 \\ automata-and complexity\_dev.v0 & bool & 283 & 128 & 155 & 54.77 \\ backgamming-ll2-move\_dev.v0 & bool & 743 & 621 & 122 & 16.42 \\ balance-chemical-equation\_dev.v0 & union & 296 & 218 & 78 & 26.35 \\ base64-decode-simple\_dev.v0 & str & 377 & 326 & 51 & 13.53 \\ beam analysis\_dev.v0 & number & 234 & 184 & 50 & 21.37 \\ belarusian-ramdev.v0 & bool & 165 & 150 & 15 & 9.09 \\ belarusian-lexicon\_dev.v0 & bool & 108 & 93 & 15 & 13.89 \\ belarusian-numern\_dev.v0 & number & 246 & 179 & 67 & 27.24 \\ belarusian-orongraphy\_dev.v0 & str & 218 & 218 & 0 & 0.00 \\ belarusian-proverba\_dev.v0 & str & 240 & 145 & 95 & 39.58 \\ belarusian-rhnyer\_dev.v0 & union & 180 & 159 & 21 & 11.67 \\ belarusian-resistant\_translation\_dev.v0 & str & 347 & 347 & 0 & 0.00 \\ belarusian-syllable-count\_dev.v0 & number & 169 & 127 & 42 & 24.85 \\ belarusian-synonyms\_dev.v0 & bool & 183 & 168 & 15 & 8.20 \\ benaminaminome\_to\_hex.v0 & str & 66 & 66 & 0 & 0.00 \\ best\_dev.v0 & str & 35 & 35 & 0 & 0.00 \\ bigrams\_dev.v0 & number & 300 & 300 & 0 & 0.00 \\ bitwise\_dev.v0 & str & 502 & 502 & 0 & 0.00 \\ blackfoot-numerals-modern\_dev.v0 & number & 707 & 305 & 402 & 56.86 \\ body-movement\_dev.v0 & union & 510 & 465 & 45 & 8.82 \\ born\_first\_dev.v0 & bool & 91 & 76 & 15 & 16.48 \\ brazil\_lectron\_dev & bool & 138 & 98 & 40 & 28.99 \\ brazil\_last\_v1 & str & 1068 & 1068 & 0 & 0.00 \\ building\_floorplan\_test\_v1 & number & 960 & 870 & 90 & 9.38 \\ Bulgarian-lexicon\_dev.v0 & bool & 134 & 94 & 40 & 29.85 \\ canto\_wu\_communication\_dev.v0 & str & 301 & 301 & 0 & 0.00 \\ canto\_wu\_communication\_fewshot\_dev.v0 & str & 445 & 45 & 0 & 0.00 \\ \hline \hline \end{tabular} \end{table} Table 4. Lengths of the original prompts and the AskIt prompts for the OpenAI Evals benchmarks, with absolute and percentage reductions

We used the original values as test examples to check the correctness of the generated program. We execute the program in the following two ways: (1) directly executing the program generated by AskIt, and (2) compiling the program generated by AskIt and executing the compiled program.
If the generated program failed to pass the test example, we retried up to 9 times to generate the correct program. The GSM8K benchmark consists of training data and test data. We only use the test data for our evaluation, since we use GPT without fine-tuning on the training data. Hence, each problem in the test data is solved using GPT in a zero-shot setting. The test data contained 1,319 problems. We used "gpt-4" as the backend LLM for AskIt in this experiment. All the time measurements were conducted on a machine with an Apple M1 CPU and 16GB of RAM. In TypeScript, 1,138 problems were solved directly by GPT-4. In Python, GPT-4 directly solved 1,159 problems. The difference is not significant; since both implementations use the same GPT model and the same prompt, the difference seems to come from the randomness of the response of GPT-4. We use these 1,138 and 1,159 problems for program generation. We successfully generated the program for 1,114 and 1,134 problems in TypeScript and Python, respectively. The results are shown in Table 5 and Table 6 for TypeScript and Python, respectively. The first column shows the retry count to obtain a result of the expected type. The second column shows the latency of GPT to answer the problem. The third column shows the execution time of the program generated by AskIt. The fourth column shows the time for generating the correct program. This time includes the time for validating the generated program and retrying to generate the correct program.
The fifth column shows the number of retries to generate the correct program. The sixth column shows the speedup ratio of the execution time of the generated program to the latency of GPT. On average, the generated code answered the problems 275,092.55 and 6,969,904.73 times faster in TypeScript and Python, respectively, than using the LLM directly to answer the problem. While the speedup ratio is different between TypeScript and Python, the generated code is significantly faster than the LLM in both cases.

\begin{table} \begin{tabular}{l l l l l l l} \hline \hline & \multicolumn{1}{c}{Retry Count} & \multicolumn{1}{c}{Latency [s]} & \multicolumn{1}{c}{Execution Time [us]} & \multicolumn{1}{c}{Compilation Time [s]} & \multicolumn{1}{c}{Recompilation Count} & \multicolumn{1}{c}{Speedup} \\ \hline count & 1,114.00 & 1,114.00 & 1,114.00 & 1,114.00 & 1,114.00 & 1,114.00 \\ mean & 0.29 & 13.28 & 49.11 & 14.19 & 0.17 & 275,092.55 \\ min & 0.00 & 3.23 & 35.50 & 4.01 & 0.00 & 18,120.40 \\ max & 9.00 & 119.58 & 334.83 & 155.78 & 9.00 & 2,463,390.15 \\ \hline \hline \end{tabular} \end{table} Table 5. Performance evaluation of generated programs in TypeScript using the GSM8K benchmark

\begin{table} \begin{tabular}{l l l l l l l} \hline \hline & \multicolumn{1}{c}{Retry Count} & \multicolumn{1}{c}{Latency [s]} & \multicolumn{1}{c}{Execution Time [us]} & \multicolumn{1}{c}{Compilation Time [s]} & \multicolumn{1}{c}{Recompilation Count} & \multicolumn{1}{c}{Speedup} \\ \hline count & 1,134.00 & 1,134.00 & 1,134.00 & 1,134.00 & 1,134.00 & 1,134.00 \\ mean & 0.35 & 22.97 & 5.09 & 20.38 & 0.12 & 6,969,904.73 \\ min & 0.00 & 4.08 & 0.50 & 3.90 & 0.00 & 15,069.51 \\ max & 9.00 & 202.53 & 877.04 & 647.88 & 9.00 & 75,961,275.56 \\ \hline \hline \end{tabular} \end{table} Table 6. Performance evaluation of generated programs in Python using the GSM8K benchmark

## 5. Related Work

### Programming Support for LLMs

LMQL (Beurer-Kellner et al., 2023) is a query language specifically designed for large language models (LLMs), combining natural language prompts with the expressiveness of Python. It provides features such as constraints, debugging, retrieval, and control flow to facilitate interaction with LLMs. LMQL offers full Python support, enabling powerful control flow and logic in prompting. LMQL allows model developers to declare logical constraints governing model output. These get turned into "token-level prediction masks" - tokens being what LLMs deal with. While it supports type constraints, the supported types are limited and not integrated with the type system of the underlying programming language. For example, LMQL does not support the ability to define custom types. LLMChain3 is a library that provides a prompt template that supports parameters like AskIt. It generates prompts by filling in the template with the parameters. While LLMChain provides an apply_and_parse function to parse the response of the LLM, the user needs to specify the parser to extract the answer from the response. On the other hand, AskIt automatically parses the response and extracts the answer based on the type information. LMQL and LLMChain also do not support code generation as AskIt does. While code can be generated using them if the user writes the prompt to do so, the generated code cannot be seamlessly integrated into the rest of the program.
Another approach to integrating LLMs into programming is to enable LLMs to use APIs so that they can access broader and more dynamic knowledge bases, as well as perform complex computational tasks. The challenge is the complexity of integrating millions of changing APIs, which can have overlapping functionalities and nuanced limitations. Gorilla (Patil et al., 2023) proposes using self-instruct fine-tuning and retrieval to enable LLMs to accurately select from large, overlapping, and changing sets of tools expressed via their APIs and API documentation.

### Underlying Large Language Models (LLMs)

In our work, we utilize GPT-3.5 and GPT-4 (OpenAI, 2023) as the underlying LLMs. There exist other LLMs specialized in code generation, such as Code Llama 4, CodeWhisperer 5, and CodeT5 (Wang et al., 2021). These models are fine-tuned on extensive code corpora. Interestingly, their parameter count tends to be smaller compared to general-purpose LLMs, such as GPT-3.5 and GPT-4. Incorporating these specialized LLMs into AskIt represents a promising avenue for future exploration.

Footnote 4: [https://ai.meta.com/research/publications/code-llama-open-foundation-models-for-code/](https://ai.meta.com/research/publications/code-llama-open-foundation-models-for-code/)

Footnote 5: [https://aws.amazon.com/jp/codewhisperer/](https://aws.amazon.com/jp/codewhisperer/)

### Programming by Example

An innovative combination of PBE with natural language processing was presented by Raza et al. (Raza et al., 2015). Their work focused on compositional program synthesis, allowing users to provide natural language descriptions alongside examples. Yin and Neubig (Yin and Neubig, 2017) showcased a syntactic neural model for general-purpose code generation. Their approach utilized a combination of natural language and examples, echoing the capabilities of the earlier work by Raza but with a focus on leveraging the representational power of neural networks. AskIt can also support PBE with a prompt written in natural language, and it integrates this capability with a programming language. Traditional PBE approaches are more task-specific. One of the earliest significant works in this field was by Gulwani (Gulwani, 2011), which revolved around automating string processing tasks in spreadsheets using input-output examples. The system, while narrowly focused on spreadsheet transformations, laid the foundation for the integration of PBE into widely used software like Microsoft Excel, a feature known as FlashFill. Expanding on the PBE paradigm, Gulwani et al. (Gulwani et al., 2011) delved into the synthesis of loop-free programs. This approach emphasized the creation of more complex programs through PBE without the complexities introduced by loops. In contrast to traditional, task-specific PBE approaches, AskIt, grounded in the capabilities of LLMs, offers users the flexibility to handle a broader array of tasks that LLMs can accommodate.

## 6. Conclusion

In this paper, we introduced a domain-specific language (DSL), _AskIt_. AskIt provides a unified interface for interacting with large language models (LLMs) for various tasks. The unified interface supports (1) type-guided output control of LLMs, (2) template-based function definition, (3) code generation for codable tasks, and (4) programming by example. We implemented AskIt for TypeScript and Python and evaluated both implementations with three different experiments.
2310.00764
Equilibria and bifurcations in contact dynamics
We provide a systematic study of equilibria of contact vector fields and the bifurcations that occur generically in 1-parameter families, and express the conclusions in terms of the Hamiltonian functions that generate the vector fields. Equilibria occur at points where the zero-level set of the Hamiltonian function is either singular or is tangent to the contact structure. The eigenvalues at an equilibrium have an interesting structure: there is always one particular real eigenvalue of any equilibrium, related to the contact structure, that we call the principal coefficient, while the other eigenvalues arise in quadruplets, similar to the symplectic case except they are translated by a real number equal to half the principal coefficient. There are two types of codimension 1 equilibria, named Type I, arising where the zero-set of the Hamiltonian is singular, and Type II where it is not, but there is a degeneracy related again to the principal coefficient and the contact of the zero level-set of the Hamiltonian with the contact structure. Both give rise generically to saddle-node bifurcations. Some special features include: (i) for Type II singularities, Hopf bifurcations cannot occur in dimension 3, but they may in dimension 5 or more; (ii) for Type I singularities, a fold-Hopf bifurcation can occur with codimension 1 in any dimension, and (iii) again for Type I, and in dimension at least 5, a fold-multi-Hopf bifurcation (where several pairs of eigenvalues pass through the imaginary axis simultaneously together with one through the origin) may also occur with codimension 1.
James Montaldi
2023-10-01T19:04:57Z
http://arxiv.org/abs/2310.00764v1
# Equilibria and bifurcations in contact dynamics

###### Abstract

We provide a systematic study of equilibria of contact vector fields and the bifurcations that occur generically in \(1\)-parameter families, and express the conclusions in terms of the Hamiltonian functions that generate the vector fields. Equilibria occur at points where the zero-level set of the Hamiltonian function is either singular or is tangent to the contact structure. The eigenvalues at an equilibrium have an interesting structure: there is always one particular real eigenvalue of any equilibrium, related to the contact structure, that we call the principal coefficient, while the other eigenvalues arise in quadruplets, similar to the symplectic case except they are translated by a real number equal to half the principal coefficient. There are two types of codimension 1 equilibria, named Type I, arising where the zero-set of the Hamiltonian is singular, and Type II where it is not, but there is a degeneracy related again to the principal coefficient and the contact of the zero level-set of the Hamiltonian with the contact structure. Both give rise generically to saddle-node bifurcations. Some special features include: (i) for Type II singularities, Hopf bifurcations cannot occur in dimension 3, but they may in dimension 5 or more; (ii) for Type I singularities, a fold-Hopf bifurcation can occur with codimension 1 in any dimension, and (iii) again for Type I, and in dimension at least 5, a fold-multi-Hopf bifurcation (where several pairs of eigenvalues pass through the imaginary axis simultaneously together with one through the origin) may also occur with codimension 1.

_MSC 2020_: 37G10; 37J55; 53E50; _Keywords_: bifurcations, contact structure, Hamiltonian system

###### Contents

* 1 Background
* 2 Equilibria
* 2.1 Non-degenerate equilibria in dimension 3
* 2.2 Non-degenerate equilibria in higher dimensions
* 2.3 Hopf bifurcation
* 2.4 Degenerate equilibria
* 2.5 Dependence on \(\eta\)
* 2.6 Principal coefficients of contact diffeomorphisms
* 3 Degeneracy of Type I
* 3.1 Type I fold singularity
* 3.2 Fold singularity in \(\mathbb{R}^{3}\)
* 3.3 Type I saddle-node bifurcations
* 4 Degeneracy of Type II
* 4.1 Type II fold singularity
* 4.2 Fold singularity in \(\mathbb{R}^{3}\)
* 4.3 Type II saddle-node bifurcations
* 5 Legendre vector fields
* A Recognizing fold singularities

## Introduction

A contact vector field on a contact manifold is one whose flow preserves the contact structure. We study the most elementary aspects of the dynamics of such vector fields; namely, equilibria, their stability and their generic 1-parameter bifurcations. There has been considerable interest in contact vector fields in recent years, in several different directions. For example, they play a role in thermodynamics (see for example A. Bravetti [7] and references therein as well as D. Gromov [15]), in Hamiltonian-like systems with dissipation, both classical [19] and quantum [9], in fluid dynamics [12, 14], and others. An interesting application of contact geometry to neuroscience can be found in an article of Petitot [25]. For more details, examples and discussions see the review by Bravetti [6]. There are few dynamical studies beyond setting up a model, though Gromov and Cairnes [16] consider dynamics for a diatomic gas, Liu et al. [21] consider periodic motion in some restricted settings, and a recent paper of Entov and Polterovich [10] discusses trajectories with special properties.
It was also found by Bravetti and Tapias [5] that there is an invariant measure defined on the open dense subset of the open submanifold where the Hamiltonian is non-zero, and Bravetti et al. [4] consider the type of dynamics on the complement, that is where \(H=0\). A contact structure \(\xi\) on a manifold \(M\) consists of, for each \(x\in M\), a hyperplane \(\xi(x)\subset T_{x}M\) such that this hyperplane field is maximally non-integrable. The easiest way to define the maximally non-integrable property is to choose any (local) 1-form \(\eta\) such that \(\ker\eta(x)=\xi(x)\) in the domain of \(\eta\) (these are called contact 1-forms), and the non-integrability requirement is that the volume form \(\eta\wedge(\mathrm{d}\eta)^{n}\) should not vanish anywhere (this is independent of the choice of contact 1-form \(\eta\)). For general background on contact structures, the reader can consult [1, 2, 3, 11, 20]. In many areas, for example thermodynamics and jet bundles, the form \(\eta\) plays a primary role, with \(\xi\) being a secondary construction. On the other hand, many authors put \(\xi\) in the forefront, and choose a contact form for calculational convenience. A further approach is taken by Grabowska and Grabowski [13]; they explicitly put \(\xi\) at the forefront by considering the line bundle \(\xi^{\circ}\subset T^{*}M\) whose fibre is the annihilator of \(\xi\). Any contact form \(\eta\) for \(\xi\) is a section of this bundle. Hamiltonians are then defined to be functions on this line bundle that are homogeneous of degree 1. In other words, they consider all possible contact 1-forms together. We begin the paper by recalling the basic well-known properties of contact geometry and contact vector fields, and describe how every contact vector field has a unique Hamiltonian function which generates it. In §2 we begin the study of equilibria: what are the conditions on the Hamiltonian for a point to be an equilibrium, and what are the conditions for an equilibrium to be non-degenerate? It turns out that generically equilibria occur where the zero-level of the Hamiltonian is tangent to the contact hyperplane. A central result (Theorem 2.4 and its corollary) states that at an equilibrium one of the eigenvalues, which we call the principal coefficient, is real and related to the Reeb vector field, and the others arise in quadruplets similar to the symplectic case, but translated by a real number equal to one half the principal coefficient. We also show how Hopf bifurcations can arise in dimension at least 5. We identify two ways in which an equilibrium can degenerate. The first, which we call Type I degeneracy, arises where \(H\) has a critical point on its zero-level set, and the second, Type II, when the restriction of the (amended) Hessian to the contact plane is degenerate. The following two sections describe the nature of the Type I and Type II degeneracies, respectively, together with a brief analysis of the resulting saddle-node bifurcations. We end with a short discussion of Legendre vector fields; that is, vector fields on Legendre submanifolds. In particular we show that, given any vector field on a Legendre submanifold, there is an extension of it to a contact vector field on the ambient contact manifold. A consequence of this is that the bifurcation theory of Legendre vector fields is the same as for ordinary vector fields in that dimension. The paper ends with a short appendix containing an elementary singularity theoretic calculation for recognizing folds and their versal unfoldings.
## 1 Background

Here we establish some notation used throughout. Every object we consider will be assumed to be smooth. Let \((M,\xi)\) be a contact manifold, with \(\dim M=2n+1\). This means that \(\xi\) is a subbundle of \(TM\) of rank \(2n\) which is maximally non-integrable. The non-integrability condition is most easily expressed in terms of 1-forms vanishing on \(\xi\): let \(\eta\) be any 1-form on \(M\) satisfying \(\ker\eta=\xi\) (possibly defined locally). Then the non-integrability condition states that \(\eta\wedge(\mathrm{d}\eta)^{n}\) is nowhere zero. For details, see for example [1, 2, 3, 11, 20]. We say that a 1-form \(\eta\) with the property that at each point of its domain of definition, \(\ker\eta=\xi\) is a _contact form_ for \(\xi\). Such an \(\eta\) is determined up to non-zero scalar multiples: that is, if \(\eta_{1},\eta_{2}\) are contact forms for \(\xi\) then there is a nowhere zero function \(f\) on \(M\) such that \(\eta_{2}=f\eta_{1}\). We write \(\mathcal{X}_{\xi}\) for the space of vector fields on \(M\) whose flow preserves the contact structure. Let \(X\in\mathcal{X}_{\xi}\). The easiest way of determining whether a given vector field \(X\) on \(M\) preserves \(\xi\) is to introduce a contact 1-form \(\eta\). One sees that \(X\) preserves \(\xi\) if and only if there is a function \(f\) (possibly zero) such that \(L_{X}\eta=f\eta\).

**Definition 1.1**.: Given a contact 1-form \(\eta\) on \((M,\xi)\) the _Reeb vector field_\(\mathcal{R}\) is uniquely determined by the conditions \[\iota_{\mathcal{R}}\eta=1\quad\text{and}\quad\iota_{\mathcal{R}}\mathrm{d}\eta=0.\] Note that the Reeb vector field is dependent on the choice of contact form; indeed not even its direction is intrinsically associated to \(\xi\). Introducing a contact form \(\eta\) allows one to state the following well-known criterion.

**Proposition 1.2**.: _Let \(Y\) be any (smooth) vector field on \(M\), let \(\eta\) be any contact 1-form for \((M,\xi)\) and let \(h=-\eta(Y)\). Then \(Y\in\mathcal{X}_{\xi}\) if and only if_ \[\iota_{Y}\mathrm{d}\eta=\mathrm{d}h-\mathcal{R}(h)\eta,\] _where \(\mathcal{R}\) is the Reeb vector field associated to \(\eta\) and \(\mathcal{R}(h)\) is the derivation of \(h\) along the vector field \(\mathcal{R}\); that is, \(\mathcal{R}(h)(x)=\mathrm{d}h_{x}(\mathcal{R}(x))\)._

Proof.: Let \(Y\) be a vector field on \(M\). We have \(L_{Y}\eta=\mathrm{d}(\iota_{Y}\eta)+\iota_{Y}\mathrm{d}\eta=-\mathrm{d}h+\iota_{Y}\mathrm{d}\eta\). That is, for any vector field \(Y\), \[\iota_{Y}\mathrm{d}\eta=\mathrm{d}h+L_{Y}\eta.\] Suppose first that \(Y\) preserves \(\xi\); that is, \(L_{Y}\eta=f\eta\) for some function \(f\). Now \(\iota_{\mathcal{R}}\mathrm{d}\eta=0\), and hence \[0=\iota_{\mathcal{R}}\iota_{Y}\mathrm{d}\eta=\iota_{\mathcal{R}}(\mathrm{d}h+f\eta)=\mathcal{R}(h)+f,\] whence \(f=-\mathcal{R}(h)\). That is, any vector field \(Y\in\mathcal{X}_{\xi}\) satisfies \(L_{Y}\eta=-\mathcal{R}(h)\eta\), and hence the expression for \(\iota_{Y}\mathrm{d}\eta\) follows. Conversely, if \(\iota_{Y}\mathrm{d}\eta=\mathrm{d}h-\mathcal{R}(h)\eta\) then \(L_{Y}\eta=-\mathrm{d}h+\mathrm{d}h-\mathcal{R}(h)\eta=-\mathcal{R}(h)\eta\) so that \(Y\in\mathcal{X}_{\xi}\).

**Definition 1.3**.: Let \(X\) be a contact vector field on \((M,\xi)\) and let \(\eta\) be a contact 1-form. The function \(H=-\iota_{X}\eta\) is called the _Hamiltonian_ of the vector field (associated to \(\eta\)).
We will usually take \(\eta\) as given, but if \(\eta\) were replaced by \(f\eta\) for some non-zero smooth function \(f\), then \(H\) would be replaced by \(fH\). By Proposition 1.2, the Hamiltonian satisfies \[\iota_{X}\mathrm{d}\eta=\mathrm{d}H-\mathcal{R}(H)\eta. \tag{1.1}\] The definition gives a linear map \(\mathcal{X}_{\xi}\to C^{\infty}(M,\mathbb{R})\), \(X\mapsto-\iota_{X}\eta\). This is in fact an isomorphism, whose inverse is as follows. Given a 'Hamiltonian' function \(H\), the associated vector field \(X=X_{H}\) is defined implicitly by the equations \[\eta(X_{H}) = -H \tag{1.2a}\] \[\iota_{X_{H}}\mathrm{d}\eta = \mathrm{d}H-\mathcal{R}(H)\,\eta \tag{1.2b}\] The first equation determines the normal component of \(X_{H}\), and the second the 'tangential' component (i.e. the component on \(\xi\)). Proposition 1.2 shows that such vector fields preserve the contact structure. The fact that this is an isomorphism allows us to parametrize the space of smooth contact vector fields by smooth functions on \(M\), and so one would like to describe any property of the vector field in terms of its Hamiltonian function. For example, from the definition of \(H\), it follows that \(X_{H}(x_{0})\in\xi(x_{0})\) if and only if \(H(x_{0})=0\). Note that \(X_{H+C}=X_{H}-C\mathcal{R}\), so adding a constant to \(H\) changes the dynamics. In particular the Reeb vector field itself is the contact vector field associated to the constant function \(H=-1\). One important contrast with Hamiltonian vector fields on a symplectic manifold is that in general the Hamiltonian is not a conserved quantity. Indeed, applying (1.2b) to \(X_{H}\) gives \(0=\mathrm{d}H(X_{H})-\mathcal{R}(H)\eta(X_{H})\) which by (1.2a) leads to \[\frac{\mathrm{d}}{\mathrm{d}t}H=\mathrm{d}H(X_{H})=-\mathcal{R}(H)\,H. \tag{1.3}\] In particular only \(H^{-1}(0)\) is an invariant level set in general. This formula for \(\frac{\mathrm{d}}{\mathrm{d}t}H(t)\) shows also that \(H^{-1}(0)\) is attracting if and only if \(\mathcal{R}(H)>0\) along this hypersurface; this is important when using contact vector fields to model dissipation. Another useful property of the Reeb vector field is that it determines whether the form \(\eta\) is preserved by \(X\): using Cartan's formula, one shows that if \(X\in\mathcal{X}_{\xi}\) then \[L_{X}\eta=-\mathcal{R}(H)\eta\] where \(H=-\eta(X)\). In particular \(X\) preserves \(\eta\) if and only if \(\mathcal{R}(H)=0\), which in turn is equivalent, by (1.3), to \(X_{H}\) preserving every level set of the Hamiltonian; such vector fields are often called _strict_ contact, or _conservative_, vector fields. Moreover, if \(\nu=\eta\wedge(\mathrm{d}\eta)^{n}\) is the contact volume form, then it follows from the above that \[L_{X}\nu=-(n+1)\mathcal{R}(H)\nu.\]

**Darboux coordinates.** Recall (for a proof see for example [11, 20]) that if \(x_{0}\in M\) then there is a neighbourhood of \(x_{0}\) and coordinates \(q_{1},\ldots,q_{n},p_{1},\ldots,p_{n},z\) such that \[\eta=\mathrm{d}z-p_{i}\mathrm{d}q_{i} \tag{1.4}\] (where the summation convention is understood). These are called _Darboux_ or _canonical coordinates_. For this \(\eta\), the contact hyperplane \(\xi\) has basis \[\{\partial_{p_{j}},\,\partial_{q_{j}}+p_{j}\partial_{z}\}\quad(j=1,\ldots,n). \tag{1.5}\] Notice that on \(\xi\) this basis is _canonical_, in the sense that \[\mathrm{d}\eta\,(\partial_{q_{j}}+p_{j}\partial_{z},\,\partial_{p_{i}})=\delta_{ij},\quad\text{etc.},\] where \(\delta_{ij}\) is the Kronecker delta.
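As a quick consistency check (our own routine computation, for the case \(n=1\)), the 1-form (1.4) is indeed a contact form and the vectors (1.5) lie in its kernel:
\[
\mathrm{d}\eta=\mathrm{d}q\wedge\mathrm{d}p,\qquad
\eta\wedge\mathrm{d}\eta=(\mathrm{d}z-p\,\mathrm{d}q)\wedge\mathrm{d}q\wedge\mathrm{d}p=\mathrm{d}z\wedge\mathrm{d}q\wedge\mathrm{d}p\neq 0,
\]
\[
\eta(\partial_{p})=0,\qquad \eta(\partial_{q}+p\,\partial_{z})=-p+p=0.
\]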
On \(\mathbb{R}^{2n+1}\) with canonical/Darboux coordinates as above, we have \(\mathcal{R}=\partial_{z}\). To describe the relation between \(H\) and \(X_{H}\) one can use the method of coefficients. Given a (Hamiltonian) function \(H\), write \(X_{H}=a_{i}\partial_{q_{i}}+b_{i}\partial_{p_{i}}+c\partial_{z}\). Then equations (1.2) give \[c-p_{i}\,a_{i}=-H,\quad\text{and}\quad a_{i}\mathrm{d}p_{i}-b_{i}\mathrm{d}q_{i}=(H_{q_{i}}\mathrm{d}q_{i}+H_{p_{i}}\mathrm{d}p_{i}+H_{z}\mathrm{d}z)-H_{z}(\mathrm{d}z-p_{i}\mathrm{d}q_{i}).\] Equating coefficients shows that \[X_{H}\,=\,H_{p_{j}}\partial_{q_{j}}-(H_{q_{j}}+p_{j}H_{z})\partial_{p_{j}}+(p_{j}H_{p_{j}}-H)\partial_{z}. \tag{1.6}\] Or, as equations of motion (\(j=1,\ldots,n\)), \[\left\{\begin{array}{rcl}\dot{q}_{j}&=&H_{p_{j}}\\ \dot{p}_{j}&=&-H_{q_{j}}-p_{j}H_{z}\\ \dot{z}&=&p_{j}H_{p_{j}}-H.\end{array}\right. \tag{1.7}\]

**Not Poisson brackets.** In the more familiar symplectic setting, the Poisson brackets are defined by \(\{H,f\}:=X_{H}(f)\) - the derivative of \(f\) along the vector field \(X_{H}\) associated to the Hamiltonian \(H\). A key property (following from the skew-symmetry of the symplectic form) is that \(\{g,\,f\}=-\{f,g\}\). In the contact setting, one can of course define a 'bracket' in the same way, but it is no longer skew-symmetric. A simple calculation shows \[X_{H}(f)=\{H,f\}_{\{p_{i},q_{i}\}}+p_{i}\,\{H,f\}_{\{p_{i},z\}}-Hf_{z}, \tag{1.8}\] where, given variables \(x,y\), we write \(\{H,f\}_{\{x,y\}}=H_{x}f_{y}-H_{y}f_{x}\), following the notation of [8]. The lack of skew-symmetry is in the final term; in particular it is a derivation of \(f\) but not of \(H\). The expression for \(X_{H}(H)\) recovers the one in (1.3).

**Remark 1.4**.: It is perhaps a natural question to ask about linear contact vector fields on \((\mathbb{R}^{2n+1},\eta)\) (or perhaps unnatural since \(\eta\) is not linear). Using these Darboux coordinates, it is straightforward to check that the space of linear contact vector fields is only \((n^{2}+1)\)-dimensional. With coordinates \((q_{i},p_{j},z)\) they have matrix \[L=\begin{pmatrix}A&0&0\\ 0&-A^{T}+aI_{n}&0\\ 0&0&a\end{pmatrix},\] where \(a\in\mathbb{R}\) and \(A\) is any \(n\times n\) real matrix; the Hamiltonian of this vector field is \(H=p^{T}Aq-az\). As a Lie algebra, this is isomorphic to \(\mathfrak{gl}_{n}(\mathbb{R})\times\mathbb{R}\). See Remark 2.10 for an extension to weighted homogeneity.

## 2 Equilibria

Let \((M,\xi)\) be a contact manifold, and let \(X\) be a contact vector field. Our study is local (in neighbourhoods of equilibria) and it will be convenient to fix a contact \(1\)-form \(\eta\), which one can always do locally. Let \(H=-\iota_{X}\eta\) be the associated Hamiltonian. Equilibria occur where \(X=0\). The definition of \(X=X_{H}\) in (1.2) then yields the following conditions on the Hamiltonian function at an equilibrium point, \[H=0,\quad\mathrm{d}H=\mathcal{R}(H)\,\eta.\] Notice that the second equation in particular implies \(\mathrm{d}H\) is parallel to \(\eta\), and if at a point \(x\in M\), \(\mathrm{d}H_{x}=-\tau\eta_{x}\) then \(\mathcal{R}(H)=\mathrm{d}H(\mathcal{R})=-\tau\eta(\mathcal{R})=-\tau\), so the second equation is in fact equivalent to \(\mathrm{d}H\) being parallel to \(\eta\) and hence equivalent to \(\mathrm{d}H\big{|}_{\xi}=0\). This shows,

**Proposition 2.1**.: _Let \((M,\xi)\) be a contact manifold and \(\eta\) a (local) choice of contact form. Suppose \(X\in\mathcal{X}_{\xi}\) and \(H=-\eta(X)\) is the associated Hamiltonian function.
A point \(x_{0}\in M\) is an equilibrium point of the vector field if and only if_ \[H(x_{0})=0\quad\text{and}\quad\mathrm{d}H_{x_{0}}=-\tau\eta_{x_{0}},\] _for some \(-\tau\in\mathbb{R}\). In this case \(\tau=-\mathcal{R}(H)(x_{0})\)._ (We use \(-\tau\) here to compensate for the minus sign in the definition of \(H\).) At an equilibrium point, we call the quantity \(\tau=-\mathcal{R}(H)(x_{0})\) the _principal coefficient_ of the equilibrium (we will see below that it is an eigenvalue). It is not hard to show (see Proposition 2.9) that this depends only on \(X\) and not on the choice of contact \(1\)-form \(\eta\). We note that for conservative contact vector fields, the principal coefficient always vanishes. The proposition has a simple geometric interpretation. Namely, equilibria occur where either \(H\) has a critical point on \(H^{-1}(0)\) or the contact hyperplane is tangent to the zero level-set of the Hamiltonian. **Proposition 2.2**.: _Using the notation of the previous proposition, the equilibrium point \(x_{0}\) is non-degenerate if_ 1. _the principal coefficient_ \(\tau\neq 0\)_, and_ 2. _the bilinear form on the contact hyperplane_ \(\xi(x_{0})\) _given by_ \[\left(\mathbf{u},\mathbf{v}\right)\,\longmapsto\,\mathrm{D}_{\mathbf{u}}\left( \mathrm{d}H+\tau\eta\right)\left(\mathbf{v}\right)\] _is non-degenerate._ Here we use \(\mathrm{D}\) to denote the ordinary derivative, as distinct from the exterior derivative. Note that if \(\alpha\) is a \(1\)-form and \(\alpha(x_{0})=0\) then \(D\alpha(\mathbf{u})=D_{\mathbf{u}}\alpha\) (the derivative of \(\alpha\) in the direction \(\mathbf{u}\)) is a well-defined quantity in the cotangent space at \(x_{0}\); that is, it is independent of any choice of coordinates. Moreover, in any coordinates, one has \[\mathrm{D}(\mathrm{d}H+\tau\eta)=\mathrm{D}^{2}H+\tau\mathrm{D}\eta.\] The first term of this bilinear form is the Hessian of \(H\) (which does depend on coordinates, unless \(H\) is singular at this point, in which case \(\tau=0\)). **Definition 2.3**.: We call the bilinear form \(\mathrm{D}\left(\mathrm{d}H+\tau\eta\right)\) on \(\xi\) at an equilibrium point the _amended Hessian_ of \(H\), and we denote it \(\mathrm{Hess}^{\prime}\), or \(\mathrm{Hess}^{\prime}(H)\) if needed. To clarify the definition of the amended Hessian we can use local coordinates. Let \(\eta=a_{i}\mathrm{d}x^{i}\). Then degeneracy of the bilinear form means \(\exists\mathbf{u}\in\xi\), \(\mathbf{u}\neq 0\), such that \[\forall\mathbf{v}\in\xi,\ \ \ \left(\frac{\partial^{2}H}{\partial x^{j}\partial x ^{i}}+\tau\frac{\partial a_{i}}{\partial x^{j}}\right)u^{j}v^{i}=0.\] In particular, using Darboux coordinates on \(\mathbb{R}^{2n+1}\) about the point in question, with \(\eta=\mathrm{d}z-p_{j}\mathrm{d}q_{j}\), the amended Hessian is the \(2n\times 2n\) matrix, \[\mathrm{Hess}^{\prime}=\begin{pmatrix}H_{qq}&H_{pq}-\tau I_{n}\\ H_{qp}&H_{pp}\end{pmatrix} \tag{2.1}\] where \(H_{qq}\) is the \(n\times n\) block \(H_{q_{i}q_{j}}\) etc. It is clear from this expression that the amended Hessian is not in general symmetric; indeed, it may have complex eigenvalues. 
For points other than the origin in Darboux coordinates, we can use the basis for \(\xi\) given in (1.5), for which the expression for the amended Hessian becomes \[\mathrm{Hess}^{\prime}=\begin{pmatrix}H_{q_{i}q_{j}}+2p_{i}H_{q_{j}z}+H_{zz}p _{i}p_{j}&H_{z}\delta_{ij}+H_{p_{i}q_{j}}+p_{i}H_{p_{j}z}\\ H_{p_{j}q_{i}}+p_{i}H_{p_{j}z}&H_{p_{i}p_{j}}\end{pmatrix}.\] Proof.: We prove this using the Hamiltonian; further below we see an argument using the vector field. The equations for an equilibrium in Proposition 2.1 are equations of \((x,-\tau)\in M\times\mathbb{R}\) (or \(\mathbb{R}^{2n+1}\times\mathbb{R}\)). Differentiating these equations in the direction of \(\mathbf{u}\) gives \[\mathrm{d}H(\mathbf{u})=0,\ \ \ \mathrm{D}^{2}H(\mathbf{u})+\tau\mathrm{D}_{ \mathbf{u}}\eta=-\hat{\tau}\eta\] where \(\mathrm{D}_{\mathbf{u}}\eta\) is the derivative of \(\eta\) in the \(\mathbf{u}\) direction, and \(\hat{\tau}\in\mathbb{R}\). Firstly, if \(\tau=0\) then the first equation is void, and there are thus \(2n+1\) equations in \(2n+2\) variables, and the equations are degenerate. However, if \(\tau\neq 0\) then the first equation tells us \(\mathbf{u}\in\xi\). For a given \(\mathbf{u}\in\xi\), the existence of \(\hat{\tau}\) satisfying the second equation is equivalent to having a zero of the restriction of the linear form \((\mathrm{D}^{2}H(\mathbf{u})+\tau\mathrm{D}_{\mathbf{u}}\eta)\) to \(\xi\). Thus non-degeneracy is equivalent to the following bilinear form on \(\xi\) being non-degenerate: \[(\mathbf{u},\mathbf{v})\longmapsto\mathrm{D}^{2}H(\mathbf{u},\mathbf{v})+\tau \mathrm{D}_{\mathbf{u}}\eta(\mathbf{v}).\] Now consider the linearization \(L:T_{x_{0}}M\to T_{x_{0}}M\) of the vector field \(X_{H}\) at an equilibrium point \(x_{0}\). **Theorem 2.4**.: _Let \(x_{0}\in M\) be an equilibrium point of \(X_{H}\), with principal coefficient \(\tau\), and let \(L\) be the linear part of the vector field at \(x_{0}\). Then,_ 1. \(L\) _leaves_ \(\xi\) _invariant; we will denote the restriction to_ \(\xi\) _by_ \(L_{\xi}\) _._ 2. _The linear vector field_ \(L_{\xi}-\frac{1}{2}\tau\,I_{\xi}\) _on_ \(\xi\) _is Hamiltonian, where_ \(I_{\xi}\) _is the identity map on_ \(\xi\)_, and the symplectic structure on_ \(\xi\) _is given by the 2-form_ \(\mathrm{d}\eta\)_; the Hamiltonian function is given by the symmetric part of the amended Hessian._ The simplest proof of this statement uses the expression for \(L\) in local Darboux coordinates. Since this expression will be useful later, we calculate it here. Differentiating the local expression (1.7) for the vector field, using Darboux coordinates \((q_{j},p_{j},z)\) one finds (here \(j\) denotes the row and \(i\) the column within each block), \[\mathrm{D}(X_{H})=\begin{pmatrix}H_{p_{j}q_{i}}&H_{p_{j}p_{i}}&H_{p_{j}z}\\ -H_{q_{j}q_{i}}-p_{j}H_{q_{i}z}&-H_{q_{i}p_{j}}-H_{z}\delta_{ij}-p_{j}H_{p_{i}z }&-H_{q_{j}z}-p_{j}H_{zz}\\ p_{k}H_{q_{i}p_{k}}-H_{q_{i}}&p_{k}H_{p_{i}p_{k}}&p_{k}H_{p_{k}z}-H_{z}\end{pmatrix}.\] Evaluating this at the origin, which we assume to be an equilibrium point, gives the linear part of the vector field: \[L=\begin{pmatrix}H_{p_{j}q_{i}}&H_{p_{j}p_{i}}&H_{p_{j}z}\\ -H_{q_{i}q_{j}}&-H_{q_{i}p_{j}}+\tau\delta_{ij}&-H_{q_{j}z}\\ 0&0&\tau\end{pmatrix}. \tag{2.2}\] For the record, we note that the trace of this matrix is given by \[\mathrm{tr}\,L=(n+1)\tau. 
\tag{2.3}\] It will also be useful to have the expression for the restriction \(L_{\xi}\): \[L_{\xi}=\begin{pmatrix}H_{p_{j}q_{i}}&H_{p_{j}p_{i}}\\ -H_{q_{i}q_{j}}&-H_{q_{i}p_{j}}+\tau\delta_{ij}\end{pmatrix}. \tag{2.4}\] Proof.: (i) The invariance of \(\xi\) under \(L\) follows from the zeros in the bottom row of \(L\) in (2.2) (more geometrically it follows from the fact that the flow preserves the distribution \(\xi\), which also implies that \(\eta(x_{0})\) is an eigen-covector, or left eigenvector, of \(L\); see §2.6 below). (ii) The restriction of \(L\) to \(\xi\) is given in (2.4). Hence, \[L_{\xi}-\tfrac{1}{2}\tau\,I_{\xi}=\begin{pmatrix}H_{p_{j}q_{i}}-\tfrac{1}{2}\tau\delta_{ij}&H_{p_{j}p_{i}}\\ -H_{q_{i}q_{j}}&-H_{q_{i}p_{j}}+\tfrac{1}{2}\tau\delta_{ij}\end{pmatrix}.\] Let \(J=\begin{pmatrix}0&\delta_{ij}\\ -\delta_{ij}&0\end{pmatrix}\) -- the matrix associated to the symplectic structure \(\mathrm{d}\eta=\mathrm{d}q_{i}\wedge\mathrm{d}p_{i}\) -- then \[J\left(L_{\xi}-\tfrac{1}{2}\tau\,I_{\xi}\right)=\begin{pmatrix}H_{q_{i}q_{j}}&H_{q_{i}p_{j}}-\tfrac{1}{2}\tau\delta_{ij}\\ H_{p_{j}q_{i}}-\tfrac{1}{2}\tau\delta_{ij}&H_{p_{j}p_{i}}\end{pmatrix},\] which is the Hessian matrix at the origin of \(H-\tfrac{1}{2}\tau\,p_{i}\,q_{i}\), restricted to \(\xi\). That is, \(L_{\xi}-\tfrac{1}{2}\tau\,I_{\xi}\) is the linear vector field on \(\xi\) associated to this quadratic Hamiltonian (see e.g. [2]). Finally we see that this Hessian matrix is precisely the symmetric part of the amended Hessian (2.1). It is well-known [1, 2] that eigenvalues of a Hamiltonian (infinitesimally symplectic) matrix arise in quadruplets \(\{\pm\lambda,\pm\tilde{\lambda}\}\) (not necessarily all distinct). It follows from the theorem that the eigenvalues of \(L_{\xi}-\frac{1}{2}\tau\,I_{\xi}\) arise in these symplectic quadruplets and the following result is then immediate.

**Corollary 2.5**.: _One of the eigenvalues of a contact equilibrium is equal to the principal coefficient \(\tau\), while the others arise in quadruplets of the form_ \[\left\{\tfrac{1}{2}\tau\pm\lambda,\tfrac{1}{2}\tau\pm\tilde{\lambda}\right\}.\] _The eigenvalue \(\tau\) corresponds to the eigen-covector \(\eta\), while the others arise from the restriction to \(\xi\)._

We call these _contact quadruplets_ of eigenvalues. Note that if \(\tau\neq 0\) at most 2 members of such a quadruplet may vanish or be pure imaginary. The eigenvalue \(\tau\) we also call the _principal eigenvalue_. Recall that an equilibrium point of a vector field is non-degenerate provided the linear part has no zero eigenvalues. It follows immediately from (2.2) that this is equivalent to,

1. \(\tau\neq 0\), and

2. in local Darboux coordinates about \(x_{0}\), the \(2n\times 2n\) matrix at \(x_{0}\), \[\begin{pmatrix}H_{qq}&H_{pq}-\tau I_{n}\\ H_{qp}&H_{pp}\end{pmatrix}\] is invertible, where we write \(H_{qq}\) for the \(n\times n\) matrix (\(H_{q_{i}q_{j}}\)) evaluated at \(x_{0}\), etc.; this matrix is the amended Hessian (Definition 2.3) in local Darboux coordinates.

This is equivalent to the non-degeneracy described in Proposition 2.2. One sees for example that the equilibrium at the origin in \(\mathbb{R}^{3}\) for the Hamiltonian \(H=z+pq\) is non-degenerate, while the one for \(H=z-pq\) is degenerate (the eigenvalues can be read off the matrix \(L\) in Remark 1.4).
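For concreteness, here is the computation behind this example (our own verification, using the Darboux expression (2.1) with \(n=1\)); in both cases \(\tau=-H_{z}(0)=-1\neq 0\), so degeneracy hinges on the amended Hessian:
\[
H=z+pq:\qquad \mathrm{Hess}^{\prime}=\begin{pmatrix}H_{qq}&H_{pq}-\tau\\ H_{qp}&H_{pp}\end{pmatrix}=\begin{pmatrix}0&2\\ 1&0\end{pmatrix},\qquad \det\mathrm{Hess}^{\prime}=-2\neq 0,
\]
\[
H=z-pq:\qquad \mathrm{Hess}^{\prime}=\begin{pmatrix}0&0\\ -1&0\end{pmatrix},\qquad \det\mathrm{Hess}^{\prime}=0.
\]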
### Non-degenerate equilibria in dimension 3 Using Darboux coordinates in a neighbourhood of the origin, we consider the general Hamiltonian assuming the origin is an equilibrium point and expanded to order 2: \[H=-\tau z+Aq^{2}+Bqp+Cp^{2}+Dqz+Epz+Fz^{2}+O(3). \tag{2.5}\] Note that \(\tau\) is the principal coefficient (or eigenvalue) of \(H\) at the origin. The amended Hessian (Definition 2.3) for this Hamiltonian is \[\mathrm{Hess}^{\prime}:=\begin{pmatrix}2A&B-\tau\\ B&2C\end{pmatrix}.\] For non-degeneracy, we require \(\tau\neq 0\) and \(\det\operatorname{Hess}^{\prime}\neq 0\). We remark that the eigenvalues of the amended Hessian are \((A+C)\pm\sqrt{(A-C)^{2}+B(B-\tau)}\), which are complex if \(B\tau\) is sufficiently large (positive). The linear part of the vector field at the origin is, by (2.2), \[L=\begin{pmatrix}B&2C&E\\ -2A&-B+\tau&-D\\ 0&0&\tau\end{pmatrix}. \tag{2.6}\] This has determinant \(-\tau(B^{2}-B\tau-4AC)\). If this is non-zero then the origin is an isolated non-degenerate equilibrium. The eigenvalues of \(L\) are \[\tau,\quad\frac{1}{2}\left(\tau\pm\sqrt{(2B-\tau)^{2}-16AC}\right).\] The equilibrium is _asymptotically stable_ if all three eigenvalues have negative real part. Thus we have (recall \(\tau\) is the principal coefficient or eigenvalue), **Theorem 2.6**.: _The origin is an asymptotically stable equilibrium if_ \[\tau<0\quad\text{and}\quad B^{2}-B\tau-4AC<0. \tag{2.7}\] _If either of the inequalities is reversed then the equilibrium is unstable._ ### Non-degenerate equilibria in higher dimensions From (2.2), we see that \(\operatorname{tr}(L)=(n+1)\tau\), so \(\tau<0\) is a necessary condition for the asymptotic stability of an equilibrium point. As discussed in Corollary 2.5, one eigenvalue is \(\tau\) and the others arise in contact quadruplets, which are of the form \[\left\{\tfrac{1}{2}\tau\pm\lambda,\,\tfrac{1}{2}\tau\pm\tilde{\lambda}\right\}.\] **Theorem 2.7**.: _Suppose an equilibrium has negative principal coefficient (\(\tau<0\)) and the symmetric part of the amended Hessian is positive or negative definite, then the equilibrium is asymptotically stable._ This sufficient condition is certainly not necessary in general. Proof.: If the symmetric part of the amended Hessian is definite, then all the eigenvalues of the associated Hamiltonian system from Theorem 2.4 are pure imaginary. In this case, and with \(\tau<0\), the eigenvalues in the contact quadruplets all have negative real part equal to \(\tau/2\). ### Hopf bifurcation Recall that in a dynamical system, a Hopf (or Andronov-Hopf) bifurcation occurs when a pair of eigenvalues of a non-degenerate equilibrium passes through the imaginary axis [17, 18]. This gives rise to the existence of a family of periodic orbits emanating from the equilibrium point. Using the information above it is straightforward to construct examples of Hopf bifurcation in systems with dimension at least 5. In dimension 3 it follows from the structure of the contact quadruplets that a simple Hopf bifurcation is not possible (on the other hand a fold-Hopf bifurcation is possible -- see §3.3 below).
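As a sanity check on the criterion (2.7), one can compare it against the spectrum of (2.6) for randomly sampled coefficients. The following numpy sketch is ours and purely illustrative; the entries \(D,E\) are set to zero since \(L\) is block triangular and they do not affect the eigenvalues.

```python
# Illustrative check (not from the paper) that criterion (2.7) matches the spectrum of (2.6).
import numpy as np

rng = np.random.default_rng(0)
for _ in range(10000):
    A, B, C, tau = rng.uniform(-2, 2, size=4)
    crit = B**2 - B*tau - 4*A*C
    if abs(tau) < 1e-6 or abs(crit) < 1e-6:
        continue                      # skip borderline samples
    L = np.array([[   B,      2*C,      0.0],
                  [-2*A, -B + tau,      0.0],
                  [ 0.0,      0.0,      tau]])
    stable = bool(np.all(np.linalg.eigvals(L).real < 0))
    assert stable == (tau < 0 and crit < 0)
print("all samples consistent with (2.7)")
```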
An explicit example in dimension 5 is to let \[H_{\lambda}=z+p_{1}q_{2}-q_{1}p_{2}+2\lambda q_{1}p_{1}+O(3).\] The origin is an equilibrium point with principal coefficient \(-1\) (for all \(\lambda\)); the corresponding principal coefficient is \(-1\), while the other eigenvalues are \[\lambda\pm\sqrt{-1+\lambda^{2}},\quad-1-\lambda\pm\sqrt{-1+\lambda^{2}}.\] When \(\lambda=0\) these form the contact quadruplet \(\{\pm i,\,-1\pm i\}.\) As \(\lambda\) varies, the first two cross the imaginary axis with non-zero velocity (their real part is equal to \(\lambda\)), as shown in Figure 2.1. Without the \(O(3)\) terms, this system is linear (see Remark 1.4) and this would give rise to a'vertical' Hopf bifurcation, meaning that the periodic orbits all occur for \(\lambda=0\) (in fact in the \(q_{1}q_{2}\) plane). The addition of suitable higher order terms would make it a sub- or super-critical Hopf bifurcation. Note that for \(\lambda<0\) (small) the origin is asymptotically stable, while for \(\lambda>0\) it is unstable. ### Degenerate equilibria It follows from Proposition 2.2 that there are two distinct ways in which an equilibrium can be degenerate. We call them Type I and Type II degeneracies as follows. **Definition 2.8**.: A degenerate equilibrium \(x_{0}\) with simple zero eigenvalue, of a contact Hamiltonian system is of * _Type I_ if the principal coefficient vanishes (in this case \(H\) has a critical point at \(x_{0}\)); * _Type II_ if the amended Hessian matrix is degenerate. Figure 2.1: Contact quadruplet exhibiting a Hopf bifurcation in dimension 5 (the grey dot represents the principal coefficient) — see §2.3. We will see that these are generically of codimension 1. Higher codimension degeneracies can occur that combine the two types, but we do not study these in this paper. There follow below two parallel sections, one on Type I singularities and the other on Type II singularities. The analysis of the first type is the more straightforward, because the condition for a fold singularity only depends on the 2-jet of the Hamiltonian, while for Type II it depends on its 3-jet. In each section, we begin with a general discussion of the singularities in \(\mathbb{R}^{2n+1}\) and then follow it with a section on the 3-dimensional cases. Before we proceed with that analysis, we address the question of the dependence of the principal coefficient and amended Hessian on the choice of contact form. ### Dependence on \(\eta\) Recall that, given a contact manifold \((M,\xi)\) and a contact vector field \(X\), the Hamiltonian itself depends on the choice of contact form \(\eta\). We show directly that the principal coefficient of \(X\) at an equilibrium point is independent of the choice of contact form, and the amended Hessian is well-defined up to scalar multiple (although the first part also follows from the fact that the principal coefficient is \(-2\) times the principal coefficient of the vector field). **Proposition 2.9**.: _Let \(X\) be a contact vector field on the contact manifold \((M,\xi)\), and let \(x_{0}\in M\) be an equilibrium point._ 1. _The principal coefficient_ \(\tau\) _of_ \(X\) _at_ \(x_{0}\) _is independent of the choice of contact form._ 2. _The amended Hessian is well-defined up to scalar multiple. 
More precisely, if_ \(\eta_{1},\eta_{2}\) _are two 1-forms representing_ \(\xi\)_, so that_ \(\eta_{2}=f\eta_{1}\) _for some non-zero function_ \(f\)_, then the bilinear forms on_ \(\xi\) _at the equilibrium point_ \(x_{0}\) _satisfy_ \(\operatorname{Hess}^{\prime}_{2}=f(x_{0})\operatorname{Hess}^{\prime}_{1}\)_._ Proof.: (i) Since \(\tau\) is an eigenvalue of the vector field, it does not depend on any choice arising from the contact form \(\eta\). It also follows from the fact that \(\operatorname{tr}(L)=(n+1)\tau\), see (2.3). (ii) For \(\mathbf{u},\mathbf{v}\in\xi\), the amended Hessians are defined by \[\operatorname{Hess}^{\prime}_{j}(\mathbf{u},\mathbf{v})=\operatorname{D}_{ \mathbf{u}}(\operatorname{d}H_{j}+\tau\eta_{j})(\mathbf{v}).\] Now, \(\eta_{2}=f\eta_{1}\) implies \(H_{2}=fH_{1}\), and hence \[\operatorname{d}H_{2}=f\operatorname{d}H_{1}+H_{1}\operatorname{d}f\] and then \[\operatorname{d}H_{2}+\tau\eta_{2}=f(\operatorname{d}H_{1}+\tau\eta_{1})+H_{ 1}\operatorname{d}f.\] Thus (in any coordinate system), \[\operatorname{D}\left(dH_{2}+\tau\eta_{2}\right)=f\operatorname{D}( \operatorname{d}H_{1}+\tau\eta_{1})+\operatorname{d}f\otimes(\operatorname{d }H_{1}+\tau\eta_{1})+\operatorname{d}H_{1}\otimes\operatorname{d}f+H_{1} \operatorname{D}^{2}f.\] Then, at the equilibrium point \(x_{0}\) and restricting to \(\xi\), all but the first term vanish, showing that \[\operatorname{Hess}^{\prime}_{2}=f(x_{0})\operatorname{Hess}^{\prime}_{1}\] as required. **Remark 2.10**.: Perhaps more natural than Remark 1.4 is to assign weights to the Darboux coordinates: \[\operatorname{wt}(q_{i})=\operatorname{wt}(p_{i})=1,\,\operatorname{wt}(z)=2.\] Then \(\eta\) is homogeneous of degree \(2\). Given a Hamiltonian function which is homogeneous of degree \(d\), the vector field has degree \(d-1\) in the first \(2n\) components and degree \(d\) in the last component, meaning that the vector field itself is homogeneous of degree \(d-2\). From (1.8) one sees that if \(H\) and \(f\) are weighted homogeneous, with \(\deg(H)=d\), \(\deg(f)=r\), then \(X_{H}(f)\) has degree \(r+d-2\). For example, let \(H\) be the general Hamiltonian of weighted degree \(2\) in \(\mathbb{R}^{3}\), \[H=-\tau z+Aq^{2}+Bqp+Cp^{2}\] \((A,B,C,\tau\in\mathbb{R})\), then the corresponding vector field is \[X_{H}=\begin{pmatrix}Bq+2Cp\\ (\tau-B)\,p-2Aq\\ -A\,q^{2}+Cp^{2}+\tau z\end{pmatrix},\] which is of weighted degree \(0\). ### Principal coefficients of contact diffeomorphisms Here we remark on a geometric view of the principal coefficient. Given a contact manifold \((M,\xi)\), denote by \(\xi^{\circ}\subset T^{*}M\) the line bundle of linear forms vanishing on \(\xi\) (that is, \(\xi^{\circ}\) is the annihilator of \(\xi\)). Now any diffeomorphism \(\Phi\) of \(M\) preserving \(\xi\) will also preserve \(\xi^{\circ}\). **Definition 2.11**.: Suppose \(x_{0}\in M\) is a fixed point of such a contact diffeomorphism. Then it (or rather its cotangent lift) maps \(\xi^{\circ}(x_{0})\) to itself, acting by scalar multiplication. We call the corresponding scalar the _principal coefficient_ of the diffeomorphism at the fixed point. Recall that if \(\Phi\) is a diffeomorphism then the cotangent lift \(\Phi^{*}\) is given by \[\langle\Phi^{*}(\alpha_{y}),\,v_{x}\rangle:=\langle\alpha_{y},\,\mathrm{d}\Phi _{x}(v_{x})\rangle\,,\] where \(y=\Phi(x)\), \(\alpha_{y}\in T^{*}_{y}M\) and \(v_{x}\in T_{x}M\).
If, in a neighbourhood of a fixed point, we chose a contact \(1\)-form \(\eta\), then the contact diffeomorphism maps \(\eta\) to another contact \(1\)-form, which is of the form \(f\,\eta\), where \(f\) is a non-vanishing function; that is \(\Phi^{*}\eta=f\,\eta\). The principal coefficient of the diffeomorphism \(\Phi\) at a fixed point \(x_{0}\) is then just \(f(x_{0})\). This value is clearly independent of the choice of contact form. Now suppose \(X\in\mathcal{X}_{\zeta}\) is a contact vector field and \(\Phi_{t}\) its flow. Let \(x_{0}\) be an equilibrium point of \(X\). Choosing an arbitrary contact form \(\eta\) in a neighbourhood of \(x_{0}\), let \(\Phi_{t}^{*}\eta=f_{t}\,\eta\). Let \(\tau\) be the principal coefficient of \(X\). Then we have \(f_{t}(x_{0})=\exp(t\tau)\), and \[\tau=\frac{\mathrm{d}}{\mathrm{d}t}\,f_{t}(x_{0})\big{|}_{t\,=\,0}.\] ## 3 Degeneracy of Type I A type I degeneracy of an equilibrium is one where the principal coefficient vanishes; in other words it arises at a singular point of the zero-level of \(H\). We consider now the conditions for this to be a simple degeneracy (i.e., a fold). First we assume the zero eigenvalue of the linear part of the vector field at the equilibrium is simple, and then we ask when the singularity is of fold type. ### Type I fold singularity With \(\tau=0\), the linear part \(L\) takes the form (see (2.2)) \[L=\begin{pmatrix}H_{pq}&H_{pp}&H_{pz}\\ -H_{qq}&-H_{qp}&-H_{qz}\\ 0&0&0\end{pmatrix}. \tag{3.1}\] Here \(H_{qq}=\left(H_{q_{i}q_{j}}\right)\) etc. This clearly has corank at least \(1\), and as stated above, we begin by requiring that \(0\) is a simple eigenvalue, which in particular means the matrix has corank \(1\). More precisely it requires that the top left \(2n\times 2n\) block \(L_{\xi}\) be non-degenerate (equivalently, the Hessian of the Hamiltonian on \(\xi\) be non-degenerate). **Theorem 3.1**.: _Suppose a contact dynamical system with Hamiltonian \(H\) has a degenerate equilibrium with vanishing principal coefficient. The singularity is a fold if,_ \[\Delta_{2}:=H_{zq}\mathbf{a}_{q}+H_{zp}\mathbf{a}_{p}+H_{zz}\neq 0\] _where \(\mathbf{a}_{q},\mathbf{a}_{p}\in\mathbb{R}^{n}\) satisfy_ \[\left\{\begin{array}{rcl}H_{qq}\mathbf{a}_{q}+H_{qp}\mathbf{a}_{p}+H_{qz}&=& 0,\\ H_{pq}\mathbf{a}_{q}+H_{pp}\mathbf{a}_{p}+H_{pz}&=&0.\end{array}\right. \tag{3.2}\] _Furthermore, in this case, the family \(H_{\lambda}=H-\lambda\) is a versal unfolding of the singularity (a saddle-node bifurcation)._ Explicitly, the \(i^{\text{th}}\) component of the first equation in (3.2) is \[\sum_{j}\left(H_{q_{1}q_{j}}(\mathbf{a}_{q})_{j}+H_{q_{i}p_{j}}(\mathbf{a}_{p })_{j}\right)+H_{q_{i}z}=0.\] The components of the second equation and the expression for \(\Delta_{2}\) are similar. Proof.: We apply Lemma A.1 from the appendix. To do so we need non-zero vectors \(\mathbf{v}\in\text{coker}L\) and \(\mathbf{a}\in\ker L\) to check whether \(\mathbf{v}\mathbf{D}^{2}(X_{H})\mathbf{a}^{2}\neq 0\). Now \(\mathbf{v}=(0,0,\ldots,0,1)\) is clearly a non-zero element of the cokernel (ie, \(\mathbf{v}L=0\)). To find \(\mathbf{a}\in\ker L\) we know \(\mathbf{a}\not\in\xi\) so we can choose it of the form \(\mathbf{a}=(\mathbf{a}_{q},\,\mathbf{a}_{p},\,1)^{T}\) with \(\mathbf{a}_{q},\mathbf{a}_{p}\in\mathbb{R}^{n}\). Then (3.2) is precisely the condition that \(\mathbf{a}\in\ker L\). By Lemma A.1, we require \(\mathbf{v}\mathrm{D}^{2}(X_{H})\mathbf{a}^{2}\neq 0\). 
Now \(\mathbf{v}\mathrm{D}^{2}(X_{H})\) is the Hessian matrix of the final component \(\mathbf{v}\,X_{H}\) of \(X_{H}\), so let \(f=\mathbf{v}\,X_{H}=p_{j}H_{p_{j}}-H\) (a Legendre transform of \(H\)). Then, at the origin, one finds \[\mathrm{D}^{2}f=\begin{pmatrix}-H_{qq}&0&-H_{zq}\\ 0&H_{pp}&0\\ -H_{qz}&0&-H_{zz}\end{pmatrix}.\] In order that \(\mathbf{v}\mathrm{D}^{2}(X_{H})\mathbf{a}^{2}\neq 0\), we require \[\mathbf{a}^{T}\left[\mathrm{D}^{2}f\right]\,\mathbf{a}\neq 0.\] Expanding that in terms of \(\mathbf{a}_{q}\) and \(\mathbf{a}_{p}\), and simplifying using (3.2) gives the result. ### Fold singularity in \(\mathbb{R}^{3}\) We translate the condition of the theorem above to conditions on the coefficients in the Taylor series for \(H\). In this case (Type I) the theorem above shows we only need the Taylor series to order 2 (at an equilibrium point). With vanishing principal coefficient, the lowest order terms of the Hamiltonian at an equilibrium point (in Darboux coordinates) are quadratic: \[H_{0}=Aq^{2}+B\,qp+Cp^{2}+Dqz+Epz+Fz^{2}+O(3). \tag{3.3}\] Recall that the linearization at the origin is (with \(\tau=0\)) \[L=\begin{pmatrix}B&2C&E\\ -2A&-B&-D\\ 0&0&0\end{pmatrix}.\] Consider the two quantities, \[\left\{\begin{array}{ll}\Delta_{1}:=&B^{2}-4AC\\ \Delta_{2}:=&(B^{2}-4AC)F+AE^{2}-BDE+CD^{2}.\end{array}\right. \tag{3.4}\] If \(\Delta_{1}\neq 0\) then \(0\) is a simple eigenvalue of \(L\). We assume this from now on. **Corollary 3.2**.: _Consider a contact vector field on \(\mathbb{R}^{3}\) with a singularity of type I at the origin, and hence with Hamiltonian as above, with \(\Delta_{1}\neq 0\). (i) The vector field has a fold singularity if and only if \(\Delta_{2}\neq 0\). (ii) In this case, the family \(H_{\lambda}=H_{0}-\lambda\) gives a versal unfolding of the vector field, resulting in a saddle-node bifurcation of equilibria._ This is a particular case of Theorem 3.1 above, but note that the \(\Delta_{2}\) here is \(\tfrac{1}{2}(B^{2}-4AC)\) times the \(\Delta_{2}\) defined in the theorem. ### Type I saddle-node bifurcations Consider the 1-parameter family of Hamiltonian functions \[H_{\lambda}=-\lambda+Aq^{2}+Bpq+Cp^{2}+Dqz+Epz+Fz^{2}+O(3);\] here \(\lambda\) is the parameter, and \(A,B,\ldots,F\) are fixed and satisfy \(\Delta_{1}\neq 0,\,\Delta_{2}\neq 0\). At \(\lambda=0\) this has a degenerate equilibrium of type I at the origin. The linearization \(L_{0}\) at the bifurcation point is given in (2.6) but with \(\tau=0\), and the (amended) Hessian is \[\mathrm{Hess}^{\prime}=\begin{pmatrix}2A&B\\ B&2C\end{pmatrix}\] which we are assuming is non-degenerate (\(\Delta_{1}\neq 0\)). To simplify calculations, let us consider the cases where \(D=E=0\) and \(F=1\). Similar results hold more generally. Then \[H_{\lambda}=-\lambda+Aq^{2}+Bqp+Cp^{2}+z^{2}.\] The vector field is \[X_{\lambda}=\begin{pmatrix}Bq+2Cp\\ -2Aq-Bp-2pz\\ -Aq^{2}+Cp^{2}-z^{2}+\lambda\end{pmatrix}.\] There are no equilibria (near 0) for \(\lambda<0\) and two for \(\lambda>0\): * \((q,p,z)=(0,0,\sqrt{\lambda})\): the principal coefficient is \(\tau=-2\sqrt{\lambda}\), and the eigenvalues of the linear part at the equilibrium point are \[\tau,\quad\tfrac{1}{2}\tau\pm\tfrac{1}{2}\sqrt{(2B-\tau)^{2}-16AC}\] (with \(\tau=-2\sqrt{\lambda}<0\)). * \((q,p,z)=(0,0,-\sqrt{\lambda})\): the principal coefficient is \(\tau=2\sqrt{\lambda}\), and the eigenvalues of the linear part are given by the same formula, \(\tau,\,\tfrac{1}{2}\tau\pm\tfrac{1}{2}\sqrt{(2B-\tau)^{2}-16AC}\) (now with \(\tau>0\)). This is therefore an unstable equilibrium (a numerical illustration of both branches is sketched below).
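The two bifurcating equilibria listed above are easy to examine numerically. The sketch below is ours and only illustrative: it takes an elliptic choice of coefficients (\(B^{2}<4AC\)), forms the Jacobian of \(X_{\lambda}\) at \((0,0,\pm\sqrt{\lambda})\), and prints the eigenvalues, whose real parts are \(-2\sqrt{\lambda},-\sqrt{\lambda},-\sqrt{\lambda}\) on the upper branch and the opposite signs on the lower branch.

```python
# Illustrative numerical check (not from the paper) of the two equilibria above,
# for an elliptic choice of coefficients (B**2 < 4*A*C) and D = E = 0, F = 1.
import numpy as np

A, B, C, lam = 1.0, 0.5, 1.0, 0.04

def jac(qv, pv, zv):
    # Jacobian of X_lam = (Bq + 2Cp, -2Aq - Bp - 2pz, -Aq^2 + Cp^2 - z^2 + lam)
    return np.array([[      B,       2*C,    0.0],
                     [   -2*A, -B - 2*zv, -2*pv],
                     [-2*A*qv,    2*C*pv, -2*zv]])

for z0 in (np.sqrt(lam), -np.sqrt(lam)):
    ev = np.linalg.eigvals(jac(0.0, 0.0, z0))
    print(f"z0 = {z0:+.2f}: eigenvalues {np.round(ev, 3)}, "
          f"real parts {np.round(ev.real, 3)}")
# upper branch: real parts -0.4, -0.2, -0.2  (asymptotically stable)
# lower branch: real parts +0.4, +0.2, +0.2  (unstable)
```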
This is a saddle-node bifurcation, but there are two cases to consider: 1. \(B^{2}<4AC\) ('elliptic'): for \(\lambda=0\) the non-zero eigenvalues are pure imaginary. As \(\lambda\) increases through zero, the real parts of the eigenvalues become \(-2\sqrt{\lambda}\) (the principal eigenvalue) and \(-\sqrt{\lambda}\) (the complex pair) along the 'top' branch (\(z>0\)), and \(2\sqrt{\lambda}\) and \(\sqrt{\lambda}\) along the 'bottom' branch (\(z<0\)). The equilibria on the top branch are therefore asymptotically stable, while those on the bottom branch are unstable. Note that if we changed to \(F=-1\) then the equilibria would occur for \(\lambda<0\), but otherwise the analysis would be unchanged. Moreover, passing through 0 along the curve of equilibria, two eigenvalues cross the imaginary axis, showing this is a fold-Hopf bifurcation, which is normally a codimension 2 bifurcation (see Guckenheimer and Holmes [17, Sec 7.4] and Kuznetsov [18, Sec 8.5]), but here it is exhibited as a codimension 1 phenomenon. Which dynamical phenomena are associated to this bifurcation needs further consideration -- presumably different values of the coefficients will lead to different paths through the generic codimension-2 fold-Hopf bifurcation described in [17, 18]. 2. \(B^{2}>4AC\) ('hyperbolic'): for \(\lambda=0\) the non-zero eigenvalues are real, one positive, one negative. As \(\lambda\) is varied, their signs do not change and the bifurcating equilibria are therefore both unstable, and of the two equilibria one will have 1 negative and 2 positive eigenvalues while the other has 1 positive and 2 negative eigenvalues. **In 5 and more dimensions.** A similar analysis in dimension 5 or more allows for an elliptic case, where the Hessian of the Hamiltonian on the contact plane is positive or negative definite. In this case each 'quadruplet' of eigenvalues of \(L_{0}\) will be pure imaginary, and as one moves along the saddle-node curve the eigenvalues will generically move across the imaginary axis. This would be a fold-multi-Hopf bifurcation, which has not been analyzed. It would usually be a codimension 3 phenomenon in \(\mathbb{R}^{5}\) (or codimension \(n+1\) in \(\mathbb{R}^{2n+1}\)), but in this contact setting it arises as codimension 1. See Figure 3.2. An added complication could arise if there are any resonances between the imaginary eigenvalues when \(\lambda=0\). **Geometric remark 3.3**.: Suppose that the Hessian \(\mathrm{D}^{2}H_{0}\) is positive definite at the Type I equilibrium \(x_{0}\). Then (at least in a neighbourhood of \(x_{0}\)), the zero-set of the Hamiltonian \(H_{0}\) is just the one point, and the positive level sets of \(H_{0}\) are (topologically) spheres. It follows that the zero level-set of \(H_{\lambda}=H_{0}-\lambda\) is one of those spheres when \(\lambda>0\) is fixed (and small). As already remarked, equilibria occur at points where the contact hyperplane is tangent to the sphere \(H_{\lambda}^{-1}(0)\). If there were no equilibria on the sphere then there would be a nowhere vanishing vector field on the sphere, which is impossible for topological reasons since the sphere has even dimension and its Euler characteristic is 2. Therefore there must be equilibria for each \(\lambda>0\) (sufficiently small), as we have seen by direct calculation in the 3-dimensional case. On the other hand, when \(\lambda<0\) the zero level-set is empty, at least near \(x_{0}\), so there are no equilibria. A similar argument applies if the Hessian is negative definite, changing the sign of \(\lambda\).
Figure 3.1: Type I bifurcations: (a) the elliptic case, (b) the hyperbolic case. A solid curve represents an asymptotically stable equilibrium, and a dashed curve represents an unstable equilibrium. Figure 3.2: Contact quadruplet for a Type I degeneracy exhibiting a fold+double-Hopf bifurcation in dimension \(5\). The grey dot represents the principal coefficient eigenvalue and is equal to twice the real part of the other eigenvalues. If we ignore the extreme dots, we would have a \(3\)-dimensional fold–Hopf bifurcation. **Remark 3.4**.: Bravetti et al. [4] consider the dynamics on \(S=H^{-1}(0)\) under the assumption that \(\mathcal{R}(H)\neq 0\) along that hypersurface \(S\). Since \(\mathcal{R}(H)=0\) at a Type I degeneracy, it would be interesting to understand how this degeneracy and its associated saddle-node bifurcation influence their findings. ## 4 Degeneracy of Type II Here we consider degenerate equilibria with non-zero principal coefficient. In this case the degeneracy is in the restriction \(L_{\xi}\) of \(L\) to \(\xi\). At the level of eigenvalues, we are assuming the principal eigenvalue \(\tau\neq 0\) and there is a (simple) zero eigenvalue. This means that one of the contact quadruplets \(\left\{\frac{1}{2}\tau\pm\lambda,\,\frac{1}{2}\tau\pm\tilde{\lambda}\right\}\) contains zero. This implies \(\lambda=\pm\tau/2\). Then the quadruplet is simply \(\left\{\tau,0\right\}\). Therefore at a degenerate equilibrium of Type II, \(\tau\) is a double eigenvalue. See Figure 4.1. However, it is not possible for the double eigenvalue to become a complex conjugate pair, as the principal eigenvalue always remains real. ### Type II fold singularity In Darboux coordinates, we saw in (2.2) that the linear approximation at the origin is \[L=\begin{pmatrix}L_{\xi}&\rho\\ 0&\tau\end{pmatrix}\] where \(L_{\xi}\) and \(\rho\) are the \(2n\times 2n\) matrix and \(2n\)-vector, \[L_{\xi}=\begin{pmatrix}H_{pq}&H_{pp}\\ -H_{qq}&-H_{qp}+\tau\,I_{n}\end{pmatrix},\qquad\rho=\begin{pmatrix}H_{pz}\\ -H_{qz}\end{pmatrix}.\] Since \(\tau\neq 0\), for a degenerate equilibrium we need that \(\det L_{\xi}=0\), and for zero to be a simple eigenvalue we require \(L_{\xi}\) to have \(\operatorname{rank}2n-1\). We will apply Lemma A.1 to find conditions that ensure this has a fold singularity. In order to do this we need elements of the kernel and cokernel of \(L\). Let \((\mathbf{a}_{q},\mathbf{a}_{p})\in\mathbb{R}^{n}\times\mathbb{R}^{n}\) and \((\mathbf{v}_{p},-\mathbf{v}_{q})\in(\mathbb{R}^{n}\times\mathbb{R}^{n})^{*}\) be such that \[(\mathbf{v}_{p},-\mathbf{v}_{q})L_{\xi}=0\quad\text{and}\quad L_{\xi}\begin{pmatrix} \mathbf{a}_{q}\\ \mathbf{a}_{p}\end{pmatrix}=0.\] Then \(\mathbf{v}L=0\) and \(L\mathbf{a}=0\) where \[\mathbf{v}=(\mathbf{v}_{p},-\mathbf{v}_{q},\zeta)\quad\text{and}\quad\mathbf{ a}=\begin{pmatrix}\mathbf{a}_{q}\\ \mathbf{a}_{p}\\ 0\end{pmatrix},\] and \(\zeta=-\frac{1}{\tau}(\mathbf{v}_{p}H_{pz}+\mathbf{v}_{q}H_{qz})\). To ensure this is a fold, rather than a more degenerate singularity, Lemma A.1 says we need \[\mathbf{v}\mathrm{D}^{2}(X_{H})\,\mathbf{a}^{2}\neq 0. \tag{4.1}\] This condition is equivalent to \(\mathrm{D}^{2}_{\mathbf{a}}(\mathbf{v}X_{H})\neq 0\) (recall \(\mathbf{v}\) is a fixed covector). Unlike the Type I case, this depends on the 3-jet of the Hamiltonian at the equilibrium point.
Figure 4.1: Typical 'motion' of eigenvalues through a saddle-node bifurcation of Type II in 3 dimensions (above) and an example in 5 dimensions (below), both with \(\tau<0\). The grey dot is the principal coefficient. Reflecting in the imaginary axis would show the typical motion for \(\tau>0\) where all equilibria would be unstable. Written out in terms of partial derivatives, we require \[\mathrm{D}^{2}_{\mathbf{a}}\big{(}\mathbf{v}_{q}H_{q}-(\mathbf{v}_{q}p)H_{z}+\mathbf{v}_{p}H_{p}+\zeta(pH_{p}-H)\big{)}\neq 0,\] which expands to (after evaluating at the origin) \[\begin{array}{rll}\left(\mathbf{v}_{q}H_{qqq}+\mathbf{v}_{p}H_{pqq}-\zeta\,H_{qq }\right)\mathbf{a}_{q}^{2}&\\ \qquad+2\left(\mathbf{v}_{q}H_{qqp}+\mathbf{v}_{p}H_{pqp}\right)\mathbf{a}_{q} \mathbf{a}_{p}&\\ \qquad+\left(\mathbf{v}_{q}\,H_{qpp}+\mathbf{v}_{p}H_{ppp}\right)\mathbf{a}_{ p}^{2}&\\ \qquad-(\mathbf{v}_{q}\mathbf{a}_{p})\left(H_{zq}\mathbf{a}_{q}+H_{zp}\mathbf{ a}_{p}\right)&\neq&0.\end{array} \tag{4.2}\] The notation should be self-explanatory. For example, with summation over repeated indices understood \((i,j,k=1,\ldots,n)\), \[\mathbf{v}_{p}\,H_{pqp}\mathbf{a}_{q}\mathbf{a}_{p}=(\mathbf{v}_{p})_{k}\, \left(\frac{\partial^{3}H}{\partial p_{k}\partial q_{i}\partial p_{j}}\right) (\mathbf{a}_{q})_{i}\,(\mathbf{a}_{p})_{j}.\] This proves the first part of the following theorem. **Theorem 4.1**.: _Consider an equilibrium point in \(\mathbb{R}^{2n+1}\) with a Type II degeneracy; that is, \(\tau\neq 0\) and \(\operatorname{rank}(L_{\xi})=2n-1\). Then, using the notation introduced above,_ 1. _the vector field has a fold singularity provided condition (4.2) holds, and_ 2. _in this case the family_ \(H_{\lambda}=H-\lambda(\alpha q+\beta p+\gamma)\)_, with_ \(\alpha,\beta\in(\mathbb{R}^{n})^{*}\) _and_ \(\gamma\in\mathbb{R}\)_, gives a versal unfolding of the singularity of the vector field, resulting in a saddle-node bifurcation of equilibria, provided_ \[(\beta,-\alpha,-\gamma)^{T}\not\in\operatorname{Image}(L).\] Proof.: Part (i) is already proved by the calculation above. (ii) For the given Hamiltonian \(H_{\lambda}\), \[X_{\lambda}\;=\;X_{0}+\lambda\begin{pmatrix}\beta\\ -\alpha\\ -\alpha q-\gamma\end{pmatrix},\] where \(X_{\lambda}\) is the vector field associated to \(H_{\lambda}\). It follows that the velocity of the deformation satisfies \(\dot{X}_{\lambda}(0)=(\beta,\,-\alpha,\,-\gamma)^{T}\) and hence the statement follows from Lemma A.1(ii) in the appendix. For part (ii), if \((H_{zq},H_{zp})\neq(0,0)\) (i.e., \(\rho\neq 0\)) we find \(H_{\lambda}=H-\lambda\) is a versal unfolding of the fold singularity (similar to the Type I case), whereas if \(\rho=0\) it is not versal. ### Fold singularity in \(\mathbb{R}^{3}\) Suppose the origin in \(\mathbb{R}^{3}\) is a degenerate equilibrium point of type II. In this case we can write the 3-jet of the Hamiltonian at the origin as \[H=-\tau z+Aq^{2}+Bqp+Cp^{2}+Dqz+Epz+Fz^{2}+\sum_{i\leq j\leq k}P_{i,j,k}\,x_{i}x_{j}x_{k}+O(4), \tag{4.3}\] where \(\tau\neq 0\) and in the cubic terms, \(x_{1}=q\), \(x_{2}=p\), \(x_{3}=z\).
Define the following 3 polynomials in the coefficients of the 3-jet of \(H\) at the origin (an equilibrium point), where we write \(B_{1}=B-\tau\): \[\left\{\begin{array}{rcl}h_{0}&=&B\,B_{1}-4AC,\\ h_{1}&=&B^{2}(3BE-6CD-E\tau)\\ &&\quad+\,24\,B\,C^{2}\,P_{1,1,1}-4\,B\,C\,(3\,B-\tau)\,P_{1,1,2}\\ &&\quad+\,2\,B^{2}\,(3B-2\tau)\,P_{1,2,2}-12\,AB^{2}\,P_{2,2,2},\\ h_{2}&=&2AB_{1}\,(3BE-6CD-E\tau)\\ &&\quad+12B_{1}^{2}\,CP_{1,1,1}-2B_{1}^{2}\,(3B-\tau)\,P_{1,1,2}\\ &&\quad+4AB_{1}\,(3B-2\tau)\,P_{1,2,2}-24\,A^{2}B_{1}\,P_{2,2,2}.\end{array}\right. \tag{4.4}\] Note that \(h_{0}=\det(\mathrm{Hess}^{\prime})\), and that the cubic coefficients \(P_{i,j,k}\) that appear here are the coefficients of the terms not involving \(z\). For example, \(P_{2,2,2}=\frac{1}{6}H_{ppp}(0)\). **Theorem 4.2**.: _Consider an equilibrium point in \(\mathbb{R}^{3}\) with a Type II degeneracy; that is, \(\tau\neq 0\) and \(h_{0}=0\). Then_ 1. _the vector field has a fold singularity provided_ \(h_{1},h_{2}\) _do not both vanish; and_ 2. _in this case the family_ \(H_{\lambda}=H-\lambda(\alpha q+\beta p+\gamma)\) _(with_ \(\alpha,\beta,\gamma\in\mathbb{R}\)_) gives a versal unfolding of the singularity of the vector field, resulting in a saddle-node bifurcation of equilibria, provided_ \[(\beta,-\alpha,-\gamma)^{T}\not\in\mathrm{Image}(L).\] Proof.: (i) Again, we rely on Lemma A.1. The key is that one needs to use different expressions for \(\mathbf{a}\) and \(\mathbf{v}\) depending on the values of \(A,B,C\), and this leads to two separate non-degeneracy conditions: in fact it suffices to consider values of \(B\) as follows. \[L=\begin{pmatrix}B&2C&E\\ -2A&-B+\tau&-D\\ 0&0&\tau\end{pmatrix}.\] First, suppose \(B\neq 0\). Then we can use the non-zero vectors (recall \(\tau\neq 0\)) \[\mathbf{a}=(2C,\,-B,\,0)^{T},\quad\mathbf{v}=-(2A,\,B,\,\frac{1}{\tau}(BD-2AE) \,).\] Then computing \(\mathbf{v}D^{2}(X_{H})\mathbf{a}^{2}\), after some simplification using \(B(B-\tau)=4AC\), we find \[\mathbf{v}D^{2}(X_{H})\mathbf{a}^{2}=h_{1}\] which for a fold we require to be non-zero in the case \(B\neq 0\) (note that \(B\) is a factor of \(h_{1}\)). Now suppose \(B_{1}\neq 0\) (that is, \(B\neq\tau\)). This time we use \[\mathbf{a}=(B_{1},\,-2A,\,0)^{T},\qquad\mathbf{v}=\left(B_{1},\,2C,\,\frac{1}{ \tau}(2CD-B_{1}E)\right)\] With these vectors, both of which are non-zero, we obtain \(\mathbf{v}D^{2}(X_{H})\,\mathbf{a}^{2}=h_{2}\), which for a fold we require to be non-zero when \(B_{1}\neq 0\). (Note that \(B_{1}\) is a factor of \(h_{2}\).) Since \(\tau\neq 0\), \(B\) and \(B_{1}\) cannot both vanish simultaneously and part (i) of the theorem is proved, (ii) This is the statement of Theorem 4.1(ii) in this context. ### Type II saddle-node bifurcations We illustrate some cases of the theorem above in 3 dimensions. In the first example, we analyze the bifurcating equilibrium points, and in later examples we just record the condition on the 3-jet for the Type II equilibrium to be a fold. **Example 4.3**.: Let \(H=z-pq+p^{2}q-\lambda q\). At \(\lambda=0\) this is a degenerate equilibrium at the origin with eigenvalues \(-1,-1,0\) and principal coefficient \(\tau=-1\). We have \(h_{0}=h_{2}=0\), but \(h_{1}\neq 0\) so the equilibrium is a fold singularity. As \(\lambda\) varies this family has a saddle-node bifurcation, with equilibria at \((q,p,z)=(0,\pm\sqrt{\lambda},0)\) for \(\lambda\geq 0\). On one branch, the \(0\) eigenvalue becomes negative and on the other it becomes positive, as illustrated in Figure 4.1. 
* \((q,p,z)=(0,\sqrt{\lambda},0)\); at this point the eigenvalues are \(-1,-1+2\sqrt{\lambda}\) and \(-2\sqrt{\lambda}\) and the equilibrium is asymptotically stable (for small values of \(\lambda\)). * \((q,p,z)=(0,-\sqrt{\lambda},0)\); this point has one positive and two negative eigenvalues and so is unstable. There follows a table showing the non-degeneracy condition (up to a non-zero factor) for several simple values of \(A,B,C\), and an admissible unfolding term. The unfolding term is independent of the values of \(D,E\) and the \(P_{i,j,k}\). In each of them, the analysis is similar to Example 4.3 above, and in fact that example is an instance of the penultimate of this list. \begin{tabular}{c c c c} \(H_{0}\) & \(h_{1}\) & \(h_{2}\) & unfolding term \\ \hline \(A=B=C=0\) & \(0\) & \(P_{1,1,2}\) & \(p\) \\ \(A=B=0,C=1\) & \(0\) & \(6P_{1,1,1}+\tau P_{1,1,2}\) & \(p\) \\ \(A=1,B=C=0\) & \(0\) & \(\tau E+\tau^{2}P_{1,1,2}+4\tau P_{1,2,2}+12P_{2,2,2}\) & \(p\) \\ \(A=B_{1}=C=0\) & \(E+P_{1,2,2}\) & \(0\) & \(q\) \\ \(A=1,B_{1}=C=0\) & \(\tau(E+P_{1,2,2})-6P_{2,2,2}\) & \(0\) & \(p\) \\ \end{tabular} ## 5 Legendre vector fields In this section we consider the bifurcation theory of contact vector fields which are tangent to a given Legendre submanifold, and show the fact that they are the restriction of contact vector fields adds no restriction; that is, the bifurcation theory on the Legendre submanifold is the same as for generic vector fields in \(\mathbb{R}^{n}\). The main theorem is in essence due to Maschke [22]. Recall that a Legendre submanifold of \(M^{2n+1}\) is a submanifold of dimension \(n\) that is everywhere tangent to the contact structure, and that this is the maximal possible dimension of such a submanifold. In particular, we show in the theorem below that any given vector field (or family of vector fields) on a given Legendre submanifold can be extended to a contact vector field (or family of such) on the ambient contact manifold. Throughout this section we let \(\mathcal{L}\) be a fixed Legendre submanifold of \((M,\xi)\), or of \(\mathbb{R}^{2n+1}\) as our analysis is local. The following property of contact flows is due to Mrugala et al [24, Theorem 3], and is a direct analogue of a property of invariant Lagrangian submanifolds for symplectic flows. **Proposition 5.1**.: _Given a Hamiltonian \(H\), the Legendre submanifold \(\mathcal{L}\) is invariant under the flow of \(X_{H}\) if and only if \(H|_{\mathcal{L}}\equiv 0\)._ Proof.: Firstly if \(\mathcal{L}\) is invariant under the flow, then at every point of \(\mathcal{L}\) the vector field is tangent to \(\mathcal{L}\) and hence is contained in the contact hyperplane, which implies \(H=0\). Conversely, suppose \(H|_{\mathcal{L}}=0\), and consider the flow induced by the vector field. Now this flow preserves \(H^{-1}(0)\) (as noted in SS1) and the vector field is therefore contained in the contact hyperplane at each point of \(H^{-1}(0)\). If \(\mathcal{L}\) is not invariant, let \(x_{0}\) be a point where \(X_{H}\) is not tangent to \(\mathcal{L}\) and let \(U\) be a neighbourhood of \(x_{0}\) in \(\mathcal{L}\) where this continues to hold. Consider the image of \(U\) under the flow. This will be a submanifold of dimension \(n+1\) tangent to the contact structure, which is not possible. Generating functionsFollowing Arnold [2, Appendix 4], using Darboux coordinates one can (locally) generate any Legendre submanifold of \(\mathbb{R}^{2n+1}\) as follows. 
Given a Legendre submanifold \(\mathcal{L}\subset\mathbb{R}^{2n+1}\), there is a subset \(I\subset\{1,\ldots,n\}\) and a smooth function \(S(q_{i},p_{a})\) (\(i\in I,a\not\in I\)) such that \(\mathcal{L}\) is (locally) parametrized by \(q_{i}\), \(p_{a}\) by the following formulae \[q_{a}=-\frac{\partial S}{\partial p_{a}},\quad p_{i}=\frac{\partial S}{ \partial q_{i}},\quad z=S-p_{a}\frac{\partial S}{\partial p_{a}},\] Conversely, given any such subset \(I\) and generating function \(S(q_{i},p_{a})\), the graph as given generates a Legendre submanifold. As mentioned, the following is essentially due to Maschke [22], although there only for \(S=S(q_{i})\) (i.e., \(I=\{1,\ldots,n\}\)), and without the (trivial) inclusion of parameters \(\lambda\). **Theorem 5.2**.: _Let \(\mathcal{L}\) be a Legendre submanifold of \(\mathbb{R}^{2n+1}\) parametrized by \((q_{i},p_{a})\) as above, and let_ \[Y_{\lambda}=f_{j}(q_{i},p_{a},\lambda)\frac{\partial}{\partial q_{j}}+f_{b}(q_ {i},p_{a},\lambda)\frac{\partial}{\partial p_{b}}\] _be an arbitrary family of vector fields on \(\mathcal{L}\), where the \(f_{i}\) are smooth functions depending on parameter(s) \(\lambda\in\mathbb{R}^{\ell}\). Then there exists a family of contact vector fields \(X_{\lambda}\) on a neighbourhood of \(\mathcal{L}\) in \(M\) whose restriction to \(\mathcal{L}\) is equal to \(Y_{\lambda}\): that is for \(x\in\mathcal{L}\), \(X_{\lambda}(x)=Y_{\lambda}(x)\)._ Proof.: Let \(\tilde{f}_{r}(q,p,z,\lambda)=f_{r}(q_{i},p_{a},\lambda)\) and \(\tilde{S}(q,p,z)=S(q_{i},p_{a})\) be the trivial extensions of \(f_{r}\) and \(S\) respectively, to a neighbourhood of \(\mathcal{L}\) in \(\mathbb{R}^{2n+1}\) (\(r=1,\ldots,n\)) -- that is, independent of \(q_{a}\), \(p_{i}\), \(z\). Define \[H(q,p,z,\lambda)=\left(p_{i}-\frac{\partial\tilde{S}}{\partial q_{i}}\right)\tilde {f}_{i}(q,p,z,\lambda)-\left(q_{a}+\frac{\partial\tilde{S}}{\partial p_{a}} \right)\tilde{f}_{a}(q,p,z,\lambda).\] Clearly, \(H\big{|}_{\mathcal{L}}=0\) and hence \(\mathcal{L}\) is invariant under the flow of \(X_{H}\), which is to say, \(X_{H}\) is tangent to \(\mathcal{L}\). This implies that to check whether at points of \(\mathcal{L}\) we have \(X_{H}=Y\), we only need check the effect of \(X_{H}\) on the coordinates \(q_{i},p_{a}\) of \(\mathcal{L}\). Now, at points \(x=(q,p,z)\in\mathcal{L}\), \[\begin{array}{rclrcl}X_{H}(q_{i})&=&\dot{q}_{i}&=&H_{p_{i}}&=&f_{i}(q_{i},p_ {a},\lambda),\\ X_{H}(p_{a})&=&\dot{p}_{a}&=&-H_{q_{a}}-p_{a}H_{z}&=&f_{a}(q_{i},p_{a},\lambda),\end{array}\] the latter since \(H\) is independent of \(z\). Hence the contact vector field \(X_{H}\) coincides with \(Y\) at points of \(\mathcal{L}\), as required. We remark that the extension chosen is in fact a conservative contact vector field, since \(\mathcal{R}(H)=H_{z}=0\). Had we allowed more general extensions of \(f_{i}(q_{i},p_{a},\lambda)\) to \(\tilde{f}_{i}(q,p,z,\lambda)\) we would obtain other contact extensions of the vector field \(Y\). ## Appendix A Recognizing fold singularities In this appendix we derive a simple condition for recognizing when a map germ of corank 1 has a fold singularity, and when a deformation of a fold singularity is versal. For details on \(\mathcal{K}\)-equivalence see for example [23] (note that \(\mathcal{K}\)-equivalence is also called contact equivalence, but that could be confusing in the current context). 
Recall that for a map-germ \((\mathbb{R}^{n},0)\to(\mathbb{R}^{n},0)\), a _fold_ singularity is the least degenerate singularity, and is any germ \(\mathcal{K}\)-equivalent to \[(x,\mathbf{y})\longmapsto(x^{2},\mathbf{y})\] with \(x\in\mathbb{R},\mathbf{y}\in\mathbb{R}^{n-1}\). It has \(\mathcal{K}\)-codimension 1, and \[(x,\mathbf{y};\lambda)\longmapsto(x^{2}-\lambda,\mathbf{y})\] is a versal deformation (or unfolding). **Lemma A.1**.: 1. _A corank-1 map-germ_ \(F:(\mathbb{R}^{n},0)\to(\mathbb{R}^{n},0)\) _has a fold singularity at the origin if and only if there are non-zero vectors_ \(\mathbf{a}\in\mathbb{R}^{n}\) _and_ \(\mathbf{v}\in\left(\mathbb{R}^{n}\right)^{*}\) _such that_ \[\mathrm{D}F\,\mathbf{a}=0,\quad\mathbf{v}\mathrm{D}F=0,\quad\mathbf{v} \mathrm{D}^{2}F\,\mathbf{a}^{2}\neq 0,\] (A.1) _where the differentials are evaluated at the origin._ 2. _Given such a fold singularity, any 1-parameter deformation_ \(F_{\lambda}\) _(with_ \(F_{0}=F\)_) is versal if and only if_ \(\dot{F}(0)\not\in\mathrm{Image}(\mathrm{d}F(0))\)_, where_ \(\dot{F}=\frac{\partial F_{\lambda}}{\partial\lambda}\big{|}_{\lambda=0}\)_, and_ \(\dot{F}(0)\) _is its value at the origin._ Proof.: Any map-germ \((\mathbb{R}^{n},0)\to(\mathbb{R}^{n},0)\) of corank \(1\) is contact equivalent to the map \[G(x,\mathbf{y})=(g(x),\mathbf{y}) \tag{A.2}\] for some smooth function-germ \(g:(\mathbb{R},0)\to(\mathbb{R},0)\) with \(g(0)=g^{\prime}(0)=0\). Here \(\mathbf{y}\in\mathbb{R}^{n-1}\) (see [23, p. 167] for a proof). (i) Suppose the corank \(1\) map-germ \(F:(\mathbb{R}^{n},0)\to(\mathbb{R}^{n},0)\) is a fold singularity. Then so is \(G\) in (A.2) and \(g\) can be chosen to be \(g(x)=x^{2}\) (and more generally \(g^{\prime\prime}(0)\neq 0\)). Clearly, for \(G\) the conditions (A.1) hold, with \(\mathbf{a}=(1,\ \mathbf{0})^{T}\) and \(\mathbf{v}=(1,\ \mathbf{0})\). Conversely, if \(g^{\prime\prime}(0)=0\) then \(G\) is not a fold and the condition \(\mathbf{v}\mathrm{D}^{2}G\,\mathbf{a}^{2}\neq 0\) fails. It remains to show that the conditions (A.1) are unchanged under a contact equivalence. This is a simple calculation, as follows. Now \(G\) and \(F\) are \(\mathcal{K}\)-equivalent iff \(G(\mathbf{x})=A(\mathbf{x})F\circ\phi(\mathbf{x})\) where \(A\) is an invertible \(\mathbf{x}\)-dependent matrix and \(\phi\) is a diffeomorphism (all germs at \(0\)). Now \(G\) satisfies the conditions of the lemma: \(\mathbf{w}\mathrm{D}G=0\), \(\mathrm{D}G\,\mathbf{b}=0\), \(\mathbf{w}\mathrm{D}^{2}G\,\mathbf{b}^{2}\neq 0\). Then \[\mathrm{D}G(\mathbf{x})=(\mathrm{D}A(\mathbf{x}))F\circ\phi+A(\mathbf{x}) \mathrm{D}F(\phi(\mathbf{x}))\mathrm{D}\phi(\mathbf{x})\] so at \(\mathbf{x}=0\) where \(F=G=0\) we have \(\mathrm{D}G(0)=A(0)\,\mathrm{D}F(0)\,\mathrm{D}\phi(0)\). Then if \(\mathbf{w}\mathrm{D}G=0\) let \(\mathbf{v}=\mathbf{w}\,A\), and if \(\mathrm{D}G\,\mathbf{b}=0\) let \(\mathbf{a}=\mathrm{D}\phi\,\mathbf{b}\); then \(\mathbf{v}\mathrm{D}F=0\) and \(\mathrm{D}F\,\mathbf{a}=0\). Moreover, \[\mathrm{D}^{2}G= (\mathrm{D}^{2}A(\mathbf{x}))F\circ\phi+\mathrm{D}A(\mathbf{x}) \mathrm{D}F(\phi(\mathbf{x}))\mathrm{D}\phi(\mathbf{x})\] \[+A(\mathbf{x})\mathrm{D}^{2}F(\phi(\mathbf{x}))\mathrm{D}\phi( \mathbf{x})^{2}+A(\mathbf{x})\mathrm{D}F(\phi(\mathbf{x}))\mathrm{D}^{2}\phi( \mathbf{x}).\] Then putting \(\mathbf{x}=0\) and using \(F(0)=G(0)=0\), we see \[\mathbf{w}\mathrm{D}^{2}G\,\mathbf{b}^{2}=\mathbf{v}\mathrm{D}^{2}F\,\mathbf{ a}^{2}\] so that the latter is also nonzero.
(ii) A similar method of proof works here too: \(F\) is \(\mathcal{K}\)-equivalent to \(G:(x,\mathbf{y})\mapsto(x^{2},\mathbf{y})\) and the family \(G_{\lambda}=G+\lambda\mathbf{u}\) is a versal unfolding if and only if \(\mathbf{u}\not\in\mathbb{R}\{e_{2},\dots,\,e_{n}\}\), from the standard versality theorem for contact equivalence, e.g. [23]. The importance of the deformations being _versal_ is that any two versal unfoldings of \(\mathcal{K}\)-equivalent map-germs are themselves equivalent. In particular, any versal unfolding of a fold singularity is equivalent to the map \((x;\lambda)\mapsto x^{2}-\lambda\), showing that for \(\lambda>0\) there are two zeros, while for \(\lambda<0\) there are none (or vice versa if the sign of the \(\lambda\) term is changed). That is, a versal unfolding of a fold is a saddle-node bifurcation. However, \(\mathcal{K}\)-equivalence of the vector fields does not respect eigenvalues. Acknowledgements. I would like to thank Alessandro Bravetti and Luis Garcia-Naranjo for commenting on an early draft and suggesting some further references.
2305.15861
On the Weisfeiler-Leman dimension of permutation graphs
It is proved that the Weisfeiler-Leman dimension of the class of permutation graphs is at most 18. Previously it was only known that this dimension is finite (Gru{\ss}ien, 2017).
Jin Guo, Alexander L. Gavrilyuk, Ilia Ponomarenko
2023-05-25T08:53:19Z
http://arxiv.org/abs/2305.15861v1
# On the Weisfeiler-Leman dimension of permutation graphs ###### Abstract. It is proved that the Weisfeiler-Leman dimension of the class of permutation graphs is at most \(18\). Previously it was only known that this dimension is finite (Grußien, 2017). The research of Alexander Gavrilyuk is supported by JSPS KAKENHI Grant Number 22K03403. The research of Jin Guo is supported by the National Natural Science Foundation of China (Grant No. 11961017). ## 1. Introduction Now, let \(X\) be a permutation graph which is not uniquely orientable. Then \(X\) admits a non-trivial modular decomposition, and, as was implicitly proved in [8], this decomposition can be chosen in a canonical way. In Section 4, we use this fact to prove that this decomposition "can be seen" inside the coherent configuration \(\widehat{\mathcal{X}}^{(2)}\). This enables us to guarantee that the separability number of \(\mathcal{X}\) is less than or equal to the maximum separability number of the coherent configurations of the smaller graphs associated with the modular decomposition of \(X\). The paper is organized as follows.
In order to make the proof as self-contained as possible, we present in Section 2 the necessary notation and facts about permutation graphs and in Section 3 those from the theory of coherent configurations. More details about these topics can be found in the monographs [15] and [7], respectively. In Section 4 we study an interplay between modular decompositions of graphs and their coherent configurations, as applied to algebraic isomorphisms. Sections 5 and 6 are devoted to the proof of Theorem 1.1 by induction as described above. ### Notation Throughout the paper, \(\Omega\) stands for a finite set. For \(\Delta\subseteq\Omega\), the diagonal of the Cartesian product \(\Delta\times\Delta\) is denoted by \(1_{\Delta}\). For an equivalence relation \(e\) on a subset of \(\Omega\), we denote by \(\Omega/e\) the set of all classes of \(e\). For a binary relation \(r\subseteq\Omega\times\Omega\), we set \(r^{*}=\{(\beta,\alpha)\colon\ (\alpha,\beta)\in r\}\), and \(r^{f}=\{(\alpha^{f},\beta^{f})\colon\ (\alpha,\beta)\in r\}\) for any bijection \(f\) from \(\Omega\) to another set. The **left support** of \(r\) is defined to be the set \(\{\alpha\in\Omega\colon\ \alpha r\neq\varnothing\}\), where \(\alpha r=\{\beta\in\Omega\colon\ (\alpha,\beta)\in r\}\) is the **neighborhood** of a point \(\alpha\in\Omega\) in the relation \(r\). The **right support** of \(r\) is the left support of \(r^{*}\). For relations \(r,s\subseteq\Omega\times\Omega\), we put \[r\cdot s=\{(\alpha,\beta)\colon\ (\alpha,\gamma)\in r,\ (\gamma,\beta)\in s \text{ for some }\gamma\in\Omega\}.\] For \(\Delta,\Gamma\subseteq\Omega\), we set \(r_{\Delta,\Gamma}=r\cap(\Delta\times\Gamma)\) and abbreviate \(r_{\Delta}:=r_{\Delta,\Delta}\). For a set \(S\) of relations on \(\Omega\), we denote by \(S^{\cup}\) the set of all unions of the elements of \(S\) and put \(S^{*}=\{s^{*}\colon\ s\in S\}\) and \(S^{f}=\{s^{f}\colon s\in S\}\) for any bijection \(f\) from \(\Omega\) to another set. ## 2. Permutation graphs and their modular decomposition ### Basic definitions By a **graph** we mean a finite simple undirected graph, i.e., a pair \(X=(\Omega,E)\) of a finite set \(\Omega\) of vertices and an irreflexive symmetric relation \(E\subseteq\Omega\times\Omega\). The elements of \(E=:E(X)\) are called **edges1**, and \(E\) is the **edge set** of the graph \(X\). Two vertices \(\alpha\) and \(\beta\) are **adjacent** (in \(X\)) whenever \((\alpha,\beta)\in E\) (equivalently, \((\beta,\alpha)\in E\)). The subgraph of \(X\) induced by a set \(\Delta\subseteq\Omega\) is denoted by \(X_{\Delta}=(\Delta,E_{\Delta})\). A graph \(X=(\Omega,E)\) is **connected** if the transitive reflexive closure of \(E\) equals \(\Omega^{2}\). The **complement** of \(X\) is the graph \(\overline{X}=(\Omega,\overline{E})\) where \(\overline{E}=\Omega^{2}\setminus(E\cup 1_{\Omega})\). A graph is **coconnected** if its complement is connected. Footnote 1: Traditionally, an _edge_ means a subset of two adjacent vertices. In this paper, by an edge we mean an ordered pair of adjacent vertices, which is usually called an _arc_. ### Orientations of graphs An **orientation** of a graph \(X=(\Omega,E)\) is a subset \(A\subseteq E\) such that \(A\cap A^{*}=\varnothing\) and \(A\cup A^{*}=E\). A orientation \(A\) is called **transitive** if \(A\cdot A\subseteq A\), which means that \((\alpha,\beta),(\beta,\gamma)\in A\) implies \((\alpha,\gamma)\in A\). A graph is called **transitively orientable** (or a **comparability graph**) if it admits a transitive orientation. 
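As a concrete illustration of the definition just given (this small script is ours and not part of the paper), a candidate orientation \(A\) of a graph can be tested by checking \(A\cap A^{*}=\varnothing\), \(A\cup A^{*}=E\) and \(A\cdot A\subseteq A\). For the path graph on the vertices \(1-2-3-4\), orienting every edge towards its even endpoint is transitive, while orienting every edge from the smaller to the larger vertex is not.

```python
# Toy illustration (not from the paper): testing transitive orientations of the path 1-2-3-4.
edges = [(1, 2), (2, 3), (3, 4)]
E = {(a, b) for (a, b) in edges} | {(b, a) for (a, b) in edges}

def is_transitive_orientation(A):
    Astar = {(b, a) for (a, b) in A}
    AA = {(a, c) for (a, b) in A for (bb, c) in A if b == bb}   # the relation A.A
    return A & Astar == set() and A | Astar == E and AA <= A

towards_even = {(a, b) for (a, b) in E if b % 2 == 0}   # 1->2, 3->2, 3->4
increasing   = {(a, b) for (a, b) in E if a < b}        # 1->2, 2->3, 3->4
print(is_transitive_orientation(towards_even))   # True
print(is_transitive_orientation(increasing))     # False
```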
A transitive orientation of a complete graph is called a **transitive tournament**. Following [15, Chapter 5], we define the \(\Gamma\)-**relation** of \(X\) as a binary relation on \(E\) consisting of all pairs \((\alpha,\beta)\) and \((\alpha^{\prime},\beta^{\prime})\) such that \[(\alpha=\alpha^{\prime}\text{ and }(\beta,\beta^{\prime})\notin E)\quad\text{ or } \quad(\beta=\beta^{\prime}\text{ and }(\alpha,\alpha^{\prime})\notin E)\,.\] It is easily seen that this relation is symmetric and reflexive. The transitive closure of the \(\Gamma\)-relation is an equivalence relation and hence partitions \(E\) into equivalence classes called the **implication classes** of \(X\). The implication class of \(X\) containing an edge \(\mathbf{e}\in E\) is denoted by \(I_{X}(\mathbf{e})\). In what follows, given a graph \(X\), we abbreviate, \(I(\mathbf{e})=I_{X}(\mathbf{e})\) if \(\mathbf{e}\in E\) and \(I(\mathbf{e})=I_{\overline{X}}(\mathbf{e})\) if \(\mathbf{e}\in\overline{E}\). **Lemma 2.1**.: ([15, Theorem 5.1, Theorem 5.4])__ _Let \(X\) be a transitively orientable graph and \(\mathbf{e}\) any of its edges. Then:_ 1. \(I_{X}(\mathbf{e})\cap I_{X}(\mathbf{e})^{*}=\varnothing\)_;_ 2. _if_ \(A\) _is a transitive orientation of_ \(X\)_, then_ \(I_{X}(\mathbf{e})\subseteq A\) _or_ \(I_{X}(\mathbf{e})^{*}\subseteq A\)_._ A graph is said to be **uniquely partially orderable (UPO)** if it admits at most two transitive orientations. (Note that if a UPO graph contains at least one edge, then there are exactly two transitive orientations, one being the reversal of the other.) For example, a connected bipartite graph is always UPO, while a complete graph \(K_{n}\) on \(n\) vertices is UPO if and only if \(n\leq 2\). A graph \(X\) is called a **permutation graph** if and only if \(X\) and \(\overline{X}\) are transitively orientable. Clearly, the class of permutation graph is closed under taking complement and induced subgraphs. A (permutation) graph \(X\) is **uniquely orientable** if both \(X\) and \(\overline{X}\) are UPO graphs. It follows from Lemma 2.1 that a permutation graph \(X=(\Omega,E)\) is uniquely orientable if and only if \[I_{X}(\mathbf{e})\cup I_{X}(\mathbf{e})^{*}\cup I_{\overline{X}}(\mathbf{f}) \cup I_{\overline{X}}(\mathbf{f})^{*}=\Omega^{2}\setminus 1_{\Omega} \tag{1}\] holds for every \(\mathbf{e}\in E\) and every \(\mathbf{f}\in\overline{E}\). **Lemma 2.2**.: (cf. [15, Chapter 5, Exercise 5]) _Let \(A\) and \(\overline{A}\) be transitive orientations of a (permutation) graph and its complement, respectively. Then \(A\cup\overline{A}\) is a transitive tournament._ Proof.: Clearly, the relation \(A\cup\overline{A}\) is anti-symmetric and irreflexive. To prove that it is transitive, it suffices to show that it does not contain a cycle of length \(3\). Suppose on the contrary that \((\alpha,\beta),(\beta,\gamma),(\gamma,\alpha)\in A\cup\overline{A}\). Without loss of generality, we may assume \((\alpha,\beta),(\beta,\gamma)\in A\). Then \((\alpha,\gamma)\in A\), as \(A\) is a transitive orientation of the graph, a contradiction. ### Composition graph Let \(X_{0}\) be a graph with vertex set \(\{\alpha_{1},\alpha_{2},\ldots,\alpha_{n}\}\) and let \(\mathcal{C}=\{X_{1},\ldots,X_{n}\}\) be a collection of graphs whose vertex sets are pairwise disjoint. 
The **composition graph** \[X:=X_{0}[X_{1},X_{2},\ldots,X_{n}]=X_{0}[\mathcal{C}]\] is formed as follows: for all \(1\leq i,j\leq n\), replace the vertex \(\alpha_{i}\) in \(X_{0}\) with the graph \(X_{i}\) and join each vertex of \(X_{i}\) to each vertex of \(X_{j}\) whenever \(\alpha_{i}\) is adjacent to \(\alpha_{j}\) in \(X_{0}\). The vertex set \(\Omega_{i}\) of the graph \(X_{i}\), \(i=1,\ldots,n\), is **partitive**, i.e., every vertex of \(X\), not belonging to \(\Omega_{i}\), is adjacent to either all or none of the vertices of \(\Omega_{i}\). The sets \(\Omega_{i}\) are called the **modules** of \(X\), and the corresponding decomposition of the vertex set of \(X\) is called **modular**. The composition graph \(X\) is **non-trivial** if \(n\geq 2\) and \(|\Omega_{i}|\geq 2\) for at least one \(i\). **Lemma 2.3**.: _Let \(e\) be an equivalence relation on \(\Omega\). Then a graph \((\Omega,E)\) is a composition graph whose modules are the equivalence classes of \(e\) if and only if_ \[e\cdot(E-e)=E-e=(E-e)\cdot e. \tag{2}\] Proof.: One has \((\alpha,\beta)\in E-e\) if and only if \((\alpha,\beta)\in E\) and \(\alpha,\beta\) belong to different equivalence classes of \(e\). Thus, Eq. (2) holds if and only if, for every two distinct \(\Delta,\Gamma\in\Omega/e\), the graph \((\Delta\cup\Gamma,E_{\Delta,\Gamma})\) is empty or complete bipartite. The latter condition holds if and only if \((\Omega,E)\) is a composition graph whose modules are the classes of \(\Omega/e\). In what follows, a graph \(X=(\Omega,E)\) satisfying Eq. (2) for some equivalence relation \(e\) will be refereed to as a **composition graph with respect to \(e\)**. Note that this is equivalent to saying that \(X=X_{0}[\mathcal{C}]\) where \(\mathcal{C}=\{X_{\Delta}\ |\ \Delta\in\Omega/e\}\) and \(X_{0}\) is the **quotient graph** of \(X\) modulo \(e\) (that is, the vertex set of \(X_{0}\) is \(\Omega/e\) and two distinct vertices are adjacent in \(X_{0}\) whenever there is an edge between them). Let \(X=(\Omega,E)\) be a graph. Two vertices \(\alpha,\beta\) are called \(0\)**-twins** (\(1\)**-twins**, respectively) if \(\alpha,\beta\) are not adjacent (are adjacent, respectively) and, for any vertex \(\gamma\in\Omega\setminus\{\alpha,\beta\}\), the set \(\gamma E\) contains either both \(\alpha\) and \(\beta\) or none of them. A graph is **irreducible** if it contains neither distinct \(0\)-twins nor \(1\)-twins, and **reducible** otherwise. The relation "_to be \(0\)-twins in \(X\)_" is an equivalence relation on \(\Omega\), called the \(0\)**-equivalence** of \(X\). The relation "_to coincide or to be \(1\)-twins in \(X\)_" is also an equivalence relation on \(\Omega\), called the \(1\)**-equivalence** of \(X\). The following statement is obvious. **Lemma 2.4**.: _A reducible graph is a non-trivial composition graph whose modules are either all classes of the \(0\)-equivalence or all classes of the \(1\)-equivalence._ Let \(P_{m}\) denote a path graph on \(m\) vertices. **Lemma 2.5**.: _A uniquely orientable graph is either connected and coconnected or is \(P_{m}\) or \(\overline{P_{m}}\), \(m=2,3\). In particular, an irreducible uniquely orientable graph is connected and coconnected._ Proof.: Without loss of generality, suppose that a uniquely orientable graph \(X\) is not connected. Then \(\overline{X}\) is connected and it is a composition graph with two modules, namely, the vertex set of any connected component of \(X\) and its complement in \(\Omega\). 
By [15, Theorem 5.12], every partitive set of a UPO graph induces an empty graph. It follows that \(X=K_{n}\cup K_{m}\) for some integers \(n,m\). Since \(X\) is a UPO graph, it follows that \(n\cdot m\leq 2\), and we are done. For a graph \(X=(\Omega,E)\) and \(\mathbf{e}\in E\cup\overline{E}\), let us denote by \(\Omega(\mathbf{e})\) the set of vertices incident to at least one element from \(I(\mathbf{e})\), and by \(X(\mathbf{e})\) the subgraph induced by \(\Omega(\mathbf{e})\) in \(X\) or in \(\overline{X}\) depending on whether \(\mathbf{e}\in E\) or \(\mathbf{e}\in\overline{E}\). **Corollary 2.6**.: _Let \(X=(\Omega,E)\) be an irreducible uniquely orientable graph. Then \(\Omega=\Omega(\mathbf{e})\) for every \(\mathbf{e}\in E\cup\overline{E}\)._ Proof.: Since \(P_{m}\) and \(\overline{P_{m}}\), \(m=2,3\), are not irreducible, \(X\) and \(\overline{X}\) are connected by Lemma 2.5. The lemma then follows from Eq. (1). In the rest of this section, we show that an irreducible permutation graph is a certain composition graph; this observation was implicitly used in [8]. Given a graph \(X=(\Omega,E)\), define a binary relation \(\sim\) on \(\Omega\) as follows: \(\alpha\sim\beta\) if and only if \(\alpha=\beta\) or there is an edge \(\mathbf{e}\in E\) such that the graph \(X(\mathbf{e})\) is uniquely orientable and \(\alpha,\beta\in\Omega(\mathbf{e})\). **Lemma 2.7**.: _Let \(X\) be an irreducible permutation graph that is not uniquely orientable. Then the relation \(\sim\) defined by \(X\) is a non-trivial equivalence relation, the classes of which induce uniquely orientable graphs._ Proof.: It follows from [8, Lemma 6] that whenever, for \(\mathbf{e},\mathbf{e}^{\prime}\in E\), the subgraphs \(X(\mathbf{e})\) and \(X(\mathbf{e}^{\prime})\) are uniquely orientable, the sets \(\Omega(\mathbf{e})\) and \(\Omega(\mathbf{e}^{\prime})\) are either equal or disjoint. Therefore, \(\sim\) is an equivalence relation whose equivalence classes induce uniquely orientable graphs. Moreover, by [8, Lemma 5] and the fact that \(X\) is irreducible, there exist an edge \(\mathbf{e}\) such that the subgraph \(X(\mathbf{e})\) is uniquely orientable. Since \(X\) is not uniquely orientable, \(X\neq X(\mathbf{e})\) and we are done. **Lemma 2.8**.: _In the notation of Lemma 2.7, \(X\) is a non-trivial composition graph with respect to the equivalence relation \(\sim\)._ Proof.: It suffices to show that every \(\Delta\in\Omega/\!\!\sim\) is a partitive set of \(X\). This is obviously true if \(|\Delta|=1\). If \(|\Delta|>1\), then, by the definition of \(\sim\), there exists \(\mathbf{e}\in E\) such that \(\Delta=\Omega(\mathbf{e})\). It follows from [8, Lemma 3] that the set \(\Omega(\mathbf{e})\) is partitive. Therefore, \(X\) a composition graph with respect to the equivalence relation \(\sim\). By Lemma 2.7 this relation is non-trivial, hence so is the composition graph. **Remark 2.9**.: _Every module of the composition graph in Lemma 2.8 either induces a uniquely orientable non-empty subgraph of \(X\) or is a singleton._ Note that the composition graph defined in Lemma 2.8 is minimal with respect to \(|\mathcal{C}|\) among all composition graphs \(X=X_{0}[\mathcal{C}]\) where \(X_{0}\) is a permutation graph and \(\mathcal{C}\) is a collection of uniquely orientable subgraphs of \(X\). **Lemma 2.10**.: (cf. [15, Theorem 5.8]) _Let \(X\) be the composition graph as in Lemma 2.8. 
Then for all distinct modules \(\Delta,\Delta^{\prime}\) and every \(\mathbf{e}\in\Delta\times\Delta^{\prime}\), the implication class \(I(\mathbf{e})\) intersects neither \(\Delta\times\Delta\) nor \(\Delta^{\prime}\times\Delta^{\prime}\), and, moreover, \(\Delta\times\Delta^{\prime}\subseteq I(\mathbf{e})\) and \(\Delta^{\prime}\times\Delta\subseteq I(\mathbf{e})^{*}\)._ Proof.: Assume on the contrary that \(I(\mathbf{e})\) contains, say, \(\mathbf{f}\in\Delta\times\Delta\). Then \(I(\mathbf{f})\subseteq\Delta\times\Delta\) by the definition of \(\sim\) and Eq. (1). Since two implication classes either coincide or are disjoint, we obtain \(I(\mathbf{e})=I\left(\mathbf{f}\right)\subseteq\Delta\times\Delta\), a contradiction. Further, let \(\mathbf{e}=(\alpha,\beta)\) and \(\mathbf{e}\in E\) (the case \(\mathbf{e}\in\overline{E}\) is similar). Then \(\{\alpha\}\times\Delta^{\prime}\subseteq E_{\Delta,\Delta^{\prime}}\). Note the complement of \(X_{\Delta^{\prime}}\) is connected by Lemma 2.5, since the graph \(X\) is irreducible and so is \(X_{\Delta^{\prime}}\). Therefore, for every \(\beta^{\prime}\in\Delta^{\prime}\), there is a path from \(\beta\) to \(\beta^{\prime}\) in the complement of \(X_{\Delta^{\prime}}\). This implies that \((\alpha,\beta^{\prime})\in I(\mathbf{e})\) and hence we obtain \(\{\alpha\}\times\Delta^{\prime}\subseteq I(\mathbf{e})\). Similarly, one can see that \(\Delta\times\{\beta\}\subseteq I(\mathbf{e})\). Thus, the implication class \(I(\mathbf{e})\) intersects non-trivially the implication class \(I(\mathbf{e}^{\prime})\) of every other edge \(\mathbf{e}^{\prime}\in E_{\Delta,\Delta^{\prime}}\), and hence \(I(\mathbf{e})=I(\mathbf{e}^{\prime})\). This implies \(E_{\Delta,\Delta^{\prime}}\subseteq I(\mathbf{e})\) and, by symmetry, \(E_{\Delta^{\prime},\Delta}\subseteq I(\mathbf{e})^{*}\). ## 3. Coherent configurations and their extensions In this section we provide a short background of the theory of coherent configurations. We mainly follow the notation and terminology from [7], where further details and all unexplained facts can be found. ### Basic definitions Let \(\Omega\) be a finite set and \(S\) a partition of \(\Omega^{2}\); in particular, the elements of \(S\) are treated as binary relations on \(\Omega\). A pair \(\mathcal{X}=(\Omega,S)\) is called a **coherent configuration** on \(\Omega\) if the following conditions are satisfied: 1. the diagonal relation \(1_{\Omega}\) is a union of some relations of \(S\), 2. for each \(s\in S\), the relation \(s^{*}\) belongs to \(S\), 3. given \(r,s,t\in S\), the number \(c^{t}_{rs}=|\alpha r\cap\beta s^{*}|\) does not depend on \((\alpha,\beta)\in t\). In what follows, we write \(S=S(\mathcal{X})\); any relation belonging to \(S\) (respectively, \(S^{\cup}\)) is called a **basis relation** (respectively, a **relation** of \(\mathcal{X}\)). A set \(\Delta\subseteq\Omega\) is called a **fiber** of \(\mathcal{X}\) if the relation \(1_{\Delta}\) is basis. The set of all fibers is denoted by \(F=F(\mathcal{X})\). Any element of \(F^{\cup}\) is called a **homogeneity set** of \(\mathcal{X}\). In particular, the (left, right) support of every relation of \(\mathcal{X}\) is a homogeneity set. ### Isomorphisms and separability Let \(\mathcal{X}=(\Omega,S)\) and \(\mathcal{X}^{\prime}=(\Omega^{\prime},S^{\prime})\) be two coherent configurations. 
A bijection \(f\colon\Omega\to\Omega^{\prime}\) is called a (combinatorial) **isomorphism** from \(\mathcal{X}\) to \(\mathcal{X}^{\prime}\) if the relation \(s^{f}\) belongs to \(S^{\prime}\) for every \(s\in S\). The combinatorial isomorphism \(f\) induces a natural bijection \(\varphi\colon S\to S^{\prime}\), \(s\mapsto s^{f}\). One can see that \(\varphi\) preserves the numbers from the condition (C3), namely, the numbers \(c^{t}_{rs}\) and \(c^{t^{\varphi}}_{r\varphi,s\varphi}\) are equal for all \(r,s,t\in S\). Every bijection \(\varphi\colon S\to S^{\prime}\) having this property is called an **algebraic isomorphism**, written as \(\varphi\colon\mathcal{X}\to\mathcal{X}^{\prime}\). A coherent configuration is called **separable** if every algebraic isomorphism from it to another coherent configuration is induced by an isomorphism. An algebraic isomorphism \(\varphi\colon\mathcal{X}\to\mathcal{X}^{\prime}\) induces a uniquely determined bijection \(S^{\cup}\to S^{\prime\cup}\) denoted also by \(\varphi\). For any \(\Delta\in F^{\cup}\), we have \(\varphi(1_{\Delta})=1_{\Delta\varphi}\) for a uniquely determined \(\Delta^{\varphi}\in F^{\prime\cup}\). The induced mapping \(\Delta\mapsto\Delta^{\varphi}\) defines a bijection \(F^{\cup}\to F^{\prime\cup}\) that takes \(F\) to \(F^{\prime}\). ### Parabolics An equivalence relation \(e\) on a set \(\Delta\subseteq\Omega\) is called a **partial parabolic** of the coherent configuration \(\mathcal{X}\) if \(e\) is the union of some basis relations; if, in addition, \(\Delta=\Omega\), then \(e\) is called a **parabolic** of \(\mathcal{X}\). Note that the transitive closure of any symmetric relation of \(\mathcal{X}\) is a partial parabolic. A (partial) parabolic \(e\) is said to be **decomposable** if \(e\) is the union of pairwise disjoint non-empty partial parabolics; we say that \(e\) is **indecomposable** if it is not decomposable. Every partial parabolic is the disjoint union of uniquely determined indecomposable partial parabolics; they are called the **indecomposable components** of \(e\). Let \(e\) be a partial parabolic, and let \(\Delta\in\Omega/e\). Denote by \(S_{\Omega/e}\) and \(S_{\Delta}\) the sets of all non-empty relations \[s_{\Omega/e}=\{(\Delta,\Gamma)\in\Omega/e\times\Omega/e\colon\ s_{\Delta, \Gamma}\neq\varnothing\}\quad\text{and}\quad s_{\Delta}=s_{\Delta,\Delta},\] respectively, where \(s\in S\). Then the pairs \(\mathcal{X}_{\Omega/e}=(\Omega/e,S_{\Omega/e})\) and \(\mathcal{X}_{\Delta}=(\Delta,S_{\Delta})\) are coherent configurations called the **quotient** of \(\mathcal{X}\) modulo \(e\) and **restriction** of \(\mathcal{X}\) to \(\Delta\). Note that when \(e=1_{\Delta}\) for a homogeneity set \(\Delta\), we have \(\mathcal{X}_{\Omega/e}=\mathcal{X}_{\Delta}\). Let \(e^{\circ}\) be an indecomposable component of \(e\). Then \(e^{\circ}_{\Omega/e}\) is a reflexive relation of \(\mathcal{X}_{\Omega/e}\), that is, \(e^{\circ}_{\Omega/e}=1_{\Delta^{\circ}}\) where \(\Delta^{\circ}\) is a homogeneity set of \(\mathcal{X}_{\Omega/e}\). Let \(\Pi(e)\) denote the set of all \(\Delta^{\circ}\) as \(e^{\circ}\) runs over the set of all indecomposable component of \(e\). Note that \(\Pi(e)\) is a partition of \(\Omega/e\), while the corresponding equivalence relation is a parabolic of \(\mathcal{X}_{\Omega/e}\). 
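For small point sets, the defining conditions (C1)–(C3) from Section 3.1 can be verified by brute force. The sketch below is a minimal checker of those conditions (the function name, the set-of-pairs representation of relations, and the three-point example are our own choices, not the paper's); it is included only to make the axioms concrete.

```python
from itertools import product

def is_coherent_configuration(omega, S):
    """Check conditions (C1)-(C3) for a list S of binary relations
    (sets of ordered pairs) that is supposed to partition omega x omega."""
    omega = list(omega)
    diag = {(a, a) for a in omega}
    union = set().union(*S)
    if union != set(product(omega, repeat=2)) or sum(map(len, S)) != len(union):
        return False                                   # S is not a partition of omega^2
    if not all(s <= diag or not (s & diag) for s in S):
        return False                                   # (C1): 1_Omega is a union of basis relations
    if not all({(b, a) for (a, b) in s} in S for s in S):
        return False                                   # (C2): closed under s -> s*
    for r, s, t in product(S, repeat=3):               # (C3): the numbers c^t_{rs} are well defined
        counts = {sum((a, g) in r and (g, b) in s for g in omega) for (a, b) in t}
        if len(counts) > 1:
            return False
    return True

# The trivial coherent configuration on three points: the diagonal and its complement.
omega = {0, 1, 2}
diag = {(a, a) for a in omega}
print(is_coherent_configuration(omega, [diag, set(product(omega, repeat=2)) - diag]))  # True
```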
Any algebraic isomorphism \(\varphi\colon\mathcal{X}\to\mathcal{X}^{\prime}\) induces a bijection between partial parabolics of \(\mathcal{X}\) and \(\mathcal{X}^{\prime}\) that preserves the property of a partial parabolic to be indecomposable. Let \(e\) be a partial parabolic of \(\mathcal{X}\) and \(e^{\prime}=\varphi(e)\). Then the mapping \(s_{\Omega/e}\mapsto\varphi(s)_{\Omega^{\prime}/e^{\prime}}\), \(s\in S\), defines an algebraic isomorphism \(\varphi_{\Omega/e}\colon\mathcal{X}_{\Omega/e}\to\mathcal{X}^{\prime}_{\Omega ^{\prime}/e^{\prime}}\). We say that the classes \(\Delta\in\Omega/e\) and \(\Delta^{\prime}\in\Omega^{\prime}/e^{\prime}\) are \(\varphi\)-**associated** if \(\varphi\) takes the indecomposable component of \(e\) containing \(\Delta\) as a class to the indecomposable component of \(e^{\prime}\) containing \(\Delta^{\prime}\) as a class. According to [7, Example 2.3.16], in this case \(\varphi\) induces an algebraic isomorphism \(\varphi_{\Delta,\Delta^{\prime}}\colon\mathcal{X}_{\Delta}\to\mathcal{X}^{ \prime}_{\Delta^{\prime}}\) such that \(\varphi_{\Delta,\Delta^{\prime}}(s_{\Delta})=\varphi(s)_{\Delta^{\prime}}\) for every \(s\in S\). ### Coherent closure There is a natural partial order \(\,\leq\,\) on the set of all coherent configurations on the same set \(\Omega\). Namely, given two such coherent configurations \(\mathcal{X}\) and \(\mathcal{X}^{\prime}\), we set \(\mathcal{X}\leq\mathcal{X}^{\prime}\) if and only if each basis relation of \(\mathcal{X}\) is the union of some basis relations of \(\mathcal{X}^{\prime}\). In other words, this means that the partition of \(\Omega^{2}\) into basis relations of \(\mathcal{X}^{\prime}\) is finer than the partition of \(\Omega^{2}\) into basis relations of \(\mathcal{X}\). The minimal and maximal elements with respect to this ordering are the **trivial** and **discrete** coherent configurations: the basis relations of the former one are the reflexive relation \(1_{\Omega}\) and its complement in \(\Omega\times\Omega\) (if \(|\Omega|\geq 1\)), whereas the basis relations of the latter one are singletons. Note that the trivial and discrete coherent configurations are separable. **Lemma 3.1**.: _A coherent configuration on \(\Omega\) having a transitive tournament on \(\Omega\) as a relation is discrete._ Proof.: Let \(s\) be a relation of a coherent configuration on \(\Omega\). Suppose that the supports of \(s\) equal \(\Omega\) and \(s\) is a transitive tournament. Then there are no two distinct points \(\alpha,\beta\) such that \(|\alpha s|=|\beta s|\). Hence, no two distinct points lie in the same fiber, which means that the coherent configuration is discrete. The **coherent closure**\(\mathsf{WL}(T)\) of a set \(T\) of binary relations on \(\Omega\), is defined to be the smallest coherent configuration on \(\Omega\) such that each relation of \(T\) is a union of some basis relations. The coherent closure is canonical with respect to algebraic isomorphisms in the sense that if \(\varphi,\psi\colon\mathsf{WL}\left(T\right)\to\mathsf{WL}\left(T^{\prime}\right)\) are algebraic isomorphisms such that \(\varphi(t)=\psi(t)\) for all \(t\in T\), then \(\varphi=\psi\). Furthermore, the coherent closure is a closure operator2 on the set of all partitions of \(\Omega^{2}\) satisfying conditions (C1) and (C2) in the definition of a coherent configuration. Footnote 2: with respect to the natural partial order on the partitions of the same set. 
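The coherent closure of a graph's edge relation can be computed by iteratively refining the partition of \(\Omega^{2}\), in the spirit of the two-dimensional Weisfeiler–Leman algorithm. The following Python sketch is a minimal, unoptimized version of this refinement (the loop/edge/non-edge initial colouring and all names are our own choices, not an algorithm from the paper); the stable partition it returns consists of the basis relations of the closure.

```python
from itertools import product

def coherent_closure_of_graph(adj, omega):
    """Iterated refinement of the partition of omega^2, starting from
    {loops, edges, non-edges}; the stable partition gives the basis
    relations of the coherent closure of the edge relation."""
    omega = sorted(omega)
    def initial(a, b):
        if a == b:
            return "loop"
        return "edge" if b in adj[a] else "non-edge"
    colour = {(a, b): initial(a, b) for a, b in product(omega, repeat=2)}
    while True:
        refined = {}
        for (a, b), c in colour.items():
            # multiset of colour pairs seen along all two-step walks a -> g -> b
            walks = sorted((colour[(a, g)], colour[(g, b)]) for g in omega)
            refined[(a, b)] = (c, tuple(walks))
        # canonical renaming of the refined colours
        names = {c: i for i, c in enumerate(sorted(set(refined.values()), key=repr))}
        refined = {p: names[c] for p, c in refined.items()}
        if len(set(refined.values())) == len(set(colour.values())):
            return refined                      # the partition is stable
        colour = refined

# Basis relations of the coherent closure of the path P4 = 1-2-3-4.
adj = {1: {2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
cc = coherent_closure_of_graph(adj, set(adj))
print(len(set(cc.values())))                    # number of basis relations obtained
```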
For points \(\alpha,\beta,\ldots\in\Omega\), we denote by \(\mathcal{X}_{\alpha,\beta,\ldots}\) the coherent closure of the union of \(S\) and the set \(\{1_{\{\alpha\}},1_{\{\beta\}},\ldots\}\). For an equivalence relation \(e\) on \(\Omega\), we denote by \(\mathcal{X}_{e}\) the coherent closure of the union of \(S\) and \(\{e\}\). For a partition \(\pi\) of \(\Omega\), we denote by \(\mathcal{X}_{\pi}\) the coherent closure of the union of \(S\) and all of \(1_{\Delta}\), \(\Delta\in\pi\). The **coherent configuration of a graph \(X\)** is defined to be the coherent closure of its edge set: \(\mathsf{WL}(X)=\mathsf{WL}(\{E(X)\})\). Note that \(\mathsf{WL}(X)=\mathsf{WL}(\overline{X})\). **Lemma 3.2**.: _The \(0\)-equivalence and \(1\)-equivalence of a graph are parabolics of its coherent configuration._ Proof.: Follows from [14, Proposition 4.10]. ### Direct sum and tensor product Let \(\mathcal{X}=(\Omega,S)\) and \(\mathcal{X}^{\prime}=(\Omega^{\prime},S^{\prime})\) be two coherent configurations. Denote by \(\Omega\sqcup\Omega^{\prime}\) the disjoint union of \(\Omega\) and \(\Omega^{\prime}\), and by \(S\boxplus S^{\prime}\) the union of the set \(S\sqcup S^{\prime}\) and the set of all relations \(\Delta\times\Delta^{\prime}\) and \(\Delta^{\prime}\times\Delta\) with \(\Delta\in F\) and \(\Delta^{\prime}\in F^{\prime}\). Then the pair \[\mathcal{X}\boxplus\mathcal{X}^{\prime}=(\Omega\sqcup\Omega^{\prime},S \boxplus S^{\prime})\] is a coherent configuration called the **direct sum** of \(\mathcal{X}\) and \(\mathcal{X}^{\prime}\). One can see that \(\mathcal{X}\boxplus\mathcal{X}^{\prime}\) is the smallest coherent configuration \(\mathcal{Y}\) on \(\Omega\sqcup\Omega^{\prime}\) such that \(\mathcal{X}=\mathcal{Y}_{\Omega}\) and \(\mathcal{X}^{\prime}=\mathcal{Y}_{\Omega^{\prime}}\). It should be noted that \(\mathcal{X}\boxplus\mathcal{X}^{\prime}\) is separable if and only if so are \(\mathcal{X}\) and \(\mathcal{X}^{\prime}\), see [7, Corollary 3.2.8]. **Lemma 3.3**.: _Let \(X\) be a composition graph and \(\pi\) be the partition of the vertex set of \(X\) into the modules of this composition. Then \(\mathsf{WL}\left(X\right)_{\pi}=\boxplus_{\Delta\in\pi}\mathsf{WL}\left(X_{ \Delta}\right)\)._ Proof.: Let \(X=(\Omega,E)\). Then, given distinct \(\Delta,\Gamma\in\pi\), we have \[E_{\Delta,\Gamma}=\varnothing\ \ \text{or}\ \ \Delta\times\Gamma\subseteq E.\] Let \(X^{\prime}=(\Omega,E^{\prime})\) be the graph obtained from \(X\) by removing the set \(E_{0}\) of all edges \(E_{\Delta,\Gamma}\), \(\Delta\neq\Gamma\). Then the vertex set of every connected component of \(X^{\prime}\) is contained in some \(\Delta\in\pi\). Moreover, \(E^{\prime}=E\setminus E_{0}\) and \(E=E^{\prime}\cup E_{0}\). Since \(E_{0}\) is a relation of both \(\mathsf{WL}\left(X^{\prime}\right)_{\pi}\) and \(\mathsf{WL}\left(X\right)_{\pi}\), we obtain \[\mathsf{WL}\left(X^{\prime}\right)_{\pi}=\mathsf{WL}\left(X\right)_{\pi}.\] It is easily seen that each graph \(X_{\Delta}\) is a union of connected components of the graph \(X^{\prime}\), so that the lemma follows from [7, Exercise 3.7.35]. Given coherent configurations \(\mathcal{X}_{1}=(\Omega_{1},S_{1})\) and \(\mathcal{X}_{2}=(\Omega_{2},S_{2})\) denote by \(S_{1}\otimes S_{2}\) the set of all relations \[s_{1}\otimes s_{2}=\left\{((\alpha_{1},\alpha_{2}),(\beta_{1},\beta_{2}))\in( \Omega_{1}\times\Omega_{2})^{2}:\ (\alpha_{1},\beta_{1})\in s_{1},\ (\alpha_{2},\beta_{2})\in s_{2}\right\},\] where \(s_{1}\in S_{1}\) and \(s_{2}\in S_{2}\). 
Then the pair \(\mathcal{X}_{1}\otimes\mathcal{X}_{2}=(\Omega_{1}\times\Omega_{2},S_{1}\otimes S _{2})\) is a coherent configuration. It is called the **tensor product** of \(\mathcal{X}_{1}\) and \(\mathcal{X}_{2}\). For each positive integer \(m\), the \(m\)-tensor power of \(\mathcal{X}\) is denoted by \(\mathcal{X}^{m}\). ### Extensions of coherent configurations and their algebraic isomorphisms Let \(\mathcal{X}\) be a coherent configuration on \(\Omega\) and \(m\) a positive integer. The \(m\)**-extension** of \(\mathcal{X}\) is by definition the following coherent configuration on \(\Omega^{m}\): \[\widehat{\mathcal{X}}^{(m)}=\mathsf{WL}\left(S\left(\mathcal{X}^{m}\right) \cup\{1_{\operatorname{Diag}(\Omega^{m})}\}\right)\] where \(\mathcal{X}^{m}\) is the \(m\)-fold tensor power of \(\mathcal{X}\). We observe that except for trivial cases the \(m\)-extension is a non-homogeneous coherent configuration for all \(m\geq 2\). From the definition it follows that \(\widehat{\mathcal{X}}^{(1)}=\mathcal{X}\) and \(\widehat{\mathcal{X}}^{(m)}\geq\mathcal{X}^{m}\). Let \(\varphi\colon\mathcal{X}\to\mathcal{X}^{\prime}\) be an algebraic isomorphism. Then \(\varphi\) induces the component-wise algebraic isomorphism \(\varphi^{m}:\mathcal{X}^{m}\to\mathcal{X}^{\prime\prime m}\). An algebraic isomorphism \(\psi\colon\widehat{\mathcal{X}}^{(m)}\to\widehat{\mathcal{X}}^{\prime(m)}\) is called an \(m\)**-extension** if 1. \((\operatorname{Diag}(\Omega^{m}))^{\psi}=\operatorname{Diag}((\Omega^{\prime}) ^{m})\), 2. \(s^{\psi}=s^{\varphi^{m}}\) for all \(s\in S(\mathcal{X}^{m})\). Each algebraic isomorphism obviously has a \(1\)-extension, which always coincides with it. Furthermore, for any \(m\) the existence of the \(m\)-extension of \(\varphi\) implies its uniqueness (this follows from the canonicity of the coherent closure with respect to algebraic isomorphisms); we denote it by \(\widehat{\varphi}^{(m)}\). We note that not every algebraic isomorphism has an \(m\)-extension. An algebraic isomorphism is called an \(m\)**-isomorphism** if it has the \(m\)-extension. Every algebraic isomorphism induced by some isomorphism is an \(m\)-isomorphism for all \(m\). Note that every \(m\)-isomorphism is a \(k\)-isomorphism for all \(k\leq m\). The coherent configuration \[\overline{\mathcal{X}}=\overline{\mathcal{X}}^{(m)}=((\widehat{\mathcal{X}}^ {(m)})_{\operatorname{Diag}(\Omega^{m})})^{\partial_{m}^{-1}},\] where \(\partial_{m}\colon\alpha\mapsto(\alpha,\dots,\alpha)\) is the diagonal mapping from \(\Omega\) to \(\Omega^{m}\), is called the \(m\)**-closure** of \(\mathcal{X}\); if \(\mathcal{X}=\overline{\mathcal{X}}^{(m)}\), then \(\mathcal{X}\) is said to be \(m\)**-closed** (in particular, any coherent configuration is \(1\)-closed). Any \(m\)-isomorphism \(\varphi\) uniquely extends to the algebraic isomorphism \(\overline{\varphi}^{(m)}\) between the corresponding \(m\)-closures. A coherent configuration \(\mathcal{X}\) is said to be \(m\)**-separable** if, for all \(\mathcal{X}^{\prime}\), every \(m\)-isomorphism from \(\mathcal{X}\) to \(\mathcal{X}^{\prime}\) is induced by an isomorphism. The integer \[s(\mathcal{X})=\min\{m:\ \mathcal{X}\text{ is $m$-separable}\}\] is called the **separability number** of \(\mathcal{X}\). Obviously, \(s(\mathcal{X})=1\) if and only if a coherent configuration \(\mathcal{X}\) is separable. We will frequently use throughout the paper the following lemma (see [10, Lemma 6.2]). 
Below, given \(s\subseteq\Omega^{2}\) and \(i,j\in\{1,\dots,m\}\), we set \[\operatorname{cyl}_{s}(i,j)=\{(x,y)\in\Omega^{m}\times\Omega^{m}\colon\ (x_{i},y_{j})\in s\}.\] **Lemma 3.4**.: _Let \(\mathcal{X}\) be a coherent configuration on \(\Omega\) and \(m\) a positive integer. For all \(s\in S(\mathcal{X})^{\cup}\) and \(i,j\in\{1,\dots,m\}\), one has:_ 1. \(\operatorname{cyl}_{s}(i,j)\) _is a relation of the coherent configuration_ \(\widehat{\mathcal{X}}^{(m)}\)_;_ 2. \(\operatorname{cyl}_{s}(i,j)^{\widehat{\varphi}}=\operatorname{cyl}_{s} \overline{\nu}(i,j)\) _for any_ \(m\)_-isomorphism_ \(\varphi\)_, where_ \(\widehat{\varphi}=\widehat{\varphi}^{(m)}\) _and_ \(\overline{\varphi}=\overline{\varphi}^{(m)}\)_._ ## 4. Composition graphs and their coherent configurations In this section we study the coherent configurations of composition graphs and their algebraic isomorphisms. Throughout the section, let \(X=(\Omega,E)\) be a composition graph with respect to an equivalence relation \(e\) on \(\Omega\), \(X_{0}\) the quotient graph of \(X\) modulo \(e\), and \(\pi=\Omega/e\). **Lemma 4.1**.: _In the above notation, one has_ \[\operatorname{\mathsf{WL}}\left(X\right)_{\pi}=\boxplus_{\Delta\in\pi} \operatorname{\mathsf{WL}}\left(X_{\Delta}\right)\quad\text{and}\quad \operatorname{\mathsf{WL}}\left(X\right)_{\pi}\geq\operatorname{\mathsf{WL}} \left(X\right)_{e}.\] _In particular, the restriction of \(\operatorname{\mathsf{WL}}\left(X\right)_{e}\) to any \(\Delta\in\pi\) equals \(\operatorname{\mathsf{WL}}\left(X_{\Delta}\right)\)._ Proof.: The first formula follows from Lemma 3.3. Further, as \(e\) is a parabolic of \(\operatorname{\mathsf{WL}}\left(X\right)_{\pi}\) and \(\operatorname{\mathsf{WL}}\left(X\right)_{\pi}\geq\operatorname{\mathsf{WL} }\left(X\right)\), one has \(\operatorname{\mathsf{WL}}\left(X\right)_{\pi}\geq\operatorname{\mathsf{WL} }\left(X\right)_{e}\) and hence: \[\operatorname{\mathsf{WL}}\left(X_{\Delta}\right)=\left(\operatorname{\mathsf{ WL}}\left(X\right)_{\pi}\right)_{\Delta}\geq\left(\operatorname{\mathsf{WL}}\left(X \right)_{e}\right)_{\Delta}\geq\operatorname{\mathsf{WL}}\left(X_{\Delta}\right),\] whence the lemma follows. The next theorem involves the notation and definitions from Section 3.3. **Theorem 4.2**.: _In the above notation, let \(\mathcal{X}^{\prime}\) be a coherent configuration on \(\Omega^{\prime}\), \(\varphi\colon\operatorname{\mathsf{WL}}\left(X\right)_{e}\to\mathcal{X}^{\prime}\) be an algebraic isomorphism, \(e^{\prime}=\varphi(e)\), \(\pi^{\prime}=\Omega^{\prime}/e^{\prime}\), and \(E^{\prime}=\varphi(E)\). Then the graph \(X^{\prime}=(\Omega^{\prime},E^{\prime})\) is a composition graph with respect to \(e^{\prime}\). Moreover, one has:_ 1. \(\mathcal{X}^{\prime}=\mathsf{WL}\left(X^{\prime}\right)_{e^{\prime}}\)_;_ 2. \(\varphi_{\Delta,\Delta^{\prime}}\left(\mathsf{WL}\left(X_{\Delta}\right)\right)= \mathsf{WL}\left(X^{\prime}_{\Delta^{\prime}}\right)\) _for all_ \(\varphi\)_-associated_ \(\Delta\in\pi\) _and_ \(\Delta^{\prime}\in\pi^{\prime}\)_;_ 3. \(\varphi\) _induces an algebraic isomorphism_ \(\varphi_{0}\colon\mathsf{WL}(X_{0})_{\Pi}\to\mathsf{WL}(X^{\prime}_{0})_{\Pi^{ \prime}}\)_, where_ \(\Pi=\Pi(e)\) _and_ \(\Pi^{\prime}=\Pi(e^{\prime})\)_._ Proof.: Recall that \(E\) is a relation of the coherent configuration \(\mathcal{X}=\mathsf{WL}\left(X\right)_{e}\) and, by Lemma 2.3, one has \(e\cdot(E-e)=(E-e)\cdot e=E-e\). 
Since algebraic isomorphism respect the set-theoretical operations with relations, we obtain \(e^{\prime}\cdot(E^{\prime}-e^{\prime})=(E^{\prime}-e^{\prime})\cdot e^{\prime }=E^{\prime}-e^{\prime}\). Hence, by Lemma 2.3, \(X^{\prime}\) is a composition graph with respect to \(e^{\prime}\). Then (i) follows from the definition of \(X^{\prime}\), whereas (ii) follows from the last statement of Lemma 4.1. Finally, the algebraic isomorphism \(\varphi\) induces an algebraic isomorphism \(\varphi_{0}\colon\mathcal{X}_{\Omega/e}\to\mathcal{X}^{\prime}_{\Omega^{ \prime}/e^{\prime}}\) which takes the edge set \(E_{\Omega/e}\) of \(X_{0}\) to the edge set \(E^{\prime}_{\Omega^{\prime}/e^{\prime}}\) of \(X^{\prime}_{0}\). Therefore, \(\varphi_{0}(\mathsf{WL}(X_{0}))=\mathsf{WL}(X^{\prime}_{0})\). It remains to observe that \(\varphi\) and hence \(\varphi_{0}\) takes the classes of \(\Pi\) to those of \(\Pi^{\prime}\). **Theorem 4.3**.: _In the notation of Theorem 4.2, the following holds:_ 1. _if_ \(\varphi_{0}\) _and each of_ \(\varphi_{\Delta,\Delta^{\prime}}\) _are induced by isomorphisms, then so is_ \(\varphi\)_;_ 2. _if_ \(\varphi\) _is an_ \(m\)_-isomorphism for some natural_ \(m\)_, then so are_ \(\varphi_{0}\) _and each of_ \(\varphi_{\Delta,\Delta^{\prime}}\)_._ Proof.: Let \(f_{0}\colon\pi\to\pi^{\prime},\Delta\mapsto\Delta^{\prime}\) be a bijection that induces \(\varphi_{0}\). Further, let \(f_{\Delta,\Delta^{\prime}}\colon\Delta\to\Delta^{\prime}\) be a bijection that induces \(\varphi_{\Delta,\Delta^{\prime}}\). Then there exists a uniquely determined bijection \(f\colon\Omega\to\Omega^{\prime}\) such that \(f\mid_{\Delta}=f_{\Delta,\Delta^{\prime}}\) for all \(\Delta\in\pi\). We shall prove that \(f\) induces \(\varphi\), i.e., \(s^{f}=\varphi(s)\) for all basis relations \(s\) of the coherent configuration \(\mathsf{WL}\left(X\right)_{e}\). Let us first assume that \(s\cap e=\varnothing\). Then, by Lemma 4.1 and by the definition of a direct sum of coherent configurations, one can see that \[s=\bigcup_{(\Delta_{1},\Delta_{2})\in s_{\Omega/e}}\Delta_{1}\times\Delta_{2}\] and then \[s^{f}=\bigcup_{(\Delta^{\prime}_{1},\Delta^{\prime}_{2})\in(s_{\Omega/e})^{f_{ 0}}}\Delta^{\prime}_{1}\times\Delta^{\prime}_{2}.\] Since \(f_{0}\) induces \(\varphi_{0}\), it follows that \((s_{\Omega/e})^{f_{0}}=\varphi_{0}(s_{\Omega/e})=\varphi(s)_{\Omega^{\prime}/ \varphi(e)}=s^{\prime}_{\Omega^{\prime}/e^{\prime}}\), where \(s^{\prime}=\varphi(s)\). Thus, \(s^{f}=\varphi(s)\). Now let \(s\subseteq e\). Denote by \(e^{\circ}\) be the indecomposable component of \(e\) that contains \(s\). Then \(s=\bigcup_{\Delta\in\Lambda}s_{\Delta}\), where \(\Lambda=\Omega/e^{\circ}\) is a class of \(\Pi\). Define \(\Lambda^{\prime}=\Omega^{\prime}/\varphi(e^{\circ})\), which is a class of \(\Pi^{\prime}\). Then: \[s^{f}=\bigcup_{\Delta^{\prime}\in\Lambda^{f}}(s_{\Delta})^{f}=\bigcup_{\Delta^ {\prime}\in\Lambda^{\prime}}(s_{\Delta})^{f_{\Delta,\Delta^{\prime}}}=\bigcup _{\Delta^{\prime}\in\Lambda^{\prime}}\varphi_{\Delta,\Delta^{\prime}}\left(s_{ \Delta}\right)=\bigcup_{\Delta^{\prime}\in\Lambda^{\prime}}\varphi(s)_{\Delta^ {\prime}}=\varphi(s).\] (ii) Put \(\mathcal{X}=\mathsf{WL}\left(X\right)_{e}\) and let \(\mathcal{X}^{\prime}\) be as in Theorem 4.2(i). 
Denote by \(\widehat{\mathcal{X}},\widehat{\mathcal{X}}^{\prime}\) the \(m\)-extensions of \(\mathcal{X}\) and \(\mathcal{X}^{\prime}\), respectively, and by \(\widehat{\varphi}\colon\widehat{\mathcal{X}}\to\widehat{\mathcal{X}}^{\prime}\) the \(m\)-extension of \(\varphi\). Then, for every \(\Delta\in\pi\), we have \(\mathsf{WL}(X_{\Delta})=\mathcal{X}_{\Delta}\) by Lemma 4.1 and \[\widehat{\mathcal{X}}_{\Delta^{m}}\geq\widehat{\mathcal{X}_{\Delta}}^{\,(m)}.\] It follows from Lemma 3.4(i) that \(\cap_{1\leq i,j\leq m}\operatorname{cyl}_{e}(i,j)\) is a relation of \(\widehat{\mathcal{X}}\). Therefore, the support of this relation, which equals \(\Omega^{\prime}=\cup_{\Delta\in\pi}\Delta^{m}\), is a homogeneity set of \(\widehat{\mathcal{X}}\). Furthermore, since \(e^{m}\) is a parabolic of \(\mathcal{X}^{m}\), it follows that \((e^{m})_{\Omega^{\prime}}\) is a partial parabolic of \(\widehat{\mathcal{X}}\), whose classes are \(\Delta^{m}\), \(\Delta\in\pi\). Hence we can define an algebraic isomorphism \(\widehat{\psi}=\widehat{\varphi}_{\Delta^{m}}\) from \(\widehat{\mathcal{X}}_{\Delta^{m}}\) to \(\widehat{\mathcal{X}}^{\prime}_{(\Delta^{\prime})^{m}}\). Since \(\operatorname{Diag}\left(\Delta^{m}\right)=\operatorname{Diag}\left(\Omega^{ m}\right)\cap\Delta^{m}\), one can see that \(\widehat{\psi}\) takes \(\operatorname{Diag}\left(\Delta^{m}\right)\) to \(\operatorname{Diag}\left((\Delta^{\prime})^{m}\right)\). Moreover, for a basis relation \(s=s_{1}\otimes\cdots\otimes s_{m}\) of \(\mathcal{X}^{m}\), \[\widehat{\psi}(s_{\Delta^{m}})=\widehat{\varphi}_{\Delta^{m}}((s_{1})_{\Delta }\otimes\cdots\otimes(s_{m})_{\Delta})=\widehat{\varphi}(s_{1}\otimes\cdots \otimes s_{m})_{\Delta^{m}}=(\varphi_{\Delta,\Delta^{\prime}})^{m}(s).\] Thus, \(\widehat{\psi}\) is the \(m\)-extension of \(\varphi_{\Delta,\Delta^{\prime}}\). ## 5. Uniquely orientable graphs In this section, we obtain an upper bound on the separability number of uniquely orientable graphs. This bound follows from Theorem 5.2. Its proof requires the lemma below, which will also be used in Section 6. In what follows, let \(X=(\Omega,E)\) be a graph, \(\mathcal{X}=\mathsf{WL}(X)\), \(\widehat{\mathcal{X}}\) denote \(\mathcal{X}^{(2)}\), and \(\partial=\partial_{2}\colon\alpha\mapsto(\alpha,\alpha)\) be the diagonal mapping from \(\Omega\) to \(\Omega^{2}\). **Lemma 5.1**.: _Let \(X\) be an irreducible permutation graph. Denote by \(e_{1}=e_{1}(X)\) and \(e_{2}=e_{2}(X)\) partial equivalence relations on \(\Omega^{2}\) such that_ \[\Omega^{2}/e_{1}=\{I(\mathbf{e})\mid\mathbf{e}\in E\}\quad\text{and}\quad \Omega^{2}/e_{2}=\{\Delta^{2}\setminus\operatorname{Diag}\left(\Delta^{2} \right)\mid\Delta\in\Omega/{\sim}\}.\] _Then \(e_{1},e_{2}\) are partial parabolics of \(\widehat{\mathcal{X}}\)._ Proof.: By Lemma 3.4, the coherent configuration \(\widehat{\mathcal{X}}\) has the following relations \[L_{1} = \operatorname{cyl}_{1_{\Omega}}(2,2)\cap\operatorname{cyl}_{ \overline{E}}(1,1)\cap\operatorname{cyl}_{E}(1,2)\cap\operatorname{cyl}_{E}(2,1),\] \[L_{2} = \operatorname{cyl}_{1_{\Omega}}(1,1)\cap\operatorname{cyl}_{ \overline{E}}(2,2)\cap\operatorname{cyl}_{E}(1,2)\cap\operatorname{cyl}_{E}(2,1).\] Hence \(\widehat{\mathcal{X}}\) has the relation \(L_{1}\cup L_{2}\), which is exactly the \(\Gamma\)-relation on the edges of \(X\). Therefore, \(e_{1}\) coincides with the transitive closure of \(L_{1}\cup L_{2}\), which is a partial parabolic of \(\widehat{\mathcal{X}}\). 
Let us prove that \(e_{2}\) is a partial parabolic of \(\widehat{\mathcal{X}}\). Define a partial parabolic \(e_{0}=e_{1}(X)\cup e_{1}(\overline{X})\cup\operatorname{l}_{\operatorname{ Diag}\left(\Omega^{2}\right)}\) of \(\widehat{\mathcal{X}}\). Observe that by Eq. (1) every class \(\Delta^{2}\setminus\operatorname{Diag}\left(\Delta^{2}\right)\) of \(e_{2}\), \(\Delta\in\Omega/{\sim}\), is the union of the four implication classes of \(X_{\Delta}\), which are classes of \(e_{0}\). In order to identify the classes of \(e_{2}\), let us define an auxiliary relation \[s:=\left(\operatorname{cyl}_{1_{\Omega}}(1,1)\cup\operatorname{cyl}_{1_{ \Omega}}(2,2)\right)\cap\left(\left(E\cup\overline{E}\right)\times \operatorname{Diag}\left(\Omega^{2}\right)\right)\] Observe that \(s\) is a relation of \(\widehat{\mathcal{X}}\). Indeed, \(E\) coincides with the support of \(\operatorname{cyl}_{1_{\Omega}}(1,1)\cap\operatorname{cyl}_{E}(1,2)\cap \operatorname{cyl}_{1_{\Omega}}(2,2)\), which is a relation of \(\widehat{\mathcal{X}}\) by Lemma 3.4. Hence \(E\) (and similarly \(\overline{E}\)) is a homogeneity set of \(\widehat{\mathcal{X}}\), whence \(s\in S(\mathcal{X})^{\cup}\). Note that \(s\) consists of all pairs \(((\alpha,\beta),(\gamma,\gamma))\) where \((\alpha,\beta)\in E\cup\overline{E}\) and \(\gamma\in\{\alpha,\beta\}\). Put \(N(\mathbf{e}):=\cup_{\mathbf{f}\in I(\mathbf{e})}\mathbf{f}s\) (here we recall that \(\mathbf{f}s\) means the neighborhood of a point \(\mathbf{f}\) in the relation \(s\)). Then, for every \(\mathbf{e}\in E\cup\overline{E}\), it follows from the definition of \(\Omega(\mathbf{e})\) (see the paragraph before Corollary 2.6) that \[N(\mathbf{e})=\Omega(\mathbf{e})^{\partial}.\] Suppose that \(\Delta\in\Omega/{\sim}\) and \(\mathbf{e}\in E\cup\overline{E}\). If \(\mathbf{e}\in\Delta\times\Delta^{\prime}\) for some \(\Delta^{\prime}\in\Omega/{\sim}\), \(\Delta\neq\Delta^{\prime}\), then \(\Delta^{\partial},\Delta^{\prime\partial}\subsetneq N(\mathbf{e})\) by Lemma 2.10. On the other hand, if \(\mathbf{e}\in\Delta^{2}\setminus\operatorname{Diag}\left(\Delta^{2}\right)\), then \(N(\mathbf{e})=\Omega(\mathbf{e})^{\partial}=\Delta^{\partial}\) by Corollary 2.6; hence, given \(\mathbf{f}\in E\cup\overline{E}\), one has \[N(\mathbf{e})=N(\mathbf{f})\quad\text{if and only if}\quad\mathbf{f}\in\Delta^{2} \setminus\operatorname{Diag}\left(\Delta^{2}\right). \tag{3}\] Now, if \(\rho\) denotes the natural surjection \(\rho\colon\Omega^{2}\to\Omega^{2}/e_{0}\), then: * the \(\rho\)-image of \(\operatorname{Diag}\left(\Omega^{2}\right)\) is naturally identified with \(\operatorname{Diag}\left(\Omega^{2}\right)\) itself; * the \(\rho\)-image of \(I(\mathbf{e})\) is a singleton for every \(\mathbf{e}\in E\cup\overline{E}\); * the set of all pairs \(\rho(I(\mathbf{e}))\) and \(\rho(I(\mathbf{f}))\) such that \(\rho(N(\mathbf{e}))=\rho(N(\mathbf{f}))\) is an equivalence relation, say \(e^{\prime}\), on \(\rho\left(E\cup\overline{E}\right)\). Further, since \(s\) is a relation of \(\widehat{\mathcal{X}}\), \(\rho(s)\) is a relation of the quotient of the coherent configuration \(\widehat{\mathcal{X}}\) modulo \(e_{0}\). Therefore, according to [7, Exercise 2.7.8(1)], we see that the equivalence relation \(e^{\prime}\) is a partial parabolic of this quotient. It follows that \(e^{*}=\rho^{-1}(e^{\prime})\) is a partial parabolic of \(\widehat{\mathcal{X}}\) (see [7, Theorem 3.1.11]). Clearly, every class of \(e^{*}\) is a union of some classes of \(e_{0}\). 
Every class of \(e_{2}\) that intersects some class \(\Lambda\) of \(e^{*}\) must coincide with \(\Lambda\) by virtue of Eq. (3). To complete the proof, put \(\overline{e}=e^{*}\setminus e_{2}\) and define a binary relation \[s_{1}=\{((\alpha,\beta),(\alpha^{\prime},\beta^{\prime}))\in(E\cup\overline{E} )^{2}:\ ((\alpha,\alpha^{\prime}),(\alpha,\beta))\in\overline{e}\},\] It is easily seen that \(s_{1}=(\overline{e}\cap\operatorname{cyl}_{1_{\Omega}}(1,1))\cdot \operatorname{cyl}_{1_{\Omega}}(2,1)\) and hence \(s_{1}\) is a relation of \(\widehat{\mathcal{X}}\). By the above, a class \(\Lambda\) of \(e^{*}\) belongs to \(e_{2}\) if and only if \(s_{1}\cap\Lambda^{2}=\varnothing\). Consequently, \(e_{2}=e^{*}\setminus(e^{*}\cdot(s_{1}\cap e^{*})\cdot e^{*})\), which implies that \(e_{2}\) is a partial parabolic of \(\widehat{\mathcal{X}}\). **Theorem 5.2**.: _Let \(X\) be a uniquely orientable graph. Then for all \(\mathbf{e}\in E\) and \(\mathbf{f}\in\overline{E}\) the coherent configuration \(\widehat{\mathcal{X}_{\mathbf{e},\mathbf{f}}}\) is discrete._ Proof.: Choose an arbitrary \(\mathbf{e}\in E\) and show that \(I(\mathbf{e})^{\partial}=\{(\alpha^{\partial},\beta^{\partial})\mid(\alpha, \beta)\in I(\mathbf{e})\}\) is a relation of \(\widehat{\mathcal{X}_{\mathbf{e}}}\) (the case \(\mathbf{e}\in\overline{E}\) is similar, as the graph \(\overline{X}\) is also uniquely orientable). Since \(I(\mathbf{e})\) and \(I(\mathbf{e})^{*}\) are the only transitive orientations of \(X\), the partial parabolic \(e_{1}\) of \(\widehat{\mathcal{X}}\), defined in Lemma 5.1, has the set of classes \(\Omega^{2}/e_{1}=\{I(\mathbf{e}),I(\mathbf{e})^{*}\}\). It follows that \(I(\mathbf{e})\), which is the neighborhood of \(\mathbf{e}\) in \(e_{1}\), is a homogeneity set of \(\widehat{\mathcal{X}_{\mathbf{e}}}\) (see [7, Lemma 3.3.5]). Since \(\Delta:=\operatorname{Diag}\left(\Omega^{2}\right)\) is a homogeneity set of \(\widehat{\mathcal{X}}\), \[s_{1}=(\Delta\times I(\mathbf{e}))\cap\operatorname{cyl}_{1_{\Omega}}(1,1) \quad\text{and}\quad s_{2}=(I(\mathbf{e})\times\Delta)\cap\operatorname{cyl} _{1_{\Omega}}(2,2)\] are relations of \(\widehat{\mathcal{X}_{\mathbf{e}}}\). Hence \(s_{1}\cdot s_{2}=\{(\alpha^{\partial},\beta^{\partial})\mid(\alpha,\beta)\in I (\mathbf{e})\}\) is a relation of \(\widehat{\mathcal{X}_{\mathbf{e}}}\), as required. Let \(\mathbf{e}\in E\) and \(\mathbf{f}\in\overline{E}\). By the above, \(I(\mathbf{e})^{\partial}\cup I(\mathbf{f})^{\partial}\) is a relation of \(\widehat{\mathcal{X}_{\mathbf{e},\mathbf{f}}}\), as the set of relations of \(\widehat{\mathcal{X}_{\mathbf{e},\mathbf{f}}}\) includes the relations of both \(\widehat{\mathcal{X}_{\mathbf{e}}}\) and \(\widehat{\mathcal{X}_{\mathbf{f}}}\). Furthermore, by Lemma 2.2, it is a transitive tournament on \(\Delta\). Therefore, by Lemma 3.1, the restriction of \(\widehat{\mathcal{X}_{\mathbf{e},\mathbf{f}}}\) to \(\Delta\) is discrete, and so is \(\widehat{\mathcal{X}_{\mathbf{e},\mathbf{f}}}\). The theorem is proved. **Proposition 5.3**.: _The separability number of a uniquely orientable graph is at most \(6\)._ Proof.: Let \(X\) be a uniquely orientable graph and \(\mathcal{X}=\mathsf{WL}\left(X\right)\). By Theorem 5.2, a \(2\)-point extension of \(\widehat{\mathcal{X}}\) is a discrete coherent configuration. Hence, by Theorem [10, Theorem 4.6(1)], the separability number of \(\widehat{\mathcal{X}}\) is at most \(3\). 
Finally, by Theorem [10, Theorem 4.6(3)], the separability number of \(\mathcal{X}\) is at most \(2\cdot 3=6\). ## 6. Proof of Theorem 1.1 We follow the notation from Section 5. Let \(X\) be a permutation graph. By [7, Corollary 4.6.24], it suffices to verify that the separability number of the coherent configuration \(\mathcal{X}=\mathsf{WL}\left(X\right)\) is at most \(6\). If \(X\) is uniquely orientable, then the result follows from Proposition 5.3. Suppose that \(X\) is not uniquely orientable. Let \(\psi\colon\mathcal{X}\to\mathcal{X}^{\prime}\) be a \(6\)-isomorphism. Then \(\psi\) has the \(2\)-extension \(\widehat{\psi}\). We will need the following lemma. **Lemma 6.1**.: _Assume that \(X\) is not uniquely orientable. Then it is a non-trivial composition graph with respect to an equivalence relation \(e\) such that \(e^{\partial}\) is a partial parabolic of the coherent configuration \(\widehat{\mathcal{X}}\)._ Proof.: If \(X\) is reducible, the result follows from Lemmas 2.4 (with \(e\) being either the \(0\)- or \(1\)-equivalence) and 3.2. Hence, in what follows, we assume that \(X\) is irreducible. Then \(X\) is a non-trivial composition graph as defined in Lemma 2.8, and let \(e\) denote the equivalence relation \(\sim\) (see the paragraph before Lemma 2.7). Let \(e_{2}\) denote the partial parabolic of \(\widehat{\mathcal{X}}\) as defined in Lemma 5.1, and \(\Lambda\) the support of \(e_{2}\). Denote by \(s\) a relation on \(\Lambda\times\operatorname{Diag}\left(\Omega^{2}\right)\) such that two points are in \(s\) if their first or second coordinates are equal, i.e., \[s=\left(\Lambda\times\operatorname{Diag}\left(\Omega^{2}\right)\right)\cap \left(\operatorname{cyl}_{\Omega}(1,1)\cup\operatorname{cyl}_{\Omega}(2,2) \right).\] Then the transitive closure \(t\) of \(s^{*}\cdot s\) (which is obviously reflexive and symmetric) is a partial parabolic of \(\widehat{\mathcal{X}}_{\operatorname{Diag}\left(\Omega^{2}\right)}\). By the definition of \(e_{2}\), the classes of \(t\) coincide with \(\Delta^{\partial}\), where \(\Delta\) runs over the non-singleton classes of \(e\). Thus, \(e^{\partial}=t\cup 1_{\operatorname{Diag}\left(\Omega^{2}\right)\setminus T}\), where \(T\) is the support of \(t\). Since \(T\) is a homogeneity set, \(e^{\partial}\) is a partial parabolic of \(\widehat{\mathcal{X}}\). The lemma is proved. Let \(e\) be the equivalence relation as defined in Lemma 6.1. By induction, we may assume that the coherent configuration \(\operatorname{\mathsf{WL}}\left(X_{0}\right)\), where \(X_{0}\) is the quotient graph of \(X\) modulo \(e\), and the coherent configurations \(\operatorname{\mathsf{WL}}\left(X_{\Delta}\right)\), \(\Delta\in\Omega/e\), are all \(6\)-separable. Put \(\widehat{e}=e^{\partial}\). Then \(\mathcal{X}^{\prime}\) coincides with the image of \(\widehat{\psi}(\mathcal{X}^{\partial})\) under the bijection \(\partial^{\prime}\colon(\alpha^{\prime},\alpha^{\prime})\mapsto\alpha^{\prime}\), \(\alpha^{\prime}\in\Omega^{\prime}\). It follows that \(\widehat{\psi}\) induces an algebraic isomorphism \(\varphi\colon\mathcal{X}_{e}\to\mathcal{X}^{\prime}_{e^{\prime}}\) where \(e^{\prime}=(\widehat{\psi}(\widehat{e}))^{\partial^{\prime}}\), and such that \[\varphi(s)=\psi(s)\text{ for all }s\in S(\mathcal{X}). \tag{4}\] Therefore, \(\mathcal{X}\), \(\mathcal{X}^{\prime}\) and \(\varphi\) satisfy the hypothesis of Theorem 4.2. 
Hence there exist the algebraic isomorphisms \(\varphi_{\Delta,\Delta^{\prime}}\) and \(\varphi_{0}\) as defined in Theorem 4.2(ii) and (iii). Further, by Theorem 4.3(ii), they are \(6\)-isomorphisms. Since \(\operatorname{\mathsf{WL}}\left(X_{0}\right)\) and \(\operatorname{\mathsf{WL}}\left(X_{\Delta}\right)\), for all \(\Delta\in\Omega/e\), are \(6\)-separable, \(\varphi_{0}\) and all \(\varphi_{\Delta,\Delta^{\prime}}\) are induced by isomorphisms. Thus, by Theorem 4.3(i), \(\varphi\) is also induced by an isomorphism. By Eq. (4), it follows that \(\psi\) is induced by an isomorphism, whence \(\mathcal{X}\) is \(6\)-separable.
2302.01858
A Computational Separation Between Quantum No-cloning and No-teleportation
Two of the fundamental no-go theorems of quantum information are the no-cloning theorem (that it is impossible to make copies of general quantum states) and the no-teleportation theorem (the prohibition on sending quantum states over classical channels without pre-shared entanglement). They are known to be equivalent, in the sense that a collection of quantum states is teleportable without entanglement if and only if it is clonable. Our main result suggests that this is not the case when computational efficiency is considered. We give a collection of quantum states and quantum oracles relative to which these states are efficiently clonable but not efficiently teleportable without entanglement. Given that the opposite scenario is impossible (states that can be teleported without entanglement can always trivially be cloned), this gives the most complete quantum oracle separation possible between these two important no-go properties. We additionally study the complexity class $\mathsf{clonableQMA}$, a subset of $\mathsf{QMA}$ whose witnesses are efficiently clonable. As a consequence of our main result, we give a quantum oracle separation between $\mathsf{clonableQMA}$ and the class $\mathsf{QCMA}$, whose witnesses are restricted to classical strings. We also propose a candidate oracle-free promise problem separating these classes. We finally demonstrate an application of clonable-but-not-teleportable states to cryptography, by showing how such states can be used to protect against key exfiltration.
Barak Nehoran, Mark Zhandry
2023-02-03T17:05:38Z
http://arxiv.org/abs/2302.01858v2
# A Computational Separation Between ###### Abstract Two of the fundamental no-go theorems of quantum information are the no-cloning theorem (that it is impossible to make copies of general quantum states) and the no-teleportation theorem (the prohibition on sending quantum states over classical channels without pre-shared entanglement). They are known to be equivalent, in the sense that a collection of quantum states is teleportable without entanglement if and only if it is clonable. Our main result suggests that this is not the case when computational efficiency is considered. We give a collection of quantum states and quantum oracles relative to which these states are efficiently clonable but _not_ efficiently teleportable without entanglement. Given that the opposite scenario is impossible (states that can be teleported without entanglement can always trivially be cloned), this gives the most complete quantum oracle separation possible between these two important no-go properties. We additionally study the complexity class \(\mathsf{clonableQMA}\), a subset of \(\mathsf{QMA}\) whose witnesses are efficiently clonable. As a consequence of our main result, we give a quantum oracle separation between \(\mathsf{clonableQMA}\) and the class \(\mathsf{QCMA}\), whose witnesses are restricted to classical strings. We also propose a candidate oracle-free promise problem separating these classes. We finally demonstrate an application of clonable-but-not-teleportable states to cryptography, by showing how such states can be used to protect against key exfiltration. ## 1 Introduction One of the defining features of quantum information is the no-cloning theorem: that it is impossible to copy a general quantum state [11, 2, 12]. Another fundamental no-go theorem is the no-teleportation theorem: that it is impossible (without any pre-shared entanglement) to send quantum information over a classical channel [10]. Because of the potential confusion with the very possible task of quantum teleportation [1], we prefer to use the term _telegraphing_ to refer to this latter task. These two no-go theorems are understood to be _equivalent_, in the following sense: given a set of quantum states \(S=\{|\psi_{1}\rangle,|\psi_{2}\rangle,\cdots\}\), then states in \(S\) can be cloned if and only if they can be telegraphed. Here, \(S\) being cloned means there is a process mapping \(|\psi_{i}\rangle\) to two copies of \(|\psi_{i}\rangle\). \(S\) being telegraphed means there is a deconstruction process which maps \(|\psi_{i}\rangle\) into classical information \(\tau\), and a reconstruction process that maps \(\tau\) back to \(|\psi_{i}\rangle\). In fact, both cloning and telegraphing are possible if and only if the states in \(S\) are orthogonal. Introducing Computational Constraints.The above discussion is information-theoretic. Here, we ask: _what happens when computational constraints are considered_? We consider a set \(S\) to be computationally clonable if there is a _polynomial-time_ quantum algorithm that solves the cloning task on \(S\). Likewise, we consider \(S\) to be computationally telegraphable if there is both a polynomial-time deconstruction and corresponding polynomial-time reconstruction procedure for \(S\). We observe the trivial relationship that computational telegraphing implies computational cloning: by running reconstruction twice on the deconstructed classical information \(\tau\), one obtains two copies of \(|\psi_{i}\rangle\), therefore cloning. 
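As a toy illustration of this observation, with classical stand-ins for the quantum objects (the four orthogonal basis states, the function names, and the dimension below are our own choices, not the paper's): for orthogonal states, deconstruction can simply measure the label, reconstruction can re-prepare the state, and running reconstruction twice yields a cloner.

```python
import numpy as np

# Toy example: S = {|0>, |1>, |2>, |3>}, four orthogonal states of one 4-level system.
DIM = 4
basis = [np.eye(DIM)[i] for i in range(DIM)]

def deconstruct(psi):
    """'Send over a classical channel': measure in the basis that distinguishes S."""
    probs = np.abs(psi) ** 2
    return int(np.random.choice(DIM, p=probs / probs.sum()))

def reconstruct(tau):
    """Re-prepare the state from the classical message tau."""
    return basis[tau]

def clone(psi):
    """Telegraphing gives cloning: run reconstruction twice on the same tau."""
    tau = deconstruct(psi)
    return reconstruct(tau), reconstruct(tau)

psi = basis[2]
c1, c2 = clone(psi)
print(np.allclose(c1, psi) and np.allclose(c2, psi))   # True for every state in S
```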
However, the converse is a priori unclear: if a state can be cloned efficiently, it is not clear if there is an efficient process to deconstruct the state into a classical \(\tau\) and also an efficient process to turn \(\tau\) back into the quantum state. ### Our Results In this work, we provide evidence that no-cloning and no-telegraphing are _not_ equivalent properties in the computationally bounded setting. Our main theorem is: **Theorem 1.1** (Informal).: _There exists a quantum oracle \(\mathcal{O}\) and a set of quantum states \(S\) such that \(S\) can be efficiently cloned relative to \(\mathcal{O}\), but there is no efficient telegraphing procedure relative to \(\mathcal{O}\). Even more, there is no telegraphing procedure where the reconstruction is efficient, even if we allow deconstruction to be unbounded._ In other words, while no-cloning implies no-telegraphing, the converse is not true, at least relative to a quantum oracle. We prove this theorem by starting from a certain set of orthogonal but computationally _unclonable_ states (related to those used by [1]). By the trivial relationship that telegraphing implies clonability, we observe that these states cannot be efficiently telegraphed. We then augment the setup with a cloning oracle that can clone the states. The main technical difficulty is that we need to show that adding this cloning oracle does not allow for telegraphing. We do this through a multistep process, gradually converting any supposed telegraphing scheme that uses the cloning oracle into a telegraphing scheme that does not use the cloning oracle, reaching a contradiction. An interesting consequence of our proof is that the no-telegraphing property holds, _even if the sender is allowed to be inefficient_. The only party that needs to be efficient to achieve a separation is the receiver. We additionally bring to light certain applications of clonable-but-untelegraphable states to both complexity theory and cryptography. Complexity Theory.An important open problem in quantum complexity theory is the question of whether quantum states are more powerful than classical strings as proofs (or witnesses) for efficient quantum computation. This is the question of whether the class \(\mathsf{QMA}\) of problems which have efficiently verifiable quantum proofs is contained in the class \(\mathsf{QCMA}\) of problems where a classical proof suffices [1]. A number of recent works [1, 2, 10, 11] have endeavored to give increasingly strong oracle separations between the two classes. We take a slightly different approach, inspired by clonable-but-untelegraphable states. We define a class \(\mathsf{clonableQMA}\) of problems which have quantum proofs that are _efficiently clonable_. It is easy to see that \(\mathsf{QCMA}\subseteq\mathsf{clonableQMA}\subseteq\mathsf{QMA}\), and we argue that \(\mathsf{clonableQMA}\) is not likely equal to either of the other two. We use the clonable-but-untelegraphable states of Theorem 1.1 to show a quantum oracle separation with \(\mathsf{QCMA}\): **Theorem 1.2** (Informal).: _There exists a unitary quantum oracle \(\mathcal{O}\) such that \(\mathsf{clonableQMA}^{\mathcal{O}}\) is not contained in \(\mathsf{QCMA}^{\mathcal{O}}\)._ We also give a candidate _oracle-free_ promise problem separating these classes, and we show that any such problem would immediately yield clonable-but-untelegraphable quantum states. 
Finally, we argue that it is unlikely that \(\mathsf{QMA}\) is contained in \(\mathsf{clonableQMA}\), as it would mean that every \(\mathsf{QMA}\)-complete problem would have efficiently clonable witnesses and act as a barrier against the existence of public-key quantum money. Cryptography.While no-cloning has seen significant attention in cryptography (e.g. [1, 1, 1, 1, 2]), no-telegraphing has so far received little-to-no attention. We give a proof-of-concept application of clonable-but-untelegraphable states to protecting against key exfiltration. See Section 1.2 below for a description of the result. This motivates the use of no-telegraphing as an important cryptographic tool. ### Motivation The importance of the interplay between quantum information and computational complexity is becoming increasingly clear. For example, computational complexity played a crucial role in Harlow and Hayden's resolution to the black hole Firewall Paradox [11]. This interplay is also fundamentally important for many cryptographic applications. For example, despite certain information-theoretically secure quantum protocols [1], most cryptographic tasks still require computational constraints even when using quantum information [12, 13]. Nevertheless, combining quantum information with computational constraints opens up numerous possibilities, from minimizing computational assumptions [10, 1] to classically-impossible applications [1]. The previous examples show that scenarios with quantum information can be fundamentally altered by the presence of computational considerations. It is therefore important to develop a broad understanding of quantum information in the computationally bounded setting. This includes the famous no-go theorems of quantum information. Numerous prior works have studied no-cloning in the computational setting (see references in Section 1.3). However, the computational difficulty of telegraphing has, to the best of our knowledge, not been previously studied. As our work shows, the equivalence of two of the most important quantum no-go theorems no longer holds in the computationally bounded setting, giving a very different picture and allowing for new possibilities that do not exist in the information-theoretic setting. Cryptographic Applications.Besides addressing fundamental questions, we also explore potential cryptographic applications of our separation. Concretely, consider the following key exfiltration scenario: a server contains a cryptographic key, say, for decrypting messages. An attacker remotely compromises the server, and then attempts to exfiltrate the key, sending it from the compromised server to the attacker's computer. A classical approach to mitigate this problem is _big key cryptography_[1, 2, 10, 11], where the secret decryption key is made inconveniently large. This may make it impossible for the attacker to exfiltrate the key (perhaps there are bandwidth constraints on outgoing messages from the server) or at least makes it easier to detect the exfiltration (the large communication of the key may be easily detected). Unfortunately, such large keys are also inconvenient for the honest server, as now the server needs significant storage for each key (perhaps the server is storing keys for many different users). Moreover, decrypting using the key may be problematic, since the server will have to compute on a large key, which at least requires reading it from storage. 
If the server is decrypting many messages simultaneously using parallelism, then each process would presumably need to separately load the entire key from memory. A quantum proposal would be to have decryption keys be quantum states. It is still reasonable to consider such a setting where all communication is classical: after all, the messages being encrypted and decrypted may just be classical. The server could therefore force all outgoing communication to be classical by measuring it. This would prevent the remote adversary from exfiltrating the key, by the non-telegraphability of the key. Since telegraphing trivially implies cloning, we note that any classical program which has been quantum copy protected [1] will be immune from classical exfiltration. Copy protection for decryption keys was first considered by [10], and was constructed from indistinguishability obfuscation by [11], along with copy protection for pseudorandom functions and signatures. However, using copy protection comes with its own limitations. Indeed, suppose the server is decrypting a large volume of incoming communication under a single decryption key. Classically, the server could divide the communication across several processors, which each decrypt in parallel. Unfortunately, this requires giving each processor a copy of the key. While trivial classically, the whole point of copy protection is to prevent copying. In fact, [1] consider exactly the task of preventing the use of parallelism via copy protection. The server could simply store numerous copies of each copy-protected key, but it would have to store these keys forever, even when the server is sitting idle or processing other tasks. This could be a major burden on the server. It also requires security to hold given multiple copies of the program, a non-trivial extension to single-copy security [12]. Instead, we imagine a protocol where the quantum keys _are_ copy-able, but remain impossible to telegraph. This would protect against exfiltration, while allowing the server to only store a single copy of the key for long-term use. Then, if the incoming communication load ever becomes large, it can copy the key and spread the copies amongst several quantum processors that process the communication in parallel. After the load subsides and the processors return to being idle, the copies of the key can simply be discarded. Assuming states that can be cloned but not telegraphed, we show how to realize an encryption scheme with the above features: **Theorem 1.3** (Informal).: _Assume the existence of clonable-but-untelegraphable states which can be efficiently sampled. Additionally assume the existence of extractable witness encryption for \(\mathsf{QMA}\). Then there exists public key encryption with quantum secret keys that can be cloned but not exfiltrated._ For the necessary witness encryption, we could use [1] as a candidate. Note that the states we construct relative to an oracle in Theorem 1.1 are efficiently sampleable. However, witness encryption requires non-black-box use of the \(\mathsf{QMA}\) language, meaning it cannot be applied to query-aided languages like that implied by Theorem 1.1. However, any standard-model realization of clonable-but-non-exfiltrateable states would suffice, and our Theorem 1.1 gives some evidence that such states exist. This is just one potential application of no-telegraphing that does not follow immediately from no-cloning. Our hope is that this work will motivate further study of no-telegraphing in cryptography.
On Oracles.Our separation between no-cloning and no-telegraphing requires oracles. Given the current state of complexity theory and the fact that these no-go properties are equivalent for computationally unbounded adversaries, we cannot hope to achieve unconditional separations between them in the standard model. As such, either computational assumptions or a relativized separation (that is, oracles) are required. For cryptographic applications such as Theorem 1.3, certainly a standard-model construction from computational assumptions would be needed. On the other hand, by using oracles, we are able to give an unconditional separation, independent of what assumptions may or may not hold. While such a relativized separation does not necessarily rule out a standard-model equivalence, it shows a fundamental barrier to such an equivalence. Indeed, an immediate corollary of Theorem1.1 is: **Corollary 1.4**.: _There is no black box reduction showing the equivalence of cloning and telegraphing in the computational setting._ We also note that our oracles as stated are sampled from a distribution, rather than being fixed oracles. This is typical of the cryptographic black-box separation literature. In the setting of uniform adversaries, a routine argument allows us to turn this into a fixed oracle relative to which the separations hold. We do this explicitly in the proof of Theorem1.2 to get a separation relative to a fixed unitary oracle, and we further note that this directly implies such a separation between cloning and telegraphing as well. ### Other Related Work Cloning in the complexity-theoretic setting has been extensively studied during the last decade, in the context of public key quantum money [1, 1, 2, 12, 19, 20] and copy protection [1, 2, 1]. A recent development in quantum money has been quantum money with _classical_ communication [11, 1, 22]. This can be seen as a complement to our separation, giving a setting where a quantum state _is_ telegraphable, but not clonable. In order to overcome the trivial telegraphing-implies-cloning result, however, these works move to _interactive_ telegraphing, involving two or more messages between sender and receiver. Moreover, telegraphing happens in only a weak sense: the receiver does not get the original quantum state. Instead, the sender's quantum money state is actually irreversibly destroyed, but in the process the receiver is able to create a single new quantum money state. ### Technical Overview Let \(f\) be a random function with codomain much smaller than domain. Our clonable-but-not-telegraphable states will be the the superpositions over pre-images of \(f\): \[|\psi_{z}\rangle=\frac{1}{\sqrt{|f^{-1}(z)|}}\sum_{x|f(x)=z}|x\rangle\] where \(f^{-1}(z):=\{x|f(x)=z\}\) is the set of preimages of \(z\) in \(f\). As of now, the \(|\psi_{z}\rangle\) are easily shown to be _un_clonable: if one could create two copies of \(|\psi_{z}\rangle\), then measuring each would give two pre-images \(x_{1},x_{2}\) such that \(f(x_{1})=z=f(x_{2})\). Since \(f\) has a small codomain, there are exponentially many \(x\) in the support of \(|\psi_{z}\rangle\), and therefore \(x_{1}\neq x_{2}\) with overwhelming probability. Thus we obtain a collision for \(f\), which is known to be intractable for query-bounded algorithms to random oracles, even ones with small codomains [12]. That the \(|\psi_{z}\rangle\) are unclonable seems to be counterproductive for our aims. 
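The following small numerical sketch uses toy parameters of our own choosing (10 input bits and a 3-bit codomain, rather than the exponentially large sizes of the actual construction); it illustrates both that \(|\psi_{z}\rangle\) and \(|\psi_{z^{\prime}}\rangle\) are orthogonal for \(z\neq z^{\prime}\) and that measuring two independent copies of the same \(|\psi_{z}\rangle\) produces a collision of \(f\) with high probability.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out = 10, 3                        # toy sizes; in the paper these are exponential
N, M = 2 ** n_in, 2 ** n_out
f = rng.integers(M, size=N)                # random function with a small codomain

def psi(z):
    """Amplitude vector of the uniform superposition over the preimages f^{-1}(z)."""
    v = (f == z).astype(float)
    return v / np.linalg.norm(v)

# Distinct z give orthogonal states (their supports are disjoint):
print(abs(psi(0) @ psi(1)))                # 0.0

# Measuring two independent copies of |psi_z> in the computational basis returns
# two preimages of z, which collide under f and are distinct with high probability.
z = 0
p = psi(z) ** 2
x1, x2 = rng.choice(N, p=p), rng.choice(N, p=p)
print(f[x1] == z == f[x2], x1 != x2)       # True True (except with small probability)
```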
But it allows us to also readily prove that the \(|\psi_{z}\rangle\) are also un-telegraphable: if one could telegraph \(|\psi_{z}\rangle\), it means one can generate a classical \(a_{z}\) such that from \(a_{z}\) it is possible to reconstruct \(|\psi_{z}\rangle\). But by running reconstruction multiple times, one obtains multiple copies of \(|\psi_{z}\rangle\), contradicting no-cloning. This is not exactly how we prove un-telegraphability, but provides an intuition for why it should be true. Now that we have an untelegraphable set of states, we make them clonable by adding a cloning oracle, which very roughly maps \[|\psi_{z}\rangle\mapsto|\psi_{z}\rangle|\psi_{z}\rangle\] for all valid states \(|\psi_{z}\rangle\) and does nothing on states that are not uniform superpositions of pre-images. This clearly makes the \(|\psi_{z}\rangle\) clonable. The challenge is then to prove that telegraphing is still impossible, even given this cloning oracle. This is proved through a sequence of stages: * **Stage 1.** Here, we remove the cloning oracle, and just consider the oracle \(f\). We show that, with arbitrary classical advice \(a_{z}\) of polynomially-bounded size dependent on \(z\) (which could have been constructed in an arbitrary inefficient manner), it is impossible for a query-bounded algorithm to reconstruct \(|\psi_{z}\rangle\). This is proved by showing that such a reconstruction procedure could be used to contradict known lower bounds on the hardness of finding \(K\) collisions [14]. The above shows that even if we give the _sender_ the cloning oracle, then telegraphing is still impossible for a query-bounded receiver, as long as the receiver does not have access to the cloning oracle. Indeed, the hypothetical output of such a sender would be an \(a_{z}\) contradicting the above. * **Stage 2.** Here, we upgrade the receiver to have a limited cloning oracle that only clones a single \(|\psi_{z}\rangle\), namely the unique state \(|\psi_{z}\rangle\) that the receiver is trying to reconstruct. The intuition is that such a limited cloning oracle is of no use, since in order to query it on a useful input, the receiver needs to have \(|\psi_{z}\rangle\) in the first place. We make this formal using a careful analysis. * **Stage 3.** Finally, we give the receiver the full cloning oracle. We show that if there is such a query-bounded receiver that can successfully reconstruct, then we can compile it into a query-bounded receiver for Stage 2, reaching a contradiction. This is the most technically challenging part of our proof. The rough idea is that the Stage 2 receiver will simulate a set of imposter oracles, where it forwards queries relating to \(z\) to its own \(z\)-only cloning oracle, and all other queries it handles for itself. This simulation is not perfect, and care is required to prove that the simulation still allows for successful reconstruction. Putting these together, we prove Theorem 1.1, that there cannot exist _any_ telegraphing scheme for the set of \(|\psi_{z}\rangle\) with a query-bounded receiver (regardless of whether the sender is query bounded). **Remark 1.5**.: _The above description requires two oracles, a classical random oracle (queryable in superposition) and the cloning oracle. We first note that superposition access to a random oracle is in particular unitary, so the classical random oracle is also a unitary. 
Second, we can view these two oracles as a single quantum oracle, which operates on two sets of registers, applying one oracle to one register and the other oracle to the other. For the single combined oracle to be equivalent to the two individual oracles, we only need that the individual oracles have efficiently constructible fixed points. This is true of both the oracles we use. Thus, we obtain a separation relative to a single oracle sampled from an appropriate distribution._ ### Open Problems and Future Directions **Variations on Telegraphing and Cloning.** In this work, we consider telegraphing to occur in a single directed round. We can see, however, as mentioned above, that there are settings in which allowing multiple rounds of classical interaction provides new capabilities. An open problem would thus be to show that our scheme resists telegraphing even by adversaries that are allowed multiple rounds of interaction, or alternatively to give another clonable scheme which does. Such a result would suggest that under computational constraints, interactive telegraphing is a completely independent property that neither implies nor is implied by cloning. Along similar lines, one can also define extended versions of cloning and telegraphing in which the adversaries are given multiple copies of the state to clone or telegraph. They would then be tasked with producing one additional copy in the case of cloning, or telegraphing one instance across the classical channel. These extended tasks appear to be inherently easier, as the multiple copies may allow for partial tomography of the state. At the very least, they are no more difficult. What kinds of states are resistant to these extended tasks under computational constraints, and what are the implication relationships between them? **Cloning and Telegraphing the Functionality of a State.** For some applications, the functionality of quantum states is more important than the states themselves. For example, if several different cryptographic keys all allow performing the same decryption, then it may be important to guarantee that no procedure that is given one such key is able to produce another valid key, even if it is not identical to the original. One way to capture this is by replacing the individual states in the collection \(S\) (of states to be cloned or telegraphed) with sets of states that all have equivalent functionality, or alternatively, by the subspaces spanned by such sets. Such a definition in particular would be general enough to include the schemes for quantum money with classical communication mentioned in Section 1.3. In particular, single states are trivial singleton sets for functionality, so any cloning or telegraphing scheme for states is also a scheme for functionality, which means that our separation result also holds in this context. Interestingly, however, considering the functionality of states rather than the states themselves has the potential to allow new or stronger separations that do not exist for tasks on states. **No-go Properties Exclusive to Efficient Computation.** An artifact of our proof of the separation between cloning and telegraphing is that we actually prove a stronger separation between cloning and a simplified task that we call _reconstruction_. 
This motivates giving this task its own no-go property: A set of quantum states \(S\) is computationally reconstructible if there is a polynomial-time reconstruction procedure that constructs a state in \(S\) from a classical description that uniquely identifies it from within the set. Note, however, that this task is trivial for unbounded procedures: Any quantum state on a finite Hilbert space can be uniquely identified with its list of complex amplitudes, up to some precision error, which means that an unbounded procedure can always reconstruct it. This means that unlike no-cloning and no-telegraphing, no-reconstruction exists exclusively in the context of efficient computation. What other quantum no-go properties have until now been overlooked because they do not appear until computational efficiency is considered? **Separation by a Classical Oracle.** The separations presented in this paper, between efficient cloning and efficient telegraphing, as well as between clonableQMA and QCMA, both use quantum oracles. This is very natural for the cloning vs. telegraphing problem, as the cloning oracle we use is precisely the black-box cloner, from which we show that no efficient telegraphing protocol can be constructed. On the other hand, for the separation between clonableQMA and QCMA, it is reasonable to wonder whether a fully quantum oracle is necessary. That is, is there a classical oracle, accessible in superposition, relative to which the separation between clonableQMA and QCMA still holds? Such an oracle would as an immediate consequence also separate QCMA from QMA, an open problem that remains unresolved for standard classical oracles despite much recent progress. ### Paper Outline We start with some preliminaries in Section 2. In Section 3, we formally define the no-go tasks of cloning and telegraphing, as well as the related task of reconstruction. Section 4 forms the main technical part of the paper, and includes the formal statement and proof of Theorem 1.1. The complexity theoretic applications of clonable-but-untelegraphable states appear in Section 5, where we define the complexity class clonableQMA, and give the proof of Theorem 1.2. We end with applications to cryptography in Section 6, with a formalization of the notion of parallelizable but non-exfiltratable encryption and the formal statement and proof of Theorem 1.3. ## 2 Preliminaries ### Quantum Computation and Computational Efficiency We give only a basic primer on quantum information and refer the reader to the textbook by Nielsen and Chuang [12] for a more complete background. A quantum state \(|\psi\rangle\) is a normalized vector in a Hilbert space \(\mathcal{H}\). If we choose a basis \(\{|\phi_{i}\rangle\}_{i}\) for \(\mathcal{H}\), then we can measure \(|\psi\rangle\) in that basis, at which point we get outcome \(i\) with probability \(|\langle\phi_{i}|\psi\rangle|^{2}\). A distribution \(\{p_{i}\}_{i}\) over a set of quantum states \(|\psi_{i}\rangle\), often called a mixed state, can be represented as a density matrix \(\rho\), which has the form \(\rho=\sum_{i}p_{i}|\psi_{i}\rangle\langle\psi_{i}|\), and which captures the randomness of the quantum state and anything that can be measured about it. If we measure \(\rho\) in the basis \(\{|\phi_{i}\rangle\}_{i}\), we get outcome \(i\) with probability \(\langle\phi_{i}|\rho|\phi_{i}\rangle=\sum_{j}p_{j}|\langle\phi_{i}|\psi_{j}\rangle|^{2}\). 
We can coarse-grain this measurement by employing a projection-valued measurement (PVM), which groups together the projectors \(|\phi_{i}\rangle\left\langle\phi_{i}\right|\) into measurement operators \(\Pi_{k}=\sum_{i\in S_{k}}|\phi_{i}\rangle\left\langle\phi_{i}\right|\), where \(\sum_{k}\Pi_{k}=\mathbb{I}\) and \(\mathbb{I}\) is the identity operator on \(\mathcal{H}\). If we measure \(\rho\) with the PVM described by \(\{\Pi_{k}\}_{k}\), we get outcome \(k\) with probability \(\mathsf{tr}(\Pi_{k}\rho)\). We say that \(\{\Pi_{k}\}_{k}\) is a binary projective measurement if it has only two potential outcomes. Quantum states are transformed to other quantum states via unitary transformations. A unitary \(\mathcal{U}\) transforms a pure quantum state \(|\psi\rangle\) by sending it to \(\mathcal{U}|\psi\rangle\), and it transforms a mixed state \(\rho\) by sending it to \(\mathcal{U}\rho\mathcal{U}^{\dagger}\). A quantum algorithm is a sequence of basic unitary transformations \(\mathcal{U}_{1},\cdots,\mathcal{U}_{t}\) chosen from some set \(U\). It acts on a pure quantum state \(|\psi\rangle\) by applying each one in turn as \(\mathcal{U}_{t}\mathcal{U}_{t-1}\cdots\mathcal{U}_{1}|\psi\rangle\) (and on a mixed state analogously). If an algorithm performs \(t\) such unitaries, then we say that the algorithm runs in time \(t\). Note that while non-unitary transformations such as measurement or randomness can occur in general, for our purposes, we can always assume without loss of generality that all such non-unitary behavior occurs in a single measurement at the end of the algorithm. A quantum oracle is a predefined unitary transformation that is applied atomically in a black-box fashion. That is, it is performed as if it were a single operation which cannot be broken down and does not need to have a computationally feasible implementation. Quantum oracles can in particular also implement a classical function \(f:\mathcal{X}\rightarrow\mathcal{Y}\) by applying the unitary that transforms each \(|x\rangle|y\rangle\) where \(x\in\mathcal{X}\) and \(y\in\mathcal{Y}\) to \(|x\rangle|y\oplus f(x)\rangle\). In this case, we refer to the oracle, and to the function \(f\) itself, as a classical oracle, despite it operating on data that may be quantum. Each time that an oracle is applied is called a query to that oracle. A quantum oracle algorithm is an algorithm that may at any point perform a query to any oracle from a predefined collection of oracles by applying the unitary for that oracle onto some subset of its registers. If this occurs \(q\) times, we say that it is a \(q\)-query algorithm, or that it runs in \(q\) queries. When considering the runtime of an algorithm, it is often useful to ignore lower order influences and focus on its asymptotic behavior as the problem grows. All asymptotics are in terms of the size of the problem given as input, or a security parameter, \(n\in\mathbb{N}\). In asymptotic notation, \(O(f(n))\) refers to the set of functions that are bounded from above by \(c\cdot f(n)\) for some constant \(c\) and sufficiently large \(n\). The notation \(\Omega(f(n))\) likewise refers to functions that are bounded from below by \(c\cdot f(n)\) for some positive constant \(c\) and sufficiently large \(n\). We say that a function \(g(n)\) is polynomial in \(n\), notated \(\mathsf{poly}(n)\), if it is \(O(n^{c})\) for some constant real number \(c\). We say that \(g(n)\) is negligible, notated \(\mathsf{negl}(n)\), if for every constant real number \(c\), it is not \(\Omega(n^{-c})\). 
On the other hand, we say that \(g(n)\) is non-negligible if it is not negligible, and we say that it is overwhelming if \(0\leq g(n)\leq 1\) and \(1-g(n)\) is negligible. We say that an event \(E\) happens with high probability when \(\Pr[E]\geq\frac{1}{2}+\Omega(1)\). We say that an algorithm is efficient or polynomial-time if it runs in time at most \(t(n)\) and \(t(n)\) is polynomial in \(n\). We say that an oracle algorithm is query-efficient or polynomial-query (or sometimes just polynomial-time) if it makes at most \(q(n)\) oracle queries and \(q(n)\) is polynomial in \(n\). ### Decision Problems A promise problem is a pair of disjoint sets of strings, \(\mathcal{L}=(\mathcal{L}_{\mathsf{YES}},\mathcal{L}_{\mathsf{NO}})\), where \(\mathcal{L}_{\mathsf{YES}},\mathcal{L}_{\mathsf{NO}}\subseteq\{0,1\}^{*}\), \(\mathcal{L}_{\mathsf{YES}}\cap\mathcal{L}_{\mathsf{NO}}=\emptyset\). Strings in \(\mathcal{L}_{\mathsf{YES}}\) and \(\mathcal{L}_{\mathsf{NO}}\) are called \(\mathsf{YES}\) instances and \(\mathsf{NO}\) instances, respectively. A language (sometimes called a decision problem) has the extra guarantee that \(\mathcal{L}_{\mathsf{YES}}\cup\mathcal{L}_{\mathsf{NO}}=\{0,1\}^{*}\), in which case it may be specified by only giving \(\mathcal{L}_{\mathsf{YES}}\). When otherwise clear from context, or when the distinction is not meaningful, we use the term decision problem to refer to either languages or promise problems, and use the more precise term where the distinction matters. An oracle decision problem, or a black-box decision problem, is defined similarly, but where \(\mathcal{L}_{\mathsf{YES}}\) and \(\mathcal{L}_{\mathsf{NO}}\) are instead sets of oracles, and an instance is an oracle in one of the two sets. For oracle decision problems, the size of the instance is taken to be the size of the input to the oracle. An average-case decision problem, \((\mathcal{L},\mathcal{D})\), is a decision problem \(\mathcal{L}\), paired with a probability distribution \(\mathcal{D}\) over its instances. We normally require that \(\mathcal{D}\) can be sampled by a polynomial-time quantum algorithm. We say that an algorithm decides or computes a decision problem if on input \(x\in\mathcal{L}_{\mathsf{YES}}\cup\mathcal{L}_{\mathsf{NO}}\), it accepts (outputs \(\mathsf{YES}\) or \(1\)) whenever \(x\in\mathcal{L}_{\mathsf{YES}}\), and rejects (outputs \(\mathsf{NO}\) or \(0\)) whenever \(x\in\mathcal{L}_{\mathsf{NO}}\). An algorithm for an average-case problem may fail on any subset of its instances over which the distribution, \(\mathcal{D}\), assigns a combined low probability. A complexity class is a set of (generalized) decision problems. There may be corresponding versions of the same complexity class for promise problems, languages, oracle problems, or average-case problems, and which one is meant is usually made clear from context, but whenever this is not the case, we specify explicitly which one we mean. We specifically say that an average-case decision problem \((\mathcal{L},\mathcal{D})\) is hard for a certain complexity class if for every algorithm, there exists a negligible function \(\varepsilon\) such that the algorithm fails to satisfy the conditions of the class with probability at least \(\frac{1}{2}-\varepsilon\) over the distribution \(\mathcal{D}\) on the instances. ### Complexity Classes In classical complexity theory, the class \(\mathsf{NP}\) is the class of decision problems that have efficiently verifiable proofs (called _witnesses_). 
That is, such problems have a polynomial time classical verifier, for which given any \(\mathsf{YES}\) instance of the decision problem, there is a polynomial length classical witness string that allows the verifier to verify the instance as a \(\mathsf{YES}\) instance, while at the same time no purported witness would lead to verifying a \(\mathsf{NO}\) instance. The related complexity class \(\mathsf{MA}\) allows the verifier to use randomness and bounded error. We concern ourselves mainly with the quantum generalizations of these classes, known as \(\mathsf{QCMA}\) and \(\mathsf{QMA}\), which allow the verifier to be a polynomial-time quantum Turing machine, and the witnesses to be classical strings or quantum states, respectively [1, 20]. **Definition 2.1** (\(\mathsf{QCMA}\)).: _A decision problem \(\mathcal{L}=(\mathcal{L}_{\mathsf{YES}},\mathcal{L}_{\mathsf{NO}})\) is in \(\mathsf{QCMA}(c,s)\) if there exists a polynomial time quantum verifier \(V\), and a polynomial \(p\), such that_ * _Completeness:_ _if_ \(x\in\mathcal{L}_{\mathsf{YES}}\)_, then there exists a classical witness_ \(w\in\{0,1\}^{p(|x|)}\) _such that_ \(V\) _accepts on input_ \(\ket{x}\ket{w}\) _with probability at least_ \(c\)__ * _Soundness:_ _if_ \(x\in\mathcal{L}_{\mathsf{NO}}\)_, then for all classical strings_ \(w^{*}\in\{0,1\}^{p(|x|)}\)_,_ \(V\) _accepts on input_ \(\ket{x}\ket{w^{*}}\) _with probability at most_ \(s\)_._ **Definition 2.2** (\(\mathsf{QMA}\)).: _A decision problem \(\mathcal{L}=(\mathcal{L}_{\mathsf{YES}},\mathcal{L}_{\mathsf{NO}})\) is in \(\mathsf{QMA}(c,s)\) if there exists a polynomial time quantum verifier \(V\), and a polynomial \(p\), such that_ * _Completeness:_ _if_ \(x\in\mathcal{L}_{\mathsf{YES}}\)_, then there exists a quantum witness_ \(\ket{\psi}\) _on_ \(p(|x|)\) _qubits such that_ \(V\) _accepts on input_ \(\ket{x}\ket{\psi}\) _with probability at least_ \(c\)__ * _Soundness:_ _if_ \(x\in\mathcal{L}_{\mathsf{NO}}\)_, then for all quantum states_ \(\ket{\psi^{*}}\) _on_ \(p(|x|)\) _qubits,_ \(V\) _accepts on input_ \(\ket{x}\ket{\psi^{*}}\) _with probability at most_ \(s\)_._ We take \(\mathsf{QCMA}=\mathsf{QCMA}(\frac{9}{10},\frac{1}{10})\) and \(\mathsf{QMA}=\mathsf{QMA}(\frac{9}{10},\frac{1}{10})\), although by making use of parallel repetition for error reduction, the completeness and soundness parameters can be set arbitrarily, with a wide leeway in the size of the completeness-soundness gap, without changing the class [1].1 Footnote 1: The standard practice is usually to set the completeness and soundness parameters to \(2/3\) and \(1/3\) respectively. Since, as mentioned, the exact values are arbitrary, we find it convenient instead to set them to \(9/10\) and \(1/10\), in order to simplify the presentation of some proofs which would otherwise require more frequent invocations of parallel repetition to increase the gap. ### Query Magnitudes and Modifying Oracles When working with quantum oracle algorithms, it is often useful to be able to bound the effect that replacing one oracle with another can have on the result of the computation. 
To this end, we recall the following definition and two theorems due to Bennett, Bernstein, Brassard, and Vazirani [1]: **Theorem 2.3** (Theorem 3.1 from [1]).: _If two unit-length superpositions are within Euclidean distance \(\varepsilon\), then observing the two superpositions gives samples from distributions which are within total variation distance at most \(4\varepsilon\)._ **Definition 2.4** (Definition 3.2 from [1]).: _Let \(\ket{\phi_{i}}\) be the superposition of \(M^{A}\) on input \(x\) at time \(i\). We denote by \(q_{y}(\ket{\phi_{i}})\) the sum of squared magnitudes in \(\ket{\phi_{i}}\) of configurations of \(M\) which are querying the oracle on string \(y\). We refer to \(q_{y}(\ket{\phi_{i}})\) as the query magnitude of \(y\) in \(\ket{\phi_{i}}\)._ **Theorem 2.5** (Theorem 3.3 from [1]).: _Let \(\ket{\phi_{i}}\) be the superposition of \(M^{A}\) on input \(x\) at time \(i\). Let \(\varepsilon>0\). Let \(F\subseteq[0,T-1]\times\Sigma^{*}\) be a set of time-string pairs such that \(\sum_{(i,y)\in F}q_{y}(\ket{\phi_{i}})\leq\frac{\varepsilon^{2}}{T}\). Now suppose the answer to each query \((i,y)\in F\) is modified to some arbitrary fixed \(a_{i,y}\) (these answers need not be consistent with an oracle). Let \(\ket{\phi^{\prime}_{i}}\) be the time \(i\) superposition of \(M\) on input \(x\) with oracle \(A\) modified as stated above. Then \(\left|\ket{\phi_{T}}-\ket{\phi^{\prime}_{T}}\right|\leq\varepsilon\)._ ## 3 Fundamental Tasks and Their No-go Properties ### Schemes of Quantum States We introduce the following syntax for a scheme of quantum states. A scheme is the basic structure on which the quantum no-go properties may or may not apply. In other words, some schemes may be clonable, for instance, while other schemes may not be. Schemes consist primarily of a collection of quantum states, but they can also specify the collection of oracles which may be used, as well as a distribution for sampling from those states. **Definition 3.1** (Scheme).: _In the context of quantum no-go properties, a **scheme**, \((S,\mathcal{D},\mathcal{O})\), is an indexed collection of quantum states \(S=\{|\psi_{i}\rangle\}_{i\in\mathcal{Z}}\) over an index set \(\mathcal{Z}\) (which we call the set of labels), a distribution \(\mathcal{D}\) over the labels, and a collection \(\mathcal{O}\) of any quantum oracles that may be used._ Whenever either the distribution or the oracles are irrelevant or otherwise clear from context, we will drop them from the notation and write \((S,\mathcal{O})\), \((S,\mathcal{D})\), or simply \(S\). Note that the distribution \(\mathcal{D}\), which allows sampling from the collection of states, is only important for defining average-case security of the scheme, and \(\mathcal{O}\) is only necessary when considering oracle algorithms. Under a certain scheme \((S,\mathcal{D},\mathcal{O})\), _verification_ of an unknown quantum state \(|\phi\rangle\) for a label \(z\) is the measurement of whether \(|\phi\rangle\) passes for the intended state \(|\psi_{z}\rangle\), which succeeds with probability \(p=|\langle\psi_{z}|\phi\rangle|^{2}\). When we say that an algorithm succeeds in passing verification with some probability \(p\), we mean that verification succeeds with that probability over the randomness of the algorithm's output as well as the randomness of the sampling from \(\mathcal{D}\) and that of the verification measurement. 
That is, if the algorithm is randomized and outputs a mixed state \(\rho_{z}\) when label \(z\) is drawn, then we say that it succeeds at verification for \(z\) with probability \(p=\mathbb{E}_{z\leftarrow\mathcal{D}}\langle\psi_{z}|\rho_{z}|\psi_{z}\rangle\). This is the expected fidelity of the states produced by the algorithm with the intended state. Whenever an algorithm is tasked with passing verification for a label \(z\), we call \(z\) the target label and we call \(|\psi_{z}\rangle\) the target state. ### Cloning and Telegraphing We now formally define the tasks of cloning and telegraphing. **Definition 3.2** (Cloning).: _A scheme \(S\) is said to be \(\eta\)**-worst case clonable** if there exists a quantum algorithm \(\mathsf{Clone}(|\psi\rangle)\) such that for every label \(z\in\mathcal{Z}\), when given \(|\psi_{z}\rangle\), its corresponding quantum state in \(S\), returns a quantum state \(|\phi\rangle\) on two registers that, with probability at least \(\eta\), passes verification for \(z\) on both registers simultaneously. That is, \(\big{|}\big{(}\left<\psi_{z}|\otimes\langle\psi_{z}|\ \right)|\phi \rangle\big{|}^{2}\geq\eta\)._ \((S,\mathcal{D})\) _is said to be \(\eta\)**-average case clonable** if there exists a quantum algorithm \(\mathsf{Clone}(|\psi\rangle)\) that succeeds at the cloning task with probability \(\eta\) when \(z\) is sampled from the distribution \(\mathcal{D}\)._ **Definition 3.3** (Telegraphing).: _A scheme \(S\) is said to be \(\eta\)**-worst case telegraphable** if there exists a pair of quantum algorithms \(\mathsf{Send}(|\psi\rangle)\to c\) and \(\mathsf{Receive}(c)\rightarrow|\phi\rangle\) where \(c\) is a classical string, such that for every label \(z\in\mathcal{Z}\), when given \(|\psi_{z}\rangle\), its corresponding quantum state in \(S\), \(|\phi\rangle:=\mathsf{Receive}(\mathsf{Send}(|\psi_{z}\rangle))\) passes verification for \(z\) with probability at least \(\eta\)._ \((S,\mathcal{D})\) _is said to be \(\eta\)**-average case telegraphable** if there exists a pair of quantum algorithms \(\mathsf{Send}(|\psi\rangle)\to c\) and \(\mathsf{Receive}(c)\rightarrow|\phi\rangle\) that succeed at the telegraphing task with probability \(\eta\) when \(z\) is sampled from the distribution \(\mathcal{D}\)._ Note that quantum teleportation is the process by which a quantum state can be transmitted through a classical channel by the use of pre-shared quantum entanglement [1]. Telegraphing can thus be viewed as describing a quantum teleportation protocol without the use of entanglement: \(\mathsf{Send}\) converts the quantum state \(\left|\psi_{z}\right\rangle\) to a classical description \(c\), which \(\mathsf{Receive}\) then converts back into \(\left|\psi_{z}\right\rangle\), or an approximation thereof. This is why the no-go theorem of the telegraphing task for general quantum states is often referred to as the _no-teleportation theorem_, a name first coined by the originator of the theorem [22]. This terminology can be confusing, however, since teleportation _is_ in fact always possible when the sending and receiving parties are allowed to start out with an additional entangled quantum state. To sidestep this confusion, throughout this paper we instead use the term _telegraphing_ for the unentangled no-go task. Here, and throughout this paper, any pair of algorithms attempting to achieve the telegraphing task are attempting to do so without the use of pre-shared entanglement. 
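To make the success metric in these definitions concrete, here is a minimal scoring sketch of our own (the helper names `verify_prob`, `average_case_success`, and the toy scheme are hypothetical, not objects from the paper); it computes the expected fidelity \(\mathbb{E}_{z\leftarrow\mathcal{D}}\langle\psi_{z}|\rho_{z}|\psi_{z}\rangle\) for an attempted task such as \(\mathsf{Receive}(\mathsf{Send}(|\psi_{z}\rangle))\).

```python
import numpy as np

# Minimal scoring harness (an illustration we add, not the paper's formalism):
# verification for z succeeds with probability <psi_z| rho |psi_z>, and
# average-case success averages this fidelity over labels z drawn from D.

def verify_prob(psi_z, rho):
    """Probability that the (possibly mixed) output rho passes verification for z."""
    return float(np.real(psi_z.conj() @ rho @ psi_z))

def average_case_success(states, sample_label, attempt, trials=1000, seed=0):
    """Estimate E_{z~D} <psi_z| rho_z |psi_z> for an attempt mapping |psi_z> to rho_z
    (e.g. attempt = lambda psi: Receive(Send(psi)) for a hypothetical telegrapher)."""
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(trials):
        z = sample_label(rng)
        rho = attempt(states[z])
        total += verify_prob(states[z], rho)
    return total / trials

# Example: a telegrapher that ignores its input and always resends a fixed state
# scores exactly the overlap of that state with the sampled targets.
d = 4
states = {z: np.eye(d)[z] for z in range(d)}            # orthogonal toy targets
fixed = np.ones(d) / np.sqrt(d)
attempt = lambda psi: np.outer(fixed, fixed)             # stand-in for Receive(Send(psi))
print(average_case_success(states, lambda rng: rng.integers(d), attempt))   # ~= 1/d
```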
### Information Theoretic No-go Theorems We now state a version of the (information theoretic) no-go theorems for these two tasks. The No-Cloning Theorem was first proved by three independent papers [23, 24, 25], but the version we present here is due to [26]. The No-Telegraphing Theorem (originally called the No-Teleportation Theorem), a corollary of the No-Cloning Theorem, is due to [22]. We present the two theorems here together to emphasize the direct connection between them. **Theorem 3.4** (No-Cloning Theorem and No-Telegraphing Theorem).: _Let \(\mathcal{H}\) be a Hilbert space, and let \(S=\{\left|\psi_{i}\right\rangle\}_{i\in[k]}\) be a collection of pure quantum states on this Hilbert space. The following are equivalent:_ 1. \(S\) _can be perfectly cloned_ 2. \(S\) _can be perfectly telegraphed_ 3. \(S\) _is a collection of orthogonal states, with duplication_ \((\forall i,j\;\left|\left\langle\psi_{i}|\psi_{j}\right\rangle\right|^{2}\) _is either_ \(0\) _or_ \(1)\)__ The proof of the equivalence of cases 1 and 3 is due to [26], and the addition of case 2 is due to [22]. For completeness, we present the full proof in Appendix B. Theorem 3.4 demonstrates that a general collection of quantum states cannot be cloned or telegraphed, but all orthogonal collections can. To be specific, this is a special subset, \(\Lambda_{\mathrm{Orthogonal}}\), of the set \(\Lambda_{\mathrm{All}}\) of all collections of states, and in particular, it contains the set \(\Lambda_{\mathrm{Classical}}\) of all collections of classical strings, or equivalently, quantum states in the computational basis (\(\Lambda_{\mathrm{Classical}}\subsetneq\Lambda_{\mathrm{Orthogonal}}\subsetneq\Lambda_{\mathrm{All}}\)). We consider in this paper two subsets of \(\Lambda_{\mathrm{Orthogonal}}\), namely \(\Lambda_{\mathrm{Clonable}}^{\mathrm{efficient}}\) and \(\Lambda_{\mathrm{Telegraphable}}^{\mathrm{efficient}}\), of collections of states which can be respectively cloned or telegraphed _efficiently_, and our main theorem shows that when efficiency is in terms of black-box queries, \(\Lambda_{\mathrm{Telegraphable}}^{\mathrm{efficient}}\subsetneq\Lambda_{\mathrm{Clonable}}^{\mathrm{efficient}}\). ### Computational No-go Properties We now define the efficient versions of the no-go tasks of cloning and telegraphing, and their associated computational no-go properties. **Computational Restrictions.** We call the algorithms \(\mathsf{Clone}\), \(\mathsf{Send}\), and \(\mathsf{Receive}\) the adversaries for their respective tasks. Specifying the class of algorithms from which the adversaries may originate allows us to further parameterize the definitions of these no-go tasks by computational complexity. For instance, if the adversaries are required to be computationally efficient (polynomial-time) quantum algorithms, we say that the scheme is _efficiently_ or _computationally_ clonable (or unclonable, telegraphable, etc.). If the scheme includes oracles and the adversaries are quantum oracle algorithms that make a polynomial number of oracle queries, that is, query-efficient algorithms, then we say that the scheme is clonable (unclonable, telegraphable, etc.) by efficient oracle algorithms or query-efficient algorithms. The one thing to note is that for telegraphing by efficient oracle algorithms, we require as an additional restriction that the classical message \(c\) be of polynomial length. 
We often use the words "computational" and "efficient" as a catch-all for both computationally efficient and query-efficient algorithms, and we use more specific terminology whenever it is necessary to differentiate between them. If the adversaries are not bounded in any way, we say that the scheme is _statistically_ or _information-theoretically_ clonable (unclonable, telegraphable, etc.). **Success Probability.** We say that a scheme is _\(\eta\)-unclonable_ or _\(\eta\)-untelegraphable_ (in either the worst case or in the average case) if no quantum algorithm succeeds at the corresponding task with probability greater than \(\eta\). We will often just drop the parameter \(\eta\) and simply say that a scheme is unclonable (or untelegraphable) if it is \(\eta\)-unclonable (respectively \(\eta\)-untelegraphable) for every non-negligible probability \(\eta\). We say that a scheme is perfectly clonable (or telegraphable) if it is clonable (respectively, telegraphable) with probability \(1\). **Telegraphing Implies Cloning.** We now give the trivial direction of the relationship between computational cloning and computational telegraphing: that telegraphing implies cloning. This implication and its proof are certainly not a new result, even in the context of computational efficiency. However, both directions of the relationship have too often been taken for granted despite one direction not always holding. We therefore give a formal proof for the direction that _does_ still hold in the context of efficient algorithms, both for completeness, as well as to contrast its simplicity with the relative complexity of the supposed converse. **Theorem 3.5** (Telegraphing Implies Cloning).: _Any scheme that is \(\eta\)-computationally telegraphable is also \(\left(\frac{4}{27}\eta^{3}\right)\)-computationally clonable. Note that this applies to both computationally efficient and query-efficient algorithms as well as to both worst case and average case versions of these properties._ Proof.: We prove this for computationally efficient algorithms, and in the worst case, since the other cases are nearly identical to this one. Let \(S\) be a scheme that is \(\eta\)-telegraphable in the worst case by computationally efficient adversaries. That is, there exist efficient quantum algorithms \(\mathsf{Send}(|\psi\rangle)\to c\) and \(\mathsf{Receive}(c)\to|\phi\rangle\) such that for all \(|\psi_{z}\rangle\in S\), \(|\phi\rangle:=\mathsf{Receive}(\mathsf{Send}(|\psi_{z}\rangle))\) passes verification for \(z\) with probability at least \(\eta\). With probability at least \(\frac{1}{3}\eta\), the classical message, \(c\), produced by \(\mathsf{Send}\) in this telegraphing protocol is a "good" one, in the sense that for such \(c\), \(\mathsf{Receive}(c)\) succeeds with probability at least \(\frac{2}{3}\eta\) (otherwise, the success probability of the protocol would have to be less than \(\frac{1}{3}\eta+\frac{2}{3}\eta=\eta\), a contradiction). Define a cloning adversary \(\mathsf{Clone}\) as follows: Given a quantum state \(|\psi_{z}\rangle\) as input, first simulate \(\mathsf{Send}(|\psi_{z}\rangle)\) to produce a classical message \(c\). Then simulate \(\mathsf{Receive}(c)\) twice, independently, to produce and output two new unentangled quantum states \(|\phi_{1}\rangle\) and \(|\phi_{2}\rangle\). Suppose that the \(c\) produced in the first step is a "good" one as defined just above (which, as we have shown, happens with probability at least \(\frac{1}{3}\eta\)). 
Since the two runs of \(\mathsf{Receive}\) are independent by construction, the probability that they both succeed on this "good" \(c\) is at least \((\frac{2}{3}\eta)^{2}\), and therefore the overall probability of success is at least \((\frac{1}{3}\eta)(\frac{2}{3}\eta)^{2}=\frac{4}{27}\eta^{3}\). While we incur a loss (from \(\eta\) to \(\frac{4}{27}\eta^{3}\)) in going from telegraphing to cloning, if what we care about is whether \(\eta\) is negligible or non-negligible, this loss ends up being insignificant. That is, if we have that a scheme is computationally telegraphable with non-negligible probability, then it is also computationally clonable with non-negligible probability. This is what we mean when we say that telegraphing implies cloning. Our main result, which we show in Section 4, is that the converse to this theorem does not hold, at least with respect to efficient oracle algorithms. ### Reconstruction Our central aim is to separate efficient cloning from efficient telegraphing. However, in order to do so, we find it convenient to introduce an additional third task, which we call _reconstruction_. **Definition 3.6** (Reconstruction).: _A scheme \(S\) is said to be \(\eta\)**-worst case reconstructible** if there exists a quantum algorithm \(\mathsf{Reconstruct}(a)\to|\phi\rangle\) such that for every label \(z\in\mathcal{Z}\), there exists an advice string \(a_{z}\) such that \(|\phi\rangle:=\mathsf{Reconstruct}(a_{z})\) passes verification for \(z\) with probability at least \(\eta\)._ \((S,\mathcal{D})\) _is said to be \(\eta\)**-average case reconstructible** if there exists a quantum algorithm \(\mathsf{Reconstruct}(a)\to|\phi\rangle\) that succeeds at the reconstruction task with probability \(\eta\) when \(z\) is sampled from the distribution \(\mathcal{D}\)._ The different parameterized versions of reconstruction are defined analogously to those of cloning and telegraphing. As with the classical message in the case of telegraphing, for reconstruction by efficient oracle algorithms, we require as an additional restriction that the advice string \(a_{z}\) be of polynomial length. Reconstruction can be viewed in one way as a subtask of telegraphing, where we focus our attention only on the receiving end of the telegraphing, or in another way as a telegraphing protocol in which the sender is all-powerful and can implement a (potentially even nonexistent) function from \(|\psi_{z}\rangle\) to \(a_{z}\). (This function is in fact performing the task of what we call _deconstruction_, which we do not define here, but which can be roughly described as assigning a uniquely identifying label to every state in \(S\).) Following this line of thought, we can observe another trivial implication: between telegraphing and reconstruction. **Theorem 3.7** (Telegraphing Implies Reconstruction).: _Any scheme that is \(\eta\)-computationally telegraphable is also \(\eta\)-computationally reconstructible. Note that, as before, this applies to both computationally efficient and query-efficient algorithms as well as to both worst case and average case versions of these properties._ Proof.: The proof here is even simpler than that of Theorem 3.5. As we did in that proof, we prove this theorem only for computationally efficient algorithms, and in the worst case, since the other cases are much the same. Let \(S\) be a scheme that is \(\eta\)-telegraphable in the worst case by computationally efficient adversaries. 
That is, there exist efficient quantum algorithms \(\mathsf{Send}(|\psi\rangle)\to c\) and \(\mathsf{Receive}(c)\to|\phi\rangle\) such that for all \(|\psi_{z}\rangle\in S\), \(|\phi\rangle:=\mathsf{Receive}(\mathsf{Send}(|\psi_{z}\rangle))\) passes verification for \(z\) with probability at least \(\eta\). For every \(|\psi_{z}\rangle\in S\), \(\mathsf{Send}(|\psi_{z}\rangle)\) produces an output \(c_{z}\) that comes from some distribution over classical strings. There must be at least one string \(c_{z}^{*}\) in its support for which \(\mathsf{Receive}(c_{z}^{*})\) succeeds with probability at least \(\eta\) (otherwise, \(\mathsf{Receive}(c_{z})\) has success probability less than \(\eta\) for all \(c_{z}\), and so the telegraphing could not have succeeded with probability \(\eta\)). Thus, for each \(z\in\mathcal{Z}\), let \(a_{z}:=c_{z}^{*}\) and let \(\mathsf{Receive}\) be the reconstruction adversary, which we have just shown will succeed on input \(a_{z}\) with probability at least \(\eta\) for all \(z\in\mathcal{Z}\). The direct consequence of Theorem 3.7 is that in order to show that a scheme is not telegraphable, it suffices to show that it is not reconstructible. In other words, in order to prove our separation between computational cloning and computational telegraphing, it suffices to show a scheme that can be computationally cloned but _not computationally reconstructed_. Reframing our aim in such a way simplifies the analysis because now we only have to deal with a single adversary in both situations (cloning and reconstruction), as opposed to two interacting adversaries for telegraphing. Furthermore, by doing so, we in fact end up showing a stronger separation. ## 4 Cloning without Telegraphability We now come to the main theorem of the paper. **Theorem 4.1**.: _There exists a scheme, relative to a quantum oracle, that on the one hand, can be perfectly cloned by an efficient quantum oracle algorithm in the worst case, but that on the other hand cannot be telegraphed by a pair of efficient quantum oracle algorithms with any non-negligible probability, even in the average case._ As mentioned before, we in fact prove the following stronger theorem, which, as a consequence of Theorem 3.7, implies Theorem 4.1: **Theorem 4.2**.: _There exists a scheme, relative to a quantum oracle, that on the one hand, can be perfectly cloned by an efficient quantum oracle algorithm in the worst case, but that on the other hand cannot be **reconstructed** by an efficient quantum oracle algorithm with any non-negligible probability, even in the average case.2_ Footnote 2: Note importantly that the fact that these quantum states cannot be efficiently reconstructed does not preclude them from appearing naturally and being used in efficient quantum computation, since they may nevertheless be efficiently _samplable_. That is, there may be an efficient way to sample from the set of states without being able to reconstruct any particular one of them on command. In fact, this is exactly the case for our scheme. The rest of Section 4 contains the proof of Theorem 4.2. In Section 4.1, we define the scheme, Scheme 4.7, and show that it is perfectly clonable. In Section 4.2, we prove that the scheme cannot be efficiently reconstructed. The form of our scheme is based on a set of states introduced by [1] which take a uniform superposition over the preimages of a random oracle. 
These states cannot be cloned by query-efficient algorithms, so by Theorem 3.5 this directly implies that they are untelegraphable.3 We want a scheme that is untelegraphable despite being clonable, so we add a cloning oracle, a quantum oracle that clones only this set of states. The main technical challenge is to show that access to this cloning oracle does not allow the adversaries to telegraph. Footnote 3: Note, however, that this does not imply that they are unreconstructable. Nevertheless, we show that this is the case in Proposition 4.8. We start by showing that with just the random oracle, the states are not reconstructible, via a reduction from the problem of finding multi-collisions in the random oracle. We then show that allowing cloning for the target state cannot be detected by the adversary. We finally simulate the rest of the cloning oracle by replacing the random oracle with an impostor for which we know how to clone. ### The Scheme Before we give the scheme, we first give a few definitions that are useful both for defining the scheme and for the proof of its unreconstructibility. We first define a cloning oracle for orthonormal sets. This is an oracle that successfully clones a specific subset of basis states for a given basis. **Definition 4.3** (Cloning oracle for a set).: _Let \(\mathcal{H}\) be a Hilbert space and let \(S=\{|\psi_{i}\rangle\}_{i\in[k]}\) be an orthonormal subset of \(\mathcal{H}\). Augment \(\mathcal{H}\) with a special symbol \(\bot\) outside the support of \(\mathcal{H}\). That is, \(|\bot\rangle\) is orthogonal to all of \(\mathcal{H}\)._ _A **cloning oracle**\(\mathcal{C}_{S}\) on set \(S=\{|\psi_{i}\rangle\}_{i\in[k]}\) is a quantum oracle that, for all \(i\leq k\) sends \(|\psi_{i}\rangle|\bot\rangle\) to \(|\psi_{i}\rangle|\psi_{i}\rangle\) and \(|\psi_{i}\rangle|\psi_{i}\rangle\) to \(|\psi_{i}\rangle|\bot\rangle\). For all other orthogonal states, it applies the identity. That is, when the second register is \(|\bot\rangle\), it clones any state in \(S\) and leaves all other orthogonal states unmodified._ **Definition 4.4** (Preimage superposition state).: _Let \(f:\{0,1\}^{m}\rightarrow\{0,1\}^{n}\). A **preimage superposition state** for image \(z\in\{0,1\}^{n}\) in function \(f\) is the quantum state that is the uniform positive superposition of preimages of \(z\) in \(f\):_ \[|\psi_{z}\rangle=\frac{1}{\sqrt{|f^{-1}(z)|}}\sum_{x|f(x)=z}|x\rangle\] _where \(f^{-1}(z):=\{x|f(x)=z\}\) is the set of preimages of \(z\) in \(f\)._ **Definition 4.5** (Preimage superposition set).: _Let \(f:\{0,1\}^{m}\rightarrow\{0,1\}^{n}\). A **preimage superposition set for \(f\)**, \(S_{f}\), is the set of preimage superposition states for all images in the range of \(f\)._ \[S_{f}:=\left\{\frac{1}{\sqrt{|f^{-1}(z)|}}\sum_{x|f(x)=z}|x\rangle\;\middle|\; z\in\{0,1\}^{n}\right\}\] **Definition 4.6** (Cloning oracle relative to a function).: _Let \(f:\{0,1\}^{m}\rightarrow\{0,1\}^{n}\). A **cloning oracle relative to \(f\)**, \(\mathcal{C}_{f}\), is a cloning oracle for the preimage superposition set, \(S_{f}\), of \(f\)._ We now give the formal definition of the scheme: **Scheme 4.7**.: _Let \(H:\{0,1\}^{m}\rightarrow\{0,1\}^{n}\) be a random oracle, where \(m\geq 2n\) (but bounded by a polynomial in \(n\)). Let \(\mathcal{C}_{H}\) be the cloning oracle relative to \(H\). 
The scheme consists of the following:_ * _The collection of oracles is_ \(\mathcal{O}:=\{H,\mathcal{C}_{H}\}\)_._ * _The set of states is_ \(S:=S_{H}\)_, the preimage superposition set for_ \(H\)_._ * _The distribution,_ \(\mathcal{D}\)_, samples the image of a random domain element of_ \(H\)_. That is, it returns_ \(z\gets H(x)\) _for a uniformly random_ \(x\in\{0,1\}^{m}\)_._ It is clear that the scheme presented here is perfectly clonable in the worst case by an efficient quantum oracle algorithm. Specifically, the cloning oracle, \(\mathcal{C}_{H}\), provides that capability, and in a single oracle query. Therefore, it remains to show that no efficient quantum oracle algorithm can reconstruct it. This is the main technical challenge of our proof and takes up the remaining part of Section 4. ### Proof of Unreconstructibility We wish to prove that Scheme 4.7 cannot be reconstructed by efficient quantum oracle algorithms in the average case. We prove this in a sequence of three stages, beginning with a simplified version of the scheme without a cloning oracle, then moving to one with an oracle that can only clone a single state, and finally to the full scheme with the full cloning oracle. #### 4.2.1 With No Cloning Oracle In the first stage, we consider an adversary, \(R\), which is a quantum oracle algorithm with advice. \(R\) is given a polynomial length advice string \(a_{z}\), and is allowed a polynomial number of queries to the random oracle. It is tasked with producing a state that passes verification for \(z\), namely the positive uniform superposition over all the preimages of \(z\) in the random oracle. Note that this first version does not yet have access to a cloning oracle of any sort. **Proposition 4.8**.: _Let \(R\) be a quantum oracle algorithm that is given a classical advice string \(a_{z}\in\{0,1\}^{\ell}\) for some polynomial \(\ell\) in \(n\), and makes \(q\) queries to the random oracle, where \(q\) is a polynomial in \(n\). For \(z\in\{0,1\}^{n}\) drawn uniformly at random, \(R\) cannot output a quantum state that passes verification for \(z\) with probability that is non-negligible in \(n\).5_ Footnote 5: Note that the advice string, \(a_{z}\), may in general contain _any_ information, including, for instance, any details about the set of preimages of \(z\) in \(H\), or any other useful information about the task. We show here that no polynomial amount of classical information _of any kind_ will allow \(R\) to faithfully reconstruct the state. Proof.: The main idea is that if \(R\) were able to produce the target state \(\left|\psi_{z}\right\rangle\) with non-negligible probability, then it can also do so without the advice by guessing the advice string, albeit with significantly lower probability. Measuring \(\left|\psi_{z}\right\rangle\) then gives a random preimage of the random oracle, and we can do this multiple times to produce several preimages of the same image \(z\), producing a multi-collision for the random oracle with far fewer queries than the known lower bounds allow. We now give the proof. Suppose, for the sake of contradiction, that \(R\), when given advice string \(a_{z}\), makes \(q\) queries to the random oracle and then produces the mixed state \(\rho_{z}\) which passes verification for \(z\) with non-negligible probability \(\eta\) (that is, \(\left\langle\psi_{z}\right|\rho_{z}\left|\psi_{z}\right\rangle\geq\eta\)). We use \(R\) to produce a large number of disjoint collisions of the oracle. 
Let \(H^{-1}(z)\) be the set of preimages of \(z\) in \(H\). We have that with high probability, \(|H^{-1}(z)|\geq\Omega(2^{m-n})\). Let \(\Gamma\subset H^{-1}(z)\) be an arbitrary polynomial sized subset of \(H^{-1}(z)\), and let \(\Pi\) be the binary projective measurement that projects onto the preimages of \(z\) that are not in \(\Gamma\), that is, onto the computational basis states \(H^{-1}(z)\setminus\Gamma\). We have that \(\left\langle\psi_{z}\right|\Pi\left|\psi_{z}\right\rangle\geq 1-\epsilon\) for \(\epsilon=\frac{|\Gamma|}{|H^{-1}(z)|}\in\mathsf{negl}(n)\). Given that \(\left\langle\psi_{z}\right|\rho_{z}\left|\psi_{z}\right\rangle\geq\eta\), we apply Lemma A.1 to get that \(\mathsf{tr}(\Pi\rho_{z})\geq\eta(1-\epsilon)-2\sqrt{\epsilon(1-\eta)}\geq\frac{1}{2}\eta\) for sufficiently large \(n\). In other words, for any polynomial sized subset of preimages, and for sufficiently large \(n\), we have that measuring \(\rho_{z}\) will with non-negligible probability give a preimage of \(z\) outside that subset. Let \(k\) be a sufficiently large polynomial in \(n\), for instance let \(k=2n(\ell+1)\) (note that \(\ell\) is itself a positive integer bounded by a polynomial in \(n\)). We run \(R\) repeatedly (on the same target label \(z\) and advice \(a_{z}\)) a total of \(8k/\eta\) times and measure the outcome in the computational basis, with the goal of producing at least \(2k\) unique preimages of \(z\). By a Chernoff bound, this then succeeds with constant probability \(\Omega(1)\): that is, if \(X\) is the number of valid unique preimages, \(\Pr\left[X\leq\frac{1}{2}(4k)\right]\leq e^{-4k/8}=e^{-n(\ell+1)}\leq 1/2\). Finally, because every pair of unique preimages is a collision, this gives \(k\) disjoint collisions of the random oracle. That is, this process therefore produces \(k\) disjoint collisions with constant probability \(\Omega(1)\). Now, if this process succeeds given the advice \(a_{z}\in\{0,1\}^{\ell}\), then it can also succeed without being given advice, though with a much lower probability, by guessing the advice string with probability \(2^{-\ell}\), for an overall success probability of at least \(\Omega(2^{-\ell})\). To recap, this gives a quantum oracle algorithm for producing \(k\) disjoint collisions of a random oracle which makes \(t=8kq/\eta\) oracle queries and succeeds with probability at least \(\Omega(2^{-\ell})\). On the other hand, we recall the following theorem from Hamoudi and Magniez [14]: **Theorem 4.9** (Theorem 4.6 from [14]).: _The success probability of finding \(K\) disjoint collisions in a random function \(f:[M]\to[N]\) is at most \(O(T^{3}/(K^{2}N))^{K/2}+2^{-K}\) for any algorithm making \(T\) quantum queries to \(f\) and any \(1\leq K\leq N/8\)._ Applying the bound from the above Theorem 4.9 with \(T=8kq/\eta\), \(K=k\), \(M=2^{m}\) and \(N=2^{n}\), the success probability for this task must be at most \[O\left(\frac{T^{3}}{K^{2}N}\right)^{K/2}+2^{-K} =O\left(\frac{(8kq/\eta)^{3}}{k^{2}2^{n}}\right)^{k/2}+2^{-k}\] \[=O\left(\frac{kq^{3}}{\eta^{3}2^{n}}\right)^{k/2}+2^{-k}\] \[\leq 2^{-\Omega(k)}\] \[\leq 2^{-\Omega(n(\ell+1))}\] There therefore exists a sufficiently large \(n\) for which this is a contradiction. This completes the proof of Proposition 4.8. #### 4.2.2 With a Limited Cloning Oracle In the second stage, we allow \(R\) access to a limited cloning oracle which can clone only the target state. **Definition 4.10**.: _Let \(z\) be a label and \(|\psi_{z}\rangle\) the corresponding quantum state from the scheme. 
A \(z\)**-cloning oracle**, \(\mathcal{C}_{z}\), is a cloning oracle for the singleton set \(\{|\psi_{z}\rangle\}\)._ **Proposition 4.11**.: _Let \(R\) be a quantum oracle algorithm that is given a classical advice string \(a_{z}\in\{0,1\}^{\ell}\) for some polynomial \(\ell\) in \(n\), and makes \(q\) queries (where \(q\) is a polynomial in \(n\)) to the random oracle **as well as a \(z\)-cloning oracle**. Let \(R^{\prime}\) be a run of \(R\) where queries to the \(z\)-cloning oracle are instead returned unmodified (or equivalently, passed to a dummy oracle which acts as the identity). Then the total variation distance between the outcomes of the two runs is negligible in \(n\)._ Proof.: The idea is that since the \(z\)-cloning oracle and the dummy oracle differ only on the basis states where the first register is \(|\psi_{z}\rangle\), if \(R\) puts low query weight on those basis states, then swapping between the oracles can only make minimal difference. We now give the proof. Consider an adversary \(R\) which, when given advice string \(a_{z}\), makes \(k\) queries to the random oracle and \(q\) queries to the \(z\)-cloning oracle. Let \(R^{\prime}\) be a quantum oracle algorithm which simulates a run of \(R\) in which the \(z\)-cloning oracle is replaced by a dummy oracle (an oracle which acts as the identity on all states) by ignoring all of the \(z\)-cloning oracle queries (equivalent to performing the identity on each one). For each \(t\in[q]\), let \(R^{\prime}_{t}\) be a version of \(R^{\prime}\) in which the simulation is stopped prematurely at cloning query number \(t\) and which then outputs the first register of the query input. Let \(\rho^{\prime}_{t}\) be the reduced density matrix of the outputted state. Because the runs of \(R^{\prime}\) and \(R^{\prime}_{t}\) are identical up until \(R^{\prime}_{t}\) stops and outputs cloning query number \(t\), \(\rho^{\prime}_{t}\) is also the reduced state of the first query register when \(R^{\prime}\) requests query number \(t\). Let \(\eta^{\prime}_{t}:=\langle\psi_{z}|\rho^{\prime}_{t}|\psi_{z}\rangle\) be the probability that \(\rho^{\prime}_{t}\) would pass verification for \(z\). As above, since each \(R^{\prime}_{t}\) is a quantum oracle algorithm with advice that satisfies the conditions of Proposition 4.8, all the \(\eta^{\prime}_{t}\) must be negligible in \(n\). Choose a basis \(\{|\chi_{i}\rangle\}_{i\in[0,\dim{(\mathcal{H})}]}\) for \(\mathcal{H}^{\prime}:=\mathcal{H}\oplus|\bot\rangle\) as in Definition 4.3, that includes \(|\chi_{0}\rangle:=|\bot\rangle\) and \(|\chi_{z}\rangle:=|\psi_{z}\rangle\) as two basis elements. Let \(D\) be a unitary from this basis into the computational basis that sends \(|\psi_{z}\rangle|\bot\rangle\) to \(|z\rangle|0\rangle\) and \(|\psi_{z}\rangle|\psi_{z}\rangle\) to \(|z\rangle|1\rangle\) and which arbitrarily assigns all other orthogonal states to computational basis states. (For example, let \(B=\sum_{i}|i\rangle\langle\chi_{i}|\), let \(C=\sum_{ij\notin\{(z,1),(z,z)\}}|i\rangle|j\rangle\langle i|\langle j|+|z\rangle|1\rangle\langle z|\langle z|+|z\rangle|z\rangle\langle z|\langle 1|\), and let \(D=C\cdot(B\otimes B)\).) In this basis, both the \(z\)-cloning oracle and the dummy oracle can be expressed as applications of binary classical functions on all but the last bit. 
The \(z\)-cloning oracle becomes an application of the classical indicator function for the string \((z,0^{m-1})\): \(f_{=z}(x)=\begin{cases}1&x=(z,0^{m-1})\\ 0&\text{otherwise}\end{cases}\), and the dummy cloning oracle becomes an application of the all-zero function \(f_{\emptyset}(x)=0\). Let \(\mathcal{O}_{=z}\) be the unitary application of the indicator function, \(f_{=z}(x)\) above, which XORs the result into the last bit. Then the \(z\)-cloning oracle can be expressed as \(\mathcal{C}_{z}=D^{\dagger}\mathcal{O}_{=z}D\), and the dummy oracle can be expressed as \(\mathcal{C}_{\emptyset}=D^{\dagger}\mathcal{O}_{\text{identity}}D=D^{\dagger}ID=I\). The algorithms \(R\), \(R^{\prime}\), and \(\{R^{\prime}_{t}\}_{t\in[q]}\) can therefore be reformulated as quantum oracle algorithms that direct cloning queries to the classical oracles \(\mathcal{O}_{=z}\) in the case of \(R\), or \(\mathcal{O}_{\text{identity}}\) in the case of \(R^{\prime}\) and \(\{R^{\prime}_{t}\}_{t\in[q]}\). That is, before they make a cloning query, they apply the change of basis \(D\) into the computational basis. They then query \(\mathcal{O}_{=z}\) or \(\mathcal{O}_{\text{identity}}\), and then apply the change of basis \(D^{\dagger}\) back to the original basis. Call the versions of \(R\), \(R^{\prime}\), and \(\{R^{\prime}_{t}\}_{t\in[q]}\) in this new basis \(\mathcal{R}\), \(\mathcal{R}^{\prime}\), and \(\{\mathcal{R}^{\prime}_{t}\}_{t\in[q]}\). Note that the only difference between \(\mathcal{R}\) and \(\mathcal{R}^{\prime}\) is that cloning queries to \(\mathcal{O}_{=z}\) in \(\mathcal{R}\) are redirected to \(\mathcal{O}_{\text{identity}}\) in \(\mathcal{R}^{\prime}\). Furthermore, \(\mathcal{O}_{=z}\) and \(\mathcal{O}_{\text{identity}}\) only differ on inputs where the first register is \(|z\rangle\). We now therefore compute the query magnitude of cloning queries of \(\mathcal{R}^{\prime}\) on \(|z\rangle\). The state of the first register of cloning query number \(t\) of \(\mathcal{R}^{\prime}\) to \(\mathcal{O}_{\text{identity}}\) is given by \(B\rho^{\prime}_{t}B^{\dagger}=\sum_{i,j}|i\rangle\langle\chi_{i}|\;\rho^{ \prime}_{t}\;|\chi_{j}\rangle\langle j|\). The query magnitude on \(|z\rangle\) is then \(\langle z|\left(\sum_{i,j}|i\rangle\langle\chi_{i}|\;\rho^{\prime}_{t}\;|\chi_ {j}\rangle\langle j|\right)|z\rangle=\langle\chi_{z}|\rho^{\prime}_{t}|\chi_ {z}\rangle=\langle\psi_{z}|\rho^{\prime}_{t}|\psi_{z}\rangle=\eta^{\prime}_{t}\). We now apply Theorem 2.5 on the set \(F:=\{(i,y)\;|\;i\in[q],y=z\}\) and \(\varepsilon:=\sqrt{T\sum_{t=1}^{T}\eta^{\prime}_{t}}\), with \(T:=q\). The sum of the query magnitudes of \(\mathcal{R}^{\prime}\) on \(F\) is then \(\sum_{t=1}^{T}\eta^{\prime}_{t}\leq\frac{\varepsilon^{2}}{T}\). Let \(|\phi\rangle\) and \(|\phi^{\prime}\rangle\) be the states outputted by \(\mathcal{R}\) and \(\mathcal{R}^{\prime}\) respectively (and therefore also by \(R\) and \(R^{\prime}\) respectively). Since \(\mathcal{R}\) is identical to \(\mathcal{R}^{\prime}\), with the only difference being that the cloning oracle queries are modified on the set \(F\), then by Theorem 2.5, \(\left||\phi\rangle-|\phi^{\prime}\rangle\right|\leq\varepsilon\). By Theorem 2.3, then, the total variation distance between runs of \(R\) and \(R^{\prime}\) is therefore at most \(4\varepsilon=4\sqrt{q\sum_{t=1}^{q}\eta^{\prime}_{t}}\), which is negligible, since all the \(\eta^{\prime}_{t}\) are negligible and \(q\) is a polynomial. 
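Before moving on, the following numpy sketch (our own toy illustration, with tiny hypothetical parameters and a deterministic stand-in function `f`) constructs the \(z\)-cloning oracle of Definition 4.10 concretely and checks the three properties the proof relies on: it is unitary, it maps \(|\psi_{z}\rangle|\bot\rangle\) to \(|\psi_{z}\rangle|\psi_{z}\rangle\), and it fixes the other preimage-superposition states.

```python
import numpy as np

# Toy construction (not part of the proof) of the z-cloning oracle from
# Definition 4.10, with the extra symbol |bot> adjoined as one more basis vector.
m, n = 3, 1
f = lambda x: bin(x).count("1") % 2          # stand-in for the random oracle
dim = 2 ** m + 1                             # basis: {0,1}^m together with |bot>
bot = np.zeros(dim); bot[-1] = 1.0

def preimage_state(z):
    v = np.zeros(dim)
    pre = [x for x in range(2 ** m) if f(x) == z]
    v[pre] = 1.0 / np.sqrt(len(pre))
    return v

z = 0
psi_z = preimage_state(z)
a = np.kron(psi_z, bot)                      # |psi_z>|bot>
b = np.kron(psi_z, psi_z)                    # |psi_z>|psi_z>

# Swap |psi_z>|bot> <-> |psi_z>|psi_z>, identity on the orthogonal complement.
C_z = np.eye(dim ** 2) - np.outer(a, a) - np.outer(b, b) + np.outer(a, b) + np.outer(b, a)

assert np.allclose(C_z @ C_z.T, np.eye(dim ** 2))   # unitary (real symmetric, C_z^2 = I)
assert np.allclose(C_z @ a, b)                      # it clones |psi_z>
other = np.kron(preimage_state(1), bot)
assert np.allclose(C_z @ other, other)              # and fixes the other preimage states
```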
**Corollary 4.12**.: _Let \(R\) be a quantum oracle algorithm that is given a classical advice string \(a_{z}\in\{0,1\}^{\ell}\) for some polynomial \(\ell\) in \(n\), and makes \(q\) queries (where \(q\) is a polynomial in \(n\)) to the random oracle **as well as a \(z\)-cloning oracle**. For \(z\in\{0,1\}^{n}\) drawn uniformly at random, \(R\) cannot output a quantum state that passes verification for \(z\) with probability that is non-negligible in \(n\)._

Proof.: Suppose that \(R\), when given advice string \(a_{z}\), makes \(k\) queries to the random oracle and \(q\) queries to the \(z\)-cloning oracle, and then produces a state \(\rho_{z}\) which passes verification for \(z\) with probability \(\eta\). As in Proposition 4.11, let \(R^{\prime}\) be a run of \(R\) in which queries to the \(z\)-cloning oracle are returned unmodified. \(R^{\prime}\) is then a quantum oracle algorithm with advice that satisfies the conditions of Proposition 4.8, so it must have negligible success probability \(\eta^{\prime}\). By Proposition 4.11, the total variation distance between runs of \(R\) and \(R^{\prime}\) is negligible in \(n\), so \(\eta\) is at most negligibly larger than \(\eta^{\prime}\), and thus negligible as well.

#### 4.2.3 With the Full Cloning Oracle

In the third stage, we finally allow \(R\) access to the full cloning oracle, which clones all valid states of the scheme while doing nothing for invalid states.

**Proposition 4.13**.: _Let \(R\) be a quantum oracle algorithm that is given a classical advice string \(a_{z}\in\{0,1\}^{\ell}\) for some polynomial \(\ell\) in \(n\), and makes \(q\) queries (where \(q\) is a polynomial in \(n\)) to the random oracle **and a full cloning oracle** for the set of valid states. For \(z\in\{0,1\}^{n}\) drawn uniformly at random, \(R\) cannot output a quantum state that passes verification for \(z\) with probability that is non-negligible in \(n\)._

Proof.: Note that in showing this, we are demonstrating that the ability to clone other valid states does not help \(R\) produce the target state. The idea is to use \(R\) to produce a new adversary \(R^{\prime}\) which queries just the \(z\)-cloning oracle with comparable success. Ideally we would take a \(z\)-cloning oracle and simply simulate the rest of the cloning oracle (for states other than the target state) by using the random oracle. However, such a simulation would require a large number of queries to the random oracle and thus be highly inefficient. We get around this issue by creating an impostor random oracle and simulating cloning queries relative to it rather than relative to the original random oracle. We must show first that the impostor random oracle is indistinguishable from the original random oracle, and second that it is possible to approximately simulate cloning queries to the impostor oracle. We now give the proof.

Consider an adversary \(R\) which, when given advice string \(a_{z}\), makes \(q\) queries to the random oracle and the full cloning oracle, and then produces a state which passes verification for \(z\) with probability \(\eta\). We use \(R\) to produce a similar algorithm, \(R^{\prime}\), which only makes cloning queries to the \(z\)-cloning oracle, and which must succeed with comparable probability. We produce \(R^{\prime}\) as follows: We first sample a private random function, \(H_{\text{private}}:\{0,1\}^{m}\to\{0,1\}^{n}\setminus\{z\}\), which has a limited codomain such that it does not output \(z\).
That is, for each input, independently choose a uniformly random element of \(\{0,1\}^{n}\setminus\{z\}\). We then create an impostor random oracle, \(H_{\text{impostor}}:\{0,1\}^{m}\to\{0,1\}^{n}\), by combining the original and private random oracles in the following way: \[H_{\text{impostor}}(x)=\begin{cases}z&H(x)=z\\ H_{\text{private}}(x)&\text{otherwise}\end{cases}\] That is, on query input \(x\), if \(x\) is a preimage of \(z\) in \(H\), it passes the query to the original random oracle, producing \(z\), but otherwise passes it to the newly sampled private random oracle. We also create a cloning oracle relative to this impostor random oracle, \(\mathcal{C}_{\text{impostor}}\). This _impostor cloning oracle_ clones the states that are valid for the impostor random oracle, which will in general be different than the set of valid states of the original random oracle. We claim that the impostor oracles perfectly mimic the originals. **Claim 4.14**.: _The joint distribution of target image \(z\) and the impostor random oracle \(H_{\text{impostor}}\) is identical to that of \(z\) and \(H\). That is, \(H_{\text{impostor}}\) is distributed as a uniform random oracle conditioned on \(z\) being one of its images._ Proof.: To show this, we begin giving an equivalent lazy method of sampling the random oracle \(H\), along with sampling the target image \(z\). First, we choose a random element \(x^{*}\in\{0,1\}^{m}\) in the domain of \(H\). We then randomly choose \(z\in\{0,1\}^{n}\) as both its image in \(H\) and as the target image. Then, for each of the remaining elements of the domain of \(H\), sample a random image from its range. We now describe a similar method for lazily sampling the impostor random oracle \(H_{\text{impostor}}\), along with the target image \(z\). As before, we choose a random element \(x^{*}\in\{0,1\}^{m}\) in the domain, and a random image \(z\in\{0,1\}^{n}\) as both its image in \(H_{\text{impostor}}\) and as the target image. For each remaining element, \(x\), of the domain, we first sample a random image \(y\). If \(y\neq z\), resample an independent sample \(y^{\prime}\) from \(\{0,1\}^{n}\setminus\{z\}\) to be the image of \(x\). Since, conditioned on \(y\neq z\), \(y\) is uniform on \(\{0,1\}^{n}\setminus\{z\}\), and so is \(y^{\prime}\), the resampled image \(y^{\prime}\) is identically distributed to the original \(y\). The extra resampling performed to sample \(H_{\text{impostor}}\) thus has no effect on the distribution, so this process produces a distribution identical to the one above for sampling \(H\) and \(z\). As a consequence, no quantum oracle algorithm can tell the difference between query access to the original oracles \(H\) and \(\mathcal{C}_{H}\), and query access to the impostor oracles \(H_{\text{impostor}}\) and \(\mathcal{C}_{\text{impostor}}\). That is, an algorithm \(R^{\prime\prime}\) which simulates \(R\) and redirects its oracle queries to the impostor oracles will succeed with the same probability \(\eta\). This completes the first part, showing that the impostor oracles are perfect replacements for the original oracles. It now remains to show that the impostor oracles can be simulated efficiently in terms of the number of queries to the original random oracle \(H\) and a \(z\)-cloning oracle \(\mathcal{C}_{z}\). Note that implementing \(\mathcal{C}_{\text{impostor}}\) using \(H\) and \(\mathcal{C}_{z}\) may be query inefficient. 
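Before turning to the efficient simulation, here is a minimal classical sketch (toy parameters of our own choosing, purely for illustration) of how \(H_{\text{private}}\) and \(H_{\text{impostor}}\) are assembled from \(H\) as defined above. Note that this naive tabulation evaluates \(H\) on every point of the domain, which is precisely the kind of query inefficiency the construction below is designed to avoid.

```python
import random

# Toy parameters (hypothetical, for illustration only): H : {0,1}^m -> {0,1}^n with m = 6, n = 3.
m, n = 6, 3

def sample_H():
    """One uniformly random function H : {0,1}^m -> {0,1}^n, stored as a lookup table."""
    return [random.randrange(2 ** n) for _ in range(2 ** m)]

def make_impostor(H, z):
    """Build H_private (which never outputs z) and H_impostor as defined above."""
    others = [y for y in range(2 ** n) if y != z]
    H_private = [random.choice(others) for _ in range(2 ** m)]
    H_impostor = [z if H[x] == z else H_private[x] for x in range(2 ** m)]
    return H_private, H_impostor

H = sample_H()
z = H[random.randrange(2 ** m)]          # a target image that is guaranteed to have a preimage
H_private, H_impostor = make_impostor(H, z)

# Sanity checks matching the definition: H_impostor agrees with H exactly on the preimages of z,
# and agrees with H_private everywhere else.
assert all((H_impostor[x] == z) == (H[x] == z) for x in range(2 ** m))
assert all(H_impostor[x] == H_private[x] for x in range(2 ** m) if H[x] != z)
```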
We therefore create a new efficient impostor cloning oracle \(\widehat{\mathcal{C}}_{\text{impostor}}\), which for each query only makes a constant number of queries to \(H\) and \(\mathcal{C}_{z}\), but which nevertheless performs nearly as well as the inefficient \(\mathcal{C}_{\text{impostor}}\). We would like to define \(\widehat{\mathcal{C}}_{\text{impostor}}\) by saying that it acts on computational basis states approximately as \[\widehat{\mathcal{C}}_{\text{impostor}}|x\rangle|y\rangle=\begin{cases} \mathcal{C}_{z}|x\rangle|y\rangle&H(x)=z\\ \mathcal{C}_{\text{private}}|x\rangle|y\rangle&\text{otherwise}\end{cases}\] However, in reality, this is not unitary, since the resulting states will not be exactly orthogonal. We instead define it with an additional ancilla qubit as follows: Define the following two unitaries acting on an ancilla qubit \(|b\rangle\) as well as the two input registers (of the cloning oracle). \[\mathcal{U}_{1}|b\rangle|x\rangle|y\rangle =\begin{cases}|b\oplus 1\rangle|x\rangle|y\rangle&H(x)=z\\ |b\rangle|x\rangle|y\rangle&\text{otherwise}\end{cases}\] \[\mathcal{U}_{2}|b\rangle|x\rangle|y\rangle =\begin{cases}|b\rangle\otimes\mathcal{C}_{z}|x\rangle|y\rangle&b=1 \\ |b\rangle\otimes\mathcal{C}_{\text{impostor}}|x\rangle|y\rangle&\text{otherwise} \end{cases}\] The action of \(\mathcal{C}_{\text{impostor}}\) with an extra ancilla qubit can be expressed as \(I\otimes\mathcal{C}_{\text{impostor}}=\mathcal{U}_{1}\mathcal{U}_{2}\mathcal{U }_{1}\). That is, \(\mathcal{U}_{2}=I\otimes\mathcal{C}_{\text{impostor}}\) because whenever \(H(x)=z\), then \(\mathcal{C}_{z}|x\rangle|y\rangle=\mathcal{C}_{H}|x\rangle|y\rangle=\mathcal{C}_ {\text{impostor}}|x\rangle|y\rangle\). Furthermore, for any \(x\in\{0,1\}^{m}\), the support of \(\mathcal{C}_{\text{impostor}}|x\rangle|y\rangle\) is only on computational basis states \(|x^{\prime}\rangle|y^{\prime}\rangle\) such that \(H(x^{\prime})=H(x)\), which implies that \(\mathcal{U}_{2}=I\otimes\mathcal{C}_{\text{impostor}}\) commutes with \(\mathcal{U}_{1}\). We now define a modified version of \(\mathcal{U}_{2}\), but which makes no use of \(\mathcal{C}_{\text{impostor}}\), and instead uses \(\mathcal{C}_{\text{private}}\), the cloning oracle relative to \(H_{\text{private}}\): \[\widehat{\mathcal{U}}_{2}|b\rangle|x\rangle|y\rangle=\begin{cases}|b\rangle\otimes \mathcal{C}_{z}|x\rangle|y\rangle&b=1\\ |b\rangle\otimes\mathcal{C}_{\text{private}}|x\rangle|y\rangle&\text{otherwise} \end{cases}\] We thus define \(\widehat{\mathcal{C}}_{\text{impostor}}=\mathcal{U}_{1}\widehat{\mathcal{U}}_ {2}\mathcal{U}_{1}\), which we note makes two queries to \(H\) and one query6 to \(\mathcal{C}_{z}\) on each application (note that \(\mathcal{C}_{\text{private}}\) uses no oracle queries as it can be simulated directly using the private random function \(H_{\text{private}}\)). It remains to show that \(\widehat{\mathcal{C}}_{\text{impostor}}\) cannot be distinguished from \(I\otimes\mathcal{C}_{\text{impostor}}\). That is, we show that it is a good efficient approximation for \(\mathcal{C}_{\text{impostor}}\). Footnote 6: Note that it is straightforward to implement a controlled version of \(\mathcal{C}_{z}\) using a single query to \(\mathcal{C}_{z}\), as it is for any oracle for which a fixed state, on which it acts as the identity, is known. In this case, the fixed state is \(|\bot\rangle|\bot\rangle\). To do so, we prepare the state \(|\bot\rangle|\bot\rangle\) in an ancilla register. 
We then apply a \(0\)-controlled SWAP gate between this register and the input register on which \(\mathcal{C}_{z}\) acts, once before and then once again after applying \(\mathcal{C}_{z}\). If the control is a \(0\), then the fixed state \(|\bot\rangle|\bot\rangle\) is swapped in, neutralizing the application of the oracle. If the control is a \(1\), then nothing is swapped and the oracle acts as expected.

We observe that \(I\otimes\mathcal{C}_{\text{impostor}}\) and \(\widehat{\mathcal{C}}_{\text{impostor}}\) differ only in whether they apply \(\mathcal{C}_{\text{impostor}}\) or \(\mathcal{C}_{\text{private}}\) (in \(\mathcal{U}_{2}\) and \(\widehat{\mathcal{U}}_{2}\) respectively) on the two non-ancilla registers, and only on basis states for which the first of those registers is not a preimage of \(z\). In fact they differ only by a change of basis between a basis that includes the preimage superposition set of \(H_{\text{private}}\) and one that includes the preimage superposition set of \(H_{\text{impostor}}\). Taking a closer look at \(H_{\text{impostor}}\) and \(H_{\text{private}}\), the only difference between the functions is that the domain elements that are preimages of the target image \(z\) in \(H_{\text{impostor}}\) are reassigned to another random image in \(H_{\text{private}}\). Moreover, since the difference we observe in this setting is only for domain elements that do not map to \(z\) in \(H\) (and thus in \(H_{\text{impostor}}\)), we can set aside \(z\) in the analysis and focus on the other images.

Let \(|\psi_{i}\rangle\) and \(|\widehat{\psi}_{i}\rangle\) be the respective preimage superposition states of \(H_{\text{impostor}}\) and \(H_{\text{private}}\) for image \(i\in\{0,1\}^{n}\setminus\{z\}\). Let \(\theta_{i}:=\cos^{-1}\left(\langle\psi_{i}|\widehat{\psi}_{i}\rangle\right)\) be the small angle between them. Further, let \(|\psi_{z\to i}\rangle\) be the equal positive superposition over any preimages of the target image, \(z\), in \(H_{\text{impostor}}\) that were reassigned to image \(i\) in \(H_{\text{private}}\). Then we can write \(|\widehat{\psi}_{i}\rangle=\cos(\theta_{i})|\psi_{i}\rangle+\sin(\theta_{i})|\psi_{z\to i}\rangle\). Note that for all \(i\neq j\), \(\langle\psi_{i}|\widehat{\psi}_{j}\rangle=\cos(\theta_{j})\langle\psi_{i}|\psi_{j}\rangle+\sin(\theta_{j})\langle\psi_{i}|\psi_{z\to j}\rangle=0\) because the supports of the states (that is, their sets of preimages) are disjoint (where note again that we exclude the target image \(z\) here). And of course, each of the preimage superposition sets is orthogonal within the set: \(\langle\psi_{i}|\psi_{j}\rangle=\langle\widehat{\psi}_{i}|\widehat{\psi}_{j}\rangle=0\quad\forall i\neq j\). We can therefore partition the Hilbert space into \(2^{n}-1\) orthogonal planes, each of which is spanned by a \(|\psi_{i}\rangle\) and its corresponding \(|\widehat{\psi}_{i}\rangle\) (or \(|\psi_{z\to i}\rangle\)), as well as a remaining space orthogonal to all those planes. With this perspective, the change of basis that differentiates between \(\mathcal{C}_{\text{impostor}}\) and \(\widehat{\mathcal{C}}_{\text{impostor}}\) can be described as a small rotation of angle \(\theta_{i}\) in each of these planes and the identity in the remaining space.
\[\mathcal{U}_{3}:=I-\sum_{i}\big{(}|\psi_{i}\rangle\langle\psi_{i} |+|\psi_{z\to i}\rangle\langle\psi_{z\to i}|\big{)}\] \[\qquad+\sum_{i}\big{(}\cos(\theta_{i})|\psi_{i}\rangle+\sin( \theta_{i})|\psi_{z\to i}\rangle\big{)}\langle\psi_{i}|+\big{(}-\sin(\theta_{i })|\psi_{i}\rangle+\cos(\theta_{i})|\psi_{z\to i}\rangle\big{)}\langle\psi_{z \to i}|\] Then, \[I\otimes\mathcal{C}_{\text{impostor}}=\mathcal{U}_{1}\mathcal{U}_{2}\mathcal{U} _{1}\ \ \text{and}\ \ \widehat{\mathcal{C}}_{\text{impostor}}=\mathcal{U}_{1}(\mathcal{U}_{3}^{\dagger} \otimes\mathcal{U}_{3}^{\dagger})\mathcal{U}_{2}(\mathcal{U}_{3}\otimes \mathcal{U}_{3})\mathcal{U}_{1}\] It therefore suffices to show that \(\mathcal{U}_{3}\) cannot be distinguished from the identity except with negligible advantage. Specifically, we want to show that the eigenvalues of \(I-\mathcal{U}_{3}\) are all negligible. That's because if the magnitudes of all the eigenvalues of \(I-\mathcal{U}_{3}\) are bounded from above by a negligible function \(\varepsilon\), then given any quantum state \(|\phi\rangle\) before the application of \(\mathcal{U}_{3}\) or \(I\), and any subsequent transformation, we have that the resulting Euclidean distance is \(\left\||\phi\rangle-\mathcal{U}_{3}|\phi\rangle\right\|=\left\||(I-\mathcal{U} _{3})|\phi\rangle\right\|\leq\varepsilon\), and thus by Theorem 2.3, when replacing \(I\) with \(\mathcal{U}_{3}\), the probability of success can change by at most \(4\varepsilon\). Since \(I-\mathcal{U}_{3}\) acts independently on and maintains the \(2^{n}-1\) orthogonal planes, it suffices to look at each plane individually. Specifically, its non-zero eigenvalues come in pairs of magnitude \[|\lambda_{i}| =|1-e^{\pm\mathbf{i}\theta_{i}}|\] \[=|1-\cos(\theta_{i})\mp\mathbf{i}\sin(\theta_{i})|\] \[=\sqrt{(1-\cos(\theta_{i}))^{2}+\sin^{2}(\theta_{i})}\] \[=\sqrt{2(1-\cos(\theta_{i}))}\] \[=\sqrt{2\left(1-\langle\psi_{i}|\widehat{\psi}_{i}\rangle\right)}\] In order to further break this down, let \(k_{i}\) be the number of preimages of \(i\) in \(H_{\mathrm{impostor}}\) and let \(k_{z\to i}\) be the number of preimages of the target image, \(z\), in \(H_{\mathrm{impostor}}\) that were reassigned to image \(i\) in \(H_{\mathrm{private}}\). We evaluate the inner product as \[\langle\psi_{i}|\widehat{\psi}_{i}\rangle =\left(\frac{1}{\sqrt{k_{i}}}\sum_{x|H_{\mathrm{impostor}}(x)=i} \langle x|\right)\left(\frac{1}{\sqrt{k_{i}+k_{z\to i}}}\sum_{x|H_{ \mathrm{private}}(x)=i}|x\rangle\right)\] \[=\sqrt{\frac{k_{i}}{k_{i}+k_{z\to i}}}=\sqrt{1-\frac{k_{z\to i }}{k_{i}+k_{z\to i}}}\geq 1-\frac{k_{z\to i}}{k_{i}+k_{z\to i}}\] which gives \[|\lambda_{i}|=\sqrt{2\left(1-\langle\psi_{i}|\widehat{\psi}_{i} \rangle\right)}\leq\sqrt{\frac{2k_{z\to i}}{k_{i}+k_{z\to i}}}\] The following claim frames this bound in terms of \(n\). **Claim 4.15**.: _With overwhelming probability in the choice of \(H\) and \(H_{\mathrm{private}}\), for all \(i\in\{0,1\}^{n}\setminus\{z\}\),_ \[\frac{k_{z\to i}}{k_{i}+k_{z\to i}}\leq 72n\cdot 2^{-n}\] Proof.: We show that the following all happen with overwhelming probability: * for all \(i\in\{0,1\}^{n}\setminus\{z\}\), \(k_{i}>\frac{1}{2}\cdot 2^{m-n}\) * \(\frac{1}{2}\cdot 2^{m-n}<k_{z}<3\cdot 2^{m-n}\) * for all \(i\in\{0,1\}^{n}\setminus\{z\}\), \(k_{z\to i}<36n\cdot 2^{m-2n}\) First we show that with overwhelming probability, for all \(i\in\{0,1\}^{n}\setminus\{z\}\), \(k_{i}>\frac{1}{2}\cdot 2^{m-n}\). The expected number of preimages of any image \(i\) is \(\mathbb{E}[k_{i}]=2^{m-n}\). 
By a Chernoff bound, \(P[k_{i}\leq\frac{1}{2}(2^{m-n})]\leq e^{-\frac{1}{8}\cdot 2^{m-n}}\) for any particular image \(i\). By a union bound over the \(2^{n}-1\) images, the probability that for any \(i\), \(k_{i}\leq\frac{1}{2}(2^{m-n})\), is at most \(2^{n}\cdot e^{-\frac{1}{8}\cdot 2^{m-n}}\leq e^{-(\frac{1}{8}\cdot 2^{m-n}-n)}\), which is negligible in \(n\) as we have that \(m\geq 2n\).

We next bound the number of preimages of the target image \(z\). Specifically, we show that \(\frac{1}{2}\cdot 2^{m-n}<k_{z}<3\cdot 2^{m-n}\). The lower bound is identical to the one above for the other \(k_{i}\)'s. The upper bound is given by another Chernoff bound as \(P[k_{z}\geq 3(2^{m-n})]\leq e^{-2^{m-n}}\), which is likewise negligible in \(n\).

Finally, we bound the number of preimages of \(z\) in \(H_{\mathrm{impostor}}\) that can be mapped to any one \(i\) in \(H_{\mathrm{private}}\). Specifically, we show that for all \(i\in\{0,1\}^{n}\setminus\{z\}\), \(k_{z\to i}<36n\cdot 2^{m-2n}\). Since we just showed that with overwhelming probability, \(z\) has at least \(\frac{1}{2}\cdot 2^{m-n}\) and at most \(3\cdot 2^{m-n}\) preimages, the expected number of these preimages distributed to each of the \(2^{n}-1\) other images is bounded by \(\frac{1}{2}\cdot 2^{m-2n}<\mathbb{E}[k_{z\to i}]<6\cdot 2^{m-2n}\). By a Chernoff bound, \(P[k_{z\to i}\geq 6n(6\cdot 2^{m-2n})]\leq e^{-\frac{25n^{2}}{2+5n}\cdot 2^{m-2n}}\leq e^{-\frac{3}{2}n\cdot 2^{m-2n}}\) for any particular image \(i\). As before, by a union bound over the \(2^{n}-1\) images, the probability that for any \(i\), \(k_{z\to i}\geq 36n\cdot 2^{m-2n}\) is at most \(2^{n}\cdot e^{-\frac{3}{2}n\cdot 2^{m-2n}}\leq e^{-(\frac{3}{2}n\cdot 2^{m-2n}-n)}\), which is negligible in \(n\) as \(m\geq 2n\).

Putting these three things together, by a union bound over the three above events, with all but a negligible probability in \(n\), for all \(i\), \[\frac{k_{z\to i}}{k_{i}+k_{z\to i}}\leq\frac{36n\cdot 2^{m-2n}}{\frac{1}{2}\cdot 2^{m-n}}=72n\cdot 2^{-n}\]

We therefore get an upper bound of \(\varepsilon:=12\sqrt{n}\cdot 2^{-n/2}\) on the eigenvalues of \(I-\mathcal{U}_{3}\), which is negligible in \(n\), and therefore, as shown above, an upper bound of \(4\varepsilon\) on the change in success probability incurred by replacing \(I\) with \(\mathcal{U}_{3}\).

We now use a standard hybrid argument over the at most \(4q\) locations where \(\mathcal{U}_{3}\) might appear. We start with \(R^{\prime\prime}\), for which all such locations have the identity, and for which the success probability is the original success probability of \(R\), namely \(\eta\). One at a time, we insert a \(\mathcal{U}_{3}\) at each location, each time incurring a loss of at most \(4\varepsilon\) in the success probability. With all \(4q\) applications of \(\mathcal{U}_{3}\), we therefore get a success probability \(\eta^{\prime}\) of at least \(\eta-16q\varepsilon-\gamma\) (where \(\gamma\) is an additional negligible loss from the negligible chance that the sampled \(H\) and \(H_{\mathrm{private}}\) are not covered by Claim 4.15).

We therefore construct \(R^{\prime}\) in this way as a quantum oracle algorithm with advice and query access to the original random oracle \(H\) and a \(z\)-cloning oracle \(\mathcal{C}_{z}\). It simulates \(R\) and redirects its oracle queries: Whenever \(R\) makes a random oracle query, it redirects the query to its own simulated \(H_{\mathrm{impostor}}\), which makes at most a single query to \(H\).
Whenever \(R\) makes a cloning oracle query, it redirects the query to its \(\widehat{\mathcal{C}}_{\mathrm{impostor}}\), which makes at most one query to \(\mathcal{C}_{z}\) and two to \(H\). \(R^{\prime}\) thus satisfies the conditions of Corollary 4.12, so its success probability \(\eta^{\prime}\geq\eta-16q\varepsilon-\gamma\) must be negligible. Therefore, \(\eta\), the success probability of \(R\), must be negligible, thus completing the proof of Proposition 4.13, and as a consequence, completing the proof of our main theorems, Theorem 4.2 and Theorem 4.1. ## 5 Implications for Complexity Theory We now present an application of clonable-untelegraphable states to the study of complexity theory. While there may be a number of possible connections to quantum complexity theory, we focus on one that is of particular interest, which is to the computational no-go properties of efficiently verifiable quantum proofs. The longstanding open problem of whether the complexity classes \(\mathsf{QCMA}\) and \(\mathsf{QMA}\) are equal [1] asks whether classical proofs are just as powerful as quantum proofs in the setting of efficient quantum verification. Here, we investigate the power of quantum proofs which are not quite classical, but also not fully quantum, and are rather quantum states that violate some specific computational no-go property. We first demonstrate that violating the efficient versions of either of the no-telegraphing or no-reconstruction properties makes the resulting complexity class equivalent to \(\mathsf{QCMA}\), in which the proofs are classical strings. On the other hand, we show that this is not likely to be the case for the class \(\mathsf{clonableQMA}\), in which the proofs are quantum states that are efficiently clonable. We justify this by giving a quantum oracle relative to which \(\mathsf{clonableQMA}\) is not contained in \(\mathsf{QCMA}\). We hope to inspire further investigation into the power of such quantum proofs. Moreover, an in-depth understanding of the relative power of these complexity classes is important for constructing the cryptographic applications presented in Section 6. ### Classical vs. Quantum Witnesses Recall the definitions of \(\mathsf{QCMA}\) and \(\mathsf{QMA}\) (Definitions 2.1 and 2.2 respectively). Note that the only difference between these two classes is the format of their witnesses: \(\mathsf{QMA}\) allows any polynomial-sized quantum state as a witness, while \(\mathsf{QCMA}\) restricts witnesses to be classical strings, or equivalently, restricts them to be in the computational basis. It is evident that \(\mathsf{QCMA}\subseteq\mathsf{QMA}\)[1].7 That is, the power of the class \(\mathsf{QMA}\) is made no greater, and possibly weaker, by restricting its witnesses to be classical. Whether or not these two complexity classes are in fact equal has been a major open problem in quantum complexity theory since it was first posed in [1] over two decades ago. A sequence of works has shown increasingly strong oracle separations between the two classes, beginning with quantum oracle separations [1, 2], and most recently, separations by classical distributional oracles [14, 15], but no separation relative to a standard classical oracle is yet known. Since both classes contain \(\mathsf{MA}\) and \(\mathsf{NP}\) and are contained in \(\mathsf{PP}\) and therefore \(\mathsf{PSPACE}\)[23], a separation in the standard model (without reference to oracles) would imply separations which are not thought to be possible with existing techniques. 
The overriding question is nevertheless easy to phrase: _Are classical witnesses as powerful as quantum witnesses in the context of efficient verification?_ Footnote 7: As observed in [1], the soundness condition of \(\mathsf{QCMA}\) in Definition 2.1 can be replaced with the one of \(\mathsf{QMA}\) in Definition 2.2 against general quantum purported witnesses without any effect on the class. That’s because the verifier can always force a quantum purported witness to be classical by measuring it in the computational basis. In other words, \(\mathsf{QCMA}\) is also sound against _quantum_ witnesses. In this section, we make progress on this question in a new direction: inspired by the new concept of clonable-untelegraphable states, we introduce a new complexity class, \(\mathsf{clonableQMA}\), which sits in between \(\mathsf{QCMA}\) and \(\mathsf{QMA}\), and we motivate the conjecture that it is not equal to either. This comes from considering weaker restrictions on the witnesses of \(\mathsf{QMA}\). Instead of allowing the witnesses to be fully quantum as in \(\mathsf{QMA}\) or restricting them to be fully classical as in \(\mathsf{QCMA}\), we require them to violate specific computational no-go properties. We show that restricting the witnesses of \(\mathsf{QMA}\) to be either efficiently reconstructable or efficiently telegraphable collapses the resulting class down to \(\mathsf{QCMA}\). In other words, \(\mathsf{QCMA}\) can be given an equivalent definition as the class \(\mathsf{QMA}\) with efficiently reconstructable or efficiently telegraphable quantum witnesses. On the other hand, restricting the witnesses to be efficiently clonable does not have the same effect. In fact, as a consequence of the proof of our black-box separation between efficiently clonable and efficiently telegraphable quantum states, we give a quantum oracle black-box separation between \(\mathsf{QCMA}\) and the new class, \(\mathsf{clonableQMA}\), of \(\mathsf{QMA}\) problems with efficiently clonable witnesses. Moreover, we argue without a formal proof that \(\mathsf{clonableQMA}\) is not likely to equal \(\mathsf{QMA}\) either, as this would imply the unlikely consequence of all \(\mathsf{QMA}\)-complete problems having efficiently clonable witnesses, which could prove to be a significant barrier to public-key quantum money. The class \(\mathsf{clonableQMA}\) may therefore be a new complexity class standing strictly in-between \(\mathsf{QCMA}\) and \(\mathsf{QMA}\). We end Section 5 by giving a candidate oracle-free problem in \(\mathsf{clonableQMA}\) which may separate it from \(\mathsf{QCMA}\), and we show that any such problem immediately yields back a set of states that is clonable but not efficiently telegraphable. ### Computational No-go Properties of Quantum Witnesses To motivate the discussion that follows, we start by giving a definition of \(\mathsf{QMA}\) with efficiently reconstructable quantum witnesses, and then show that it is in fact an alternate definition of \(\mathsf{QCMA}\). 
**Definition 5.1** (alternative definition of \(\mathsf{QCMA}\) in terms of efficiently reconstructable witnesses).: _A decision problem \(\mathcal{L}=(\mathcal{L}_{\mathsf{YES}},\mathcal{L}_{\mathsf{NO}})\) is in \(\mathsf{QCMA}^{\prime}(c,f,s)\) if there exists a polynomial time quantum verifier \(V\), a polynomial time quantum reconstructor \(R\), and a polynomial \(p\), such that_ * _Completeness:_ _if_ \(x\in\mathcal{L}_{\mathsf{YES}}\)_, then there exists a_ _quantum_ _witness_ \(\left|\psi\right\rangle\) _on_ \(p(\left|x\right|)\) _qubits such that_ \(V\) _accepts on input_ \(\left|x\right\rangle\left|\psi\right\rangle\) _with probability at least_ \(c\)_, and_ _Reconstruction Fidelity:_ _for this same_ \(\left|\psi\right\rangle\)_, there exists classical advice string_ \(a\in\{0,1\}^{p(\left|x\right|)}\) _such that_ \(R(a)\) _succeeds at reconstructing_ \(\left|\psi\right\rangle\left\langle\psi\right|\) _with fidelity at least_ \(f\)_._ _That is,_ \(\left\langle\psi\right|R(a)\left|\psi\right\rangle\geq f\)_._ * _Soundness:_ _if_ \(x\in\mathcal{L}_{\mathsf{NO}}\)_, then for all_ _quantum_ _states_ \(\left|\psi^{*}\right\rangle\) _on_ \(p(\left|x\right|)\) _qubits,_ \(V\) _accepts on input_ \(\left|x\right\rangle\left|\psi^{*}\right\rangle\) _with probability at most_ \(s\)_._ **Remark 5.2**.: _Note that we only require the collection of valid witnesses to be efficiently reconstructable. That is, while the collection of witnesses for \(\mathsf{YES}\)-instances must be efficiently reconstructable, as with the standard definition of \(\mathsf{QCMA}\) (see Footnote 7 above), the class is sound against **any** quantum witness._ **Theorem 5.3**.: \(\mathsf{QCMA}^{\prime}(\frac{9}{10},\frac{9}{10},\frac{1}{10})=\mathsf{QCMA}\)_. That is, this definition of \(\mathsf{QCMA}\) in terms of efficiently reconstructable witnesses is equivalent to the definition of \(\mathsf{QCMA}\) in Definition 2.1 in terms of classical witnesses, and describes the same class of decision problems._ Proof.: The main idea is that classical witnesses are themselves efficiently reconstructable, and any efficiently reconstructable witness can be given as a classical witness instead. For completeness, we include a formal proof as follows. For this proof, let the class described by Definition 5.1 be called \(\mathsf{QCMA}^{\prime}\) to distinguish it from that described by Definition 2.1. \(\mathsf{QCMA}\subseteq\mathsf{QCMA}^{\prime}(\frac{9}{10},\frac{9}{10}, \frac{1}{10})\)**:** Let \(\mathcal{L}\) be a decision problem in \(\mathsf{QCMA}\). We have that there exists polynomial time quantum verifier \(V\) which has completeness \(\frac{9}{10}\) and soundness \(\frac{1}{10}\). Let \(V^{\prime}\) be the verifier that projects the witness onto the computational basis and then passes the result to \(V\), and let \(R^{\prime}\) be the trivial reconstructor \(R^{\prime}(a)=\left|a\right\rangle\left\langle a\right|\). We show that \(V^{\prime}\) and \(R^{\prime}\) are a valid verifier and reconstructor pair for \(\mathcal{L}\) in \(\mathsf{QCMA}^{\prime}\). If \(x\in\mathcal{L}_{\mathsf{YES}}\), then the guarantee of \(\mathsf{QCMA}\) is that there exists a classical witness \(w\) that causes \(V\) to succeed with probability at least \(\frac{9}{10}\). Therefore, let \(\left|w\right\rangle\), the computational basis state corresponding to \(w\), be the quantum witness for the \(\mathsf{QCMA}^{\prime}\) verifier \(V^{\prime}\). Furthermore, let \(a=w\) be the advice for the reconstructor \(R^{\prime}\). 
Since \(\left|w\right\rangle\) is already in the computational basis, we see that \(V^{\prime}\) accepts on input \(\left|x\right\rangle\left|w\right\rangle\) with probability at least \(\frac{9}{10}\), and furthermore, \(R^{\prime}(a)=R^{\prime}(w)=\left|w\right\rangle\left\langle w\right|\) with fidelity \(1\). If \(x\in\mathcal{L}_{\mathsf{NO}}\), then for every classical string \(w^{*}\), \(V\) accepts the witness \(\left|w^{*}\right\rangle\) with probability at most \(\frac{1}{10}\), so \(V^{\prime}\), which first projects onto the computational basis, accepts any quantum witness \(\left|\psi^{*}\right\rangle\) with the same probability. Both \(V^{\prime}\) and \(R^{\prime}\) run in polynomial time, and the witness and advice string are the same length as the original witness. So \(\mathcal{L}\in\mathsf{QCMA}^{\prime}(\frac{9}{10},\frac{9}{10},\frac{1}{10})\).

\(\mathsf{QCMA}^{\prime}(\frac{9}{10},\frac{9}{10},\frac{1}{10})\subseteq\mathsf{QCMA}\)**:** Let \(\mathcal{L}\) be a decision problem in \(\mathsf{QCMA}^{\prime}\). Then there exists a polynomial time quantum verifier \(V^{\prime}\) and a polynomial time reconstructor \(R^{\prime}\). Let \(V\) be the \(\mathsf{QCMA}\) verifier given by the composition of \(V^{\prime}\) and \(R^{\prime}\). Specifically, \(V\) first passes its classical witness to the reconstructor \(R^{\prime}\), and then passes the result of that as a quantum witness to \(V^{\prime}\) (that is, \(V_{x}(a)=V^{\prime}_{x}(R^{\prime}(a))\)). We show that \(V\) is a valid verifier for \(\mathcal{L}\) in \(\mathsf{QCMA}\).

If \(x\in\mathcal{L}_{\mathsf{YES}}\), then there exists a quantum witness \(\left|\psi\right\rangle\) such that \(V^{\prime}\) accepts it with probability at least \(\frac{9}{10}\), and there exists a classical advice string \(a\) such that \(\left\langle\psi\right|R^{\prime}(a)\left|\psi\right\rangle\geq\frac{9}{10}\). Without loss of generality, \(V^{\prime}\) can be seen as a projective measurement which succeeds with probability at least \(\frac{9}{10}\). Then by Lemma A.1, \(V\), which passes \(a\) to \(R^{\prime}\) and the result of that to \(V^{\prime}\), accepts on input \(x\) and witness \(a\) with probability at least \(\left(\frac{9}{10}\right)^{2}-2\sqrt{\frac{1}{10^{2}}}=0.61\).

If \(x\in\mathcal{L}_{\mathsf{NO}}\), then for all quantum witnesses \(\left|\psi^{*}\right\rangle\), \(V^{\prime}\) accepts on input \(\left|x\right\rangle\left|\psi^{*}\right\rangle\) with probability at most \(\frac{1}{10}\). Since \(V\) produces the output of \(V^{\prime}\) on some quantum state, or a probabilistic mixture of quantum states, \(V\) will likewise only ever accept with probability at most \(\frac{1}{10}\).

Since we have a constant soundness-completeness gap (between soundness 0.10 and completeness 0.61), we can amplify the gap by parallel repetition for \(\mathsf{QCMA}\) to get the completeness and soundness required of Definition 2.1. \(V\) runs in polynomial time, and the witness \(a\) is the same length as the reconstruction advice. So \(\mathcal{L}\in\mathsf{QCMA}\).

Combining these, we get that \(\mathsf{QCMA}^{\prime}(\frac{9}{10},\frac{9}{10},\frac{1}{10})=\mathsf{QCMA}\).

We define \(\mathsf{QMA}\) with efficiently _telegraphable_ witnesses in the same way as in Definition 5.1, but with a polynomial time quantum deconstructor \(D\) in addition to the reconstructor \(R\), which must together succeed at telegraphing the witness \(\left|\psi\right\rangle\) with fidelity at least \(f\).
**Corollary 5.4**.: \(\mathsf{QMA}\) _with telegraphable witnesses is also equal to \(\mathsf{QCMA}\)._

Proof.: From Theorem 3.7, we know that telegraphable witnesses are specifically also reconstructable, and with at least the same fidelity. From Theorem 5.3, we know that this makes this class a subset of \(\mathsf{QCMA}\). In the other direction, classical witnesses are trivially telegraphable, which makes this class a superset of \(\mathsf{QCMA}\). So the two classes are in fact equivalent.

**Remark 5.5**.: _We see from this that we can define \(\mathsf{QCMA}\) in three alternative but equivalent ways:_

1. _as \(\mathsf{QMA}\) with the collection of valid witnesses restricted to **classical strings**, or equivalently, quantum states in the computational basis_
2. _as \(\mathsf{QMA}\) with the collection of valid witnesses restricted to be **efficiently reconstructable quantum states**_
3. _as \(\mathsf{QMA}\) with the collection of valid witnesses restricted to be **efficiently telegraphable quantum states**_

**Remark 5.6**.: _The task of separating between \(\mathsf{QMA}\) and \(\mathsf{QCMA}\) can thus be reframed as the task of finding decision problems in \(\mathsf{QMA}\) for which any collection of witnesses is not efficiently reconstructable and/or not efficiently telegraphable._

The violation of each computational no-go property - that is, efficient reconstructability, efficient telegraphability, and efficient clonability - brings collections of quantum states closer to being classical. It might then seem reasonable at this point to see a pattern and guess that every computational no-go violation by the witnesses of QMA would make it equal to QCMA. That is, to guess that any such _classicizing_ restriction on the witnesses makes the quantum witnesses effectively only as good as classical witnesses. We now give evidence _against_ that notion: a quantum oracle black-box separation suggesting that in this context of efficient verification, _efficiently clonable_ witnesses are more powerful than classical, efficiently reconstructable, or efficiently telegraphable witnesses.

### Clonable Witnesses and clonableQMA

We give the following definition for clonableQMA, the class of QMA problems that have efficiently clonable witnesses.

**Definition 5.7** (clonableQMA).: _A decision problem \(\mathcal{L}=(\mathcal{L}_{\mathsf{YES}},\mathcal{L}_{\mathsf{NO}})\) is in \(\mathsf{clonableQMA}(c,f,s)\) if there exists a polynomial time quantum verifier \(V\), a polynomial time quantum cloner \(C\), and a polynomial \(p\), such that_

* _Completeness:_ _if \(x\in\mathcal{L}_{\mathsf{YES}}\), then there exists a quantum witness \(\ket{\psi}\) on \(p(|x|)\) qubits such that \(V\) accepts on input \(\ket{x}\ket{\psi}\) with probability at least \(c\), and_
  _Cloning Fidelity:_ _when given this same witness, \(\ket{\psi}\), as input, \(C\) succeeds at producing two independent copies of \(\ket{\psi}\) with fidelity at least \(f\). That is, \(\bra{\psi}\otimes\bra{\psi}C\big{(}\ket{\psi}\bra{\psi}\big{)}\ket{\psi}\otimes\ket{\psi}\geq f\)._
* _Soundness:_ _if \(x\in\mathcal{L}_{\mathsf{NO}}\), then for all quantum states \(\ket{\psi^{*}}\) on \(p(|x|)\) qubits, \(V\) accepts on input \(\ket{x}\ket{\psi^{*}}\) with probability at most \(s\)._

As with Definition 5.1, the definition of clonableQMA only requires the collection of valid witnesses to be efficiently clonable, and the class is therefore sound against _any_ purported quantum witness.
We take \(\mathsf{clonableQMA}=\bigcup_{(1-f)\in\mathsf{negl}(n)}\mathsf{clonableQMA}( \frac{9}{10},f,\frac{1}{10})\).8 Footnote 8: Note that as with the other classes mentioned, it seems likely that the parameters here can be set arbitrarily within a wide range. If only the completeness and soundness errors are to be reduced, this can be done using the strong error reduction technique of [13], at the cost of an appropriate loss to the cloning fidelity error. The technique involves evaluating the verifier alternatingly both forward and in reverse, interspersed with measurements of the output bit and the ancilla registers. Reducing the cloning fidelity error is a different challenge, but is likely possible when the original error is small. We therefore leave as an open problem finding an error reduction procedure for the cloner. There are of course other ways that one could conceivably define a complexity class whose witnesses are efficiently clonable quantum states. For instance, the class could be defined with a single polynomial time quantum process that both verifies and clones in a single shot. We could also allow the cloner to accept a description of the problem instance as an additional input. We show in Appendix C that up to a polynomial loss in the cloning fidelity, these variations are really all the same definition. Note as well that by this definition, in order for a problem \(\mathcal{L}\) to be in clonableQMA, it is not required for any specific verifier for \(\mathcal{L}\) to accept a collection of efficiently clonable witnesses. Rather, it is only required that there exist _some_ verifier, and _some_ mapping from instances to witnesses, such that the collection of all these valid witnesses is an efficiently clonable collection. #### 5.3.1 Relationship to Qma and Qcma **Theorem 5.8**.: \(\mathsf{QCMA}\subseteq\mathsf{clonableQMA}\subseteq\mathsf{QMA}\) Proof.: The idea is that classical witnesses are efficiently clonable, and efficiently clonable witnesses are a subset of all quantum witnesses. Recall from Remark 5.2 that \(\mathsf{QCMA}\) can be sound even against general _quantum_ purported witnesses. So let \(\mathcal{L}\) be a decision problem in \(\mathsf{QCMA}\), and let \(V\) be a verifier for \(\mathcal{L}\) that accepts classical witnesses and is sound against quantum purported witnesses. Then \(V\) is a valid verifier for \(\mathcal{L}\) in \(\mathsf{clonableQMA}\), with the cloning operation being the classical copy operation on the classical witness. Moreover, any decision problem \(\mathcal{L}\in\mathsf{clonableQMA}\) is trivially also in \(\mathsf{QMA}\), since the same verifier for the decision problem in \(\mathsf{clonableQMA}\) serves as a verifier for it in \(\mathsf{QMA}\). **Remark 5.9**.: _This hierarchy of three complexity classes gives a neat picture of the power of quantum verification in terms of the computational no-go properties of their quantum witnesses, from \(\mathsf{QCMA}\) (efficiently reconstructable, efficiently telegraphable) to \(\mathsf{clonableQMA}\) (efficiently clonable) to \(\mathsf{QMA}\) (fully quantum)._ We conjecture that both containments are strict. Of course, an unconditional separation in either direction would imply an unconditional separation between \(\mathsf{QCMA}\) and \(\mathsf{QMA}\), as well a number of other resulting separations up to \(\mathsf{P}\neq\mathsf{PSPACE}\) (a separation that is believed to be true, but not thought to be possible to prove with existing techniques). 
Nevertheless, to justify that this is an entirely new complexity class, we give evidence for both separations. In Subsection 5.4, we show that the same quantum oracles that we used to separate no-cloning from no-telegraphing - and the same collection of quantum states - also serve to give us a black-box separation between \(\mathsf{QCMA}\) and \(\mathsf{clonableQMA}\). We conclude that subsection by giving a quantum oracle \(\mathcal{O}\) relative to which \(\mathsf{QCMA}^{\mathcal{O}}\neq\mathsf{clonableQMA}^{\mathcal{O}}\), demonstrating that any attempt to prove their equality must at least be quantumly non-relativizing. Moreover, we argue informally that \(\mathsf{clonableQMA}\) is not likely to be equal to \(\mathsf{QMA}\) either. That is because if \(\mathsf{QMA}\) were contained in \(\mathsf{clonableQMA}\), this would mean that every \(\mathsf{QMA}\)-complete problem would have efficiently clonable quantum witnesses, which would at the very least be surprising. At worst, if the equivalence between the classes were established through a witness-isomorphic reduction (a reduction in which witnesses for one problem are mapped by an efficiently computable transformation to witnesses for the other [12, 13]), this would rule out many schemes for public-key quantum money, as schemes based on the hardness of problems in \(\mathsf{QMA}\) could be broken by mapping their witnesses to those of a \(\mathsf{clonableQMA}\) problem for which there is an efficient cloner.

### Separating \(\mathsf{QCMA}\) from \(\mathsf{clonableQMA}\) in the Black-Box Model

Recall the collection of oracles and set of states defined in Scheme 4.7. We now use these same oracles to give a black-box separation between \(\mathsf{clonableQMA}\) and \(\mathsf{QCMA}\). The problem that we use to show the separation is the problem of distinguishing a dummy cloning oracle from one which clones a state from Scheme 4.7.

**Definition 5.10** (Randomized oracle hidden cloning problem).: _The randomized oracle hidden cloning problem, \(\mathsf{ROHC}\), is an oracle promise problem where the input is a random oracle \(H:\{0,1\}^{m}\to\{0,1\}^{n}\), and a unitary quantum oracle \(\mathcal{C}\) on \(m\) qubits, and the problem is to decide whether_

* \(\mathsf{YES}\)_:_ \(\mathcal{C}\) _is the \(z\)-cloning oracle, \(\mathcal{C}_{z}\), relative to \(H\), for some \(z\in\{0,1\}^{n}\) (see Definition 4.10)_
* \(\mathsf{NO}\)_:_ \(\mathcal{C}\) _acts as the identity on all quantum states_

**Proposition 5.11**.: \(\mathsf{ROHC}\) _is in \(\mathsf{clonableQMA}\) in the black-box model._

Proof.: Let the cloner and verifier required by \(\mathsf{clonableQMA}\) be defined as follows: Let \(C\) be the cloner which very simply passes its input to the cloning oracle, \(\mathcal{C}\), and outputs the result. That is, \(C\) is just a wrapper for the cloning oracle. Let \(V\) be the verifier that does the following: Pass the witness \(|\psi\rangle\) and \(|\bot\rangle\) to \(\mathcal{C}\) and perform a projective measurement on the second register on the subspace spanned by \(|\bot\rangle\) against the subspace spanned by everything else. Accept if the result is not \(|\bot\rangle\). (A minimal numerical sketch of this check appears after the property list below.) We show that \(V\) and \(C\) satisfy the following three properties:

1. Given a YES instance, there is a witness \(|\psi\rangle\) that \(V\) accepts with probability 1.
2. The witness \(|\psi\rangle\) for every such YES instance is cloned by \(C\) with perfect fidelity.
3. Given a NO instance, \(V\) rejects every witness \(|\psi\rangle\) with probability 1.
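The following toy numerical sketch (hypothetical dimensions, our own illustration rather than part of the proof) implements the verifier just described and checks the YES and NO behaviors on a small example.

```python
import numpy as np

d, D = 4, 5
bot = np.zeros(D); bot[d] = 1.0
psi_z = np.zeros(D); psi_z[[0, 1]] = 1 / np.sqrt(2)   # uniform superposition over the preimages of z

# The two possible oracles: the z-cloning oracle C_z (a YES instance), or the identity (a NO instance).
a, b = np.kron(psi_z, bot), np.kron(psi_z, psi_z)
C_yes = np.eye(D * D) - np.outer(a, a) - np.outer(b, b) + np.outer(a, b) + np.outer(b, a)
C_no = np.eye(D * D)

def accept_probability(C, witness):
    """Verifier V: feed (witness, |⊥⟩) to C and accept unless the second register is still |⊥⟩."""
    out = C @ np.kron(witness, bot)
    # Probability that measuring the second register returns |⊥⟩.
    proj_bot = np.kron(np.eye(D), np.outer(bot, bot))
    return 1.0 - out @ (proj_bot @ out)

print(accept_probability(C_yes, psi_z))   # 1.0  (YES instance, honest witness)
print(accept_probability(C_no, psi_z))    # 0.0  (NO instance: any witness is likewise rejected)
```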
All three of the properties above are straightforward: For any YES instance, \(\mathcal{C}=\mathcal{C}_{z}\) for some \(z\in\{0,1\}^{n}\), so let the witness for this instance be \(|\psi_{z}\rangle\). \(C=\mathcal{C}_{z}\) successfully clones \(|\psi_{z}\rangle\). Therefore, given \(|\psi_{z}\rangle\) as a witness, the verifier will not measure \(|\bot\rangle\), and will accept with probability 1. On the other hand, for any NO instance, \(\mathcal{C}\) will be the identity, so the register measured by \(V\) will always be \(|\bot\rangle\) at the end, and \(V\) will therefore always reject.

**Proposition 5.12**.: \(\mathsf{ROHC}\) _is not in \(\mathsf{QCMA}\) in the black-box model._

Proof.: This is a direct consequence of Proposition 4.11. That is, suppose for the sake of contradiction that \(\mathsf{ROHC}\in\mathsf{QCMA}\). \(\mathsf{ROHC}\) then has a \(\mathsf{QCMA}\) verifier, \(V\). \(V\) satisfies the conditions of Proposition 4.11, which means that for every YES instance, \((H,\mathcal{C})\), where \(\mathcal{C}\) is a \(z\)-cloning oracle relative to \(H\) for some \(z\), and for every polynomial length witness string \(w\), the probability that \(V\) accepts this YES instance must be negligibly close to the probability that it accepts the corresponding NO instance in which \(\mathcal{C}\) is replaced with the identity oracle, which is a contradiction.

We now convert the black-box separation into a formal oracle separation between \(\mathsf{clonableQMA}\) and \(\mathsf{QCMA}\), by using standard techniques adapted from [1].

**Theorem 5.13**.: _There exists a quantum oracle \(\mathcal{O}\) relative to which \(\mathsf{clonableQMA}^{\mathcal{O}}\neq\mathsf{QCMA}^{\mathcal{O}}\)._

Proof.: We let \(\mathcal{L}\) be a random unary language, where for each \(n\), we set \(1^{n}\) to be in \(\mathcal{L}\) with probability \(\frac{1}{2}\). We let the quantum oracle \(\mathcal{O}=\{\mathcal{O}_{n}\}_{n}\) be defined such that: for every \(n\) for which \(1^{n}\in\mathcal{L}\), we set \(\mathcal{O}_{n}=(H_{n},\mathcal{C}_{n})\), where \(H_{n}:\{0,1\}^{3n}\to\{0,1\}^{n}\) is a classical function chosen uniformly at random, \(\mathcal{C}_{n}\) is the \(z\)-cloning oracle relative to \(H_{n}\) for a randomly chosen \(z_{n}\in\{0,1\}^{n}\), and the oracles are grouped together into a single quantum oracle; and likewise, for every \(n\) for which \(1^{n}\notin\mathcal{L}\), we set \(\mathcal{O}_{n}=(H_{n},I_{n})\), where \(H_{n}\) is chosen at random the same as before, and \(I_{n}\) is an oracle which acts as the identity on \(n\) qubits. \(\mathcal{L}\) is always in \(\mathsf{clonableQMA}^{\mathcal{O}}\) for all choices of \(\mathcal{L}\) and \(\mathcal{O}\), since as shown in Proposition 5.11, there is a polynomial time verifier that can always solve the \(\mathsf{ROHC}\) problem with a witness that is cloned by its corresponding polynomial time cloner. If \(1^{n}\in\mathcal{L}\), then the verifier given there will accept the witness \(|\psi_{z_{n}}\rangle\), the uniform positive superposition over preimages of \(z_{n}\) in \(H_{n}\), with probability 1, and furthermore, the cloner will clone it with perfect fidelity. If \(1^{n}\notin\mathcal{L}\), then there is no witness that will cause the verifier to accept with non-zero probability. We now show that \(\mathcal{L}\notin\mathsf{QCMA}^{\mathcal{O}}\) with probability 1 over the choice of \(\mathcal{L}\) and \(\mathcal{O}\).
For a fixed \(\mathsf{QCMA}\) machine \(M\), let \(E_{n}(M,\mathcal{L},\mathcal{O})\) be the event that \(M^{\mathcal{O}}\) succeeds at determining whether \(1^{n}\in\mathcal{L}\). That is, either \(1^{n}\in\mathcal{L}\) and there exists a polynomial-length witness \(w\) that \(M^{\mathcal{O}}\) accepts with probability \(\frac{9}{10}\), or \(1^{n}\notin\mathcal{L}\) and \(M^{\mathcal{O}}\) rejects all polynomial-length witnesses with probability \(\frac{9}{10}\). By Proposition 5.12, we have that for sufficiently large \(n\), \(\Pr_{\mathcal{L},\mathcal{O}}\left[E_{n}(M,\mathcal{L},\mathcal{O})\right]\leq \frac{1}{10}\). Since \(\mathcal{O}\) is independent for different input lengths, and since on input \(1^{n}\), \(M^{\mathcal{O}}\) can only query \(\mathcal{O}\) on polynomial length queries, this gives us that for infinitely many \(n\), \[\Pr_{\mathcal{L},\mathcal{O}}\left[E_{n}(M,\mathcal{L},\mathcal{O})\mid E_{1} (M,\mathcal{L},\mathcal{O})\wedge E_{2}(M,\mathcal{L},\mathcal{O})\wedge\dots \wedge E_{n-1}(M,\mathcal{L},\mathcal{O})\right]\leq\frac{1}{10}\] and therefore, \[\Pr_{\mathcal{L},\mathcal{O}}\left[E_{1}(M,\mathcal{L},\mathcal{O})\wedge E_{ 2}(M,\mathcal{L},\mathcal{O})\wedge\dots\right]=0\] Since there is only a countably infinite number of \(\mathsf{QCMA}\) machines as a consequence of the Solovay-Kitaev Theorem [13], we have by union bound that \[\Pr_{\mathcal{L},\mathcal{O}}\left[\exists M:E_{1}(M,\mathcal{L},\mathcal{O}) \wedge E_{2}(M,\mathcal{L},\mathcal{O})\wedge\dots\right]=0\] or in other words, that \(\mathcal{L}\notin\mathsf{QCMA}^{\mathcal{O}}\) with probability \(1\) over the choice of \(\mathcal{L}\) and \(\mathcal{O}\). We can thus fix a language \(\mathcal{L}\) and a quantum oracle \(\mathcal{O}\) such that \(\mathcal{L}\in\mathsf{clonableQMA}^{\mathcal{O}}\), but \(\mathcal{L}\notin\mathsf{QCMA}^{\mathcal{O}}\). We have used the same set of states and oracles which we showed are clonable but untelegraphable to prove an oracle separation between \(\mathsf{clonableQMA}\) and \(\mathsf{QCMA}\). Is this a general pattern? That is, can any set of clonable-untelegraphable states yield a corresponding separation between \(\mathsf{clonableQMA}\) and \(\mathsf{QCMA}\)? We believe so. Note, however, that a special feature of our separation is that besides being clonable but untelegraphable, the states we used as witnesses are efficiently _samplable_, which is crucial for cryptographic applications (see Section 6). Moreover, this separation demonstrates the difficulty of unconditionally proving that any set of states is efficiently clonable but not efficiently telegraphable, as any such proof will likely yield a corresponding complexity class separation. ### clonableQMA in the Standard Model In Section 5.4, we showed that the randomized oracle hidden cloning problem (deciding whether a quantum oracle is the identity or clones some hidden state from among a set) gives a black-box separation between \(\mathsf{QCMA}\) and \(\mathsf{clonableQMA}\). There is a natural way to convert this into a promise problem in the standard model (that is, without reference to oracles). 
**Definition 5.14** (Circuit hidden cloning problem).: _The circuit hidden cloning problem, \(\mathsf{CHC}\), is a promise problem where the input is the description of a \(\mathsf{poly}(n)\)-sized quantum circuit \(\mathcal{C}\) on \(2n\) qubits, and the problem is to decide whether_

* \(\mathsf{YES}\)_:_ \(|\langle\psi|\left\langle\psi\right|\mathcal{C}\left|\psi\right\rangle|0\rangle|^{2}\geq 1-\varepsilon\) _for some \(|\psi\rangle\) orthogonal to \(|0\rangle\)_
* \(\mathsf{NO}\)_:_ \(|\langle\psi|\left\langle\phi\right|\mathcal{C}\left|\psi\right\rangle|\phi\rangle|^{2}\leq\varepsilon\) _for all \(|\psi\rangle\) and \(|\phi\rangle\)_

For sufficiently small \(\varepsilon\), \(\mathsf{CHC}\) is in \(\mathsf{clonableQMA}\) for the same reason that \(\mathsf{ROHC}\) is in \(\mathsf{clonableQMA}\) in the black-box model, with the cloning oracle replaced by the circuit \(\mathcal{C}\). While \(\mathsf{CHC}\) cannot be shown to be hard for \(\mathsf{QCMA}\) machines without implying a series of separations (starting from \(\mathsf{QCMA}\neq\mathsf{clonableQMA}\), and up to \(\mathsf{P}\neq\mathsf{PSPACE}\)), we motivate this by comparison to the corresponding oracle problem, and by analogy to similar complete problems for \(\mathsf{QCMA}\) and \(\mathsf{QMA}\) [23, 24, 25]. We note that any separation between \(\mathsf{clonableQMA}\) and \(\mathsf{QCMA}\) immediately yields back a set of states that is efficiently clonable but neither efficiently reconstructable nor efficiently telegraphable.

**Theorem 5.15**.: _Any decision problem that separates \(\mathsf{clonableQMA}\) from \(\mathsf{QCMA}\) can be converted into a set of states that is efficiently clonable but not efficiently reconstructable or efficiently telegraphable._

Proof.: Suppose that \(\mathsf{clonableQMA}\neq\mathsf{QCMA}\), and let \(\mathcal{L}\) be a decision problem in \(\mathsf{clonableQMA}\setminus\mathsf{QCMA}\). Since \(\mathcal{L}\in\mathsf{clonableQMA}\), there is a polynomial time verifier, \(V\), and a polynomial time cloner, \(\mathcal{C}\), such that for every \(\mathsf{YES}\) instance \(x\), there is a witness \(|\psi_{x}\rangle\) on \(\mathsf{poly}(|x|)\) qubits that is verified by \(V\), and which \(\mathcal{C}\) clones with high fidelity. Now, consider the set of quantum states \(S_{n}=\{(x,|\psi_{x}\rangle):x\in\mathcal{L}_{\mathsf{YES}},|x|=n\}\). By construction, this set of states is efficiently clonable with high fidelity for all \(n\). On the other hand, it is neither efficiently reconstructable nor efficiently telegraphable, as otherwise, \(\mathcal{L}\) would be in \(\mathsf{QCMA}\) by Definition 5.1 and Theorem 5.3.

## 6 Cryptographic Applications

In this section, we describe the cryptographic primitive of a parallelizable but un-exfiltratable key, and we show how to build it from clonable-untelegraphable states and a few other assumptions. For concreteness, we focus on the case of encryption. We consider the following setup: a server has a secret key for a public key encryption scheme, which it uses to decrypt ciphertexts. Unfortunately, the server is compromised by a remote adversary. The adversary would like to exfiltrate the key, so that it can decrypt ciphertexts for itself. We imagine, however, that the server is only able to transmit _classical_ information, and we utilize a quantum secret key to prevent the key from being exfiltrated.
This gives rise to the following definition:

**Definition 6.1** (Non-exfiltratable Encryption).: _A non-exfiltratable public key encryption (nePKE) scheme is a tuple of polynomial-time quantum algorithms \(\mathsf{Gen},\mathsf{Enc},\mathsf{Dec}\) such that:_

* \(\mathsf{Gen}(1^{\lambda})\) _samples a **classical** public key \(\mathsf{pk}\) and **quantum** secret key \(|\mathsf{sk}\rangle\)._
* \(\mathsf{Enc}(\mathsf{pk},m)\) _takes as input the public key and a classical message \(m\in\{0,1\}^{\lambda}\), and outputs a ciphertext that may be classical or quantum, which we denote as \(c\) or \(|c\rangle\), respectively._
* \(\mathsf{Dec}(|\mathsf{sk}\rangle,c)\) _or \(\mathsf{Dec}(|\mathsf{sk}\rangle,|c\rangle)\) takes as input a secret key and a ciphertext, and outputs a classical message \(m^{\prime}\) and a new secret key \(|\mathsf{sk}^{\prime}\rangle\) (the original secret key being consumed since it is a quantum state)._
* **Correctness**.: _For any polynomial \(p(\lambda)\), there is a negligible function \(\epsilon(\lambda)\) such that, for any sequence of messages \(m_{1},m_{2},\cdots,m_{p(\lambda)}\in\{0,1\}^{\lambda}\), the following holds: Let \((\mathsf{pk},|\mathsf{sk}_{0}\rangle)\leftarrow\mathsf{Gen}(1^{\lambda})\) and for each \(i\in[p(\lambda)]\), let \((m^{\prime}_{i},|\mathsf{sk}_{i}\rangle)=\mathsf{Dec}\big{(}|\mathsf{sk}_{i-1}\rangle,\mathsf{Enc}(\mathsf{pk},m_{i})\big{)}\). Then \(\Pr\big{[}m^{\prime}_{i}=m_{i}\;\forall i\in[p(\lambda)]\big{]}\geq 1-\epsilon(\lambda)\)._
* **Non-exfiltration Security**.: _For any pair of quantum polynomial-time interactive algorithms \((\mathsf{Send},\mathsf{Receive})\), there exists a negligible function \(\epsilon(\lambda)\) such that for each \(\lambda\) and any pair of messages \(m_{0},m_{1}\), \(|W_{0}-W_{1}|\leq\epsilon(\lambda)\), where \(W_{b}\) is the probability that \(\mathsf{Receive}\) outputs \(b\) in the following experiment:_
  * _Run \((\mathsf{pk},|\mathsf{sk}\rangle)\leftarrow\mathsf{Gen}(1^{\lambda})\) and give \(|\mathsf{sk}\rangle,\mathsf{pk}\) to \(\mathsf{Send}\)._
  * _\(\mathsf{Send}\) produces a classical string \(u\)._
  * _Compute \(c\leftarrow\mathsf{Enc}(\mathsf{pk},m_{b})\) (resp. \(|c\rangle\) if allowing for quantum ciphertexts)._
  * _Run \(\mathsf{Receive}\) on \((\mathsf{pk},u,c)\) (resp. \((\mathsf{pk},u,|c\rangle)\)) to get a bit \(b^{\prime}\)._

_Above, \(\mathsf{Send}\) plays the role of the compromised server, and \(\mathsf{Receive}\) the role of the remote attacker._

### Parallelizable Construction Using Clonable-Untelegraphable States

We now show that clonable but untelegraphable states, along with appropriate cryptographic building blocks, yield un-exfiltratable encryption in which keys can be copied, giving a construction that facilitates parallelism.

**Efficiently Samplable Clonable Witnesses.** First, we need a strengthening of \(\mathsf{clonableQMA}\neq\mathsf{QCMA}\): we need that there is a decision problem \(\mathcal{L}\in\mathsf{clonableQMA}\backslash\mathsf{QCMA}\) with efficiently samplable hard (instance, witness) pairs. That is, there is an efficient sampling procedure \(\mathcal{S}(1^{\lambda})\) which samples YES instances \(x\) of size \(\lambda\) along with their clonable witnesses \(|\psi_{x}\rangle\), as well as NO instances \(x\) (of course with no matching witness), such that in time polynomial in \(\lambda\), it is infeasible to decide whether \(x\) is a YES or NO instance when given \(x\) and auxiliary classical information.
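As a toy illustration of what such a sampler can look like in the oracle-based setting of Scheme 4.7 (a classical simulation with small, hypothetical parameters, not an actual instantiation; cf. the sampling procedure sketched below Theorem 6.3), one can "measure" the random oracle on a uniform input to obtain an image together with a description of its preimage-superposition witness:

```python
import random
import numpy as np

# Toy parameters (illustration only): H : {0,1}^m -> {0,1}^n with m = 6, n = 3.
m, n = 6, 3
H = [random.randrange(2 ** n) for _ in range(2 ** m)]

def sample_yes_instance():
    """Mimic S(1^λ) on YES instances: evaluate H at a uniform input to obtain an image z,
    together with a classical description of the witness |ψ_z⟩, the uniform positive
    superposition over the preimages of z."""
    x_star = random.randrange(2 ** m)
    z = H[x_star]
    preimages = [x for x in range(2 ** m) if H[x] == z]
    psi_z = np.zeros(2 ** m)
    psi_z[preimages] = 1 / np.sqrt(len(preimages))
    return z, psi_z

z, psi_z = sample_yes_instance()
print(z, np.linalg.norm(psi_z))   # the instance and a unit-norm witness vector
```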
Witness Encryption for \(\mathsf{QMA}\).We will also need the notion of _extractable_ witness encryption for \(\mathsf{QMA}\), which we now define. For a candidate construction, we could use [1]. **Definition 6.2** (Extractable Witness Encryption for \(\mathsf{QMA}\)).: _An extractable witness encryption scheme for a decision problem \(\mathcal{L}\in\mathsf{QMA}\) is a pair of efficient (potentially quantum) algorithms \((\mathsf{Enc},\mathsf{Dec})\) such that:_ * \(\mathsf{Enc}(1^{\lambda},x,m)\) _takes as input the security parameter, a_ \(\mathsf{QMA}\) _statement_ \(x\)_, and a message_ \(m\)_. It outputs a ciphertext that may be classical or quantum, which we denote as_ \(c\) _or_ \(|c\rangle\)_, respectively._ * \(\mathsf{Dec}(x,c,|\psi\rangle)\) _or_ \(\mathsf{Dec}(x,|c\rangle,|\psi\rangle)\) _takes as input the_ \(\mathsf{QMA}\) _statement_ \(x\)_, a classical or quantum ciphertext_ \(c\) _or_ \(|c\rangle\)_, and a purported witness_ \(|\psi\rangle\) _for_ \(x\)_._ * **Correctness.** _Let_ \(R_{\mathcal{L}}(x)\) _be the set of valid witnesses for_ \(x\)_. Then for any_ \(|\psi\rangle\in R_{\mathcal{L}}(x)\)_, we have that_ \(\Pr\left[\mathsf{Dec}\big{(}x,\mathsf{Enc}(1^{\lambda},x,m),|\psi\rangle \big{)}=m\right]\geq 1-\mathsf{negl}(\lambda)\)_._ * **Extractability security.** _Consider an efficiently sampleable distribution_ \(\mathcal{D}(1^{\lambda})\) _over instances_ \(x\) _and (potentially quantum) auxiliary information_ \(\mathsf{aux}\)_. We say that_ \(\mathcal{D}\) _is_ hard _(as in, it is hard to extract a witness from_ \(\mathsf{aux}\)_) if, for all quantum polynomial-time adversaries_ \(E\)_,_ \[\Pr_{(x,\mathsf{aux})\leftarrow\mathcal{D}}[E(x,\mathsf{aux})\in R_{ \mathcal{L}}(x)]\leq\mathsf{negl}(\lambda)\] _We then say that_ \((\mathsf{Enc},\mathsf{Dec})\) _is extractable if, for every quantum polynomial-time adversary_ \(A\)_, every hard efficiently sampleable distribution_ \(\mathcal{D}\)_, and every pair of messages_ \(m_{0},m_{1}\)_,_ \[\left|\Pr_{(x,\mathsf{aux})\leftarrow\mathcal{D}}\left[A(x,\mathsf{aux}, \mathsf{Enc}(1^{\lambda},x,m_{0}))=1\right]-\Pr_{(x,\mathsf{aux})\leftarrow \mathcal{D}}\left[A(x,\mathsf{aux},\mathsf{Enc}(1^{\lambda},x,m_{1}))=1 \right]\right|\leq\mathsf{negl}(\lambda)\] **Theorem 6.3**.: _Assuming both of the following, there exists un-exfiltratable encryption with clonable secret keys:_ 1. _A pair of efficiently samplable distributions,_ \(\mathcal{D}_{\mathsf{YES}}\) _and_ \(\mathcal{D}_{\mathsf{NO}}\)_, over_ \(\mathsf{YES}\) _instance-witness pairs and_ \(\mathsf{NO}\) _instances, respectively, of a problem_ \(\mathcal{L}\in\mathsf{clonableQMA}\) _such that the average-case problem_ \((\mathcal{L},\frac{1}{2}\mathcal{D}_{\mathsf{YES}}+\frac{1}{2}\mathcal{D}_{ \mathsf{NO}})\) _is hard for_ \(\mathsf{QCMA}\)_._ 2. _Extractable witness encryption for_ \(\mathsf{QMA}\)__ _Here, \(\frac{1}{2}\mathcal{D}_{\mathsf{YES}}+\frac{1}{2}\mathcal{D}_{\mathsf{NO}}\) is the distribution on instances that takes instances from \(\mathcal{D}_{\mathsf{YES}}\) and \(\mathcal{D}_{\mathsf{NO}}\) with equal probability._ Note that the condition of being efficiently samplable does not contradict the hardness of the decision problem or the unreconstructability of the witnesses, as this only allows us to sample a witness for a random instance, and not for a specific one. 
Indeed, relative to an oracle, Scheme 4.7 satisfies the necessary requirements.9 Footnote 9: We in fact need a slight modification to Scheme 4.7 to make it a decision problem, by having the cloning oracle only clone half the valid states, whose images in the random oracle then become the YES instances. We can sample random instances (along with their potential witnesses) by measuring the output of the random oracle on a uniform superposition. From there it is straightforward to check if the instance sampled is a YES instance or a NO instance. However, given just the instance, it is hard to produce a witness or even to tell if it is a YES instance or a NO instance (Propositions 4.11 and 4.13). The construction is as follows: Proof.: We are given a decision problem \(\mathcal{L}\in\mathsf{clonableQMA}\), a \(\mathsf{clonableQMA}(c,f,s)\) verifier \(V\) and cloner \(C\) for \(\mathcal{L}\), and polynomial-time instance samplers \(\mathcal{S}_{\mathsf{YES}}\) and \(\mathcal{S}_{\mathsf{NO}}\) for YES instance-witness pairs and \(\mathsf{NO}\) instances, respectively for \(\mathcal{L}\). We also use an extractable witness encryption scheme \((\mathsf{Enc}^{\prime},\mathsf{Dec}^{\prime})\) for \(\mathcal{L}\). Our un-exfiltratable scheme \((\mathsf{Gen},\mathsf{Enc},\mathsf{Dec})\) is defined as follows: * \(\mathsf{Gen}(1^{\lambda})\) runs \((x,|\psi\rangle)\leftarrow\mathcal{S}_{\mathsf{YES}}(1^{\lambda})\) and outputs \(\mathsf{pk}=x\) as the public key and \(|\mathsf{sk}\rangle=|\psi\rangle\) as the secret key. * \(\mathsf{Enc}(\mathsf{pk},m)\) runs and outputs the result of \(\mathsf{Enc}^{\prime}(1^{\lambda},\mathsf{pk},m)\) * \(\mathsf{Dec}(|\mathsf{sk}\rangle,c)\) runs and outputs the result of \(\mathsf{Dec}^{\prime}(x,c,|\mathsf{sk}\rangle)\). Correctness follows immediately from the correctness of \((\mathsf{Enc}^{\prime},\mathsf{Dec}^{\prime})\), and the secret keys \(|\mathsf{sk}\rangle\) are clonable by the cloner \(C\) for \(\mathcal{L}\). It remains to prove security. Consider an adversary \((\mathsf{Send},\mathsf{Receive})\) breaking the un-exfiltratability of \((\mathsf{Gen},\mathsf{Enc},\mathsf{Dec})\). This means there is a non-negligible \(\epsilon(\lambda)\) and a pair of messages \(m_{0},m_{1}\) such that \(|W_{0}-W_{1}|\geq\epsilon(\lambda)\) where \(W_{0},W_{1}\) are the quantities in Definition 6.1. We construct a distribution \(\mathcal{D}^{\prime}\) and algorithm \(A^{\prime}\) which attacks \((\mathsf{Enc}^{\prime},\mathsf{Dec}^{\prime})\): * \(\mathcal{D}^{\prime}(1^{\lambda})\) runs \((x,|\psi\rangle)\leftarrow\mathcal{S}_{\mathsf{YES}}(1^{\lambda})\), and then runs \(u\leftarrow\mathsf{Send}(x,|\psi\rangle)\), where \(u\) is a classical string. It outputs \((x,\mathsf{aux}=u)\). * \(A^{\prime}(x,\mathsf{aux},c)\) runs \(\mathsf{Receive}(x,\mathsf{aux},c)\) and outputs whatever \(\mathsf{Receive}\) outputs. By construction, we have that \[\left|\Pr_{(x,\mathsf{aux})\leftarrow\mathcal{D}^{\prime}}\left[A^{\prime}(x,\mathsf{aux},\mathsf{Enc}^{\prime}(1^{\lambda},x,m_{0}))=1\right]-\Pr_{(x,\mathsf{aux})\leftarrow\mathcal{D}^{\prime}}\left[A^{\prime}(x,\mathsf{aux},\mathsf{Enc}^{\prime}(1^{\lambda},x,m_{1}))=1\right]\right|\geq\epsilon(\lambda)\] Thus, by the extractability security of \((\mathsf{Enc}^{\prime},\mathsf{Dec}^{\prime})\), we have that \(\mathcal{D}^{\prime}\) must not be hard.
This means there exists a quantum polynomial-time extractor \(E\) and non-negligible \(\delta(\lambda)\) such that \(\Pr_{(x,\mathsf{aux})\leftarrow\mathcal{D}^{\prime}}\big{[}\Pr[V(x,E(x,\mathsf{aux}))=1]\geq c\big{]}\geq\delta(\lambda)\), where \(V\) is the clonableQMA verifier for \(\mathcal{L}\) and \(c\) is the completeness parameter. That is, with probability at least \(\delta(\lambda)\), \(E\) extracts a valid quantum witness for \(V\). We then use \(E\) and \(V\) to construct an average-case QCMA verifier \(V^{\prime}\) for the distribution of instances coming from the equal mixture of \(\mathcal{S}_{\mathsf{YES}}\) and \(\mathcal{S}_{\mathsf{NO}}\). With instance \(x\) and witness \(u\), \(V^{\prime}(x,u)\) is defined as \(V^{\prime}(x,u):=V(x,E(x,u))\). We then have that over the instance distribution of \(\mathcal{S}_{\mathsf{YES}}\), the \(u\) outputted by \(\mathsf{Send}(x,|\psi\rangle)\) is, with non-negligible probability at least \(\delta(\lambda)\), a witness for \(x\) relative to \(V^{\prime}\); in particular the witness exists. We also have that for instances sampled from \(\mathcal{S}_{\mathsf{NO}}\), there is no quantum witness that \(V\) accepts with probability greater than \(s\), and there is therefore no classical witness \(u\) that \(V^{\prime}\) accepts with probability greater than \(s\). We have therefore that \(V^{\prime}\) satisfies the conditions of QCMA for \((\mathcal{L},\mathcal{S})\) with probability \(\frac{1}{2}+\frac{1}{2}\delta(\lambda)\) over the distribution \(\mathcal{S}\) on instances induced by sampling equally from \(\mathcal{S}_{\mathsf{YES}}\) and \(\mathcal{S}_{\mathsf{NO}}\), contradicting the condition that \((\mathcal{L},\mathcal{S})\) is hard for QCMA. We then have that, as claimed, \((\mathsf{Gen},\mathsf{Enc},\mathsf{Dec})\) is a secure un-exfiltratable encryption with clonable quantum secret keys.
2305.03030
Decentralized and Compositional Interconnection Topology Synthesis for Linear Networked Systems
In this paper, we consider networked systems comprised of interconnected sets of linear subsystems and propose a decentralized and compositional approach to stabilize or dissipativate such linear networked systems via optimally modifying some existing interconnections and/or creating entirely new interconnections. We also extend this interconnection topology synthesis approach to ensure the ability to stabilize or dissipativate such linear networked systems under distributed (local) feedback control. To the best of the authors' knowledge, this is the first work that attempts to address the optimal interconnection topology synthesis problem for linear networked systems. The proposed approach in this paper only involves solving a sequence of linear matrix inequality problems (one at each subsystem). Thus, using standard convex optimization toolboxes, it can be implemented efficiently and scalably in a decentralized and compositional manner. Apart from many generic linear networked systems applications (e.g., power grid control), a unique application for the proposed interconnection topology synthesis approach is in generating random stable (or dissipative, stabilizable, dissipativate-able) linear networked systems for simulation purposes. We also include an interesting case study where the proposed interconnection topology synthesis approach is compared with an alternative approach that only uses dissipativity information of the involved subsystems.
Shirantha Welikala, Hai Lin, Panos J. Antsaklis
2023-05-04T17:53:35Z
http://arxiv.org/abs/2305.03030v1
# Decentralized and Compositional Interconnection Topology Synthesis for Linear Networked Systems ###### Abstract In this paper, we consider networked systems comprised of interconnected sets of linear subsystems and propose a decentralized and compositional approach to stabilize or dissipativate such linear networked systems via optimally modifying some existing interconnections and/or creating entirely new interconnections. We also extend this interconnection topology synthesis approach to ensure the ability to stabilize or dissipativate such linear networked systems under distributed (local) feedback control. To the best of the authors' knowledge, this is the first work that attempts to address the optimal interconnection topology synthesis problem for linear networked systems. The proposed approach in this paper only involves solving a sequence of linear matrix inequality problems (one at each subsystem). Thus, using standard convex optimization toolboxes, it can be implemented efficiently and scalably in a decentralized and compositional manner. Apart from many generic linear networked systems applications (e.g., power grid control), a unique application for the proposed interconnection topology synthesis approach is in generating random stable (or dissipative, stabilizable, dissipative-able) linear networked systems for simulation purposes. We also include an interesting case study where the proposed interconnection topology synthesis approach is compared with an alternative approach that only uses dissipativity information of the involved subsystems. ## I Introduction In recent years, attention towards analysis, controller synthesis, topology synthesis as well as optimization of large-scale networked systems (comprised of dynamically coupled subsystems) has been renewed due to their various emerging applications (e.g., in critical infrastructure networks like supply chains [1], power grids [2], etc.) and the unique challenges they confront (e.g., resilience [3], security [4], etc.). For such networked systems, a large number of distributed control solutions have been proposed in the literature that can not only stabilize but also optimize some performance metrics of interest [5] during their operation. However, almost all such distributed control solutions are synthesized by a centralized design process which raises concerns related to their security, scalability, and compositionality [6]. Over the years, there have been several attempts to address this decentralized controller synthesis problem exploiting weak couplings [7], hierarchical techniques [8], and decomposition techniques [9]. In particular, the work in [9] proposes a natural and efficient decomposition technique for analysis and synthesis of distributed controllers inspired by Sylvester's criterion [10]. Motivated by the attractive qualities of this Sylvester's criterion based decomposition approach [9], our recent work in [6] (and its extension [11]) generalized it so that many fundamental linear control solutions (e.g., dissipativity analysis, linear observer design, etc.) can be implemented in a decentralized as well as compositional manner over large-scale linear networked systems. Nevertheless, a major challenge faced by this approach (as well as many other control solutions proposed for large-scale networked systems) is the incompatibility between the considered networked system and the proposed solution.
Such an incompatibility may be due to the inherent weaknesses in the networked system and/or in the proposed solution. For example, a networked system may not yield a conclusive (and desired) result under a particular analysis technique. Similarly, a networked system may not be capable of yielding desired properties under a particular class of controllers. To address this incompatibility issue, we can either change the networked system to match the proposed solution (e.g., see [12]), or improve/specialize the proposed solution so as to handle the considered networked system (e.g., see [6]). While in many scenarios it is natural and practical (and even advisable) to take the latter approach, in some instances, the former approach is also a valid and sensible option to take. Most importantly, developing techniques to systematically change the networked systems can lead to insightful findings. For example, assuming the proposed control solution is sufficiently rich, we might be able to answer questions like: What kinds of network topologies are more robust to disturbances? What are the most critical interconnections in the networked system? What is the most cost-efficient network topology? In this paper, we set out to solve the said incompatibility issue faced by the Sylvester's criterion based decentralized and compositional approach proposed in [6] (intended for analysis and distributed controller synthesis of large-scale linear networked systems). To this end, we propose to change the networked system so that it matches the approach proposed in [6]. In particular, in the considered networked system, we treat some inter-subsystem interconnections (if not all) as design variables and explore the possibility to: (1) change those variable interconnections from their nominal values, (2) create entirely new interconnections, and/or (3) remove existing interconnections, such that the proposed approach in [6] can yield conclusive as well as desired results. In essence, this can be seen as an effort to synthesize the interconnection topology for linear networked systems. In fact, there have been only a few attempts at designing interconnection topologies for networked systems. For example, the work in [13] considers designing a network topology to make the communications optimally efficient for a continuous-time average consensus protocol. The proposed solution in [13] takes the form of a mixed integer semidefinite program - which does not scale well. The interconnection matrix synthesis problem is considered in [14], but only for linear and positive networked systems. Several other interconnection matrix synthesis techniques such as the ones proposed in [15, 16] and [17] have been reviewed in our recent work [12] (see also its extension [18]). In particular, the work in [12] proposes an interconnection matrix synthesis technique for non-linear networked systems using only the subsystem dissipativity properties (i.e., without using the complete knowledge of the non-linear subsystem dynamics). However, in this paper, we limit ourselves to linear networked systems and use the complete knowledge of the linear subsystem dynamics for interconnection topology synthesis. Nevertheless, as we will show in this paper (particularly in our case study), there is a clear advantage to using such additional information about the networked system as compared to [12].
#### I-B1 **Contributions** Our contributions can be summarized as follows: (1) We take a control-theoretic approach to formulate several interconnection topology synthesis problems arising in linear networked systems as LMI problems; (2) Since the proposed interconnection topology synthesis approach is inspired by [6], it is inherently decentralized and compositional; (3) Moreover, it can be used in scenarios where the analysis and controller synthesis approaches proposed in [6] return inconclusive results; (4) We provide candidate local objective functions that can be used to penalize deviations from a nominal interconnection topology; (5) The proposed interconnection topology synthesis approach can be used to generate random linear networked systems with certain qualities (e.g., stabilizability) - which is helpful when designing, testing, and validating control strategies developed for networked systems; (6) Similar to [6], the proposed approach can be extended to address a wide range of problems arising in linear networked systems based on fundamental linear systems theory (e.g., optimal topology synthesis to ensure observability); (7) We provide a detailed case study comparing the interconnection topology synthesis approaches proposed in this paper and in our recent work [12]. #### I-B2 **Organization** This paper is organized as follows. Section II presents the details of the considered class of networked systems and motivates the need for interconnection topology synthesis. Section III summarizes several important preliminary concepts. Our main theoretical results that address several different interconnection topology synthesis problems of interest are presented in Sec. IV along with several important remarks. A case study with fundamental details, numerical results, discussions, and comparisons is provided in Sec. V before concluding the paper in Sec. VI. #### I-B3 **Notation** The sets of real and natural numbers are denoted by \(\mathbb{R}\) and \(\mathbb{N}\), respectively. We define \(\mathbb{N}_{N}\triangleq\{1,2,\ldots,N\}\) where \(N\in\mathbb{N}\). An \(n\times m\) block matrix \(A\) can be represented as \(A=[A_{ij}]_{i\in\mathbb{N}_{n},j\in\mathbb{N}_{m}}\) where \(A_{ij}\) is the \((i,j)^{\text{th}}\) block of \(A\) (for indexing purposes, either subscripts or superscripts may be used, i.e., \(A_{ij}\equiv A^{ij}\)). If \(\Psi\triangleq[\Psi^{kl}]_{k,l\in\mathbb{N}_{m}}\) where \(\Psi^{kl}\triangleq[\Psi^{kl}_{ij}]_{i,j\in\mathbb{N}_{n}}\), its block element-wise form [6] is denoted as \(\text{BEW}(\Psi)\triangleq[[\Psi^{kl}_{ij}]_{k,l\in\mathbb{N}_{m}}]_{i,j\in\mathbb{N}_{n}}\). The transpose of a matrix \(A\) is denoted by \(A^{\top}\) and \((A^{\top})^{-1}=A^{-\top}\). The zero and identity matrices are denoted by \(\mathbf{0}\) and \(\mathbf{I}\), respectively (dimensions will be clear from the context). A symmetric positive definite (semi-definite) matrix \(A\in\mathbb{R}^{n\times n}\) is represented as \(A=A^{\top}>0\) (\(A=A^{\top}\geq 0\)). Unless stated otherwise, we assume \(A>0\iff A=A^{\top}>0\) (i.e., symmetry is implied by the positive definiteness). \(\mathbf{1}_{\{\cdot\}}\) is the indicator function and \(e_{ij}\triangleq\mathbf{I}\cdot\mathbf{1}_{\{i=j\}}\).
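Since the block element-wise form \(\text{BEW}(\cdot)\) is used repeatedly in the sequel, a minimal numerical sketch of this rearrangement is given below. The NumPy-based data layout (an \(m\times m\) nested list of arrays, each holding \(n\times n\) blocks of a uniform size \(d\)), the function name, and the toy example are illustrative assumptions only, not part of the original formulation.

```python
import numpy as np

def bew(Psi_blocks):
    """Rearrange an m x m block-block matrix into its block element-wise (BEW) form.

    Psi_blocks[k][l] stores the n x n block matrix Psi^{kl} as an (n, n, d, d)
    NumPy array of d x d blocks Psi^{kl}_{ij} (a uniform block size d is assumed
    purely for simplicity). The returned dense matrix has outer (i, j) blocks
    equal to [Psi^{kl}_{ij}]_{k,l}, matching the BEW definition above.
    """
    m = len(Psi_blocks)
    n, _, d, _ = Psi_blocks[0][0].shape
    out = np.zeros((n * m * d, n * m * d))
    for i in range(n):
        for j in range(n):
            for k in range(m):
                for l in range(m):
                    r, c = (i * m + k) * d, (j * m + l) * d
                    out[r:r + d, c:c + d] = Psi_blocks[k][l][i, j]
    return out

# Tiny example: m = 2 outer blocks, n = 2 subsystems, scalar (d = 1) inner blocks.
rng = np.random.default_rng(0)
Psi = [[rng.standard_normal((2, 2, 1, 1)) for _ in range(2)] for _ in range(2)]
print(bew(Psi).shape)  # (4, 4)
```

For non-uniform block sizes, the same index regrouping applies with per-subsystem block offsets in place of the uniform stride \(d\).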
## II Problem Formulation ### _The Networked System_ We consider a networked dynamical system \(\mathcal{G}_{N}\) comprised of \(N\) interconnected subsystems denoted by \(\{\Sigma_{i}:i\in\mathbb{N}_{N}\}\). The dynamics of the \(i^{\text{th}}\) subsystem \(\Sigma_{i},i\in\mathbb{N}_{N}\) are given by \[\dot{x}_{i}(t)= \sum_{j\in\bar{\mathcal{E}}_{i}}A_{ij}x_{j}(t)+\sum_{j\in\bar{\mathcal{E}}_{i}}B_{ij}u_{j}(t)+\sum_{j\in\bar{\mathcal{E}}_{i}}E_{ij}w_{j}(t), \tag{1}\] \[y_{i}(t)= \sum_{j\in\bar{\mathcal{E}}_{i}}C_{ij}x_{j}(t)+\sum_{j\in\bar{\mathcal{E}}_{i}}D_{ij}u_{j}(t)+\sum_{j\in\bar{\mathcal{E}}_{i}}F_{ij}w_{j}(t),\] where \(x_{i}(t)\in\mathbb{R}^{n_{i}},\ u_{i}(t)\in\mathbb{R}^{p_{i}},\ w_{i}(t)\in\mathbb{R}^{q_{i}}\) and \(y_{i}(t)\in\mathbb{R}^{m_{i}}\) respectively represent the state, input, disturbance and output of the subsystem \(\Sigma_{i}\) at time \(t\in\mathbb{R}_{\geq 0}\). In (1), \(\bar{\mathcal{E}}_{i}\triangleq\mathcal{E}_{i}\cup\{i\}\) where \(\mathcal{E}_{i}\subset\mathbb{N}_{N}\) is the set of "in-neighbors" of the subsystem \(\Sigma_{i}\). Formally, any subsystem \(\Sigma_{j}\) is an "in-neighbor" of the subsystem \(\Sigma_{i}\) (i.e., \(j\in\mathcal{E}_{i}\)) iff the matrices \(A_{ij},B_{ij},C_{ij},D_{ij},E_{ij},F_{ij}\) in (1) are not all zero matrices. Conversely, \(\bar{\mathcal{F}}_{i}\triangleq\mathcal{F}_{i}\cup\{i\}\) where \(\mathcal{F}_{i}\triangleq\{j:j\in\mathbb{N}_{N},\mathcal{E}_{j}\ni i\}\) is the set of "out-neighbors" of the subsystem \(\Sigma_{i}\). An example networked system can be seen in Fig. 1. By writing (1) for all \(i\in\mathbb{N}_{N}\) and concatenating suitably, we can get the dynamics of the networked system \(\mathcal{G}_{N}\) as \[\dot{x}(t) =Ax(t)+Bu(t)+Ew(t), \tag{2}\] \[y(t) =Cx(t)+Du(t)+Fw(t),\] where \(A=[A_{ij}]_{i,j\in\mathbb{N}_{N}}\), \(B=[B_{ij}]_{i,j\in\mathbb{N}_{N}}\), \(E=[E_{ij}]_{i,j\in\mathbb{N}_{N}}\), \(C=[C_{ij}]_{i,j\in\mathbb{N}_{N}}\), \(D=[D_{ij}]_{i,j\in\mathbb{N}_{N}}\) and \(F=[F_{ij}]_{i,j\in\mathbb{N}_{N}}\) are all \(N\times N\) block matrices, and \(x(t)\in\mathbb{R}^{n},\ u(t)\in\mathbb{R}^{p},w(t)\in\mathbb{R}^{q}\) and \(y(t)\in\mathbb{R}^{m}\) (with \(n=\sum_{i\in\mathbb{N}_{N}}n_{i}\), \(p=\sum_{i\in\mathbb{N}_{N}}p_{i}\), \(q=\sum_{i\in\mathbb{N}_{N}}q_{i}\) and \(m=\sum_{i\in\mathbb{N}_{N}}m_{i}\)) are all \(N\times 1\) block matrices respectively representing the networked system's state, input, disturbance and output at time \(t\in\mathbb{R}_{\geq 0}\). Fig. 1: An example networked dynamical system \(\mathcal{G}_{N}\). ### _Distributed Controllers_ To enforce desired properties (e.g., stability) upon the networked system, a subsystem \(\Sigma_{i},i\in\mathbb{N}_{N}\) can use a distributed state feedback controller: \[u_{i}(t)=\sum_{j\in\mathcal{E}_{i}}K_{ij}x_{j}(t). \tag{3}\] By writing (3) for all \(i\in\mathbb{N}_{N}\) and concatenating appropriately, we get the global form of the distributed feedback controller as \[u(t)=Kx(t), \tag{4}\] where \(K=[K_{ij}]_{i,j\in\mathbb{N}_{N}}\). Note that the unspecified blocks in various block matrices in both (2) and (4) are zero matrices (e.g., \(A_{ij}=0,\forall j\not\in\mathcal{E}_{i}\)).
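To fix ideas, the following minimal sketch assembles the global state matrix \(A\) in (2) from the subsystem blocks \(A_{ij}\) in (1). The dictionary-per-subsystem data layout, the function name, and the example chain topology are illustrative assumptions only; omitted blocks are treated as the zero matrices mentioned above.

```python
import numpy as np

def assemble_global_matrix(blocks, dims):
    """Assemble a global block matrix (e.g., A in (2)) from subsystem blocks.

    blocks[i] is a dict mapping j -> the (n_i x n_j) block for j in the closed
    in-neighborhood of subsystem i; omitted blocks correspond to A_ij = 0 for
    j not in E_i. This data layout is an illustrative assumption.
    """
    off = np.concatenate(([0], np.cumsum(dims)))
    A = np.zeros((off[-1], off[-1]))
    for i, row in enumerate(blocks):
        for j, Aij in row.items():
            A[off[i]:off[i + 1], off[j]:off[j + 1]] = Aij
    return A

# Example: N = 3 subsystems with n_i = 2 and a simple chain 1 <- 2 <- 3.
rng = np.random.default_rng(0)
dims = [2, 2, 2]
A_blocks = [
    {0: rng.standard_normal((2, 2))},                      # Sigma_1: intrinsic block only
    {1: rng.standard_normal((2, 2)), 0: 0.1 * np.eye(2)},  # Sigma_2: coupled to Sigma_1
    {2: rng.standard_normal((2, 2)), 1: 0.1 * np.eye(2)},  # Sigma_3: coupled to Sigma_2
]
A = assemble_global_matrix(A_blocks, dims)  # the N x N block matrix A in (2)
```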
### _Interconnection Topology Synthesis_ Even though state feedback control is a reasonable approach to enforce desired properties (e.g., stability) upon the networked system, it may not be useful in two scenarios: (1) when the networked system inherently involves no control inputs (i.e., when \(B=D=\mathbf{0}\) in (2)), or (2) when the networked system is inherently incapable of achieving the desired properties under state feedback control (e.g., if (2) is not stabilizable when the desired property is stability). To address these inherent weaknesses of the networked system, in this paper, we propose to optimally adjust the interconnection parameters of the networked system (mainly \(A_{ij}\) blocks with \(i\neq j\) in (1)). Hence this approach can be seen as an attempt to synthesize the interconnection topology of the networked system. Note also that, for the purposes of analysis and controller synthesis of the networked system (2), we can use the decentralized and compositional technique proposed in [6]. However, due to the assumptions used in [6], this decentralized and compositional technique can return inconclusive (when analyzing) or infeasible (when synthesizing controllers) [11]. Nevertheless, as we will show in the sequel, this technical weakness can also be addressed by optimally adjusting the interconnection parameters of the networked system. ## III Preliminaries ### _Stability and Dissipativity_ Since our main goal is to synthesize the interconnection topology of the linear networked system (2) so as to enforce properties like stability or dissipativity (either without or with distributed feedback control (3)), we next briefly introduce some relevant stability and dissipativity results. Consider the linear time-invariant (LTI) system \[\begin{split}\dot{x}(t)&=Ax(t)+Bu(t),\\ y(t)&=Cx(t)+Du(t),\end{split} \tag{5}\] where \(x(t)\in\mathbb{R}^{n},u(t)\in\mathbb{R}^{p}\), and \(y(t)\in\mathbb{R}^{m}\) respectively represent the state, control input, and output at time \(t\in\mathbb{R}_{\geq 0}\). Stability: A well-known necessary and sufficient condition for the stability of (5) is given in the following lemma as a linear matrix inequality (LMI). **Lemma 1**: _[_10_]_ _The dynamical system (5) (under \(u(t)=\mathbf{0}\)) is globally uniformly (exponentially) stable iff \(\exists P>0\) such that_ \[-A^{\top}P-PA\geq 0\qquad(-A^{\top}P-PA>0). \tag{6}\] Note that, henceforth, by 'stability,' we simply refer to global exponential stability. \((Q,S,R)\)-Dissipativity: In general, dissipativity is an important property of dynamical systems that has many practical uses [19]. In this paper, we consider the quadratic dissipativity property called \((Q,S,R)\)-dissipativity. **Definition 1**: _[_20_]_ _The dynamical system (5) is \((Q,S,R)\)-dissipative from \(u(t)\) to \(y(t)\), if there exists a positive definite function \(V(x):\mathbb{R}^{n}\rightarrow\mathbb{R}_{\geq 0}\) (storage function) such that for all \(t_{1}\geq t_{0}\geq 0,x(t_{0})\in\mathbb{R}^{n}\) and \(u(t)\in\mathbb{R}^{p}\), the inequality \(V(x(t_{1}))-V(x(t_{0}))\ \leq\ \int_{t_{0}}^{t_{1}}\begin{bmatrix}y(t)\\ u(t)\end{bmatrix}^{\top}\begin{bmatrix}Q&S\\ S^{\top}&R\end{bmatrix}\begin{bmatrix}y(t)\\ u(t)\end{bmatrix}dt\) holds for the given \(Q\in\mathbb{R}^{m\times m},S\in\mathbb{R}^{m\times p}\) and \(R\in\mathbb{R}^{p\times p}\)._ Through appropriate choices of \(Q,S\) and \(R\) matrices, \((Q,S,R)\)-dissipativity can capture several dynamical properties of interest, as summarized in the following remark.
**Remark 1**: _[_20_]_ _The dynamical system (5) satisfying Def. 1: (i) is passive iff \(Q=0,S=\frac{1}{2}\mathbf{I},R=0\); (ii) is strictly passive iff \(Q=-\rho\mathbf{I},S=\frac{1}{2}\mathbf{I},R=-\nu\mathbf{I}\) where \(\rho,\nu>0\) (\(\nu\), \(\rho\) are passivity indices [11]); (iii) is \(\mathcal{L}_{2}\)-stable iff \(Q=-\mathbf{I},S=0,R=-\gamma^{2}\mathbf{I}\) where \(\gamma\geq 0\) (\(\gamma\) is an \(\mathcal{L}_{2}\)-gain of the system); (iv) is sector bounded iff \(Q=-\mathbf{I},S=(a+b)\mathbf{I},R=-ab\mathbf{I}\) where \(a,b\in\mathbb{R}\) (\(a,b\) are sector bound parameters)._ A necessary and sufficient condition for \((Q,S,R)\)-dissipativity of (5) is given in the next lemma as an LMI. **Lemma 2**: _[_11_]_ _The dynamical system (5) is \((Q,S,R)\)-dissipative (\(-Q>0,R=R^{\top}\)) from \(u(t)\) to \(y(t)\) iff \(\exists P>0\) such that_ \[\begin{bmatrix}-A^{\top}P-PA&-PB+C^{\top}S&C^{\top}\\ -B^{\top}P+S^{\top}C&D^{\top}S+S^{\top}D+R&D^{\top}\\ C&D&-Q^{-1}\end{bmatrix}\geq 0. \tag{7}\] Note that LMIs in (6) and (7) are "linear" as they contain linear terms in the corresponding design variable \(P\). As shown in [21], LMIs can be solved efficiently and scalably using standard convex optimization algorithms. ### _Interconnection Topology Synthesis_ Due to the similarity between (5) and (2), LMIs (6) and (7) can respectively be used for stability and dissipativity analysis of the networked system (2). Note, however, that, in the LMIs (6) and (7), we cannot treat the matrix \(A\) (particularly its non-diagonal elements \(A_{ij}\) with \(i\neq j\)) as an independent design variable separately from \(P\). This is because it makes (6) and (7) bi-linear matrix inequalities - which are non-linear and significantly harder to solve compared to the corresponding LMIs. Therefore, synthesizing certain elements of \(A\) (i.e., the interconnection parameters of the networked system) such that stability or dissipativity holds for the networked system (2) is a non-trivial and challenging problem. Similarly, synthesizing certain interconnection parameters of the networked system such that stabilizability or dissipative-ability holds for the networked system (2) under state feedback control (3) is also a non-trivial and challenging problem. We address these challenges by taking a decentralized and compositional approach. In particular, to analyze or enforce (via state feedback control) desired properties like stability or dissipativity of the networked system (2), compared to solving large centralized LMIs like (6) and (7), we propose to solve their small decentralized and compositional versions proposed in [6]. This approach allows us to sequentially synthesize the interconnection parameters of the networked system (2) (i.e., step-by-step). In particular, at each step, we add a new subsystem to the current network and solve a small LMI problem where some interconnection parameters related to the new subsystem are treated as design variables while all other interconnection parameters are treated as fixed. Before providing more details about this approach, we first need to outline the decentralized and compositional approach proposed in [6] that can be used to analyze/enforce centralized LMIs exploiting a concept named "network matrices." 
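Before doing so, it is worth noting that the centralized conditions of Lemmas 1 and 2 can themselves be checked numerically with any off-the-shelf SDP solver. The sketch below (assuming the CVXPY package, with the strict inequality in (6) approximated by a small margin) illustrates the centralized stability test only; it is not the decentralized procedure developed next.

```python
import numpy as np
import cvxpy as cp

def is_stable_lmi(A, margin=1e-6):
    """Search for P > 0 with -A^T P - P A > 0, i.e., the Lyapunov LMI of Lemma 1.

    Strict inequalities are approximated by a small positive margin, a standard
    numerical device when passing strict LMIs to an SDP solver.
    """
    n = A.shape[0]
    P = cp.Variable((n, n), symmetric=True)
    constraints = [P >> margin * np.eye(n),
                   -(A.T @ P + P @ A) >> margin * np.eye(n)]
    prob = cp.Problem(cp.Minimize(0), constraints)
    prob.solve()
    return prob.status in (cp.OPTIMAL, cp.OPTIMAL_INACCURATE)

A = np.array([[-1.0, 2.0],
              [0.0, -3.0]])
print(is_stable_lmi(A))   # True: A is Hurwitz, so a certificate P exists
```

The dissipativity LMI (7) can be checked in the same way by replacing the second constraint with the corresponding \(3\times 3\) block matrix built with `cp.bmat`.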
### _Network Matrices_ Consider a directed network \(\mathcal{G}_{n}=(\mathcal{V},\mathcal{E})\) where \(\mathcal{V}\triangleq\{\Sigma_{i}:i\in\mathbb{N}_{n}\}\) is the set of subsystems (nodes), \(\mathcal{E}\subset\mathcal{V}\times\mathcal{V}\) is the set of inter-subsystem interconnections (edges) and \(n\in\mathbb{N}\). We next recall a class of matrices named "network matrices" introduced in [6] corresponding to such a network \(\mathcal{G}_{n}\). **Definition 2**: _[_6_]_ _Given a network \(\mathcal{G}_{n}=(\mathcal{V},\mathcal{E})\), any \(n\times n\) block matrix \(\Theta=\left[\Theta_{ij}\right]_{i,j\in\mathbb{N}_{n}}\) is a corresponding network matrix if: (1) \(\Theta_{ij}\) contains information specific only to the subsystems \(\Sigma_{i}\) and \(\Sigma_{j}\), and (2) \((\Sigma_{i},\Sigma_{j})\not\in\mathcal{E}\) and \((\Sigma_{j},\Sigma_{i})\not\in\mathcal{E}\) implies \(\Theta_{ij}=\Theta_{ji}=\mathbf{0}\), for all \(i,j\in\mathbb{N}_{n}\)._ According to this definition, any \(n\times n\) block matrix \(\Theta=[\Theta_{ij}]_{i,j\in\mathbb{N}_{n}}\) is a network matrix of \(\mathcal{G}_{n}\) if \(\Theta_{ij}\) is a coupling weight matrix corresponding to the edge \((\Sigma_{i},\Sigma_{j})\in\mathcal{E}\). Moreover, any \(n\times n\) block diagonal matrix \(\Theta=\text{diag}(\Theta_{ii}:i\in\mathbb{N}_{n})\) where \(\Theta_{ii}\) contains information specific only to the subsystem \(\Sigma_{i}\), is a network matrix of any network with \(n\in\mathbb{N}\) subsystems. The following lemmas provide several useful properties of such network matrices established in [6]. **Lemma 3**: _[_6_]_ _Given a network \(\mathcal{G}_{n}\), a few corresponding block network matrices \(\Theta,\Phi,\{\Psi^{kl}:k,l\in\mathbb{N}_{m}\}\), and some arbitrary block-block matrix \(\Psi\triangleq[\Psi^{kl}]_{k,l\in\mathbb{N}_{m}}\):_ 1. \(\Theta^{\top},\ \alpha\Theta+\beta\Phi\) _are network matrices for any_ \(\alpha,\beta\in\mathbb{R}\)_._ 2. \(\Phi\Theta\)_,_ \(\Theta\Phi\) _are network matrices whenever_ \(\Phi\) _is a block diagonal network matrix._ 3. \(\text{BEW}(\Psi)\triangleq[[\Psi^{kl}_{ij}]_{k,l\in\mathbb{N}_{m}}]_{i,j\in\mathbb{N}_{n}}\) _is a network matrix._ The above lemma enables claiming custom block matrices as "network matrices" by enforcing additional conditions. For example, if \(A\) and \(P\) are two block network matrices and \(P\) is block diagonal, then: (1) \(A^{\top}P,PA\) and \(A^{\top}P+PA\) are all network matrices, and (2) if \(\Psi\triangleq\left[\begin{smallmatrix}P&A^{\top}P\\ PA&P\end{smallmatrix}\right]\) is some block-block matrix, its _block element-wise_ (BEW) form \(\text{BEW}(\Psi)\triangleq\left[\begin{smallmatrix}P_{ii}e_{ij}&A_{ji}^{\top}P_{jj}\\ P_{ii}A_{ij}&P_{ii}e_{ij}\end{smallmatrix}\right]_{i,j\in\mathbb{N}_{n}}\) is a network matrix. **Lemma 4**: _[_6_]_ _Let \(\Psi=[\Psi^{kl}]_{k,l\in\mathbb{N}_{m}}\) be an \(m\times m\) block-block matrix where \(\Psi^{kl},\forall k,l\in\mathbb{N}_{m}\) are \(n\times n\) block matrices. Then, \(\Psi>0\iff\text{BEW}(\Psi)\triangleq[[\Psi^{kl}_{ij}]_{k,l\in\mathbb{N}_{m}}]_{i,j\in\mathbb{N}_{n}}>0\)._ Inspired by Sylvester's criterion [10], the following lemma provides a decentralized and compositional testing criterion to evaluate the positive definiteness of an \(N\times N\) block matrix \(W=[W_{ij}]_{i,j\in\mathbb{N}_{N}}\) (for more details, see [11]).
**Lemma 5**: _[_6_]_ _A symmetric \(N\times N\) block matrix \(W=[W_{ij}]_{i,j\in\mathbb{N}_{N}}>0\) if and only if_ \[\tilde{W}_{ii}\triangleq W_{ii}-\tilde{W}_{i}\mathcal{D}_{i}\tilde{W}_{i}^{\top}>0,\ \ \ \ \forall i\in\mathbb{N}_{N}, \tag{8}\] _where_ \[\begin{split}\tilde{W}_{i}&\triangleq[\tilde{W}_{ij}]_{j\in\mathbb{N}_{i-1}}\triangleq W_{i}(\mathcal{D}_{i}\mathcal{A}_{i}^{\top})^{-1},\\ W_{i}&\triangleq[W_{ij}]_{j\in\mathbb{N}_{i-1}},\ \ \ \mathcal{D}_{i}\triangleq\text{diag}(\tilde{W}_{jj}^{-1}:j\in\mathbb{N}_{i-1}),\\ \mathcal{A}_{i}&\triangleq\ \begin{bmatrix}\tilde{W}_{11}&\mathbf{0}&\cdots&\mathbf{0}\\ \tilde{W}_{21}&\tilde{W}_{22}&\cdots&\mathbf{0}\\ \vdots&\vdots&\vdots&\vdots\\ \tilde{W}_{i-1,1}&\tilde{W}_{i-1,2}&\cdots&\tilde{W}_{i-1,i-1}\end{bmatrix}.\end{split} \tag{9}\] The above lemma shows that testing positive definiteness of an \(N\times N\) block matrix \(W=[W_{ij}]_{i,j\in\mathbb{N}_{N}}\) can be broken down into a sequence of \(N\) smaller tests (iterations). In a network setting where \(W\) is a block network matrix corresponding to a network \(\mathcal{G}_{N}\), at the \(i^{\text{th}}\) iteration (i.e., at the subsystem \(\Sigma_{i}\)), we now only need to test whether \(\tilde{W}_{ii}>0\), where \(\tilde{W}_{ii}\) can be computed using: (1) \(\{W_{ij}:j\in\mathbb{N}_{i}\}\) (blocks related to the subsystem \(\Sigma_{i}\), extracted from \(W\)), (2) \(\{\tilde{W}_{ij}:j\in\mathbb{N}_{i-1}\}\) (computed using (9) at subsystem \(\Sigma_{i}\)), and (3) \(\{\{\tilde{W}_{jk}:k\in\mathbb{N}_{j}\}:j\in\mathbb{N}_{i-1}\}\) (blocks computed using (9) at previous iterations/subsystems \(\{\Sigma_{j}:j\in\mathbb{N}_{i-1}\}\)). Note also that, using Schur complement theory, the matrix inequality \(\tilde{W}_{ii}>0\) (8) can be transformed into a form that is linear in \([W_{ij}]_{j\in\mathbb{N}_{i-1}}\). Therefore it is clear that Lm. 5 can be used to efficiently test/enforce the positive definiteness of a network matrix in a decentralized manner. In fact, as shown in [11], for some network topologies, this process is fully distributed (i.e., no communications are required between non-neighboring subsystems). Moreover, the compositionality of this process (i.e., the resilience to subsystem removals/additions from/to the network) has also been established in [6]. This decentralized and compositional approach to test/enforce the positive-definiteness of a network matrix \(W\) is outlined in Alg. 1. ## IV Main Results In this section, we present our main theoretical results on decentralized and compositional synthesis of interconnection topology in linear networked systems. Note that this synthesis process is designed to enforce stability or dissipativity either without or with the help of distributed state feedback control, i.e., we are interested in enforcing: (1) stability, (2) stabilizability (under feedback control), (3) dissipativity and (4) dissipative-ability (under feedback control), via synthesizing an appropriate interconnection topology. Based on the subsystem dynamics (1), it is clear that the interconnection topology of the networked system (2) is determined by the block structures of the block network matrices \(A,B,E,C,D,F\) and \(K\) in (2) and (4) (see also Def. 2). Regarding these block network matrices, we make the following two technical assumptions.
**Assumption 1**: _[_6_]_ _The block network matrices \(C,D\), and \(F\) (in (2)) are block diagonal network matrices._ **Assumption 2**: _Any block network matrix \(M=[M_{ij}]_{i,j\in\mathbb{N}_{N}}\) in the set \(\{A,B,E,K\}\) (from (2) and (4)) unless stated otherwise, satisfies the following statement: Corresponding to a subsystem \(\Sigma_{i},i\in\mathbb{N}_{N}\), the intrinsic matrix block \(M_{ii}\) and the interconnection matrix blocks \(\{M_{ij}:j\in\mathcal{E}_{i}\}\) and \(\{M_{ji}:j\in\mathcal{F}_{i}\}\) are known and fixed while the remaining interconnection matrix blocks \(\{M_{ij}:j\not\in\mathcal{E}_{i}\}\) and \(\{M_{ji}:j\not\in\mathcal{F}_{i}\}\) are free to be designed._ Note that the above assumption relaxes a hard constraint used in [6]. For example, in [6], \(A_{ij}=\mathbf{0},\forall j\not\in\mathcal{E}_{i}\) and \(A_{ji}=\mathbf{0},\forall j\not\in\mathcal{F}_{i}\) was assumed. In contrast, here we allow new interconnections when necessary via treating, e.g., \(\{A_{ij}:j\not\in\mathcal{E}_{i}\}\) and \(\{A_{ji}:j\not\in\mathcal{F}_{i}\}\) as free to be designed. Note also that we execute this design/synthesis task in a decentralized and compositional manner. Therefore, in its \(i^{\text{th}}\) step (executed at the subsystem \(\Sigma_{i},i\in\mathbb{N}_{N}\), according to Alg. 1), we only need to synthesize a subset of interconnection matrix blocks, E.g., \(\{A_{ij}:j\not\in\mathcal{E}_{i}\cap\mathbb{N}_{i-1}\}\) and \(\{A_{ji}:j\not\in\mathcal{F}_{i}\cap\mathbb{N}_{i-1}\}\). ### _Enforcing Stability and Stabilizability_ StabilityThe following theorem considers an un-actuated networked system and provides how new interconnections can be created via designing the interconnection matrix blocks \(\{A_{ij}:j\not\in\mathcal{E}_{i}\}\) and \(\{A_{ji}:j\not\in\mathcal{F}_{i}\}\). **Theorem 1**: _(Stability via Topology Synthesis) The networked system (2), under Assumption 2, \(u(t)=\mathbf{0}\) and \(w(t)=\mathbf{0}\), is stable if at each subsystem \(\Sigma_{i},i\in\mathbb{N}_{N}\), the LMI problem_ \[\mathbb{P}_{1}:\text{Find}\ \ P_{ii},\{Q_{ij}:j\not\in \mathcal{E}_{i}\cap\mathbb{N}_{i-1}\},\{A_{ji}:j\not\in\mathcal{F}_{i}\cap \mathbb{N}_{i-1}\} \tag{10}\] \[\text{such that}\ \ P_{ii}>0,\ \ \tilde{W}_{ii}>0,\] _is feasible, where \(\tilde{W}_{ii}\) is computed from Alg. 1 (Steps: 3-16) when analyzing \(W=[W_{ij}]_{i,j\in\mathbb{N}_{N}}>0\) with_ \[W_{ij}=-P_{ii}A_{ij}\mathbf{1}_{\{j\in\mathcal{E}_{i}\}}-A_{ji}^{\top}P_{jj} \mathbf{1}_{\{j\in\mathcal{F}_{i}\}}-Q_{ij}\mathbf{1}_{\{j\not\in\mathcal{E} _{i}\}}, \tag{11}\] _and the new interconnections are \(\{A_{ji}:j\not\in\mathcal{F}_{i}\cap\mathbb{N}_{i-1}\}\) and \(\{A_{ij}\triangleq P_{ii}^{-1}Q_{ij}:j\not\in\mathcal{E}_{i}\cap\mathbb{N}_{i- 1}\}\)._ Proof:: Let us define \(P\triangleq\text{diag}(P_{ii}:i\in\mathbb{N}_{N})\) and \(W\triangleq-A^{\top}P-PA\) where now \(A=[A_{ij}]_{i,j\in\mathbb{N}_{N}}\) includes variable interconnection blocks \(\{A_{ij}:j\not\in\mathcal{E}_{i}\}\) and \(\{A_{ji}:j\not\in\mathcal{F}_{i}\}\) in its every \(i^{\text{th}}\) column and row, respectively, \(i\in\mathbb{N}_{N}\) (replacing the fixed \(\mathbf{0}\) blocks that were there as per the original definition of \(A\) given in (2)). According to Lm. 1, this new networked system is stable if we can find \(P>0\) such that \(W>0\). Based on the above definition of \(W=[W_{ij}]_{i,j\in\mathbb{N}_{N}}\), it is a symmetric network matrix (see Def. 2) where \[W_{ij}=-P_{ii}A_{ij}-A_{ji}^{\top}P_{jj}. \tag{12}\] Thus, we can use Alg. 
1 to test \(W>0\) in a decentralized and compositional manner via testing \(\tilde{W}_{ii}>0\) at each subsystem \(\Sigma_{i},i\in\mathbb{N}_{N}\) sequentially (see Lm. 5 and (8)). However, according to (9), testing \(\tilde{W}_{ii}>0\) will be an LMI problem only if the terms \(\{W_{ij}:j\in\mathbb{N}_{i-1}\}\) are linear in the program variables: \(P_{ii},\{A_{ij}:j\not\in\mathcal{E}_{i}\cap\mathbb{N}_{i-1}\}\) and \(\{A_{ji}:j\not\in\mathcal{F}_{i}\cap\mathbb{N}_{i-1}\}\). Based on (12), the term \(-P_{ii}A_{ij}\) in \(W_{ij}\) becomes bi-linear whenever \(j\not\in\mathcal{E}_{i}\cap\mathbb{N}_{i-1}\). This calls for a change of variables: \[Q_{ij}\triangleq P_{ii}A_{ij},\ \ j\not\in\mathcal{E}_{i}\cap\mathbb{N}_{i-1}, \tag{13}\] which transforms \(W_{ij}\) in (12) into the form (11) - which is linear in terms of the new program variables: \(P_{ii},\{Q_{ij}:j\not\in\mathcal{E}_{i}\cap\mathbb{N}_{i-1}\}\) and \(\{A_{ji}:j\not\in\mathcal{F}_{i}\cap\mathbb{N}_{i-1}\}\). Consequently, testing \(\tilde{W}_{ii}>0\) takes the form of an LMI problem (10). If all the local LMI problems (10) are feasible, \(\exists P_{ii}>0\) such that \(\tilde{W}_{ii}>0,\forall i\in\mathbb{N}_{N}\) - which implies that \(\exists P>0\) such that \(W>0\). This leads to the conclusion that the new networked system with new interconnections \(\{A_{ji}:j\not\in\mathcal{F}_{i}\cap\mathbb{N}_{i-1}\}\) and \(\{A_{ij}\triangleq P_{ii}^{-1}Q_{ij}:j\not\in\mathcal{E}_{i}\cap\mathbb{N}_{i-1}\}\) is stable. **Remark 2**: _(Objective Functions) As the objective function of the decentralized LMI problem (10) proposed in Th. 1, we can use:_ \[J=\sum_{j\not\in\mathcal{E}_{i}\cap\mathbb{N}_{i-1}}c_{ij}\|Q_{ij}-P_{ii}\bar{A}_{ij}\|+\beta_{ii}\sum_{j\not\in\mathcal{F}_{i}\cap\mathbb{N}_{i-1}}c_{ji}\|A_{ji}-\bar{A}_{ji}\|, \tag{14}\] _where (1) \(\bar{A}_{ij}\) and \(\bar{A}_{ji}\) matrices represent desired/prescribed candidates for \(A_{ij}\) and \(A_{ji}\), respectively, (2) \(c_{ij}\) and \(c_{ji}\) scalars represent the cost of the interconnections \((\Sigma_{j},\Sigma_{i})\) and \((\Sigma_{i},\Sigma_{j})\), respectively, and (3) \(\beta_{ii}\) is a normalizing constant. It is easy to see that selecting \(\beta_{ii}=\|P_{ii}\|\) equally weights the two components in the above objective function (14). However, such a choice is not practical as it makes the objective function non-convex. Therefore, a reasonable alternative would be to use \(\beta_{ii}=\|P_{jj}\|\) - which is a known constant when evaluating (10). Note that an intuitive objective function of this form can also be used with the decentralized LMI problems proposed in the sequel in Theorems 2-4 (with a few minor modifications)._ **Remark 3**: _(Refining Existing Interconnections) The proposed interconnection topology synthesis approach can also be used to refine existing interconnections. To show this, assume we are interested in refining the interconnection \(A_{ij}\) for some \(j\in\mathcal{E}_{i}\). First, we need to modify the set of in-neighbors of the subsystem \(\Sigma_{i}\) such that \(\mathcal{E}_{i}\rightarrow\mathcal{E}_{i}\backslash\{j\}\). Next, the current value of \(A_{ij}\) should be stored as the prescribed value \(\bar{A}_{ij}\) in the LMI objective (14) and then consider \(A_{ij}\) as a design variable to be synthesized. Moreover, if we are interested in removing the interconnection \(A_{ij}\) entirely, we need to set the cost coefficient \(c_{ij}\) in the LMI objective (14) arbitrarily high and set \(\bar{A}_{ij}=\mathbf{0}\).
Finally, via solving the LMI problem (10) we can obtain the refined interconnection topology (with a refined \(A_{ij}\) value)._ Stabilizability: The next theoretical result is on enforcing stabilizability under distributed state feedback control via interconnection topology synthesis. **Theorem 2**: _(Stabilizability via Topology Synthesis) The networked system (2) (where \(B\) is block diagonal), under Assumption 2, \(w(t)=\mathbf{0}\) and local state feedback control (3), is stable if at each subsystem \(\Sigma_{i},i\in\mathbb{N}_{N}\), the LMI problem_ \[\begin{array}{ll}\mathbb{P}_{2}:\text{Find}&M_{ii},L_{ii},\{L_{ij}:j\in\mathbb{N}_{i-1}\},\{L_{ji}:j\in\mathbb{N}_{i-1}\},\\ &\{A_{ij}:j\not\in\mathcal{E}_{i}\cap\mathbb{N}_{i-1}\},\{Q_{ji}:j\not\in\mathcal{F}_{i}\cap\mathbb{N}_{i-1}\}\\ \text{such that}&M_{ii}>0,\ \ \tilde{W}_{ii}>0,\end{array} \tag{15}\] _is feasible, where \(\tilde{W}_{ii}\) is computed from Alg. 1 (Steps: 3-16) when enforcing \(W=[W_{ij}]_{i,j\in\mathbb{N}_{N}}>0\) with_ \[\begin{array}{ll}W_{ij}=&-A_{ij}M_{jj}-M_{ii}A_{ji}^{\top}\mathbf{1}_{\{j\in\mathcal{F}_{i}\}}-Q_{ji}^{\top}\mathbf{1}_{\{j\not\in\mathcal{F}_{i}\}}\\ &-L_{ji}^{\top}B_{jj}^{\top}-B_{ii}L_{ij}.\end{array} \tag{16}\] _The local state feedback controller gains (which include the new feedback interconnections \(\{K_{ij}:j\not\in\mathcal{E}_{i}\cap\mathbb{N}_{i-1}\}\) and \(\{K_{ji}:j\not\in\mathcal{F}_{i}\cap\mathbb{N}_{i-1}\}\)) are:_ \[K_{ij}=L_{ij}M_{jj}^{-1}\quad\text{and}\quad K_{ji}=L_{ji}M_{ii}^{-1},\ \ \forall j\in\mathbb{N}_{i} \tag{17}\] _and the new system interconnections are: \(\{A_{ij}:j\not\in\mathcal{E}_{i}\cap\mathbb{N}_{i-1}\}\) and \(\{A_{ji}\triangleq Q_{ji}M_{ii}^{-1}:j\not\in\mathcal{F}_{i}\cap\mathbb{N}_{i-1}\}\)._ Proof:: Let us define \(M\triangleq\text{diag}(M_{ii}:i\in\mathbb{N}_{N})\), \(W\triangleq-MA^{\top}-AM-L^{\top}B^{\top}-BL\) and \(K=LM^{-1}\), where now \(A=[A_{ij}]_{i,j\in\mathbb{N}_{N}}\) and \(K=[K_{ij}]_{i,j\in\mathbb{N}_{N}}\) include variable interconnection blocks \(\{A_{ij}:j\not\in\mathcal{E}_{i}\}\), \(\{A_{ji}:j\not\in\mathcal{F}_{i}\}\) and \(\{K_{ij}:j\not\in\mathcal{E}_{i}\}\), \(\{K_{ji}:j\not\in\mathcal{F}_{i}\}\), respectively (replacing the fixed \(\mathbf{0}\) blocks that were there in \(A\) (2) and \(K\) (4)). Starting from Lm. 1, it is easy to show that if there exists \(M>0\) and \(L\) such that \(W>0\), the feedback controller gains given by \(K=LM^{-1}\) guarantee the closed-loop stability of the networked system (with new interconnections). Based on the above definition of \(W=[W_{ij}]_{i,j\in\mathbb{N}_{N}}\), it is a symmetric network matrix (see Def. 2) where \[W_{ij}=-M_{ii}A_{ji}^{\top}-A_{ij}M_{jj}-L_{ji}^{\top}B_{jj}^{\top}-B_{ii}L_{ij}. \tag{18}\] Therefore, we can use Alg. 1 to enforce \(W>0\) in a decentralized and compositional manner via enforcing \(\tilde{W}_{ii}>0\) at each subsystem \(\Sigma_{i},i\in\mathbb{N}_{N}\) sequentially. As in the proof of Th. 1, to make this local enforcement \(\tilde{W}_{ii}>0\) an LMI problem, we need to replace any bi-linear term in \(W_{ij}\) (18) using a change of variables. In this case, the term \(-M_{ii}A_{ji}^{\top}\) in \(W_{ij}\) (18) is bi-linear, and thus we introduce: \[Q_{ji}^{\top}\triangleq M_{ii}A_{ji}^{\top}\iff Q_{ji}\triangleq A_{ji}M_{ii},\ \ \forall j\not\in\mathcal{F}_{i}\cap\mathbb{N}_{i-1}, \tag{19}\] which transforms \(W_{ij}\) in (18) into the form (16). Consequently, testing \(\tilde{W}_{ii}>0\) takes the form of an LMI problem (15).
The remainder of the proof is similar to Th. 1, and is therefore omitted. ### _Enforcing Dissipativity and Dissipativate-ability_ Next, we provide decentralized and compositional interconnection topology synthesis techniques to ensure dissipativity and dissipative-ability. In particular, we consider the \((Q,S,R)\)-dissipativity property introduced in Def. 1, and regarding the specification matrices \(Q,S\) and \(R\), we assume: (1) they are network matrices, (2) \(Q\) is block diagonal, (3) \(-Q>0\) (see Rm. 1), and (4) \(R=R^{\top}\). **Theorem 3**: _(Dissipativity via Topology Synthesis) The networked system (2) (where \(C,D\) are block diagonal) under \(w(t)=\mathbf{0}\) is \((Q,S,R)\)-dissipative from \(u(t)\) to \(y(t)\) if at each subsystem \(\Sigma_{i},i\in\mathbb{N}_{N}\), the LMI problem_ \[\begin{array}{ll}\mathbb{P}_{3}:\text{Find}&P_{ii},\{G_{ij}:j\not\in\mathcal{E}_{i}\},\{A_{ji}:j\not\in\mathcal{F}_{i}\},\\ &\{H_{ij}:j\not\in\mathcal{E}_{i}\},\{B_{ji}:j\not\in\mathcal{F}_{i}\}\\ \text{such that}&P_{ii}>0,\ \ \tilde{W}_{ii}>0,\end{array} \tag{20}\] _is feasible, where \(\tilde{W}_{ii}\) is computed from Alg. 1 (Steps: 3-16) when analyzing \(W=[W_{ij}]_{i,j\in\mathbb{N}_{N}}>0\) with_ \[W_{ij}=\begin{bmatrix}W_{ij}^{11}&W_{ij}^{12}&C_{ii}^{\top}e_{ij}\\ W_{ij}^{21}&D_{ii}^{\top}S_{ij}+S_{ji}^{\top}D_{jj}+R_{ij}&D_{ii}^{\top}e_{ij}\\ C_{jj}e_{ij}&D_{jj}e_{ij}&-Q_{ii}^{-1}e_{ij}\end{bmatrix}. \tag{21}\] _where_ \[\begin{array}{ll}W_{ij}^{11}=&-P_{ii}A_{ij}\mathbf{1}_{\{j\in\mathcal{E}_{i}\}}-G_{ij}\mathbf{1}_{\{j\not\in\mathcal{E}_{i}\}}-A_{ji}^{\top}P_{jj},\\ W_{ij}^{12}=&-P_{ii}B_{ij}\mathbf{1}_{\{j\in\mathcal{E}_{i}\}}-H_{ij}\mathbf{1}_{\{j\not\in\mathcal{E}_{i}\}}+C_{ii}^{\top}S_{ij},\\ W_{ij}^{21}=&-B_{ji}^{\top}P_{jj}+S_{ji}^{\top}C_{jj}.\end{array} \tag{22}\] _The new system interconnections are: \(\{A_{ij}\triangleq P_{ii}^{-1}G_{ij}:j\not\in\mathcal{E}_{i}\cap\mathbb{N}_{i-1}\}\) and \(\{A_{ji}:j\not\in\mathcal{F}_{i}\cap\mathbb{N}_{i-1}\}\), and the new input interconnections are: \(\{B_{ij}\triangleq P_{ii}^{-1}H_{ij}:j\not\in\mathcal{E}_{i}\cap\mathbb{N}_{i-1}\}\) and \(\{B_{ji}:j\not\in\mathcal{F}_{i}\cap\mathbb{N}_{i-1}\}\)._ The proof starts by defining \(P\triangleq\text{diag}(P_{ii}:i\in\mathbb{N}_{N})\) and \[W\triangleq BEW\big{(}\begin{bmatrix}-A^{\top}P-PA&-PB+C^{\top}S&C^{\top}\\ -B^{\top}P+S^{\top}C&D^{\top}S+S^{\top}D+R&D^{\top}\\ C&D&-Q^{-1}\end{bmatrix}\big{)}\] (inspired by (7) and including the variable interconnection blocks of interest), and proceeds using Prop. 2 in a similar manner to the proof of Th. 1 (note that here Lm. 4 is needed to deduce \(W=\text{BEW}(\Psi)>0\iff\Psi>0\)). Therefore, explicit details of the proof are omitted here. **Theorem 4**: _(Dissipative-ability via Topology Synthesis) The networked system (2) (where \(B,C,F\) are block diagonal) under \(D=\mathbf{0}\) and local state feedback control (3) is \((Q,S,R)\)-dissipative from \(w(t)\) to \(y(t)\) if at each subsystem \(\Sigma_{i},i\in\mathbb{N}_{N}\), the LMI problem_ \[\begin{array}{ll}\mathbb{P}_{4}:\text{Find}&M_{ii},L_{ii},\{L_{ij}:j\in\mathbb{N}_{i-1}\},\{L_{ji}:j\in\mathbb{N}_{i-1}\},\\ &\{A_{ij}:j\not\in\mathcal{E}_{i}\cap\mathbb{N}_{i-1}\},\{G_{ji}:j\not\in\mathcal{F}_{i}\cap\mathbb{N}_{i-1}\},\\ &\{E_{ij}:j\not\in\mathcal{E}_{i}\cap\mathbb{N}_{i-1}\},\{E_{ji}:j\not\in\mathcal{F}_{i}\cap\mathbb{N}_{i-1}\},\\ \text{such that}&M_{ii}>0,\;\;\tilde{W}_{ii}>0,\end{array} \tag{22}\] _is feasible, where \(\tilde{W}_{ii}\) is computed from Alg.
1 (Steps: 3-16) when enforcing \(W=[W_{ij}]_{i,j\in\mathbb{N}_{N}}>0\) with \(W_{ij}\) given in (23). The local state feedback controller gains (which include new feedback interconnections \(\{K_{ij}:j\not\in\mathcal{E}_{i}\cap\mathbb{N}_{i-1}\}\) and \(\{K_{ji}:j\not\in\mathcal{F}_{i}\cap\mathbb{N}_{i-1}\}\)) are given by (17), the new system interconnections are: \(\{A_{ij}:j\not\in\mathcal{E}_{i}\cap\mathbb{N}_{i-1}\}\) and \(\{A_{ji}\triangleq G_{ji}M_{ii}^{-1}:j\not\in\mathcal{F}_{i}\cap\mathbb{N}_{i-1}\}\), and the new input interconnections are: \(\{E_{ij}:j\not\in\mathcal{E}_{i}\cap\mathbb{N}_{i-1}\}\) and \(\{E_{ji}:j\not\in\mathcal{F}_{i}\cap\mathbb{N}_{i-1}\}\)._ \[W_{ij}=\begin{bmatrix}-A_{ij}M_{jj}-M_{ii}A_{ji}^{\top}\mathbf{1}_{\{j\in\mathcal{F}_{i}\}}-G_{ji}^{\top}\mathbf{1}_{\{j\not\in\mathcal{F}_{i}\}}-B_{ii}L_{ij}-L_{ji}^{\top}B_{jj}^{\top}&-E_{ij}+M_{ii}C_{ii}^{\top}S_{ij}&M_{ii}C_{ii}^{\top}e_{ij}\\ -E_{ji}^{\top}+S_{ji}^{\top}C_{jj}M_{jj}&F_{ii}^{\top}S_{ij}+S_{ji}^{\top}F_{jj}+R_{ij}&F_{ii}^{\top}e_{ij}\\ C_{jj}M_{jj}e_{ij}&F_{jj}e_{ij}&-Q_{ii}^{-1}e_{ij}\end{bmatrix} \tag{23}\] The proof is similar to that of Th. 3. **Remark 4**: _(Designing Intrinsic Parameters) When using the decentralized LMI problems proposed in Th. 3 and Th. 4, if the application allows, we can also treat some subsystem intrinsic parameters (in addition to interconnection parameters) as design variables - without compromising the LMI format of the problem. For example, in the LMI problem (20), we can include \(C_{ii}\) and/or \(D_{ii}\) as design variables as \(W_{ij}\) (21) is linear in both \(C_{ii}\) and \(D_{ii}\)._ ## V Case Study In this section, we compare: (1) the decentralization-based topology synthesis (**DeTS**) approach proposed for linear networked systems in this paper with (2) the dissipativity-based topology synthesis (**DiTS**) approach proposed for non-linear networked systems in our recent work [12]. In particular, we use two variants of the **DiTS** approach based on the accuracy of the dissipativity information used for the subsystems. They are denoted by the acronyms **W-DiTS** and **S-DiTS**, representing scenarios where the available dissipativity information of the subsystems is weak (less precise) and strong (more precise), respectively. Note that, due to space constraints and for simplicity, we limit this case study to scenarios where topology synthesis is required to ensure the stability of a certain randomly generated networked system. ### _Considered Networked System_ In this case study, we consider a randomly generated networked system of the form (2) with \(N=5,n_{i}=3,\forall i\in\mathbb{N}_{N}\) and \(B=E=\mathbf{0}\). The initial values of a subset of system matrices \(\{A_{ij}:j\in\mathcal{\bar{E}}_{i},i\in\mathbb{N}_{N}\}\) are given in (24). The corresponding initial interconnection topology is shown in the graph in Fig. 2 (left). It is worth noting that this initial networked system is unstable and the decentralized stability analysis proposed in [6] returns inconclusive. Two interconnection cost matrices inspired by this initial interconnection topology are shown in the same figure (i.e., \(C_{f}\) and \(C_{d}\)). Note that \(C_{f}\) has fixed cost levels while \(C_{d}\) has graphical distance-inspired cost levels for different interconnections. Elements of these cost matrices are used in the topology synthesis processes (e.g., as \([c_{ij}]_{i,j\in\mathbb{N}_{N}}\) in (14)) to penalize deviations from the initial topology.
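To illustrate how such cost coefficients enter the synthesis, the sketch below sets up a single, simplified local step of the form (10) with an objective in the spirit of (14), for a two-subsystem network in which subsystem 1 has already been processed. CVXPY is assumed, strict inequalities are approximated with a small margin, all variable names and numerical values are hypothetical, and the general recursion of Alg. 1 (which supplies the \(\tilde{W}_{jj}\) terms for larger networks) is omitted.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(1)
n, margin = 3, 1e-6

# Data assumed available when subsystem 2 is processed: its own block A22, the
# quantities P11 and W11_tilde fixed at step 1, prescribed interconnection
# blocks (the Abar terms of (14)), and cost coefficients c_ij.
A11 = 0.5 * rng.standard_normal((n, n)) - 2.0 * np.eye(n)   # subsystem 1
A22 = 0.5 * rng.standard_normal((n, n)) - 2.0 * np.eye(n)   # subsystem 2
Abar12, Abar21 = 0.1 * np.eye(n), 0.1 * np.eye(n)
c12, c21 = 1.0, 1.0

# Step 1 (subsystem 1): any P11 > 0 with W11 = -A11^T P11 - P11 A11 > 0.
P11 = np.eye(n)
W11_tilde = -(A11.T @ P11 + P11 @ A11)

# Step 2 (subsystem 2): design P22, the new block A12 (linear, since P11 is
# already fixed) and Q21 = P22 A21, i.e., the change of variables (13).
P22 = cp.Variable((n, n), symmetric=True)
A12 = cp.Variable((n, n))
Q21 = cp.Variable((n, n))

W22 = -(A22.T @ P22 + P22 @ A22)
W21 = -Q21 - A12.T @ P11            # the W_21 block of W = -A^T P - P A

# W22_tilde = W22 - W21 W11_tilde^{-1} W21^T > 0, written as an LMI via the
# Schur complement, as noted after Lemma 5.
lmi = cp.bmat([[W22, W21], [W21.T, W11_tilde]])
constraints = [P22 >> margin * np.eye(n), lmi >> margin * np.eye(2 * n)]

beta = np.linalg.norm(P11)          # normalizing constant, cf. Remark 2
objective = cp.Minimize(c21 * cp.norm(Q21 - P22 @ Abar21, 'fro')
                        + beta * c12 * cp.norm(A12 - Abar12, 'fro'))
cp.Problem(objective, constraints).solve()

A21 = np.linalg.inv(P22.value) @ Q21.value   # recover A_21 = P22^{-1} Q21
```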
Since the proposed topology synthesis approach in this paper (**DeTS**) is decentralized, for comparison purposes, we use the following two centralized cost functions: \[\begin{array}{l}J_{Dev}\triangleq\sum_{i,j\in\mathbb{N}_{N},i\neq j}c_{ij}\|A_{ij}^{*}-\bar{A}_{ij}\|,\\ J_{Nom}\triangleq\sum_{i,j\in\mathbb{N}_{N},i\neq j}\|A_{ij}^{*}\|,\end{array} \tag{25}\] where \(c_{ij}\) is the interconnection cost coefficient (taken from either \(C_{f}\) or \(C_{d}\)), \(\bar{A}_{ij}\) is the initial (given) system matrix block, and \(A_{ij}^{*}\) is the optimally synthesized system matrix block - corresponding to the interconnection \((\Sigma_{j},\Sigma_{i})\). Note that, in (25), \(J_{Dev}\) represents the weighted deviation from the initial topology while \(J_{Nom}\) represents the nominal coupling strength of the synthesized topology. ### _Dissipativity Based Topology Synthesis (DiTS)_ The work in [12] considers networked systems comprised of non-linear subsystems \(\tilde{\Sigma}_{i}:u_{i}\to y_{i},i\in\mathbb{N}_{N}\) interconnected through a static interconnection matrix \(M\) (e.g., see Fig. 3). A key advantage of the solution proposed in [12] is that it only requires the knowledge of \((Q,S,R)\)-dissipativity properties of the subsystems: \(\{(Q_{i},S_{i},R_{i}):i\in\mathbb{N}_{N}\}\) (in lieu of exact dynamic models of the subsystems). Even though subsystem dissipativity information is limited, it provides a simple, robust, and energy-based representation for the subsystems. In particular, the work in [12] uses such subsystem dissipativity information to formulate an LMI problem so as to synthesize the optimal interconnection matrix \(M\) (and hence, the interconnection topology) for the non-linear networked system under some minor assumptions. To make this paper self-contained, we have summarized this dissipativity-based topology synthesis approach in the following proposition. **Proposition 1**: _[_12_, Prop. 5]_ _Under \(R_{i}<0,\forall i\in\mathbb{N}_{N}\), a stabilizing interconnection matrix \(M\) for the non-linear networked system shown in Fig. 3 can be found via solving the LMI problem (centralized):_ \[\begin{array}{ll}\mathbb{P}_{5}:\text{Find}&L,\{p_{i}\in\mathbb{R}:i\in\mathbb{N}_{N}\}\\ \text{such that}&p_{i}>0,\forall i\in\mathbb{N}_{N},\\ &\begin{bmatrix}\textbf{R}_{p}&L\\ L^{\top}&-(L^{\top}\textbf{X}+\textbf{X}^{\top}L+\textbf{Q}_{p})\end{bmatrix}\end{array} \tag{26}\] _as \(M\triangleq\textbf{R}_{p}^{-1}L\), where \(\textbf{R}_{p}\triangleq\text{diag}(p_{i}R_{i}\textbf{I}:i\in\mathbb{N}_{N})\), \(\textbf{Q}_{p}\triangleq\text{diag}(p_{i}Q_{i}\textbf{I}:i\in\mathbb{N}_{N})\), and \(\textbf{X}\triangleq\text{diag}(R_{i}^{-1}S_{i}:i\in\mathbb{N}_{N})\)._ To apply Prop. 1 for the considered linear networked system, we first need to identify the corresponding construction of a nonlinear subsystem \(\tilde{\Sigma}_{i},i\in\mathbb{N}_{N}\) and the interconnection matrix \(M=[M_{ij}]_{i,j\in\mathbb{N}_{N}}\) (shown in Fig. 3).
This can be achieved by re-arranging the dynamics of a considered linear subsystem \(\Sigma_{i},i\in\mathbb{N}_{N}\) as: \[\Sigma_{i}:\ \dot{x}_{i}(t)=\sum_{j\in\bar{\mathcal{E}}_{i}}A_{ij}x_{j}(t)=A_{ii}x_{i}(t)+\sum_{j\in\mathcal{E}_{i}}A_{ij}x_{j}(t).\] Now, the dynamics of a corresponding non-linear (in name only) subsystem \(\tilde{\Sigma}_{i},i\in\mathbb{N}_{N}\) can be written as: \[\tilde{\Sigma}_{i}:\left\{\begin{aligned} \dot{x}_{i}(t)=& A_{ii}x_{i}(t)+u_{i}(t),\\ y_{i}(t)=& x_{i}(t),\end{aligned}\right. \tag{27}\] where \(u_{i}(t)\triangleq\sum_{j\in\mathcal{E}_{i}}A_{ij}y_{j}(t)\equiv\sum_{j\in\mathbb{N}_{N}}M_{ij}y_{j}(t)\). Therefore, \(M_{ij}\triangleq A_{ij}\mathbf{1}_{\{i,j\in\mathbb{N}_{N},i\neq j\}}\), and thus, using Prop. 1, we can synthesize the system matrices \(\{A_{ij}:i,j\in\mathbb{N}_{N},i\neq j\}\), which determine the optimal interconnection topology for the considered networked system. Recall that, per our As. 2, the system matrices \(\{A_{ii},i\in\mathbb{N}_{N}\}\) and hence the non-linear subsystems \(\{\tilde{\Sigma}_{i}:i\in\mathbb{N}_{N}\}\) (27) are prespecified. However, to apply Prop. 1, we only require the subsystem dissipativity properties \(\{(Q_{i},S_{i},R_{i}):i\in\mathbb{N}_{N}\}\). Here we assume each subsystem \(\tilde{\Sigma}_{i},i\in\mathbb{N}_{N}\) to have input and output passivity indices \(\nu_{i}\) and \(\rho_{i}\), respectively. In other words, subsystem \(\tilde{\Sigma}_{i}\) (27) is assumed to be \((-\rho_{i}\textbf{I},0.5\textbf{I},-\nu_{i}\textbf{I})\)-dissipative (see Def. 1). Candidate values for such passivity indices were estimated by applying Lm. 2 under: (1) a trial-and-error approach (which led to weak/less precise passivity indices), and (2) an LMI-based optimization approach [12] (which led to strong/precise passivity indices). It is worth noting that there are several other alternative approaches to estimate such passivity indices (e.g., see [22, 23, 24]). Note that the said two types of passivity index estimates led to the two dissipativity-based topology synthesis approaches, (1) **W-DiTS** and (2) **S-DiTS**, mentioned earlier. Note also that, inspired by Rm. 2, to penalize deviations from the initial interconnection topology, when solving the LMI problem (26) in Prop. 1, we use the objective function \[J=\big{\|}[c_{ij}(L_{ij}-p_{i}R_{i}\bar{A}_{ij})]_{i,j\in\mathbb{N}_{N}}\big{\|}\,. \tag{28}\] ### _Decentralization Based Topology Synthesis (DeTS)_ For the considered networked system in this case study, the application of the proposed **DeTS** approach is straightforward, as it only involves solving the sequence of LMI problems given in Th. 1. In the implementation, as the LMI objective function, we used (14) (with \(\beta_{ij}=\|P_{jj}\|\)) as proposed in Rm. 2. Note also that, as suggested in Rm. 3, we considered the possibility of refining all the existing interconnections. Consequently, in the numerical results, we observed that it is possible (and even preferred) to obtain optimal interconnection topologies where some initial interconnections have been removed completely to preserve stability while minimizing deviations from the initial topology. ### _Observations and Discussion_ Figure 4 illustrates the synthesized optimal interconnection topologies under the aforementioned topology synthesis techniques: **W-DiTS** (Figs. 4ab), **S-DiTS** (Figs. 4cd), and **DeTS** (Figs. 4ef), and under the said interconnection cost matrices \(C_{f}\) (left) and \(C_{d}\) (right). 
Moreover, the observed deviation and nominal cost values proposed in (25) are given in the titles of the subfigures in Fig. 4. Based on the observed newly added (green-colored) and entirely removed (red-colored) edges with respect to the initial topology (blue-colored) in each scenario, we can clearly see the practical advantage of the proposed **DeTS** approach in this paper compared to both **W-DiTS** and **S-DiTS** methods adapted from [12]. In essence, the proposed DeTS approach has mainly resorted to removing a single edge from the initial topology to ensure the stability of the considered networked system. In contrast, the dissipativity-based approaches **W-DiTS** and **S-DiTS** have mainly resorted to creating several new interconnections to achieve the same goal. Note also that both such approaches may have also refined some existing interconnections (this is also implied by the changes in the nominal cost observed across the sub-figures of Fig. 4). Another interesting observation is that when using the interconnection cost matrix \(C_{d}\) as opposed to \(C_{f}\), the number of newly added edges decreases (particularly the lengthy ones, e.g., compare Figs. 4ac with Figs. 4bd). However, during the same process, the observed nominal cost increases owing to the internal changes required to stabilize the considered networked system without using additional new edges. Note also that a similar reduction in the number of newly added edges occurs when we have more precise/stronger passivity information regarding the subsystems (e.g., compare Figs. 4ab with Figs. 4cd). This is because dissipativity-based topology synthesis [12] becomes more flexible (as opposed to becoming more constrained/conservative) when the underlying subsystems are strongly passive. This can also be understood from the fact that strongly passive systems not only can easily be stabilized but can also tolerate other connected weakly passive subsystems (due to the compositionality of passivity). Nevertheless, similar to before, an increment in the nominal cost can be seen due to the internal changes required to achieve stability without using additional new edges. In terms of the deviation cost (25), when using \(C_{f}\), the **S-DiTS** approach has provided the best performance. Note, however, that, in this case, the proposed **DeTS** approach performs close to the **S-DiTS** approach while also having a better nominal cost. Moreover, when using \(C_{d}\), the proposed **DeTS** approach performs the best (which is also the overall best deviation cost value observed in this case study). We conclude this paper by acknowledging some unique attributes of the **S-DiTS** approach adapted from [12] as opposed to the **DeTS** approach proposed in this paper. Let us consider the amount of information required to synthesize topologies under each approach. First, note that both these approaches use the initial interconnection system matrices \(\{A_{ij}:i,j\in\mathbb{N}_{N},i\neq j\}\) as a reference to penalize deviations from them. However, the intrinsic system matrices \(\{A_{ii}\in\mathbb{R}^{n_{i}\times n_{i}}:i\in\mathbb{N}_{N}\}\) are only used in the proposed **DeTS** approach in this paper. In contrast, only two scalar passivity indices per subsystem, \(\{(\nu_{i},\rho_{i})\in\mathbb{R}^{2}:i\in\mathbb{N}_{N}\}\), are used in the **S-DiTS** approach. 
In real-world scenarios, such scalar passivity indices can be estimated conveniently and accurately, compared to having to estimate the entire set of intrinsic system matrices. Moreover, as detailed in [12], the **S-DiTS** approach is applicable to a variety of networked systems comprised of non-linear subsystems. ## VI Conclusion This paper considered networked systems comprised of interconnected linear subsystems and proposed a decentralized and compositional approach to stabilize such linear networked systems, or render them dissipative, by synthesizing an optimal set of interconnections (topology) for the subsystems. The proposed topology synthesis approach was then extended to ensure that linear networked systems can be stabilized or made dissipative using distributed feedback control. We gain this ability to optimally synthesize topologies by improving an existing decentralized and compositional approach for various analysis and controller synthesis tasks related to linear networked systems. The proposed topology synthesis approach only involves solving a sequence of linear matrix inequality problems, which can be implemented efficiently and scalably using standard convex optimization toolboxes.
Fig. 4: Obtained optimal interconnection topologies for the considered linear networked system under different topology synthesis methods, **W-DiTS**, **S-DiTS**, and **DeTS**, and under different interconnection cost matrices \(C_{f}\) and \(C_{d}\). The blue, green, and red edges in the sub-figures represent the initial, newly added, and entirely removed interconnections, respectively. The titles of the sub-figures indicate the deviation and nominal cost values as defined in (25).
Fig. 3: A non-linear networked dynamical system configuration considered in [12] (compare with Fig. 1).
Fig. 2: Numerical example: initial topology.
The presented case study showed that the proposed topology synthesis approach provides simple and high-performing solutions compared to an existing topology synthesis approach proposed for more general non-linear networked systems with limited information about the subsystems. Future work aims to closely study several critical real-world applications (e.g., supply chain networks, microgrids, vehicular platoons, and multi-robot systems) where the proposed topology synthesis approach can be directly applied to optimize existing networks.
2307.12744
Memory Effects, Multiple Time Scales and Local Stability in Langevin Models of the S&P500 Market Correlation
The analysis of market correlations is crucial for optimal portfolio selection of correlated assets, but their memory effects have often been neglected. In this work, we analyse the mean market correlation of the S&P500 which corresponds to the main market mode in principle component analysis. We fit a generalised Langevin equation (GLE) to the data whose memory kernel implies that there is a significant memory effect in the market correlation ranging back at least three trading weeks. The memory kernel improves the forecasting accuracy of the GLE compared to models without memory and hence, such a memory effect has to be taken into account for optimal portfolio selection to minimise risk or for predicting future correlations. Moreover, a Bayesian resilience estimation provides further evidence for non-Markovianity in the data and suggests the existence of a hidden slow time scale that operates on much slower times than the observed daily market data. Assuming that such a slow time scale exists, our work supports previous research on the existence of locally stable market states.
Tobias Wand, Martin Heßler, Oliver Kamps
2023-07-24T12:35:45Z
http://arxiv.org/abs/2307.12744v1
Memory Effects, Multiple Time Scales and Local Stability in Langevin Models of the S&P500 Market Correlation ###### Abstract The analysis of market correlations is crucial for optimal portfolio selection of correlated assets, but their memory effects have often been neglected. In this work, we analyse the mean market correlation of the S&P500 which corresponds to the main market mode in principle component analysis. We fit a generalised Langevin equation (GLE) to the data whose memory kernel implies that there is a significant memory effect in the market correlation ranging back at least three trading weeks. The memory kernel improves the forecasting accuracy of the GLE compared to models without memory and hence, such a memory effect has to be taken into account for optimal portfolio selection to minimise risk or for predicting future correlations. Moreover, a Bayesian resilience estimation provides further evidence for Non-Markovianity in the data and suggests the existence of a hidden slow time scale that operates on much slower times than the observed daily market data. Assuming that such a slow time scale exists, our work supports previous research on the existence of locally stable market states. 25/07/2023 **Keywords:** Langevin Equation, Econophysics, Bayesian Estimation, Memory Effects, Non-Markovian Dynamics ## 1 Introduction The S&P500 is an aggregated index of the stocks of the 500 largest companies traded at US stock exchanges and therefore serves as an indicator of the overall US economic performance. Estimating and predicting the correlation between different assets is crucial for optimal portfolio selection and risk management and has been the focus of financial research since Markowitz's portfolio theory [27]. Because the economy is a system with a large number of interacting parts (traders and companies), it is amenable to the analysis tools from complex systems science [25]. Due to the increasing availability of data for the economy, highly data-driven methods can be applied in this field of research, with many researchers choosing to focus on the correlations between the relative price changes of different stocks, i.e. the correlations of the stocks' returns [4, 23, 26, 27, 28, 34]. In [32, 38, 45] the authors identified states of the economy by analysing correlation matrices of daily S&P 500 data and found eight clusters. These can be interpreted as economic states reflecting e.g. times of economic growth or crises [32] and are found to be locally stable [38, 45]. Further analyses showed that exogenous events, precursors for crises and collective market behaviour can also be identified via the correlation matrix approach [14, 13, 15]. Ever since Bachelier's seminal work introduced the random walk model to describe stock prices [2], the inherent stochasticity of financial data has been taken into account by researchers and practitioners alike. The Langevin equation is a model for stochastic differential equations that includes a deterministic drift and a random diffusion and can be used to describe such stochastic data. Much research has been devoted to estimating Langevin equations from data [7, 43, 35, 6, 36, 20, 54] and Langevin equations have found applications in various fields of research such as fluid dynamics [42], molecular dynamics [21] and meteorology [5] (cf. [8] for a review of applications). The generalised Langevin equation (GLE) expands the Langevin model by introducing a kernel function to include memory effects [31]. 
Bayesian statistics includes a collection of methods that reverse the classical approach to statistics and focuses on calculating posterior distributions of model parameters as probabilities conditional on the observed data [44, 49]. This approach enables an efficient estimation of Non-Markovian Langevin equations [53, 52]. Also, other time series analysis methods can be implemented in the Bayesian frame work to e.g. detect change points in time series data [18]. We show that an estimated GLE model manages to achieve a high goodness-of-fit for the correlation time series and implies a strong memory effect which has to be taken into account when predicting future market correlations. Furthermore, we perform a detailed comparison of a Markovian mono-time scale and a Non-Markovian two-time scale model with a hidden slow time scale which confirms the GLE analysis results regarding Non-Markovian memory effects. Additionally, the analysis supports the local quasi-stationary economic state theory discussed in Stepanov et al. [45] and provides some evidence that a hidden slow time scale might be involved in the mean market correlation dynamics. It is not far-fetched to assume that a complex system like the human economy involves a multitude of time scales and our findings coincide with such reasoning. The slow time scales might be connected to business and innovation cycles or similar economic dynamical mechanisms even though we could not extract a quantitatively reasonable magnitude of the hidden slow time scale. The remainder of this article is structured as follows: Section 2 explains the data gathering and preprocessing in 2.1, the Bayesian methodology in 2.2, the GLE estimation procedure in 2.3 and the resilience analysis in subsection 2.4. Section 3 describes the goodness-of-fit and the estimated memory parameters for the GLE model in 3.1 as well as the resilience results including the two-time scale discussion in section 3.2. Finally, the results of these analyses are interpreted and compared to the results from [45] in section 4. ## 2 Data and Methods ### Data Preparation The S&P 500 is a stock index containing 500 large US companies traded at American stock exchanges. Daily stock data from these companies were downloaded via the Python package _yfinance_[1] for the time period between 1992 and 2012, which is the same time period as in [38]. Only stock data of companies that were part of the S&P 500 index during at least 99.5% of the time were used for this analysis. If a company's price time series \(P_{t}\) is not available for the full time period, the remaining 0.5% of the price time series data are interpolated linearly with the _.interpolate()_ method in _pandas_[37, 30]. Overall, there are 249 companies' time series for 5291 trading days with one observation per date. Each company's returns \(R_{t}=(P_{t+1}-P_{t})/P_{t}\) are locally normalised to remedy the impact of sudden changes in the drift of the time series with the method introduced in [39] as \[r_{t}=\frac{R_{t}-\langle R_{t}\rangle_{n}}{\sqrt{\langle R_{t}^{2}\rangle_{n}- \langle R_{t}\rangle_{n}^{2}}}. \tag{1}\] Here, \(\langle\dots\rangle_{n}\) denotes a local mean across the \(n\) most recent data points, i.e. \(r_{t}\) is subjected to a standard normalisation transformation with respect to the local mean and standard deviation (i.e. volatility \(\sigma\)). Following [32], \(n=13\) was chosen for the daily data. 
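A minimal sketch of this preprocessing step (download, linear interpolation, returns, and the local normalisation of eq. (1)) could look as follows; the ticker list is a small placeholder rather than the 249 S&P 500 constituents actually used.

```python
import numpy as np
import pandas as pd
import yfinance as yf

# Placeholder ticker list; the paper uses the S&P 500 members with (nearly)
# complete daily price histories between 1992 and 2012.
tickers = ["AAPL", "MSFT", "KO", "JNJ", "XOM"]

prices = yf.download(tickers, start="1992-01-01", end="2012-12-31")["Close"]
prices = prices.interpolate()            # fill the few missing prices linearly

returns = prices.pct_change().dropna()   # R_t = (P_{t+1} - P_t) / P_t

# Local normalisation, eq. (1): standardise each return by the mean and
# (population) standard deviation over the n = 13 most recent observations.
n = 13
local_mean = returns.rolling(n).mean()
local_std = returns.rolling(n).std(ddof=0)  # matches sqrt(<R^2>_n - <R>_n^2)
r = ((returns - local_mean) / local_std).dropna()
```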
For each time step \(t\) and each pair of sectors \(i,j\) the local correlation coefficients \[C_{i,j}=\frac{\langle r_{t}^{(i)}r_{t}^{(j)}\rangle_{\tau}-\langle r_{t}^{(i) }\rangle_{\tau}\langle r_{t}^{(j)}\rangle_{\tau}}{\sigma_{\tau}^{(i)}\sigma_{ \tau}^{(j)}} \tag{2}\] are calculated over a time period of \(\tau=42\) trading days like in [32] (the 42 working days correspond to 2 trading months) with the local standard deviations \(\sigma_{\tau}^{(i)}\). As shown via Principle Component Analysis in [45], the mean correlation \[\bar{C}=\frac{1}{N}\sum_{\text{i,j}}C_{i,j} \tag{3}\] already describes much of the variability in the data. Hence, it makes sense to restrict the analysis to this one-dimensional time series \(\bar{C}(t)\) (shown in figure 1). The time \(t\) is here selected as the central value of the time window of length \(\tau\) in order to have a symmetrical window. The preprocessed time series is available via [50]. ### Bayesian Statistics Bayes' theorem for the conditional probability distributions \(f(\cdot|\cdot)\) of model parameters \(\theta\) and observed data \[f_{post}(\theta|x)\sim f_{prior}(\theta)f(x|\theta) \tag{4}\] connects the standard statistical likelihood \(f(x|\theta)\) and prior knowledge about the model parameters \(f_{prior}\) to a posterior distribution \(f_{post}\) of the unknown model parameters conditional on the observed data [24]. A bundle of methods have been derived from this approach and are collectively referred to as Bayesian Statistics [44, 49]. Markov chain Monte Carlo algorithms (MCMC) can be used to infer the posterior distribution by simulating several random walkers that explore the (potentially high-dimensional) posterior distribution [11]. It generates samples of the model parameters which are uncorrelated after cutting off the first \(n_{burn}\) exploration steps as a burn-in period and thinning the remaining samples. The resulting MCMC samples can also be used to integrate the posterior distribution over all but one model parameter \(\theta_{i}\) to derive the marginal posterior distribution \(f(\theta_{i}|x)\) of this single parameter. Parameter estimation can then be done via the mean of the posterior distribution or via its maximum (abbreviated as MAP for maximum a posteriori) and credible intervals (CIs) of e.g. 95% credibility can be directly derived from the quantiles of the samples for \(\theta_{i}\). ### Fitting a Generalised Langevin Equation with Memory Kernel Sections 4 and 5 of [45] analyse the one-dimensional mean correlation time series by fitting a Langevin model \[\frac{\mathrm{d}\bar{C}}{\mathrm{d}t}(t)=D^{(1)}\left(\bar{C},t\right)+\sqrt{D^{( 2)}\left(\bar{C}\right)}\Gamma(t) \tag{5}\] with independent Gaussian noise \(\Gamma\), a deterministic time-dependent drift function \(D^{(1)}\) and a time-independent diffusion function \(D^{(2)}\). Note that the Langevin equation is a Markovian model, i.e. it has no memory. As an alternative model, a Generalised Langevin Equation (GLE) includes previous values of the time series in a memory kernel \(\mathcal{K}\), but has only time-independent parameters with a functional equation \[\frac{\mathrm{d}\bar{C}}{\mathrm{d}t}(t)=D^{(1)}\left(\bar{C}\right)+\int_{s=0 }^{t}\mathcal{K}(s)\bar{C}(t-s)\,\mathrm{d}s+\sqrt{D^{(2)}\left(\bar{C}\right) }\Gamma(t). \tag{6}\] A Bayesian estimation of the parameters of equation (6) is implemented in [53]. 
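As a concrete illustration, the local correlation matrices of eq. (2) and their mean, eq. (3), can be computed from the locally normalised returns `r` of the previous sketch roughly as follows (window length \(\tau\) and stride are kept as parameters; the values below correspond to the \(\tau=42\), centred-window convention of this section):

```python
import pandas as pd

def mean_market_correlation(r, tau=42, step=1):
    """Mean of the local correlation matrix, eqs. (2)-(3), per rolling window."""
    values, times = [], []
    for end in range(tau, len(r) + 1, step):
        window = r.iloc[end - tau:end]
        C = window.corr().to_numpy()          # local correlation coefficients C_ij
        values.append(C.mean())               # average over all pairs (i, j)
        times.append(window.index[tau // 2])  # centre of the tau-day window
    return pd.Series(values, index=times)

C_bar = mean_market_correlation(r, tau=42, step=1)
```

The Bayesian parameter estimation for eq. (6) itself is then carried out with the implementation accompanying [53].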
It discretises the memory kernel \(\mathcal{K}(s)\) to \(\mathcal{K}_{k}\) and discretises the observed data into \(n_{B}\) different bins to significantly speed up the estimation procedure. Because an overlap of the windows used to calculate \(\bar{C}\) can lead to artefacts in the memory effects (cf. section A in the appendix), we choose to calculate \(\bar{C}\) with different parameters than the \(\tau=42\) and \(s=1\) used in [45] (every \(s^{\text{th}}\) value is retained for the analysis, and hence for \(s=1\) the full time series is used). Instead, we set \(\tau=5\) and \(s=5\), meaning that the mean market correlation \(\bar{C}\) is calculated over one trading week, at the end of the trading week, in disjoint windows (i.e. we create a time series whose length is \(1/s=20\%\) of the original time series). Because using disjoint intervals automatically leads to a thinning of the time series, this seemed like a useful trade-off with a real-world interpretation as weekly correlation. Note that although only five trading days contribute to the calculation of \(\bar{C}\), there is still an order of magnitude of \(10^{4}\) correlations \(C_{i,j}\) whose mean value is the desired \(\bar{C}\). That is, although only a short time window is used to calculate each of the \((i,j)\) correlation pairs, the sheer size of the ensemble of \(C_{i,j}\) reaffirms our trust in the accuracy of their mean \(\bar{C}\). The resulting time series is shown in figure 1 together with the time series used in [45], and although it obviously becomes much noisier due to the shorter window size, it still retains some of the features of the less noisy time series, e.g. the position of spikes with high correlation.
Figure 1: The mean correlation of the S&P500 market. The time series are the mean values of correlation matrices which were calculated based on moving windows of length \(\tau\) days, plotted against the centre of the \(\tau\)-day interval. This figure depicts \(\tau=5\) and \(\tau=42\).
As described in [53], the memory effects are aggregated to a quantity \(K_{k}\) (named \(\kappa_{k}\) in [53]) that measures the strength of all memory effects from 1 up to \(k\) time steps ago. If \(K_{k}\) saturates towards a plateau at step \(k_{0}\), then \(k_{0}\) is the maximum length of any reasonable memory kernel. This often also coincides with kernel values of \(\mathcal{K}_{k_{0}}\approx 0\) at \(k_{0}\). The estimated values for the model parameters are chosen by calculating the marginal posterior distribution and choosing the mean or MAP parameter estimate. The goodness-of-fit is also evaluated by using MCMC to simulate an artificial time series \(\bar{C}_{t}^{(a)}\) with the estimated model and comparing the autocorrelation function and the distributions of \(\bar{C}_{t}^{(a)}\) and the increments \(\bar{C}_{t}^{(a)}-\bar{C}_{t-j}^{(a)}\) to those of the original data. Finally, the inferred model structure can be trained on only a subset of the data (training data) and be used to predict the remaining test data to evaluate its predictive performance. ### Resilience Estimation By applying Bayes' theorem (4) we can deduce a quantitative measure of resilience under given model assumptions. Therefore, we infer the parameters of two models of stochastic differential equations in a rolling window approach that allows for resolving the time evolution of the resilience and noise level and accounts for the quasi-stationary nature of the time series that is observed in Stepanov et al. [45]. 
Following the quasi-stationarity argument we assume to be in a fixed state per window, i.e. the fixed point \(\bar{C}^{*}\) which is approximated by averaging over the mean market correlation data per window. First, we estimate a Markovian Langevin equation (5) with the Taylor-expanded drift \[D_{\bar{C}}^{(1)}(\bar{C}(t),t) =\alpha_{0}(t)+\alpha_{1}(t)(\bar{C}-\bar{C}^{*})+\alpha_{2}(t) (\bar{C}-\bar{C}^{*})^{2}+\alpha_{3}(t)(\bar{C}-\bar{C}^{*})^{3}+\mathcal{O} (\bar{C}^{4}) \tag{7}\] \[=\theta_{0}(t;\bar{C}^{*})+\theta_{1}(t;\bar{C}^{*})\cdot\bar{C} +\theta_{2}(t;\bar{C}^{*})\cdot\bar{C}^{2}+\theta_{3}(t;\bar{C}^{*})\cdot\bar {C}^{3}+\mathcal{O}(\bar{C}^{4}) \tag{8}\] with the drift parameters \(\theta_{0,\ldots,3}(t;\bar{C}^{*})\equiv\theta_{0,\ldots,3}(\bar{C}^{*})\) per rolling window and the constant diffusion \(D^{(2)}(\bar{C})=\theta_{4}^{2}\equiv const.=:\sigma^{2}\). We choose uninformed priors which are given by \[p_{\text{prior}}(\theta_{0},\theta_{1})=\frac{1}{2\pi(1+\theta_{1}^{2})^{ \frac{3}{2}}} \tag{9}\] for the linear part of the drift function and the Jeffreys scale prior [49] \[p_{\text{prior}}(\theta_{4})=\frac{1}{\theta_{4}} \tag{10}\] for the noise level \(\theta_{4}\) with suitable prior ranges. The priors of higher order parameters are chosen to be \[p_{\text{prior}}(\theta_{2}) =\mathcal{N}(\mu=0,\tilde{\sigma}=4)\text{ and} \tag{11}\] \[p_{\text{prior}}(\theta_{3}) =\mathcal{N}(\mu=0,\tilde{\sigma}=8) \tag{12}\] with Gaussian distributions \(\mathcal{N}\) centred around the mean \(\mu=0\) with a standard deviation \(\tilde{\sigma}=4\) and \(\tilde{\sigma}=8\). Since we consider economic systems to operate on multiple time scales which concurrently lead often to Non-Markovian time series, we additionally introduce a two-dimensional Non-Markovian model analogue to Willers and Kamps [54]. The model takes the form \[\frac{\mathrm{d}\bar{C}(t)}{\mathrm{d}t} =D^{(1)}_{\bar{C}}(\bar{C},t)+\sqrt{D^{(2)}_{\bar{C}}(\bar{C},t)}\cdot\lambda \tag{13}\] \[\frac{\mathrm{d}\lambda(t)}{\mathrm{d}t} =D^{(1)}_{\lambda}(\lambda,t)+\sqrt{D^{(2)}_{\lambda}(\lambda,t) }\cdot\Gamma(t) \tag{14}\] with a hidden Ornstein-Uhlenbeck process (OU-process) \(\lambda\), drift \(D^{(1)}_{\lambda}(\lambda,t)=-\frac{1}{\theta_{5}^{2}}\cdot\lambda\) and diffusion \(D^{(2)}_{\lambda}(\lambda,t)=\frac{1}{\theta_{5}^{2}}\). Drift \(D^{(1)}_{\bar{C}}(\bar{C},t)\) and diffusion \(D^{(2)}_{\bar{C}}(\bar{C},t)\) of the observed process \(\bar{C}(t)\) remain unchanged. The Non-Markovian analogue to the constant noise level \(\sigma^{2}\) of the Langevin equation is given by the composite noise level \[\Psi=\sqrt{D^{(2)}_{\bar{C}}(\bar{C},t)}\cdot\sqrt{D^{(2)}_{\lambda}(\lambda, t)}\cdot h \tag{15}\] with the small discrete sampling time step \(h\). For the OU-process, an invariant prior of a straight line and a scale prior for the diffusion are multiplied: \[p_{\text{prior}}(\theta_{5})=\frac{\theta_{5}}{2\pi\left(1+\left(-\frac{1}{ \theta_{5}}\right)^{2}\right)^{\frac{3}{2}}}. \tag{16}\] Furthermore, via the prior we introduce a pre-defined time scale separation of the time scales \(\tau_{\bar{C}}\) and \(\tau_{\lambda}\) of the observed and unobserved process, respectively, i.e. we require either \(\tau_{\bar{C}}>\gamma\cdot\tau_{\lambda}\) or \(\tau_{\lambda}>\gamma\cdot\tau_{\bar{C}}\) with a scale separation coefficient \(\gamma\). 
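To make the two-time-scale model concrete, the following Euler-Maruyama sketch integrates eqs. (13)-(14) with the hidden OU process \(\lambda\); the drift and coupling functions as well as all parameter values are illustrative placeholders rather than estimated quantities.

```python
import numpy as np

def simulate_two_scale(D1_C, D2_C, theta5, h=1.0, n_steps=5000, x0=0.3, seed=0):
    """Euler-Maruyama integration of eqs. (13)-(14) with a hidden OU process lambda."""
    rng = np.random.default_rng(seed)
    C = np.empty(n_steps); lam = np.empty(n_steps)
    C[0], lam[0] = x0, 0.0
    for t in range(1, n_steps):
        # hidden slow OU process, eq. (14): drift -lambda/theta5^2, diffusion 1/theta5^2
        lam[t] = lam[t - 1] + (-lam[t - 1] / theta5**2) * h \
                 + np.sqrt(h / theta5**2) * rng.standard_normal()
        # observed process, eq. (13), driven by lambda instead of white noise;
        # the composite noise level of eq. (15) is Psi = sqrt(D2_C) * sqrt(1/theta5^2) * h
        C[t] = C[t - 1] + D1_C(C[t - 1]) * h + np.sqrt(D2_C(C[t - 1])) * lam[t - 1] * h
    return C, lam

# illustrative linear drift around a fixed point C* = 0.3 and constant coupling
C_sim, lam_sim = simulate_two_scale(
    D1_C=lambda c: -0.5 * (c - 0.3),  # placeholder drift of the observed process
    D2_C=lambda c: 0.02,              # placeholder constant coupling
    theta5=10.0,                      # sets the slow OU time scale tau_lambda = theta5^2
)
```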
The characteristic time scales [46] are approximated by \[\tau_{\nu}=\Bigg{|}\left(\left.\frac{\mathrm{d}D^{(1)}_{\nu}(\nu,t)}{\mathrm{ d}\nu}\right)^{-1}\right|\Bigg{|}_{\nu=\nu^{*}}\qquad\quad\text{with }\nu\in\{\bar{C},\lambda\}. \tag{17}\] The priors for the model of the observed data \(\bar{C}(t)\) remain unchanged as well apart from the term \(D^{(2)}_{\bar{C}}(\bar{C},t)=\theta_{4}^{2}\) which corresponds to a coupling constant in the Non-Markovian model. We simply apply a Gaussian prior like the one in equation (11) to \(\theta_{4}\) in this case. Inspired by the formalism of linear stability analysis for both models we calculate the drift slope \[\zeta=\left.\frac{\mathrm{d}D^{(1)}_{\bar{C}}(\bar{C})}{\mathrm{d}\bar{C}} \right|_{\bar{C}=\bar{C}^{*}} \tag{18}\] per window as a Bayesian parametric resilience measure. For the Markovian model the calculations are performed with the open-source Python toolkit _antiCPy_[16, 17]. In this modelling framework, a stable state corresponds to a negative drift slope \(\zeta\), whereas a changing sign indicates destabilisation via a bifurcation. More details on the procedure can be found in [18]. ## 3 Results ### Estimated GLE Model The data is split into \(n_{B}=10\) equally wide bins and an initial modelling attempt with a rather long kernel \(k_{\max}=10\) is tested, i.e. \(\mathcal{K}_{q}=0\) for all \(q>k_{\max}\). It shows a plateau emerging at around \(k\geq 6\) (cf. appendix B). Hence, a model with \(k_{\max}=6\) is used as a reasonable length for the memory kernel and its goodness-of-fit is evaluated. We then use a very conservative estimation of the memory up to \(k_{\max}=3\) to evaluate the GLE's predictive power. #### 3.1.1 Goodness-of-fit A time series of length \(10^{5}\) is simulated via Euler-Maruyama integration of the GLE model to compute the autocorrelation function (ACF) and to compare it to the ACF of the original time series. The best model is chosen via MAP estimation in the Bayesian framework of [53], but selecting the mean estimation yields almost identical results. Both ACFs are computed via the function _statsmodels.tsa.stattools.acf_ from the Python package _statsmodels_[41]. Figure 2 shows that the two ACFs show very good agreement up to lags of 10 trading weeks and decent agreement up to lags of 20 weeks. The distributions of the time series and the first two increments for the original and the simulated data are shown in figure 3 for the MAP estimation and also for the mean estimation. Figure 3 shows an almost perfect overlap between the two estimation procedures and a good agreement between the estimated time series and the original time series, especially for the increment distributions. Overall, these diagnostics indicate that the estimated model with memory kernel length 6 manages to reproduce these important statistical properties of the original time series. Figure 2: Comparison between the autocorrelation functions of the original time series (solid line) and the simulated data from the model with kernel length 6 (dashed line) simulated via MAP estimation of the MCMC-integrated marginal densities. Up to lag 30, the ACFs show good agreement. The alternative simulation via mean estimation of the density yields an almost identical ACF. However, the ACF based on the model with no memory kernel (dotted line) fails to capture the empirical ACF. 
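A bare-bones version of this check, i.e. Euler-Maruyama integration of an estimated GLE followed by an ACF comparison with _statsmodels_, might look as follows; `drift`, `diffusion` and `kernel` stand in for the estimated binned drift and diffusion functions and the discretised memory kernel \(\mathcal{K}_{k}\), and the values used here are placeholders (one possible discretisation convention of the memory integral is assumed).

```python
import numpy as np
from statsmodels.tsa.stattools import acf

def simulate_gle(drift, diffusion, kernel, x0, n_steps, h=1.0, seed=0):
    """Euler-Maruyama integration of the discretised GLE, eq. (6)."""
    rng = np.random.default_rng(seed)
    x = np.empty(n_steps); x[0] = x0
    for t in range(1, n_steps):
        # discretised memory term: sum_k K_k * C(t - 1 - k)
        memory = sum(kernel[k] * x[t - 1 - k] for k in range(min(len(kernel), t)))
        x[t] = x[t - 1] + (drift(x[t - 1]) + memory) * h \
               + np.sqrt(diffusion(x[t - 1]) * h) * rng.standard_normal()
    return x

# placeholder drift, diffusion and kernel standing in for the MAP estimates
sim = simulate_gle(drift=lambda c: -0.4 * (c - 0.3),
                   diffusion=lambda c: 0.01,
                   kernel=np.array([0.05, 0.03, 0.02, 0.01, 0.0, 0.01]),
                   x0=0.3, n_steps=100_000)

acf_sim = acf(sim, nlags=100)
acf_data = acf(C_bar.to_numpy(), nlags=100)  # C_bar from the earlier preprocessing sketch
```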
Because the realised values of the time series \(\bar{C}\) are not distributed uniformly, we also use a modelling procedure with unequal bin widths so that each of the 10 bins has the same amount of data. However, this barely changes the model diagnostics shown in figures 2 and 3. The only noticeable change is that for long kernels with length \(\geq 10\), the ACF seems to be captured a little worse in the shown range of figure 2, but manages to fit more closely to the empirical ACF for large lags at around \(r\approx 100\). While the regular LE without memory has a very similar increment distribution, the ACF is much worse than for the GLE. #### 3.1.2 Estimated Memory Kernel To make further inference on the memory kernel and to estimate its quantitative effect, the posterior distribution of the model with memory length 6 is sampled via MCMC. 100 walkers are simulated for \(10^{5}\) time steps and after a burn-in period of 450 initial time steps is discarded, the chains are thinned by only keeping every \(450^{\text{th}}\) step to obtain uncorrelated samples. Bayesian CIs can now be calculated at the 95% level for each parameter in the kernel function and the results are shown in figure 4. The CIs of \(\mathcal{K}_{4}\) are already very close to the zero line and those of \(\mathcal{K}_{5}\) include zero, meaning that there is no evidence for a memory term at five weeks distance. Interestingly, the CIs for \(\mathcal{K}_{6}\) clearly exclude the value zero. Because it is difficult to exactly identify the beginning of the plateau in the memory aggregation in figure 9, it may be possible that the plateau already emerges at \(k=5\) and that the nonzero memory kernel \(\mathcal{K}_{6}\) is therefore unreliable. However, the results in figure 4 clearly imply a nonzero memory effect for four time steps with a clearly nonzero effect strength for memories up to \(k=3\) trading weeks. Figure 3: KDE plots to compare the distributions between the real data and the MCMC samples for the two estimated GLE models with kernel length 6 for the time series data \(\bar{C}_{t}\) (left), the increments \(\bar{C}_{t}-\bar{C}_{t-1}\) (centre) and \(\bar{C}_{t}-\bar{C}_{t-2}\) (right). Differences between the two simulated models are hardly visible and overall, there is a good overlap with the real data. For the LE without memory, the respective distributions are depicted in appendix C. #### 3.1.3 Prediction via the GLE with Kernel Length 3 With a conservative interpretation of the results in section 3.1.2, a model with memory length \(k=3\) is used to evaluate the GLE's power to predict a future value \(y_{t+1}\) by forecasting an accurate prediction \(\hat{y}_{t+1}\). It is tested against a regular Langevin equation (LE) model without any memory effects (corresponding to \(k=0\) and also estimated with the code in [53]) and against the naive benchmark of predicting the next time step \(y_{t+1}\) by simply setting it to the last previously known value: \(\hat{y}_{t+1}=y_{t}\). To test these three methods, the Langevin models are trained on the first \(\alpha\%\) of the time series (the training data) and the predictions of the GLE, the LE and the naive forecast are evaluated on both the training data (as in-sample predictions) and on the remaining \(1-\alpha\%\) test data (as out-of-sample predictions). The coefficient of prediction \(\rho^{2}\) is used to evaluate their predictive accuracy. 
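The evaluation protocol can be sketched as follows, shown here only for the naive benchmark (the LE and GLE forecasts would replace the naive predictions with model-based one-step-ahead predictions); the coefficient of prediction used below is defined formally in the next equation.

```python
import numpy as np
from sklearn.metrics import r2_score

y = C_bar.to_numpy()                      # weekly mean market correlation (earlier sketch)
alpha = 0.90
split = int(alpha * len(y))
train, test = y[:split], y[split:]

# naive benchmark: predict the next value by the last observed one
naive_in = train[:-1]
naive_out = np.concatenate(([train[-1]], test[:-1]))

rho2_in = r2_score(train[1:], naive_in)   # in-sample coefficient of prediction
rho2_out = r2_score(test, naive_out)      # out-of-sample coefficient of prediction
```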
If \(n\) observations \(y_{1,\ldots,n}\) are forecasted as \(\hat{y}_{1,\ldots,n}\), then the coefficient of prediction is given by \[\rho^{2}=1-\frac{\sum_{i=1}^{n}(\hat{y}_{i}-y_{i})^{2}}{\sum_{i=1}^{n}(\bar{y}-y_{i})^{2}} \tag{19}\] with \(\bar{y}\) denoting the mean value. It takes the value \(\rho^{2}=1\) if the prediction is always exactly true, \(\rho^{2}=0\) if the prediction is only as accurate as always using the mean \(\bar{y}\), and \(\rho^{2}<0\) if it is less accurate than using the mean; it is computed via the function _sklearn.metrics.r2_score_ from [33]. The results in table 1 show that the GLE model consistently achieves the highest accuracy on in-sample and out-of-sample predictions for the three chosen test data sizes. Notably, the negative \(\rho^{2}\) of the naive method for the test data indicates that the out-of-sample prediction is by no means trivial, meaning that the low, but positive, \(\rho^{2}\) of the GLE on the test data is nevertheless a good performance. The GLE achieves slightly better results than the LE, indicating that the memory effect should be taken into account for prediction tasks. Figure 5 shows the predictions of the LE and GLE for the in-sample and out-of-sample predictions with \(\alpha=90\%\). The same visual comparison between the naive forecast and the GLE can be found in appendix D.
Figure 4: Values for the memory kernel \(\mathcal{K}_{k}\) in the model with memory length 6. Credible intervals were estimated via MCMC at the 95% level. The inclusion of 0 in the credible interval of \(\mathcal{K}_{5}\), combined with the uncertainty about the beginning of the plateau in figure 9, implies that the nonzero memory effect for \(\mathcal{K}_{6}\) may be misleading. All previous memory kernels \(\mathcal{K}_{1,2,3,4}\) have nonzero values at the 95% credibility level.
\begin{table} \begin{tabular}{|l c|c c c|c c c|} \hline \multirow{2}{*}{**Model**} & \multirow{2}{*}{\(\alpha\)} & \multicolumn{3}{c|}{**In-Sample**} & \multicolumn{3}{c|}{**Out-Of-Sample**} \\ \cline{3-8} & & 80\% & 85\% & 90\% & 80\% & 85\% & 90\% \\ \hline Naive & & 0.07 & 0.15 & 0.19 & -0.21 & -0.15 & -0.11 \\ \hline LE & & 0.29 & 0.32 & 0.35 & -0.14 & -0.01 & 0.08 \\ \hline GLE & & 0.39 & 0.42 & 0.45 & 0.01 & 0.08 & 0.10 \\ \hline \end{tabular} \end{table} Table 1: Comparing the predictions of the naive forecast \(\hat{y}_{t+1}=y_{t}\), the Langevin equation (LE) and the GLE with memory kernel length \(k=3\) via their \(\rho^{2}\) score on in-sample training data and out-of-sample test data (the first \(\alpha\%\) of the time series is used as training data and the last \(1-\alpha\%\) as test data). Note that the partially negative \(\rho^{2}\) for the test data indicates that the test data is quite difficult to predict, which corresponds to the high fluctuations in the final part of the time series in figure 1. Figure 5: Comparison between one-step-ahead forecasts of the GLE model with kernel against a regular Langevin estimation without memory effects on the training data (left) and the test data (right). The identity \(f(x)=x\) is given as a benchmark for perfect predictive accuracy. Here, \(\alpha=90\%\) of the time series were used as training data. ### Hidden Slow Time Scale and Non-Markovianity Applying the resilience analysis method, described in subsection 2.4, we can deduce some interesting evidence for multiple time scales and further confirmation of Non-Markovianity present in the considered economic time series. 
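As a rough sketch of the central quantity of this analysis, the drift slope of eq. (18) can be obtained per rolling window by fitting the cubic drift parametrisation of eq. (8) and differentiating it at the window mean \(\bar{C}^{*}\). The ordinary least-squares fit below is only a crude stand-in for the Bayesian estimation actually performed with _antiCPy_, and the window length is a placeholder.

```python
import numpy as np

def drift_slope_per_window(y, window=400, h=1.0):
    """Crude per-window estimate of the drift slope zeta, eq. (18)."""
    slopes = []
    for start in range(0, len(y) - window, window):
        w = y[start:start + window]
        c_star = w.mean()                            # fixed point approximated by the window mean
        dy = np.diff(w) / h                          # finite-difference estimate of dC/dt
        X = np.vander(w[:-1], N=4, increasing=True)  # design matrix [1, C, C^2, C^3], eq. (8)
        theta, *_ = np.linalg.lstsq(X, dy, rcond=None)
        # zeta = dD1/dC at C = C*: theta_1 + 2*theta_2*C* + 3*theta_3*C*^2
        slopes.append(theta[1] + 2 * theta[2] * c_star + 3 * theta[3] * c_star**2)
    return np.array(slopes)

zeta_hat = drift_slope_per_window(C_bar.to_numpy())  # C_bar from the earlier sketch
```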
The results for both the simple Markovian and multi-scale Non-Markovian model (cf. (5) and (13), respectively) are compared in figure 6 (a-c). For better readability the parameters of the calculations are listed in table 2 of appendix E. Stepanov et al. [45] argue for quasi-stationary economic states which are occupied over finite time periods, before they transition into another quasi-stationary economic state. The approximated state potentials from the mean market correlation in the article [45] suggest that there might be shifts in the fixed point positions over time, but no bifurcation-induced tipping (B-tipping) is involved, i.e. no qualitative change of stable and unstable fixed points or attractors is observed. Instead of B-tipping mechanisms, the intrinsic economic stochasticity drives the jumps between alternative quasi-stationary states, which is a mechanism basically related to noise-induced tipping (N-tipping). If we choose a model that captures the key features of the data generating process, we should thus be able to uncover generally negative drift slope estimates \(\hat{\zeta}\), corresponding to data of a locally quasi-stationary state in each rolling window of the mean market correlation \(\bar{C}(t)\) the data of which is shown again in figure 6 (a). In this spirit, we find the Non-Markovian model with an unobserved slow time scale, i.e. \(\tau_{\lambda}>\gamma\cdot\tau_{\bar{C}}\) with \(\gamma=2\), to yield the expected result of negative drift slope estimates \(\hat{\zeta}\) as presented in figure 6 (b) and indicated by the red solid line with green credibility bands (CBs). This result is supported by rather intuitive qualitative considerations: Economic processes are well-known to operate on various fast and slow time scales, on the one hand e.g. high frequency trading, trading psychology and sudden external events, whether that might be political, sociological, severe weather or other impacts. These fast-scale processes could be termed as "economic weather" in a climatologist's metaphor. On the other hand, long-term "economic climate evolution" takes place on much slower time-scales. Examples could be innovation processes and technical revolutions like the invention of the steam engine or the internet, economic cycle theories like that of Marx (\(\tau\sim 2\,\mathrm{a}\) to \(10\,\mathrm{a}\)) [29, 10], Keynes, Schumpeter or Kondratjew [3, 22, 40], cycles of fiscal and demographic developments [48], cultural evolution [51] and generation changes influencing economic reasoning, adaptions to climate change and the scarcity of resources and much more. Keeping that in mind, the trading day resolution of the data is rather fine-grained and it might be reasonable to assume that a hidden slow time scale is present in the data. The slow time scale of the presented Non-Markov model is determined by the upper boundary of the prior range which is chosen to correspond roughly to \(6\,\mathrm{a}\) which is the time of an average business cycle and coincides with the first zero-crossing of the autocorrelation function of the mean market correlation \(\bar{C}(t)\). If the prior is chosen broader the model converges to roughly \(700\,\mathrm{a}\) to \(4000\,\mathrm{a}\) which is not a plausible magnitude of the time scale of economic evolution (cf. Appendix E). However, the main results of local quasi-stationary economic states is not affected by the prior choice. 
The estimation of a more reasonable magnitude of the slow time scale without prior restriction might be prohibited by the very limited amount of data per window, since the window range does not even include one complete business cycle, which would be the smallest proposed slow time scale candidate. Further note that the two-time scale Non-Markovian model in eq. (13) incorporates noise in the slow non-observed variable \(\lambda\). In that way it could formally reflect intermediate dynamics on a time scale \(\tau_{\mathrm{N}}\) with \(\tau_{\bar{C}}<\tau_{\mathrm{N}}<\tau_{\lambda}\). But since it is coupled to the mean market correlation \(\bar{C}(t)\) on trading day resolution, we consider the noise to operate on a time scale \(\tau_{\mathrm{N}}<\tau_{\bar{C}}\). In contrast to the multi-scale Non-Markovian model, the mono-scale Markovian model cannot reflect the local quasi-stationarity of the economic states postulated in Stepanov et al. [45]; instead, the drift slope estimates \(\hat{\zeta}\), indicated by the blue solid line with orange CBs in figure 6 (b), suggest persistent latent instability with \(\hat{\zeta}\approx 0\). The noise analogues \(\sigma\) and \(\Psi\) of the Markovian and Non-Markovian model, respectively, are almost identical, as observable in figure 6 (c) following the same color-coding. The noise level seems to increase over the years, with a clear increase in the periods of the global financial crisis and the Euro crisis, which accounts for a higher N-tipping probability in this highly turbulent economic period. The noise plateau of roughly one window length around the end of the Asian and the beginning of the Russian financial crises around 1998 is probably the result of the outliers that are incorporated into the windows.
Figure 6: Results of the resilience analyses based on the Markovian Langevin equation (5) and the Non-Markovian model (13) with an unobserved slow time scale \(\tau_{\lambda}\), i.e. under the prior assumption \(\tau_{\lambda}>2\cdot\tau_{\bar{C}}\): (a) For the mean market correlation \(\bar{C}(t)\). (d) For a synthetic dataset \(x\) that shares the multi-scaling features and Non-Markovianity with the economic time series \(\bar{C}(t)\). (b) The Markovian model suggests persistent latent instability, i.e. \(\hat{\zeta}\approx 0\), whereas the Non-Markovian slopes agree with locally quasi-stationary economic states as observed in Stepanov et al. [45]. (c) The noise level estimates are almost identical and tend to increase over time, which might reflect more turbulent economic times towards the financial crisis of 2008. The plateau of ca. one rolling window length around 1998 might be due to the incorporation of some outliers in the time of the ending Asian crisis and the beginning of the Russian financial crisis. (e) Drift slope results of the synthetic dataset \(x\), analogous to (b). The qualitative results are in good agreement with (b), which might be a hint that the mean market correlation \(\bar{C}(t)\) exhibits Non-Markovian features and is additionally governed by processes on a slower time scale \(\tau_{\lambda}\). (f) The estimated noise levels of the synthetic dataset \(x\) are almost identical, which is also observed for the noise levels of the mean market correlation \(\bar{C}(t)\) in (c). For more details see the running text. 
Since the discussed observations alone only allow for relatively weak qualitative deduction of the discussed features of multi-scaling and Non-Markovianity, we additionally perform an analogous analysis on a synthetic time series \(x\) that shares the key features of a hidden slow time scale and Non-Markovianity with the original one. In figure 7 we provide a comparison of the per definition stationary first differences, (a) of the original mean market correlation \(\bar{C}(t)\) and (b), of the synthetic time series \(x\). The noise level in \(x\) is assumed to increase over time to mirror the noise level evolution of the mean market correlation \(\bar{C}(t)\), suggested by the estimates in figure 6 (c), and is adjusted to cover almost the range of the first differences in \(\bar{C}(t)\). Only the positive trend of the mean market correlation visible in figure 6 (a) is not included in the simulations of \(x\). In that way the PDFs of the two time series' first differences are shaped similar apart from the fact that the highly centered probability mass of the mean market correlation \(\bar{C}(t)\) with steep tails due to rare outliers is a bit more smeared out into flatter tails of the synthetic time series \(x\). More simulation details can be found in the Appendix E. The resilience analyses on the synthetic time series \(x\) are shown in figure 6 (d-f) and are in very good agreement to the original analyses results in figure 6 (a-c). That appears at least as an independent qualitative confirmation of our findings' interpretation. Moreover, this interpretation is supported by two additional facts: First, the estimation of a Markovian model on the Gaussian kernel detrended version of the mean market correlation \(\bar{C}(t)\) results in similar results to the multi-scale Non-Markovian model. This strengthens our previous findings, because the detrending subtracts a non-stationary slow process suspected to be present in the data. We notice that weaker detrending leads to positive trends in the drift slope estimates \(\hat{\zeta}\) which could be due to increasing distortions due to incomplete detrending of the non-stationarity. Second, we fit a multi-scale Non-Markovian model with inverse time scale separation \(\tau_{\bar{C}}>\gamma\cdot\tau_{\lambda}\) with \(\gamma=2\) (i.e. the observed trading day time scale of the mean market correlation \(\bar{C}(t)\) is considered to be at least two times slower than the hidden time scale). This leads to results similar to the _Markovian_ model with and without detrending. This is an expected result, since the prior restriction of the time scale separation basically restricts the model to the Markovian case in which only the trading day time scale can be resolved apart from an even faster stochastic contribution. In other words, the prior assumption \(\tau_{\bar{C}}>\gamma\cdot\tau_{\lambda}\) with \(\gamma=2\) prohibits the incorporation of an unobserved slower time scale even if it is present in the data. The discussed results are presented in more detail in Appendix E. ## 4 Discussion and Conclusion The estimated GLE model manages to reproduce the statistical properties of the original data for the end-of-week correlations as shown in figs. 2 and 3. The estimated memory kernel parameters show that even with a highly conservative interpretation of the 95% credible level, there are clearly nonzero memory effects for memory terms for all lags as far back as a lag of 3 weeks. 
Therefore, it is advised to use a model with memory to describe the correlation of the S&P500 market, which is an improvement over the Markovian Langevin model estimated in [45]. Moreover, the GLE estimation presented in this article achieves a high goodness-of-fit for the entire time series, whereas Stepanov et al. used a time-dependent Langevin model by splitting the time series into different intervals and estimating Markovian Langevin equations for each of them. Our work shows that this procedure can be circumvented by using a model with a memory of at least 3 trading weeks. The major advantage of our method is its possible application in predicting future market correlation: the time-dependent drift estimation in [45] has no clear or smooth functional dependence on time and therefore little information can be inferred about future values of the correlation time series. Our method needs no time dependency, generalises over the entire time series and can be used to predict future correlation values, which can be used for portfolio risk assessment. As shown in section 3.1.3, the memory kernel helps the GLE to achieve better prediction accuracy than the regular Langevin equation and much better results than the naive forecasting method of using the last observation as the predicted value. Notably, the existence of memory effects in the market's correlation structure can be interpreted in the context of volatility clustering. It is a well-known stylised fact from empirical research on financial markets that the volatility of a stock's returns tends to cluster: periods of high volatility are often followed by periods of high volatility and vice versa for low volatility [9, 12]. The correlation \(\rho_{X,Y}\) between two asset returns \(r_{X}\) and \(r_{Y}\) with expectation values \(\mu_{X},\mu_{Y}\) and volatilities \(\sigma_{X},\sigma_{Y}\) is defined as \[\rho_{X,Y}=\frac{\mathbb{E}\left[(r_{X}-\mu_{X})(r_{Y}-\mu_{Y})\right]}{\sigma_{X}\sigma_{Y}} \tag{20}\] and therefore directly includes the volatility values \(\sigma_{X}\) and \(\sigma_{Y}\). Because the time series of volatility estimators \(\sigma_{X}(t)\) shows a well-known memory effect, it is not far-fetched to assume a similar memory effect in the correlations \(\rho_{X,Y}\) between two assets or, as we have discussed in this article, in the mean correlation of the market as a whole.
Figure 7: First differences (FD) and corresponding PDFs of the raw mean market correlation data \(\bar{C}(t)\) and the synthetic dataset \(x\), which shares the key features of Non-Markovianity and an unobserved slow time scale with the mean market correlation data \(\bar{C}(t)\). (a) The FDs of the mean market correlation are highly centered around zero with some outliers. (b) The FDs of the synthetic dataset capture essentially the same range as the FDs of \(\bar{C}(t)\), but are less centered around zero. The time series is modelled with increasing coupling strength to imitate the increase of the noise level found for the mean market correlation \(\bar{C}(t)\) in figure 6 (c). (c) The FD PDFs of both time series are comparable. The PDFs only deviate to some extent in the flatter tails of the synthetic orange histogram, with less dense probability density around zero. The positive trend of \(\bar{C}(t)\) is not modelled in the simulated time series \(x\).
These considerations are complemented by a resilience analysis that involves the estimation of a mono-time scale Markovian model and a two-time scale Non-Markovian model. 
In contrast to the memoryless Markovian model, only the Non-Markovian model exhibits the negative drift slopes which are in line with the hypothesis of locally quasi-stationary economic states postulated and observed in Stepanov et al. [45]. An independent change point analysis approach also supports this view [19]. Overall, these findings provide new evidence for the existence of such locally quasi-stationary economic states and for the presence of a significant non-Markovian memory effect. Interestingly, the resilience analysis yields some evidence that a second time scale which is slower than the trading day time scale of the mean market correlation data, is involved in the underlying economic dynamics. Economic processes operate on various fast and slow time scales, e.g. day trading, trading psychology and sudden external events -- whether that might be political, sociologic, severe weather or other impacts -- may be incorporated in the fast trading day resolution of the mean market correlation data. We refer to these fast-scale processes in terms of "economic weather" to employ a metaphor from climatology (cf. also the distinction in climate-like and weather-like tasks in [47]). In contrast, the long-term evolution of the "economic climate" might involve innovation processes and technical revolutions like the invention of the steam engine or the internet, economic cycle theories like that of Marx (\(\tau\sim 2\,\mathrm{a}\) to \(10\,\mathrm{a}\)), Keynes, Schumpeter or Kondratjew [29, 10, 3, 40, 22, 48], cultural evolution [51] and generational changes influencing economic reasoning, adaptions to climate change and the scarcity of resources and much more. However, we were not able to derive an economically reasonable magnitude of the slow time scale which we would expect to lie in the range of decades up to hundred years corresponding to well-known economic cycle theories or cultural evolution processes. Instead, our applied MCMC model estimation without prior range restriction converges to a hidden slow time scale of roughly \(700\,\mathrm{a}\) to \(4000\,\mathrm{a}\). Nevertheless, our results suggest that there should be involved at least two time scales in the data-generating process which is an interesting starting point for future research. Notably, it is not particularly surprising that we could not quantify the hidden time scale, since we employ a very simple model parametrisation, have only access to one variable of the high-dimensional economic state space and perform our estimation on small windows that not even include the smallest economic cycle time scale of roughly \(2\,\mathrm{a}\) to \(10\,\mathrm{a}\) that typically correspond to business cycles. Against this background it might be a very interesting challenge of future research to develop more realistic models and estimation procedures that perform reliably under the circumstances of limited data per window and incomplete variable sets to uncover the manifold of hidden time scales in the complex system of human economy.
2307.03891
MARBLER: An Open Platform for Standardized Evaluation of Multi-Robot Reinforcement Learning Algorithms
Multi-Agent Reinforcement Learning (MARL) has enjoyed significant recent progress thanks, in part, to the integration of deep learning techniques for modeling interactions in complex environments. This is naturally starting to benefit multi-robot systems (MRS) in the form of multi-robot RL (MRRL). However, existing infrastructure to train and evaluate policies predominantly focus on the challenges of coordinating virtual agents, and ignore characteristics important to robotic systems. Few platforms support realistic robot dynamics, and fewer still can evaluate Sim2Real performance of learned behavior. To address these issues, we contribute MARBLER: Multi-Agent RL Benchmark and Learning Environment for the Robotarium. MARBLER offers a robust and comprehensive evaluation platform for MRRL by marrying Georgia Tech's Robotarium (which enables rapid deployment on physical MRS) and OpenAI's Gym interface (which facilitates standardized use of modern learning algorithms). MARBLER offers a highly controllable environment with realistic dynamics, including barrier certificate-based obstacle avoidance. It allows anyone across the world to train and deploy MRRL algorithms on a physical testbed with reproducibility. Further, we introduce five novel scenarios inspired by common challenges in MRS and provide support for new custom scenarios. Finally, we use MARBLER to evaluate popular MARL algorithms and provide insights into their suitability for MRRL. In summary, MARBLER can be a valuable tool to the MRS research community by facilitating comprehensive and standardized evaluation of learning algorithms on realistic simulations and physical hardware. Links to our open-source framework and videos of real-world experiments can be found at https://shubhlohiya.github.io/MARBLER/.
Reza Torbati, Shubham Lohiya, Shivika Singh, Meher Shashwat Nigam, Harish Ravichandar
2023-07-08T03:58:23Z
http://arxiv.org/abs/2307.03891v4
MARBLER: An Open Platform for Standardized Evaluation of Multi-Robot Reinforcement Learning Algorithms ###### Abstract Multi-agent reinforcement learning (MARL) has enjoyed significant recent progress, thanks to deep learning. This is naturally starting to benefit multi-robot systems (MRS) in the form of multi-robot RL (MRRL). However, existing infrastructure to train and evaluate policies predominantly focuses on challenges in coordinating virtual agents, and ignores characteristics important to robotic systems. Few platforms support realistic robot dynamics, and fewer still can evaluate Sim2Real performance of learned behavior. To address these issues, we contribute MARBLER: Multi-Agent RL Benchmark and Learning Environment for the Robotarium. MARBLER offers a robust and comprehensive evaluation platform for MRRL by marrying Georgia Tech's Robotarium (which enables rapid prototyping on physical MRS) and OpenAI's Gym framework (which facilitates standardized use of modern learning algorithms). MARBLER offers a highly controllable environment with realistic dynamics, including barrier certificate-based obstacle avoidance. It allows anyone across the world to train and deploy MRRL algorithms on a physical testbed with reproducibility. Further, we introduce five novel scenarios inspired by common challenges in MRS and provide support for new custom scenarios. Finally, we use MARBLER to evaluate popular MARL algorithms and provide insights into their suitability for MRRL. In summary, MARBLER can be a valuable tool to the MRS research community by facilitating comprehensive and standardized evaluation of learning algorithms on realistic simulations and physical hardware. Links to our open-source framework and the videos of real-world experiments can be found at [https://shubhlohiya.github.io/MARBLER/](https://shubhlohiya.github.io/MARBLER/). ## I Introduction With increasing demand for robotics to operate in complex real-world environments, coordination of multiple robots is becoming paramount. However, the complexity of exact solutions to important problems (e.g., coverage control [1], path-planning [2], and task allocation [3]) grows exponentially as the number of robots increases [4]. Consequently, Multi-Robot Reinforcement Learning (MRRL) [5] is emerging as a promising alternative paradigm to address this challenge. MRRL has proven useful for delivery robots [6], coordinated robotic exploration [1], multi-robot communication [7, 8], multi-robot path planning [9], multi-robot target localization [10] and more [11]. However, despite being developed for robotics, learning algorithms are rarely evaluated in the real world, with a few notable exceptions [12, 13, 14, 15]. Even these exceptions were tested on smaller teams (2, 2, 3, and 4 robots, respectively) and on ad-hoc platforms, rendering reproducibility time-consuming and difficult. In contrast, Multi-_Agent_ Reinforcement Learning (MARL) algorithms can be evaluated in a systematic way in many standardized simulated environments, such as the Multi-Agent Particle Environment (MPE) [16] and the StarCraft Multi-Agent Challenge (SMAC) [17]. While it might be possible to use existing MARL environments to evaluate algorithms developed for MRS, they lack realistic robot dynamics and likely have a large sim2real gap. Further, they do not directly allow for evaluation and benchmarking on physical robots. 
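For readers unfamiliar with the Gym interface referred to throughout, the following is a minimal sketch of the standard interaction loop. The environment ID is hypothetical and used only for illustration; the scenario names actually registered by a platform such as MARBLER, and the exact step signature, depend on the installed Gym version.

```python
import gym           # classic OpenAI Gym interface (Gymnasium's newer API differs slightly)
import numpy as np

# Hypothetical usage sketch: "PredatorCapturePrey-v0" is an invented ID.
env = gym.make("PredatorCapturePrey-v0")

obs = env.reset()
episode_return, done = 0.0, False
while not done:
    action = env.action_space.sample()           # a trained MARL policy would act here
    obs, reward, done, info = env.step(action)   # classic 4-tuple step API
    episode_return += float(np.sum(reward))      # multi-agent wrappers may return per-robot rewards
    done = bool(np.all(done))                    # ...and per-robot done flags
print("episode return:", episode_return)
```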
In this work, we develop an integrated and holistic platform that can enable seamless training of MRRL policies and their evaluation on physical robots. Specifically, we contribute **M**ulti-**A**gent **RL** Benchmark and **L**earning **E**nvironment for the **R**obotarium (MARBLER). MARBLER is a bridge between the MARL community and the physical robots in the Robotarium [19] that makes it easy to evaluate MRRL algorithms and design novel scenarios. The Robotarium is a remotely-accessible, publicly-available, and free-to-use testbed for MRS that allows for up to 20 robots at once in a highly-customizable environment. As such, MARBLER enables machine learning researchers to develop and test algorithms for physical robots, and control theorists to experiment with state-of-the-art (SOTA) learning algorithms. Our MARBLER platform has the following key benefits: 1. The simulated robots in MARBLER exhibit dynamics similar to that of physical robots as it is built on top of the Robotarium's simulator. Further, MARBLER includes support for barrier certificates to prevent collisions, forcing algorithms to learn in realistic settings. 2. MARBLER inherits the open-access benefits of the Robotarium, enabling anyone across the world to train coordination algorithms and systematically deploy on a physical multi-robot testbed with reproducibility. 3. MARBLER is compatible with any learning algorithm that can be used with the OpenAI Gym interface. 4. MARBLER currently has 5 novel scenarios inspired by common and challenging problems in MRS. Fig. 1: MARBLER enables users to train coordination policies with an explicit emphasis on multi-robot teams by serving as a bridge between the state-of-the-art in MARL algorithms (e.g., EPyMARL [18]) with that in multi-robot testbed (Robotarium [19]). 5. MARBLER is open-source and allows users to easily add new scenarios or modify existing ones. By creating an interface between MARL algorithms and the Robotarium, MARBLER is the first publicly-available environment that can evaluate Sim2Real capability in MRRL. Further, MARBLER can serve as a benchmark to evaluate learning algorithms in simulation with real-world constraints and readily deploy them on physical robots. In addition, we conducted detailed evaluations of existing MARL algorithms by leveraging Extended PyMARL (EPyMARL) [18] within MARBLER. Our experiments reveal insights into how different characteristics of existing algorithms (e.g., policy gradient vs. valued-based, parameter sharing, etc.) impact performance in both simulated and physical multi-robot systems. ## II Related Work ### _MARL and MRRL Platforms_ The Multi-Agent Particle Environment (MPE) [16] is a popular framework for evaluating MARL algorithms, consisting of cooperative and adversarial 2D tasks. In MPE, agents apply forces to particles which can interact with landmarks and other agents. This is a popular setup in MARL environments and has been extended by platforms such as VMAS [20]: a vectorized version of MPE that is supported by GPUs to allow for more complex scenarios and faster training. However, particle simulators have very different dynamics than real robots making them poor choices for MRRL benchmarking. Another popular MARL environment is StarCraft Multi-Agent Challenge (SMAC) [17] which is considerably more complex, requiring agents to handle partial observability over long horizons. 
However, the agent dynamics in SMAC are still considerably different from those of real-world robots, again making it a poor choice to evaluate MRRL algorithms. There are few frameworks that are designed to benchmark MRRL algorithms and fewer that are able to evaluate Sim2Real performance of algorithms. SMART [21] is one such environment. However, SMART is limited to scenarios involving autonomous driving, it only supports up to four robots, and neither their evaluation test bed nor their source code is publicly available. The other MRRL environment that allows for Sim2Real testing is MultiRoboLearn [22]: an open-source framework that provides an OpenAI Gym interface for easier integration. However, it also only supports a maximum of 4 robots, and, like SMART, it does not have a publicly available testbed. Additionally, creating new scenarios in MultiRoboLearn requires creating custom environments in Gazebo [23], introducing significant overhead. In contrast to existing environments, MARBLER's simulator closely mimics the constraints of physical robots _and_ allows researchers to evaluate Sim2Real capabilities in a standardized and reproducible way. Therefore, MARBLER is the first MRRL benchmark that has both a realistic simulator and a physical testbed that _anyone_ can use. ### _MARL Algorithms_ A variety of MARL algorithms have been proposed that perform very well in simulated environments. PPO [24] is an effective actor-critic policy gradient method for single agent RL. MAPPO [25] is the multi-agent extension of PPO where a single centralized critic is conditioned on all agents' observations to learn a joint state value function and a separate actor for each agent tries to learn the best action to take conditioned only on the agent's individual observations. In contrast to MAPPO, QMIX [26] and VDN [27] are value-based methods that decompose the joint state-action value function into individual state-action value functions. VDN learns to decompose the team value function agent-wise while QMIX learns agent-specific Q networks and combines them monotonically via hypernetworks. In SMAC and MPE, MAPPO, QMIX, and VDN have been shown to be three of the best performing MARL algorithms [18]. However, while these algorithms have performed very well in simulation, there is limited testing of their real-world performance. [21] evaluated VDN's and QMIX's performance on robots, and [12] and [13] evaluated different versions of multi-agent PPO-based algorithms on real robots. However, these are some of the only works to do real-world evaluations, and the experiments only used at most four robots and were not easily reproducible. Another important design problem in MRRL is whether robots should share parameters. When robots share parameters, their networks all learn together, which greatly reduces the number of parameters to be trained. However, this leads to robots all learning the same behavior. To combat this, robots have unique IDs appended to their observations, but this approach still only allows robots to learn policies with limited heterogeneity [12]. Alternatively, each robot can learn its own set of network parameters, which allows robots to learn truly heterogeneous behavior but greatly increases the number of environment interactions needed for robots to learn, which can be expensive in realistic settings. ### _The Robotarium_ The Robotarium [19] is a remotely accessible multi-robot laboratory developed by Georgia Tech. 
It features a 12ft x 14ft testbed, 8 Vicon motion-capture cameras and allows up to 20 GRITSBots [28] to operate at once. The Robotarium has inbuilt control barrier certificates (CBF) [29] which provide a provable guarantee of online collision avoidance for the robots, by ensuring a minimum inter-robot distance. Control commands that don't satisfy constraints are updated with minimum possible deviation before execution, by a quadratic-program based controller. Hence, the policies learned in environments utilizing CBFs will have to adapt to these actuator constraints which makes the platform more realistic and allows policies to be run on real robots. The Robotarium also provides a Python simulator that closely resembles how the robots will act in the real Robotarium. Once programs are working in simulation, the Robotarium has a publicly accessible website where anyone in the world can upload their programs for them to then be run in the real Robotarium on real robots. ## III The MARBLER Platform Historically, evaluating MRRL algorithms using the Robotarium's simulator has been a challenging task. The lack of a standardized framework for MRRL in the Robotarium means that researchers have to create scenarios from scratch, design the low level control algorithms to control the robots after they select an action, control how the graphics are displayed, and more. As a result, to the best of our knowledge, only [30] has evaluated deep reinforcement learning algorithms with the Robotarium, despite its open accessibility to researchers. Addressing this limitation, MARBLER establishes a cohesive and user-friendly API tailored specifically for MRRL experiments. Researchers can design novel environments or employ the pre-existing default environments to execute their algorithms, thereby allowing reproducibility across studies. Moreover, owing to its integration with the Robotarium's simulator, MARBLER streamlines the process of transitioning trained robots from simulation to real-world deployment. Through the execution of a single script, users can generate the files necessary for submitting their policies to the physical Robotarium. Because the Robotarium is accessible to all users free of charge, MARBLER is the first platform that allows for the deployment of MRRL algorithms on real robots in a highly reproducible manner. ### _Core Components_ MARBLER is comprised of four core components that form the foundation of the platform: **Core:** The Core component serves as the fundamental building block of MARBLER, leveraging the Robotarium's python simulator. It encompasses critical functionalities necessary for the environment, such as environment resetting and discrete time step advancement. By utilizing the capabilities of the Robotarium's simulator and CBFs, MARBLER incorporates realistic dynamics that emulate the constraints encountered by real robots. **Scenarios:** The scenarios module defines the environments the robots interact in and the specific tasks they must accomplish. **Gym Interface:** Each scenario within MARBLER is registered as a Gym environment, which allows for direct compatibility with the algorithms and tools that support the Gym interface. **Test Pipeline:** The Test Pipeline provides a streamlined process for importing trained robots into the simulation environment, giving researchers a way to visualize robots' performance and collect test data. 
Subsequently, researchers can execute a script to prepare their files for submission to the Robotarium, which can then be uploaded to the real Robotarium, enabling evaluation in a real-world setting. ### _Scenarios_ #### Iii-B1 Existing Scenarios To facilitate immediate testing and evaluation using MARBLER, we introduce five scenarios inspired by diverse MRRL problems. These scenarios are designed to offer researchers a starting point for experimentation and can be easily customized by modifying the scenario's associated configuration file. Parameters such as the number of robots, communication methods, scenario difficulty, and more, can be adjusted as needed. A complete overview of these scenarios is available in the supplementary material1. but we include brief descriptions here: Footnote 1: Supplementary material can be found here **Simple Navigation (Fig. 2a):** Robots navigate towards a known destination point. This scenario is an easy starting point for algorithms to learn in. **Predator Capture Prey (PCP) (Fig. 2b):** Sensing robots and capture robots must work together to capture the prey. Sensing robots know the location of prey within their sensing radius and must communicate this to the blind capture robots. Inspired by the Predator Capture Prey scenario in [7]. **Warehouse (Fig. 2c):** Robots must navigate to their color zone on the right to receive a load and then unload in their color zone on the left while avoiding collisions; a Multi-Robot Path Finding environment [31]. \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline **Platform** & **Robot-based** & **Collision** & **OpenAI Gym** & **Max** & **Sim2Real** & **Public Testbed** & **Custom** \\ & **Dynamics** & **Avoidance** & **Computability** & **\#Agents/Robots** & **Capabilities** & **Available** & **Scenarios** \\ \hline MPE [16] & No & Optional (elastic) & Yes & No limit & No & N/A & Yes \\ \hline VMAS [20] & No & Optional (elastic) & Yes & No limit & No & N/A & Yes \\ \hline SMAC [17] & No & No & No & 50 & No & N/A & \begin{tabular}{c} Limited (only \\ new maps) \\ \end{tabular} \\ \hline SMART [21] & Yes & Yes & No & 4 & Yes & No & Yes \\ \hline MultiRoboLearn [22] & Yes & Yes & Yes & 4 & Yes & No & \begin{tabular}{c} Yes, but \\ Difficult \\ \end{tabular} \\ \hline Robotarium [19] & Yes & Yes (CBFs) & No & 20 & Yes & Yes & \begin{tabular}{c} Yes, but \\ Difficult \\ \end{tabular} \\ \hline **MARBLER (ours)** & **Yes** & **Yes (CBFs)** & **Yes** & **20** & **Yes** & **Yes** & **Yes** \\ \hline \end{tabular} \end{table} TABLE I: Comparison of MARBLER with other platforms. MARBLER is the only MRRL platform with Sim2Real capabilities that allows for more than four robots and has a publicly available testbed **Material Transport (MT) (Fig. (d)d):** Robots with varying speeds and capacities must collaborate to efficiently unload two zones: one nearby with a large amount of material and one further away with a small amount of material. This is a task allocation problem [3] where the robots must collaborate to unload the zones within a time limit. **Arctic Transport (AT) (Fig. (e)e):** Drones can move fast over any tile and have a large sensing radius. Ice and water robots have a limited sensing radius and move fast over some tiles but slow over other tiles. Robots are rewarded based on how far the ice/water robots are from the goal zone so the drones must guide the ice/water robots. 
This is a Multi-Robot Path Planning scenario [9] where the drones must find a path to the goal zone and communicate it to the ice/water robots. #### Iii-B2 Creating New Scenarios MARBLER provides a user-friendly approach to create new scenarios, similar to MPE and VMAS. Researchers can customize the action space, observation space, visualizations, and other relevant parameters without needing to interact with the underlying Robotarium code, allowing researchers to develop tailored scenarios that align with their specific use cases. Our GitHub includes comprehensive documentation to create new scenarios. ## IV Experiments ### _Experiment Setup_ For all our experiments, we used the EPyMARL framework to train our robots. Because the scenarios in MARBLER have been registered as Gym environments, they are directly compatible with EPyMARL. This allowed us to train policies using the various learning algorithms available in EPyMARL with no modifications. **Baselines**: We compared MAPPO [25], QMIX [26], and VDN [27] with parameter sharing. To investigate the effects of parameter sharing, we also evaluated QMIX without parameter sharing (QMIX_NS). Fig. 2: The existing scenarios in MARBLER. The top images show the robots running in simulation and the bottom images show the robots running in the Robotarium. ### _Evaluation Protocol_ We evaluated all algorithms in the PCP, Warehouse, MT, and AT scenarios with 4, 6, 4, and 4 robots respectively. Before training each algorithm, we ran a hyperparameter search in the Simple Navigation environment in a manner similar to [18]. Exact details on the hyperparameter search along with the hyperparameters we used for each algorithm can be found in the supplementary material2. Footnote 2: Supplementary material can be found here We trained VDN and QMIX for a total of 5 million time steps in each scenario. Given the conflicting evidence about off-policy algorithms being more sample efficient than on-policy algorithms due to their use of a replay buffer [18, 25], we trained MAPPO for a total of 25 million time steps. We trained five seeds for each algorithm. Because the Robotarium immediately stops a run when robots collide or go outside the acceptable boundaries, we used strict CBFs so that, if the robots attempt to get within 20cm from each other, their movement slows to the point to where they almost stop. We also penalize the robots and end the episode if robots collide or drive outside the boundaries of the environment. By doing this, the robots are able to successfully run in the Robotarium after training. In all scenarios, robots had full communication and in all scenarios except MT, robots had unlimited bandwidth in their communications. Exact details about how the environments were configured for these evaluations are included in the supplementary material. ### _Computational Requirements_ We trained all models using CPUs; primarily with a Dual Intel(R) Xeon(R) Gold 6226 [32] and an Intel(R) Core(TM) i7-12700KF. It took 16084 CPU hours to train all models (excluding hyperparameter searches). ## V Results To compare baselines, first we look at training evaluation returns to evaluate sample efficiency and how much of an impact different seeds make which can be seen in Fig. 3. Then, we compared the best performing models for each algorithm in each scenario. To do this, we took the model that achieved the highest reward for each algorithm and evaluated the model in simulation and on real robots to compare performances. 
In simulation, we ran each model for 100 episodes, and on the real robots, we ran each model for 10 episodes. The results can be seen in table II. ### _Value Based vs. Policy Gradient_ For the first 5 million timesteps, VDN is the best performing algorithm in every scenario. After 25 million steps, MAPPO's best-performing seeds approach VDN's performance in MT and AT and surpass it in Warehouse. However, all seeds in MAPPO converge to lower performance in PCP than in any of the value-based methods. Additionally, MAPPO's performance is much more influenced by its seed than in any value-based method. This contradicts the findings in [25], but VDN generally outperforms MAPPO in MARBLER, suggesting that value-based methods, particularly VDN, may be more applicable to physical robots than policy gradients. ### _Effects of Parameter Sharing_ The performance of models trained with parameter sharing vs. without parameter sharing depends on the heterogeneity of the environment. In the Warehouse scenario, where robots are homogeneous except for their loading zone locations, QMIX outperformed QMIX_NS significantly. In MT, the robots need to learn slightly different policies to ensure that all zones are unloaded within the time limit, but the optimal policies are similar. In AT, drones and ice/water robots had fundamentally different optimal policies, yet neither QMIX nor QMIX_NS utilized the drones' enhanced sensing radius, resulting in similar policies for all robots. In AT and MT, with limited heterogeneity, QMIX showed a significant performance advantage over QMIX_NS, though much less pronounced than in Warehouse. However, in the PCP scenario, where very different policies were learned for the Predator and the Capture robots, QMIX and QMIX_NS performed similarly. Thus, as heterogeneity increases, the gap between policies trained with and without parameter sharing shrinks, consistent with the findings from [12]. This suggests that in scenarios with more diverse heterogeneity, models trained without parameter sharing may outperform those trained with it. Additionally, robots trained with QMIX_NS went out of bounds a total of 10 times in simulation and 6 times on real robots. In contrast, robots trained with _all_ parameter sharing methods only went out of bounds once in simulation and once on real robots. When a single robot goes out of bounds, all robots are given a large negative penalty and the episode ends. This suggests it is much more difficult for robots to learn how to handle events where a single robot can cause all other robots to suffer a penalty without parameter sharing. Fig. 3: Evaluation returns for each algorithm during training. The solid line is the mean reward across the five seeds and the shaded area is the 95% confidence interval. The remaining timesteps for MAPPO can be seen in the supplementary material. ### _Sim2Real Gap_ As shown in table II, there are few significant differences between the algorithms' performance in simulation and in the real Robotarium. This gives strong evidence that the simulator is very similar to real robots. However, there is one key difference between the real experiments and the simulated experiments: the robots never collide in simulation and robots go out of bounds more than 6x more often on average on real robots. The only time an algorithm's metrics were significantly worse on real robots vs. in simulation was when the real robots collided or went out of bounds. 
To further evaluate this, we retrained VDN in PCP using less safe CBFs that are only effective at 17 cm and do not slow the robots as much when they are within the safety radii. In addition, we did not stop the episode or penalize the robots for driving out of bounds or colliding. This is how the Robotarium's safety mechanisms are set up by default. Other than these two modifications, we trained these models the same way as the original VDN models. As seen in table III, the difference in simulated test performance between the robots trained with the default CBFs and those trained with the safe CBFs is not significant. However, when we ran these robots in the Robotarium, they collided in 3/10 episodes, despite using the recommended method of preventing collisions, the robots never colliding in the 100 simulated episodes, and the robots with the safe CBFs never colliding. This gives more evidence that, when it comes to safety, there is a significant Sim2Real gap, which highlights the second major benefit of using MARBLER: even if robots seem to learn safe policies in simulation, those policies may not run safely in the real world. This makes MARBLER the first open platform that can be used to evaluate how safe learned MRRL policies are. ## VI Conclusion We introduce MARBLER, the first open platform with Sim2Real capabilities, realistic robot dynamics, and the ability to evaluate how safe MRRL algorithms are. MARBLER environments are fully compatible with OpenAI Gym, providing an easy interface with modern learning algorithms. To demonstrate the utility of MARBLER, we developed five MRRL scenarios and utilized the EPyMARL framework to benchmark popular MARL algorithms, both in simulation and in the real world. We believe MARBLER will help researchers benchmark Sim2Real transfer capabilities of MRRL algorithms in a systematic and reproducible way, making it an invaluable tool for the research community. 
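To make concrete the value decomposition that separates VDN from QMIX in the discussion above, the following is a minimal PyTorch sketch of a shared, ID-conditioned per-agent Q-network and VDN's additive joint value. It is an illustrative toy under assumed dimensions, not EPyMARL's implementation.

```python
import torch
import torch.nn as nn

class SharedAgentQ(nn.Module):
    """One Q-network shared by all robots (parameter sharing); a one-hot
    agent ID is appended to each observation to allow mild heterogeneity."""
    def __init__(self, obs_dim, n_agents, n_actions, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + n_agents, hidden), nn.ReLU(),
            nn.Linear(hidden, n_actions))

    def forward(self, obs, agent_ids):
        return self.net(torch.cat([obs, agent_ids], dim=-1))

# VDN's additive decomposition: the joint Q is the sum of per-agent Q-values of
# the chosen actions, so a TD loss on Q_tot trains all agents jointly.
n_agents, obs_dim, n_actions = 4, 10, 5
qnet = SharedAgentQ(obs_dim, n_agents, n_actions)
obs = torch.randn(n_agents, obs_dim)
ids = torch.eye(n_agents)                     # one-hot agent IDs
q_values = qnet(obs, ids)                     # shape (n_agents, n_actions)
actions = q_values.argmax(dim=-1)             # greedy per-agent actions
q_tot = q_values.gather(-1, actions.unsqueeze(-1)).sum()
print("Q_tot:", float(q_tot))
```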
\begin{table} \begin{tabular}{c l|c c c c|c c c c} \hline \hline & & \multicolumn{6}{c|}{**Simulated Experiments**} & \multicolumn{6}{c}{**Real-World Experiments**} \\ \cline{3-10} **Scenario** & **Metric** & **MAPPO** & **VDN** & **QMIX** & **QMIX\_NS** & **MAPPO** & **VDN** & **QMIX** & **QMIX\_NS** \\ \hline \multirow{3}{*}{**Predator Capture**} & Reward & 23.48\(\pm\)5.33 & **33.25\(\pm\)0.46** & 30.02\(\pm\)3.8 & 31.76\(\pm\)2.7 & 21.63\(\pm\)9.06 & **31.51\(\pm\)5.51** & 29.2\(\pm\)4.4 & 30.13\(\pm\)5.26 \\ & Steps & 80.4\(\pm\)1.8 & **55\(\pm\)9.18** & 69.7\(\pm\)12.32 & 62.9\(\pm\)12.75 & 81\(\pm\)0 & **57.85\(\pm\)15.79** & 70\(\pm\)13.63 & 67.5\(\pm\)11.38 \\ & Prey Left & 1.6\(\pm\)0.92 & **0.0** & 0.5\(\pm\)0.67 & 0.2\(\pm\)0.4 & 2.1\(\pm\)63 & **0.3\(\pm\)0.95** & 0.6\(\pm\)0.7 & 0.5\(\pm\)0.97 \\ & Collisions & 0 & 0 & 0 & 0 & 10\% & 0 & 0 & 0 \\ & Out of Bounds & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ \hline \multirow{3}{*}{**Warehouse**} & Reward & **36.6\(\pm\)1.8** & 28.7\(\pm\)1.49 & 27.4\(\pm\)1.02 & 1.8\(\pm\)1.25 & **35.1\(\pm\)247** & 26.2\(\pm\)0.79 & 26.89\(\pm\)1.76 & -32.11\(\pm\)11.18 \\ & Collisions & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ & Out of Bounds & 0 & 0 & 0 & 5\% & 0 & 0 & 0 & 20\% \\ \hline \multirow{3}{*}{**Material Transport**} & Reward & 4.47\(\pm\)0.93 & **5.15\(\pm\)1.3** & 3.55\(\pm\)0.85 & 2.08\(\pm\)0.85 & 3.76\(\pm\)2.19 & **5.73\(\pm\)1.16** & 3.72\(\pm\)1.14 & 1.78\(\pm\)1.97 \\ & Steps & 71\(\pm\)0 & **65.48\(\pm\)4.8** & 71\(\pm\)0 & 71\(\pm\)0 & 70\(\pm\)1.26 & **60.1\(\pm\)7.99** & 71\(\pm\)0 & 71\(\pm\)0 \\ & Material Left & 8.4\(\pm\)4.41 & **0.1\(\pm\)0.3** & 9.4\(\pm\)4.59 & 28.70\(\pm\)12.74 & 8.7\(\pm\)13.51 & **0.1\(\pm\)0.32** & 14\(\pm\)0.10 & 32.6\(\pm\)12.36 \\ & Collisions & 0 & 0 & 0 & 0 & 10\% & 0 & 0 & 0 \\ & Out of Bounds & 1\% & & 4\% & 0 & 0 & 0 & 0 & 10\% \\ \hline \multirow{3}{*}{**Arctic Transport**} & Reward & -7.23\(\pm\)1.61 & -6.98\(\pm\)1.75 & -7.13\(\pm\)1.59 & -11.29\(\pm\)3.29 & -7.91\(\pm\)1.66 & -7.86\(\pm\)3.33 & -12.15\(\pm\)9.8 & -18.49\(\pm\)13.34 \\ & Steps & 41.7\(\pm\)10.65 & 38.1\(\pm\)10.25 & **35.8\(\pm\)8.24** & 57\(\pm\)7.71 & 46.5\(\pm\)15.92 & **34.4\(\pm\)11.15** & 43.6\(\pm\)13.07 & 51.4\(\pm\)13.04 \\ \cline{1-1} & Collisions & 0 & 0 & 0 & 0 & 0 & 0 & 10\% & 0 \\ \hline \hline \end{tabular} \end{table} TABLE II: The mean returns and standard deviations of each algorithm in each scenario. The simulated results were taken over 100 episodes and the results from real robots were taken across 10 episodes. Collisions refer to the percent of episodes terminated due to robots colliding, Out of Bounds refers to the percent of episodes terminated due to robots going outside the boundary of the Robotarium. The steps for episodes that end due to a collision or a boundary violation is set to the maximum. Best values for simulation and real in each row are bolded. Note that robots never collide in simulations. \begin{table} \begin{tabular}{c l|c c} \hline \hline **Scenario** & **Metric** & **VDN** & **VDN Default** \\ \hline \multirow{3}{*}{**Predator Capture**} & Reward & 33.25\(\pm\)0.46 & 30.34\(\pm\)4.63 \\ & Steps & 55\(\pm\)9.18 & 63.20\(\pm\)12.39 \\ & Prey Left & 0\(\pm\)0 & 0.10\(\pm\)0.30 \\ \cline{1-1} & Collisions & 0 & 0 \\ \cline{1-1} & Boundaries & 0 &.03 \\ \hline \hline \end{tabular} \end{table} TABLE III: VDN using the safer CBFs vs. VDN with the default CBFs in test conditions. 
To fairly compare the algorithms, we penalized the robots for colliding or driving out of bounds even though VDN default was not penalized during training, which explains its slightly worse performance.
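As an illustration of how per-episode logs like those behind Tables II and III can be aggregated into the reported quantities (mean and standard deviation of returns and steps, collision and out-of-bounds rates), here is a short sketch with synthetic records; the values are placeholders, not measured data.

```python
import numpy as np

# Synthetic per-episode records (illustrative only).
episodes = [
    {"return": 31.2, "steps": 57, "collided": False, "out_of_bounds": False},
    {"return": 29.8, "steps": 63, "collided": False, "out_of_bounds": False},
    {"return": 12.4, "steps": 81, "collided": True,  "out_of_bounds": False},
]

returns = np.array([e["return"] for e in episodes])
steps = np.array([e["steps"] for e in episodes])
print(f"reward: {returns.mean():.2f} +/- {returns.std():.2f}")
print(f"steps:  {steps.mean():.1f} +/- {steps.std():.1f}")
print(f"collisions:    {100 * np.mean([e['collided'] for e in episodes]):.0f}%")
print(f"out of bounds: {100 * np.mean([e['out_of_bounds'] for e in episodes]):.0f}%")
```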
2306.05111
AutoCharge: Autonomous Charging for Perpetual Quadrotor Missions
Battery endurance represents a key challenge for long-term autonomy and long-range operations, especially in the case of aerial robots. In this paper, we propose AutoCharge, an autonomous charging solution for quadrotors that combines a portable ground station with a flexible, lightweight charging tether and is capable of universal, highly efficient, and robust charging. We design and manufacture a pair of circular magnetic connectors to ensure a precise orientation-agnostic electrical connection between the ground station and the charging tether. Moreover, we supply the ground station with an electromagnet that greatly increases the tolerance to localization and control errors during the docking maneuver, while still guaranteeing smooth un-docking once the charging process is completed. We demonstrate AutoCharge on a perpetual 10-hour quadrotor flight experiment and show that the docking and un-docking performance is solidly repeatable, enabling perpetual quadrotor flight missions.
Alessandro Saviolo, Jeffrey Mao, Roshan Balu T M B, Vivek Radhakrishnan, Giuseppe Loianno
2023-06-08T11:19:55Z
http://arxiv.org/abs/2306.05111v1
# AutoCharge: Autonomous Charging for Perpetual Quadrotor Missions ###### Abstract Battery endurance represents a key challenge for long-term autonomy and long-range operations, especially in the case of aerial robots. In this paper, we propose AutoCharge, an autonomous charging solution for quadrotors that combines a portable ground station with a flexible, lightweight charging tether and is capable of universal, highly efficient, and robust charging. We design and manufacture a pair of circular magnetic connectors to ensure a precise orientation-agnostic electrical connection between the ground station and the charging tether. Moreover, we supply the ground station with an electromagnet that greatly increases the tolerance to localization and control errors during the docking maneuver, while still guaranteeing smooth un-docking once the charging process is completed. We demonstrate AutoCharge on a perpetual \(10\)-hour quadrotor flight experiment and show that the docking and un-docking performance is solidly repeatable, enabling perpetual quadrotor flight missions. ## Supplementary Material **Video**: [https://youtu.be/6xYvl-qle3M](https://youtu.be/6xYvl-qle3M) ## I Introduction In recent years, unmanned aerial vehicles like quadrotors have drawn significant attention for several applications including search and rescue, transportation, and inspection due to their simplicity in design, agility, low cost, and ability to hover in place and move in 3D [1]. Nevertheless, these robots are constrained by limited battery endurance, which restricts their applicability to persistent, long-distance missions. The ideal solution for the autonomous charging problem for quadrotors requires a system that is _efficient_, to reduce power waste and heat generation; _portable_, so that it may be transported and used in different tasks; _universal_, able to charge quadrotors of different frame shapes, sizes, and battery capacities; and _robust_, such that it guarantees persistent docking performance by accommodating large control and localization errors of the quadrotor. Various solutions have been proposed for extending the flight time of quadrotors, ranging from battery expansion and battery replacement methods [2, 3, 4, 5, 6, 7] to wireless charging [8, 9, 10, 11], contact charging [12, 13, 14, 15], and tethered charging [16, 17, 18]. However, these approaches do not meet all the requirements of the ideal autonomous charging system for quadrotors, but trade off efficiency, portability, universality, and robustness. In this paper, we propose AutoCharge, an autonomous charging solution for quadrotors that is designed to meet the requirements of the ideal autonomous charging system. AutoCharge consists of a compact ground station and a flexible charging tether, as shown in Figure 2. The charging is performed through a pair of circular magnetic connectors that establish a precise, orientation-agnostic connection between the tether and the station. Therefore, by leveraging direct contact charging, AutoCharge ensures low impedance and thus high electrical efficiency while charging. Fig. 1: AutoCharge's operating principle. By leveraging circular magnetic connectors and electromagnets, the proposed charging system ensures solidly repeatable docking (a-b) and un-docking (d-e), enabling perpetual flight missions. Fig. 2: Un-docking maneuver. AutoCharge ensures universal, highly efficient, and robust charging by combining a portable ground station with a flexible, lightweight charging tether. 
The ground station is supplied with a powerful electromagnet (EM) to strengthen the magnetic field generated by the connectors. The EM is only active during docking and disabled during charging and un-docking. This guarantees a natural mechanical guide to ensure contact when approaching the ground station, but also an easy and smooth detachment when the charging operation is completed. Consequently, by leveraging the circular magnetic connectors and the EM, AutoCharge is robust to control and localization errors. The charging tether acts solely as an additional add-on to the onboard battery, hence introducing minimum quadrotor modifications and enabling AutoCharge to charge quadrotors of different frame shapes and sizes. Moreover, the ground station is supplied with a parallel balance charger, enabling the proposed system to target any lithium polymer (LiPo) battery size. All these characteristics make AutoCharge a universal charging solution. AutoCharge does not require any reserved area for the quadrotor's body to dock on, as illustrated in Figure 1. As a consequence, the ground station's dimensions are agnostic to the quadrotor's size and the station can be much smaller than the drone making AutoCharge highly portable. **Contributions.** (i) We design and present AutoCharge, an autonomous charging system for quadrotors that consists of a portable ground station and a lightweight, flexible charging tether and is capable of universal, highly efficient, and robust charging; (ii) We provide a simple and precise description of the manufacturing process used to develop the proposed ground station and charging tether. Some components of AutoCharge are simple to manufacture from a low-cost (\(\sim\$300\)) 3D printer or milling machine, while others can be directly purchased off the shelf. While commercial solutions available are remarkably expensive, reaching prices up to \(\$30\)K, AutoCharge's full price does not exceed \(\$50\). (iii) We perform an extensive evaluation of multiple magnet choices to relate their strength and weight to AutoCharge robustness to control and localization errors. Moreover, we validate AutoCharge on a continual \(10\,\mathrm{h}\) flight test and show that docking and un-docking operations are smooth and repeatable, enabling perpetual flight missions. ## II Related Works _Battery expansion_ represents the simplest option available to increase the quadrotor's mission time. However, increasing the battery size does not linearly increment the flight time, as demonstrated in [19]. One of the core reasons is that expanding the battery capacity and size also inevitably increases the weight. Consequently, the motors need to provide more power for lifting and controlling the quadrotor, resulting in more energy being consumed. _Battery replacement_ represents a highly efficient solution because it provides the shortest recovery time for a quadrotor to return to flight and can be fully automated through external robotic systems. However, battery replacement solutions generally include highly-engineered bulky systems that are specifically designed for particular robot structures [2, 3, 5]. For example, [7] proposes a dual-drum structure that holds several batteries and automatically swaps the onboard battery with a charged one. Despite being an efficient solution for extending the flight time of quadrotors, the entire system structure is bulky and composed of a tremendous number of components, from microcontrollers and control motors to locking arms and rotational encoders. 
Therefore, the system is not portable and all these components introduce several failure points that may damage the quadrotor and critically interrupt the performed mission. [6] attempts to simplify the system structure by using fundamental design principles but still does not resolve the problem of failure points during the battery replacement operation. Another major issue for battery replacement strategies is the need to precisely land on the docking station [4]. While additional mechanical components can be designed to minimize this problem, this would introduce more complexity and failure points. In conclusion, battery replacement solutions are _not universal_, _not robust_, and _not portable_. _Wireless charging_ provides a straightforward charging operation that typically only requires introducing a receiver coil on the quadrotor's frame and developing a wireless charging station supplied with a transmitter coil. When the coils are close to each other, the onboard battery begins to charge. For example, [9] presents a charging station using wireless inductive charging, the same technology used for charging smartphones and other electronic devices. However, the power transfer efficiency is only about \(75\,\mathrm{\char 37}\) when the receiver and transmitter coils are precisely aligned and it significantly degrades even more as the misalignment increases. Several works have sought to address the issues of alignment and poor power transfer efficiency [11]. For example, the authors in [10] design a wireless charging station that uses ultrasonic sensors for identifying the quadrotor's position after landing. Then, some stepper motors slide the transmitter coil under the quadrotor. As with battery replacement systems, this solution employs multiple mechanical components to coordinate and precisely move, resulting in additional failure points. Despite the advances in state estimation and mechanical systems for the alignment of the coils, even when these are accurately aligned, wireless charging efficiency remains \(25-30\,\mathrm{\char 37}\) inferior compared to tethered charging solutions [8], including AutoCharge. As a result, wireless charging solutions are _not robust_ and _not efficient_. _Contact charging_ provides high-efficiency charging by modifying a quadrotor's component, such as the landing gear, to accommodate connectors that establish electrical contact between the vehicle and the charging station after docking [14]. For example, [15] proposes new landing gears to host the wires to charge the system as well as a charging station that consists of four metal plates. After landing, the quadrotor is switched off by a weight sensor on the station and the battery gets charged. [13] presents similar landing gears with electrical connections from the battery to their lower ends and a passive centering system made of four upside-down hollow cones for correcting the landing positional error. [12] shows a mid-air docking and in-flight battery charging approach. A small quadrotor carrying a fully charged battery docks on a bigger quadrotor in mid-air and charges the battery of the latter by using electrical connectors threaded in its landing gear. Despite the appealing results, contact charging solutions require developing specific quadrotor components for connecting the battery to the external power source, hence not generalizing to different robot structures. 
Moreover, these solutions require the quadrotor to precisely land to align the electrical connectors, thus facing control challenges, such as stochastic ground effects or disturbances [20, 21], during docking. Consequently, contact charging are _not universal_ and _not portable_. _Tethered charging_ enables unlimited flight time by directly connecting the quadrotor to a charging station. Hence, this strategy does not need precise physical landing and positioning on a charging station and avoids repeated recharging. [16] employs tethered charging to perform with a quadrotor a mission in a nuclear power plant. The major drawback of tethered charging is the flight area that the quadrotor can cover. The charging tether used can not be too long due to the internal resistance and weight of the cable itself which would reduce power efficiency and maneuverability respectively. Several works have been proposed to overcome this limitation by enabling the ground station to move with the quadrotor. For example, [17] uses an unmanned ground vehicle to carry the ground station that is directly connected to the quadrotor. The vehicle follows the quadrotor and extends the flight area. However, by combining aerial and ground vehicles, the quadrotor becomes limited by the ground conditions. As a result, tethered charging solutions are _not portable_. The authors in [18] propose a charging system that uses onboard sensing to attach a tether with a pair of loose hooks mid-flight. However, this method is not orientation-agnostic because the pair of hooks must match the station's polarity, requires precise control to localize and grasp the tether, and the loose tether attachment limits the quadrotor's ability to roll and pitch to avoid detachment. ## III Methodology In this section, we introduce the operating principle, key components, and circuit diagram of AutoCharge and describe the manufacturing process performed to fabricate the entire system. The proposed charging system is easy to assemble even by non-experts. For the sake of clarity and to simplify the design, we manufacture the components of AutoCharge for charging up to 4S LiPo batteries. However, the same manufacturing process can be extended to LiPo batteries of larger capacities by including more copper rings in the connectors. Figure 3 illustrates the manufactured components of AutoCharge. The 3D printed components were designed in SolidWorks and manufactured through a low-cost Chiron 3D printer, while the circuit components were designed in EAGLE and fabricated through an OtherMill Pro. ### _Operating Principle_ AutoCharge's operating principle is illustrated in Figure 1. When the onboard battery is running low, the quadrotor approaches the charging station and the natural magnetic force generated by the EM precisely auto-aligns the connectors. Once the electrical connection is established, the EM is deactivated, the charging operation begins, and the quadrotor lands. During charging, the quadrotor's software stack remains active and no power cycling occurs. This guarantees that while refueling the quadrotor can perform multiple secondary mission tasks [22, 23, 24]. When the charging operation is completed, the quadrotor smoothly un-docks from the ground station and continues the mission. ### _Ground Station_ The ground station (Figure 2(a)) is designed to enable efficient charging once the electrical connection with the charging tether is established. The station is mounted to the ground and attached to an external power source. 
The key components of the station are an electrical circuit (Section III-D), an EM, a female circular magnetic connector, and a poly-lactic acid (PLA) enclosure. The EM generates a powerful magnetic field that attracts the magnetic head of the charging tether when the quadrotor is approaching the station. The magnetic force is then switched off during charging and un-docking. This design ensures a fast, robust docking procedure along with smooth un-docking. The ground station is designed to be flexible and adapt to different flight operations, hence trading-off between portability and robustness. For example, if the mission is carried out in an outdoor environment characterized by stochastic Fig. 3: AutoCharge consists of a compact ground station (a) and a flexible charging tether (b). The charging is performed through a pair of circular magnetic connectors (c) that establish a precise connection between the tether and the station. wind effects that degrade the control and localization performance, then it is key to strengthen the magnetic field generated by the EM. Contrarily, if the flight mission is performed indoor with relatively accurate state estimation and control algorithms, then portability can be maximized by employing a smaller-scale EM. **Manufacturing.** The female circular magnetic connector (Figure 2(c)) is manufactured in-house through a \(\mathrm{mm}\) level precision (printed circuit board) PCB mill. A through hole is added to the female connector to allow electric connection from the back. The ground station enclosure and fasteners, which are used for aligning and holding the female connector and other electronic components, are 3D printed and assembled by simply screwing the appropriate parts together visible in Figure 2(a). Overall the manufactured ground station weights \(0.56\,\mathrm{kg}\) and has the dimensions \(15\times 10\times 6\ \mathrm{cm}^{3}\). ### _Charging Tether_ The charging tether (Figure 2(b)) is a custom cable that remains always connected to the battery, dangling down the quadrotor's frame during flight operation. The cable consists of a low-resistance 20 gauge multi-core wire that connects a male JST cap to a male circular magnetic connector. The tether's dangling head is supplied with a male line of pogo pins magnetic connector that matches the female circular connector on the ground station. The male connector is designed to be slightly concave, ensuring that while docking the electrical connection is established only when the male and female circuits perfectly mate, thus avoiding potential dangerous shorting issues. The charging tether's length is arbitrary and should be chosen based on the carried flight task. If during charging the quadrotor is passively waiting for the operation to be completed, then the tether's length should be chosen short to minimize the effect on the dynamics of the system and minimize efficiency loss from increased resistance from a longer tether. Contrarily, if the quadrotor is required to perform active tasks during charging, such as inspection or surveillance, then the tether's length can be relatively long. We refer to recent works on tethered flight for a detailed study on the choice of the tether's length and how the cable's resistance affects the charging efficiency [17, 25, 26]. The charging tether's weight is mainly dominated by the weight of the magnetic connector. The magnetic strength of this connector can be fully customized for the considered application. 
Hence, trading-off portability and robustness is analogous to the EM design choice. We explore this trade-off in detail in the proposed experiments in Section V. **Manufacturing.** Our tether is composed of \(20\ \mathrm{Gauge}\) multi-core wires that connect the battery connector to the ground station. The battery connector is a JST soldered onto one end of the wires; the other end hosts the male circular magnetic connector that establishes the electrical connection with the ground station. The connector is composed of a circular magnet and a set of pogo pins. We design and 3D print a PLA enclosure that can contain both the electromagnet and pogo pins and secure them in place. ### _Circuit Diagram_ AutoCharge's circuit diagram (Figure 4) is composed of a balance charger and EM control circuitry. The balance charger is directly connected to the female circular magnetic connector, which mates with the battery. This circuit block automatically detects and supplies power to the attached number of cells, hence providing a universal, efficient, and balanced charging operation for up to \(4\) cells. The number of cells can be scaled up arbitrarily. The EM circuit is controlled through an Arduino Nano microcontroller and powered through an AC-DC converter rail, allowing charging operations anywhere near a power socket. A relay controls the switching action of the EM. In idle conditions, while the quadrotor is not attached, the relay is closed and current flows, allowing the electromagnet to pull the tether. The microcontroller detects battery attachment by measuring the amount of current flowing through the battery connector and switches the relay open, shutting down the EM. The vehicle can measure its internal battery voltage to estimate its current capacity and take off autonomously once a sufficient amount of charge has been accumulated. After the quadrotor takes off, no current flows through the connector and the relay closes after a short delay, allowing another charging iteration to occur. This provides both the benefit of robust docking from high magnetic fields and that of easy detachment. An additional wireless communication device can be added to control the EM remotely. **Manufacturing.** Each electronic block (Arduino Nano, relay, balance charger, converter, and EM), aside from the magnetic connector, was purchased off the shelf. All the components are electrically connected through soldering. Fig. 4: AutoCharge's circuit diagram. ## IV Experimental Setup We validate the robustness of AutoCharge by running multiple experiments in both indoor and outdoor environments. Specifically, the indoor experiments are conducted in the Agile Robotics and Perception Lab (ARPL) flying arenas at New York University, and the outdoor experiments are performed on a rooftop terrace. The indoor flying arena is equipped with a Vicon motion capture system which provides accurate state estimates at \(100\ \mathrm{Hz}\). For outdoor flights, a visual-inertial odometry algorithm fused with IMU measurements through an unscented Kalman filter provides state estimates at \(500\ \mathrm{Hz}\), and the vehicle is controlled using a nonlinear controller based on our previous work [27]. Trajectories are planned using trapezoidal velocity profiles. We compare different design choices of AutoCharge and evaluate the trade-off between portability and robustness introduced in Section III. 
Specifically, we alter the quadrotor's default configuration (Def) with three charging tethers of length \(0.5\ \mathrm{m}\) with different male magnetic connectors: a small neodymium magnet of weight \(0.42\,\mathrm{g}\) and pulling force \(771.11\ \mathrm{g}\) (NeodS), a medium ceramic magnet of weight \(17.5\ \mathrm{g}\) and pulling force \(2721.55\ \mathrm{g}\) (CeraM), and a large ceramic magnet of weight \(34.7\ \mathrm{g}\) and pulling force \(4989.52\ \mathrm{g}\) (CeraL). We demonstrate the universality of AutoCharge by using two quadrotors of different frame sizes, battery capacities, and thrust-to-weight ratios for conducting the experiments. The first quadrotor is equipped with a Qualcomm(r) Snapdragon(tm) board and four brushless motors and weighs \(250\ \mathrm{g}\) including the battery. This quadrotor is charged by a \(2\)-Cell/2S battery with a capacity of \(910\ \mathrm{mAh}\) that weighs \(47\ \mathrm{g}\) and has a maximum voltage of \(7.4\ \mathrm{V}\). The second quadrotor is equipped with an Nvidia(r) Jetson Xavier(tm) XX board and four brushless motors and weighs \(890\ \mathrm{g}\) including the battery. This quadrotor is equipped with a \(4\)-Cell/4S battery with a capacity of \(3000\ \mathrm{mAh}\) that weighs \(281\ \mathrm{g}\) and has a maximum voltage of \(14.8\ \mathrm{V}\). We use _SD2S_ and _NX4S_ to refer to the lighter and heavier quadrotor, respectively. ## V Results We design our evaluation procedure to address the following questions. (i) What is the impact of the charging tether's weight for different choices of magnet on the docking success, power consumption, and control performance? (ii) Can AutoCharge be employed to autonomously charge quadrotors with various frame shapes and battery capacities? (iii) Does the proposed system enable perpetual autonomous charging in a long flight mission? We encourage the reader to watch the multimedia material for additional qualitative results. ### _Portability vs Robustness_ We investigate the impact of different choices of magnetic connectors on the docking success, power consumption, and control degradation of the SD2S quadrotor. We evaluate the docking success in terms of the maximum distance from which the ground station pulls the male magnetic connector. Moreover, we compare the power consumed and the control degradation when using different tether configurations to continuously track a circular trajectory of radius \(1\,\mathrm{m}\) at \(2\,\mathrm{m/s}\) until the battery voltage reaches \(6.6\,\mathrm{V}\). The control degradation is evaluated as the root mean squared error (RMSE) between the quadrotor position and the reference trajectory at every control iteration, and the power consumption in terms of battery voltage over time. The experiments are repeated \(5\) times to estimate the mean and standard deviation of both metrics. For each experiment, the quadrotor's mass is scaled appropriately for the controller, and the ground station's EM and magnetic attractiveness remain constant. Figure 5 illustrates the results of this experiment. The additional weight increases the amount of thrust that the motors need to provide for lifting and controlling the quadrotor. Consequently, the flight time for heavier magnetic connectors is inferior to that with smaller ones, resulting in a flight time degradation by up to \(15\%\). Moreover, the results show that altering the quadrotor's system with the lighter charging tether does not significantly affect the tracking performance. 
Therefore, this demonstrates that the proposed charging solution does not significantly alter the quadrotor's system dynamics. Importantly, the results show that employing larger and stronger magnets directly impacts the docking success, by improving the pull distance by \(\times 5\). This boosted docking performance may be critical for applications where localization and control errors are unavoidable (e.g., outdoor environments affected by stochastic wind gusts), enabling the quadrotor to reliably perform precise docking operations. When the male circular magnetic connector is within the bounds of the docking success area, the attachment operation had a \(100\%\) success rate in all the performed experiments. We further study and validate this performance by controlling a quadrotor to continuously attach and detach from the ground station over \(100\) iterations. AutoCharge enables solidly repeatable attachment and detachment. We refer to the supplementary video for a qualitative demonstration. Fig. 5: Impact of different choices of magnetic connectors. (Inset) Average RMSE. ### _Universality Analysis_ Universality is a desirable characteristic of any charging system. Every system should demonstrate the capability to autonomously charge different quadrotor frame sizes and battery capacities. Therefore, we study AutoCharge's ability to autonomously charge the quadrotors SD2S and NX4S. Specifically, we control the quadrotors to repeatedly perform the docking and un-docking operations to simulate the charging process during perpetual missions. Figure 1 and Figure 6 illustrate some snapshots of this experiment. We encourage the reader to refer to the supplementary multimedia material for additional demonstrations with both quadrotors. The results show that the docking and un-docking performance is solidly repeatable while using the same connectors for different quadrotors with 2S and 4S batteries, hence validating AutoCharge as a universal charging solution. ### _Perpetual Quadrotor Flight_ We demonstrate the ability and flight time benefits of employing AutoCharge on a long perpetual flight test. Specifically, we employ the quadrotor SD2S to track multiple trajectories until the battery voltage reaches \(6.6\,\mathrm{V}\). Then, the quadrotor is required to reach the ground station, dock, and recharge. After charging is complete, the quadrotor detaches from the ground station and continues tracking the random trajectories. The experiment ends after \(10\,\mathrm{h}\). Figure 7 illustrates how the battery voltage changes over time during the entire experiment. The quadrotor consistently and robustly docks, charges, and un-docks for long periods without any human intervention. Moreover, the results show no noticeable battery degradation over the entire flight, hence validating AutoCharge for safe and efficient charging for quadrotors. Towards the end of the \(10\,\mathrm{h}\) flight, the charger's temperature protection is triggered, causing it to throttle the charging current until the ideal operating temperature range is reached. Subsequently, the charging operation is resumed and the battery is charged until completion. This behavior creates short voltage dips that characterize the last voltage peaks. 
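The high-level mission logic of this perpetual-flight experiment can be summarized by the following hedged sketch. The quad/station objects and their methods (battery_voltage, track_next_trajectory, fly_to_station, ...) are illustrative placeholders, not part of AutoCharge's actual flight stack; the 6.6 V threshold is from the experiment above, and 8.4 V is the nominal full charge of a 2S LiPo (4.2 V per cell).

```python
import time

LOW_VOLTAGE = 6.6    # V, threshold at which SD2S returns to the ground station
FULL_VOLTAGE = 8.4   # V, nominal full charge of a 2S LiPo (assumed here)

def perpetual_mission(quad, station, mission_hours=10.0):
    t_end = time.time() + 3600.0 * mission_hours
    while time.time() < t_end:
        if quad.battery_voltage() > LOW_VOLTAGE:
            quad.track_next_trajectory()       # regular mission task
        else:
            quad.fly_to_station(station.pose)  # EM pulls the tether onto the connector
            quad.land()                        # station disables the EM once current flows
            while quad.battery_voltage() < FULL_VOLTAGE:
                time.sleep(30.0)               # software stack stays powered while charging
            quad.take_off()                    # EM is off, so un-docking is smooth
```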
## VI Discussion and Limitations Autonomous charging has the potential to substantially empower future applications for quadrotors, such as expanding the range of delivery systems, persistently inspecting large crop fields to identify pests, and acting as a mobile communication hub during disaster management. Commercially available solutions do not satisfy the requirements of the ideal autonomous charging solution and are prohibitively expensive, reaching prices up to $30K [28, 29, 30, 31]. In this paper, we proposed AutoCharge, an autonomous charging system for quadrotors that is capable of universal, highly efficient, and robust charging. We validated these capabilities in several experiments where AutoCharge demonstrated high flexibility to different quadrotors, battery capacities, system dynamics, and task objectives. Moreover, we stress-tested AutoCharge for over \(10\,\mathrm{hours}\) to validate its charging repeatability. AutoCharge offers a highly flexible charging solution that can be customized to the considered application. Specifically, larger stations can employ stronger magnets, allowing docking with less accurate control. This increase in magnet force comes at the cost of less portable stations and larger external forces on the vehicle. Future work will tackle this problem by modeling the charging tether as a cable-suspended payload [32] and developing an admittance controller to accommodate large magnetic forces [33], creating a smooth transition for the quadrotor during the docking maneuver. Future work will also focus on operating the proposed charging solution without prior knowledge of the location of the ground station, instead using cameras to visually localize it and control the quadrotor in an image-based visual servoing fashion [34]. Fig. 6: Docking and un-docking performance in the indoor environment with the NX4S quadrotor. Fig. 7: Battery voltage over time during a fully autonomous persistent flight mission.
2307.05238
Moduli of abelian varieties near the locus of products of elliptic curves
We study various naturally defined subvarieties of the moduli space ${\mathcal A}_g$ of complex principally polarized abelian varieties (ppav) in a neighborhood of the locus of products of $g$ elliptic curves. In this neighborhood, we obtain a local description for the locus of hyperelliptic curves, reproving the recent result of Shepherd-Barron that the hyperelliptic locus is locally given by tridiagonal matrices. We further reprove and generalize to arbitrary genus the recent result of Agostini and Chua showing that the locus of Jacobians of genus 5 curves with a theta-null is an irreducible component of the locus of ppav with a theta-null such that the singular locus of the theta divisor at the corresponding two-torsion point has tangent cone of rank at most 3. We further show that the locus of ppav such that the gradient vanishes, for some odd theta characteristic, locally has codimension $g$ near the diagonal. Finally, we obtain new results on the locus where the rank of the Hessian of the theta function at a two-torsion point that lies on the theta divisor is equal to 2.
Samuel Grushevsky, Riccardo Salvati Manni
2023-07-11T13:10:11Z
http://arxiv.org/abs/2307.05238v2
# Moduli of Abelian varieties near the locus of products of elliptic curves ###### Abstract. We study various naturally defined subvarieties of the moduli space \(\mathcal{A}_{g}\) of complex principally polarized abelian varieties (ppav) in a neighborhood of the locus of products of \(g\) elliptic curves. In this neighborhood, we obtain a local description for the locus of hyperelliptic curves, reproving the recent result of Shepherd-Barron [2] that the hyperelliptic locus is locally given by tridiagonal matrices. We further reprove and generalize to arbitrary genus the recent result of Agostini and Chua [1] showing that the locus of Jacobians of genus \(5\) curves with a theta-null is an irreducible component of the locus of ppav with a theta-null such that the singular locus of the theta divisor at the corresponding two-torsion point has tangent cone of rank at most \(3\). We further show that the locus of ppav such that the gradient vanishes, for some odd theta characteristic, locally has codimension \(g\) near the diagonal. Finally, we obtain new results on the locus where the rank of the Hessian of the theta function at a two-torsion point that lies on the theta divisor is equal to \(2\). Research of the first author is supported in part by NSF grant DMS-21-01631. ## Introduction Most known constructions of geometrically meaningful subvarieties of the moduli space \(\mathcal{A}_{g}\) of complex principally polarized abelian varieties (ppav) are either via the Jacobian or Albanese map, or by imposing certain conditions on the theta divisor and its singularities. The two most classical such constructions are of course the locus \(\mathcal{J}_{g}^{\circ}\) of Jacobians of smooth genus \(g\) curves, and the theta-null divisor \(\vartheta_{\text{null}}\) -- the locus of those ppav that have a vanishing theta constant, or equivalently for which the theta divisor contains an even two-torsion point. Geometrically, one can further consider Jacobians of hyperelliptic curves, intermediate Jacobians of cubic threefolds, etc. Working with the theta divisor, one can impose conditions on the dimension of its singular locus, existence of points of higher multiplicity, or on the local structure of singularities. In this paper we present a unified approach to determining the local structure and some irreducible components of the subvarieties of \(\mathcal{A}_{g}\) defined in these ways, which we apply in various situations. We thus reprove recent results of Shepherd-Barron [2] characterizing the locus of hyperelliptic Jacobians locally near the locus of products of elliptic curves. We show that the locus of ppav with an (odd) two-torsion point of multiplicity three on the theta divisor is smooth, locally of codimension \(g\), as expected, near the diagonal. We further show that the locus of Jacobians with a vanishing theta null is an irreducible component of the locus of ppav with a theta-null such that the Hessian matrix of theta has rank 3 -- thus extending to arbitrary genus the results of Agostini and Chua [1] in genus 5, and generalizing our work in genus 4. We further show that the locus of products with an elliptic curve is an irreducible component of the locus where the rank of the Hessian as above is equal to 2. Our method is inspired by our recent work [13] with Hershel Farkas, where the geometric study in the neighborhood of the diagonal allowed us to give an explicit solution to the weak Schottky problem.
Our approach consists of investigating the geometry of the various loci near the locus of diagonal period matrices, i.e. geometrically near \(\mathcal{A}_{1}\times\dots\times\mathcal{A}_{1}\subset\mathcal{A}_{g}\), and using Taylor expansions of theta functions and infinitesimal geometry there. We denote \(\mathbb{H}_{g}\) the Siegel upper half-space, denote \(\mathcal{A}_{g}\) the moduli space of ppav, and denote \(p:\mathbb{H}_{g}\to\mathcal{A}_{g}\) the universal covering map, which is the quotient by the action of \(\operatorname{Sp}(2g,\mathbb{Z})\). We use \(\mathcal{J}_{g}\subset\mathcal{A}_{g}\) to denote the closure of the locus \(\mathcal{J}_{g}^{\circ}\) of Jacobians of smooth genus \(g\) curves, and denote by \(\mathcal{H}\mathcal{J}_{g}^{\circ}\subset\mathcal{H}\mathcal{J}_{g}\subset \mathcal{J}_{g}\) respectively the locus of Jacobians of hyperelliptic genus \(g\) curves and its closure. We denote \(\mathbb{H}\mathbb{J}_{g}^{\circ}\subset\mathbb{H}\mathbb{J}_{g}\subset \mathbb{J}_{g}\subset\mathbb{H}_{g}\) their respective preimages in the Siegel space, and denote \(\mathbb{H}\mathbb{J}_{g}^{\circ}\) and \(\mathbb{J}_{g}^{\circ}\) the open subsets of hyperelliptic Jacobians, and Jacobians, of smooth curves. It is a classical result of Mumford [14] that the geometrically defined locus \(\mathbb{H}\mathbb{J}_{g}\subset\mathbb{H}_{g}\) is defined from the point of view of the geometry of the theta divisor as the locus where a certain configuration of theta constants with characteristics vanishes (we'll review this below in detail). We denote \(\mathcal{R}_{g}:=(\mathcal{A}_{1}\times\mathcal{A}_{g-1})\cup(\mathcal{A}_{2} \times\mathcal{A}_{g-2})\cup\dots\subset\mathcal{A}_{g}\) the locus of decomposable (classically called reducible) ppav -- this of course includes ppav that have more than two factors, which may lie in more than one component of the above union. Finally, we denote \(\mathcal{D}_{g}:=\mathcal{A}_{1}\times\dots\times\mathcal{A}_{1}\subset \mathcal{R}_{g}\) the locus of products of elliptic curves, and denote \(\mathbb{D}_{g}\subset\mathbb{H}_{g}\) its preimage in the universal cover. One irreducible component of \(\mathbb{D}_{g}\) is the locus \(\mathbb{I}_{g}:=\mathbb{H}_{1}\times\dots\times\mathbb{H}_{1}\subset\mathbb{H }_{g}\) of diagonal period matrices. Recently, Shepherd-Barron [1] determined the local structure of \(\mathbb{H}\mathbb{J}_{g}\) near \(\mathbb{I}_{g}\). Our first result is an alternative proof of this side result of his (the main thrust, and the main results of [1], are on elliptic surfaces, which are beyond the scope of our work): **Theorem 1** (Shepherd-Barron [1, Theorem 14.6]).: _For every irreducible component \(\mathbb{X}\) of \(\mathbb{H}\mathbb{J}_{g}\) containing \(\mathbb{I}_{g}\subset\mathbb{H}_{g}\), to first order at any point of \(\mathbb{I}_{g}\), \(\mathbb{X}\subset\mathbb{H}_{g}\) is defined by the vanishing of the entries \(\tau_{ij}\) of the period matrix \(\tau\) where \((i,j)\) runs over the set of pairs that are not edges of the corresponding alkane. In particular, the branch corresponding to the linear alkane equals, to first order, the locus of tridiagonal matrices, i.e. matrices with non-zero entries only on the main diagonal and on the two diagonals directly above and below it._ **Remark 2**.: We have taken care to phrase the result above carefully on the Siegel space. 
Note that there is a delicate point here: while \(\mathcal{HJ}_{g}\subset\mathcal{A}_{g}\) is an irreducible algebraic variety, its preimage \(\mathbb{HJ}_{g}\subset\mathbb{H}_{g}\) has many irreducible components, a number of which contain \(\mathbb{I}_{g}\). The result above describes the tangent space at a point of \(\mathbb{I}_{g}\) to _every_ irreducible component of \(\mathbb{HJ}_{g}\) containing \(\mathbb{I}_{g}\). Recall that \(\vartheta_{\mathrm{null}}\subset\mathcal{A}_{g}\) denotes the locus where some even theta constant \(\theta\left[\begin{smallmatrix}\varepsilon\\ \delta\end{smallmatrix}\right](\tau,0)\) vanishes. Following [1], we denote \(\vartheta_{\mathrm{null}}^{k}\subset\vartheta_{\mathrm{null}}\) the locus where the rank of the corresponding Hessian matrix \((\partial_{z_{a}}\partial_{z_{b}}\theta\left[\begin{smallmatrix}\varepsilon \\ \delta\end{smallmatrix}\right](\tau,z))|_{z=0}\) is at most \(k\). Since the theta function for a block-diagonal period matrix factorizes as the product of theta functions, one immediately sees that \(\mathcal{A}_{g_{1}}\times\mathcal{A}_{g_{2}}\subset\vartheta_{\mathrm{null}}^ {2}\), for any \(g_{1}+g_{2}=g\) with \(g_{1},g_{2}>0\) (see also the explicit expansions in Section 3). This is to say that \(\mathcal{R}_{g}\subset\vartheta_{\mathrm{null}}^{2}\), and we prove that at least the largest irreducible component of \(\mathcal{R}_{g}\) is also an irreducible component of \(\vartheta_{\mathrm{null}}^{2}\). **Theorem 3**.: _The locus \(\mathcal{A}_{1}\times\mathcal{A}_{g-1}\) is an irreducible component of \(\vartheta_{\mathrm{null}}^{2}\)._ From Riemann's theta singularity theorem for Jacobians of curves one deduces the inclusion \(\mathcal{J}_{g}\cap\vartheta_{\mathrm{null}}\subset\vartheta_{\mathrm{null}}^ {3}\). Our next result is an alternative proof and a generalization to arbitrary genus of the recent result of Agostini and Chua [1]. They prove that in genus \(5\) there exists an irreducible component of \(\mathbb{J}_{5}\cap p^{-1}(\vartheta_{\mathrm{null}})\) that is also an irreducible component of \(p^{-1}(\vartheta_{\mathrm{null}}^{3})\) (while we recall that in genus \(4\) the equality \(\mathcal{J}_{4}\cap\vartheta_{\mathrm{null}}=\vartheta_{\mathrm{null}}^{3}\) was conjectured by H. Farkas [11] and proven by us in [1]). **Theorem 4**.: _For any genus \(g\geq 3\), \(\mathcal{J}_{g}\cap\vartheta_{\mathrm{null}}\) is an irreducible component of \(\vartheta_{\mathrm{null}}^{3}\)._ Finally, we recall that the theta-null divisor has a natural "odd" counterpart \(\mathcal{G}_{\mathrm{null}}\subset\mathcal{A}_{g}\), which is the locus of all ppav such that the gradient \((\partial_{z_{i}}\theta\left[\begin{smallmatrix}\varepsilon\\ \delta\end{smallmatrix}\right](\tau,z))|_{z=0}\) vanishes, for some odd theta characteristic \(\left[\begin{smallmatrix}\varepsilon\\ \delta\end{smallmatrix}\right]\). It was conjectured in [1] that the locus \(\mathcal{G}_{\mathrm{null}}\) is purely of codimension \(g\) in \(\mathcal{A}_{g}\). In [1] this conjecture was proven completely for all \(g\leq 5\). 
We now prove that this also holds for every component intersecting the diagonal or containing the hyperelliptic locus: **Theorem 5**.: _The locus \(\vartheta_{\mathrm{null}}\times\mathcal{A}_{1}\,\subset\,\mathcal{A}_{g-1}\times\mathcal{A}_{1}\) is an irreducible component of \(\mathcal{G}_{\mathrm{null}}\)._ _Moreover, any irreducible component of the locus \(\mathcal{G}_{\mathrm{null}}\) containing the diagonal is locally smooth along the diagonal, and has codimension \(g\) in \(\mathcal{A}_{g}\)._ **Remark 6**.: We note that the first part of the above theorem is the \(k=1\) case of [1, Thm. 6]. The dimensionality in the second statement can in fact be reduced to the argument in [10], which relied on a detailed degeneration argument. Indeed, [10, Prop. 12] describes the boundary of \(\mathcal{G}_{\mathrm{null}}\) in the partial toroidal compactification, which turns out to be described geometrically as a union of two components, involving respectively the singular locus of the universal theta divisor in genus \(g-1\) (which always has expected dimension), and the locus \(\mathcal{G}_{\mathrm{null}}\) in genus \(g-1\). In [10, Thm. 13] this is used to deduce that the codimension of \(\mathcal{G}_{\mathrm{null}}\) is precisely \(g\) if all of its components, and all components of such loci for lower genera, intersect the boundary of the partial toroidal compactification. However, what is really used in the proof of that theorem is that a given irreducible component of \(\mathcal{G}_{\mathrm{null}}\) intersects the partial toroidal boundary, and that if this intersection involves \(\mathcal{G}_{\mathrm{null}}\) in genus one less, then that locus also intersects the partial toroidal boundary, etc. Since the diagonal \(\mathcal{D}_{g}\subset\mathcal{A}_{g}\) clearly intersects the generic boundary stratum \(\mathcal{A}_{g-1}\subset\partial\mathcal{A}_{g}^{\mathrm{Sat}}\) of the Satake compactification (sending one of the \(t_{i}\in\mathbb{H}_{1}\) to \(i\infty=\partial\mathcal{A}_{1}\) constructs such a degeneration), it also intersects the partial toroidal boundary. Moreover, its intersection with the partial toroidal boundary will again involve the (preimage in the universal family) of the diagonal \(\mathcal{D}_{g-1}\), which will again intersect the partial toroidal boundary, and thus the inductive proof of [10, Thm. 13] applies to any irreducible component of \(\mathcal{G}_{\mathrm{null}}\) containing \(\mathcal{D}_{g}\). The local smoothness statement of the theorem above is new. We note that it appears much harder to try to apply such degeneration arguments for the study of the Hessian rank loci in Theorem 3 and Theorem 4. While the degeneration of the derivatives of theta constants to the boundary of the partial compactification is well-known, the Hessian matrix would involve second order derivatives of the theta constants in genus \(g-1\), but also first order derivatives of theta functions in genus \(g-1\) evaluated at the point of the abelian variety that gives the semiabelian extension data, and thus the rank condition appears much harder to work with by induction in genus. In Section 1 we recall the basic notions about theta functions and the action of the symplectic group on the set of theta characteristics. In Section 2 we recall the classical characterizations of the hyperelliptic and decomposable loci \(\mathcal{HJ}_{g}\) and \(\mathcal{R}_{g}\) within \(\mathcal{A}_{g}\) in terms of vanishing of theta constants.
In Section 3 we set up convenient notation for writing down the expansions of the theta function and its derivatives near the locus \(\mathbb{I}_{g}\) of diagonal period matrices. One new technical result that we prove is the description of the action of the stabilizer group of the theta function with characteristic \(m\) on the set of irreducible components of \(\mathbb{D}_{g}\). In Section 4 we reprove Shepherd-Barron's result on the infinitesimal structure of \(\mathcal{HJ}_{g}\) near \(\mathcal{D}_{g}\). In Section 5 we prove Theorem 5 on the locus \(\mathcal{G}_{\mathrm{null}}\). Finally, in Section 6 we prove Theorems 3 and 4 on the Hessian rank loci \(\vartheta^{2}_{\mathrm{null}}\) and \(\vartheta^{3}_{\mathrm{null}}\). ### Acknowledgments We are grateful to Daniele Agostini, Lynn Chua, and Nick Shepherd-Barron for sharing with us their preprints [1] and [2], respectively, and their interesting ideas, and thus reigniting our investigation of this subject. We are indebted to Hershel Farkas, a collaboration with whom on [13] led us to investigate and appreciate the importance of expansions of theta functions near the diagonal. The second author is grateful to Enrico Arbarello and Edoardo Sernesi for valuable conversations and correspondence in the past years. ## 1. Notation: theta functions and level covers We denote by \(\mathbb{H}_{g}:=\{\tau\in\operatorname{Mat}_{g\times g}(\mathbb{C})\mid\tau=\tau^{t},\operatorname{Im}\tau>0\}\) the Siegel upper half-space of complex symmetric matrices with positive definite imaginary part. It is a homogeneous space for the action of \(\operatorname{Sp}(2g,\mathbb{R})\), where an element \[\sigma=\begin{pmatrix}A&B\\ C&D\end{pmatrix}\in\operatorname{Sp}(2g,\mathbb{R})\] acts via \[\sigma\cdot\tau:=(A\tau+B)(C\tau+D)^{-1}\,.\] We denote by \(\Gamma_{g}:=\operatorname{Sp}(2g,\mathbb{Z})\) the Siegel modular group, and let \(\Gamma_{g}(n):=\{\sigma\in\Gamma_{g}:\sigma\equiv 1_{2g}\mod n\}\) (where from now on we denote by \(1_{k}\) the \(k\times k\) identity matrix) denote the principal congruence subgroup of \(\Gamma_{g}\). The quotient \(\mathcal{A}_{g}=\mathbb{H}_{g}/\Gamma_{g}\) is the moduli space of complex principally polarized abelian varieties (ppav), and \(\mathcal{A}_{g}(n)=\mathbb{H}_{g}/\Gamma_{g}(n)\) is the moduli space of ppav with a choice of a full symplectic level \(n\) structure. Recall that \(\mathcal{J}_{g}\) and \(\mathcal{HJ}_{g}\) denote the closures in \(\mathcal{A}_{g}\) of the loci of Jacobians and of hyperelliptic Jacobians, respectively. We denote by \(p:\mathbb{H}_{g}\to\mathcal{A}_{g}\) and \(p_{n}:\mathbb{H}_{g}\to\mathcal{A}_{g}(n)\) the quotient maps, and by abuse of notation will also denote the same way their restrictions to various submanifolds such as \(\mathbb{J}_{g}:=p^{-1}(\mathcal{J}_{g})\) or \(\mathbb{H}\mathbb{J}_{g}:=p^{-1}(\mathcal{HJ}_{g})\). For a subvariety \(\mathcal{X}\subset\mathcal{A}_{g}\), we will also write \(\mathcal{X}(n)\) to denote its preimage on a level cover: \(\mathcal{X}(n):=p_{n}(p^{-1}(\mathcal{X}))\subset\mathcal{A}_{g}(n)\). Very importantly, we note that since \(p:\mathbb{H}_{g}\to\mathcal{A}_{g}\) is a Galois cover, for any irreducible subvariety \(\mathcal{X}\subset\mathcal{A}_{g}\), for any two irreducible components \(\mathbb{X}^{\prime}\) and \(\mathbb{X}^{\prime\prime}\) of \(p^{-1}(\mathcal{X})\subset\mathbb{H}_{g}\), there must exist an element \(\gamma\in\Gamma_{g}\) mapping \(\mathbb{X}^{\prime}\) to \(\mathbb{X}^{\prime\prime}\).
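As a purely illustrative aside (not part of the paper), the definitions above -- the symplectic condition, the principal congruence subgroup \(\Gamma_{g}(n)\), and the action of \(\operatorname{Sp}(2g,\mathbb{R})\) on \(\mathbb{H}_{g}\) -- can be sketched numerically as follows. All function names are ours, and the final check that \(\sigma\cdot\tau\) is again symmetric with positive definite imaginary part is only a floating-point sanity test.

```python
# Minimal numerical sketch of the definitions above: the symplectic condition,
# membership in Gamma_g(n), and the action of Sp(2g, R) on the Siegel upper half-space.
import numpy as np

def symplectic_form(g: int) -> np.ndarray:
    J = np.zeros((2 * g, 2 * g), dtype=int)
    J[:g, g:] = np.eye(g, dtype=int)
    J[g:, :g] = -np.eye(g, dtype=int)
    return J

def is_symplectic(sigma: np.ndarray) -> bool:
    g = sigma.shape[0] // 2
    J = symplectic_form(g)
    return np.array_equal(sigma.T @ J @ sigma, J)

def in_principal_congruence(sigma: np.ndarray, n: int) -> bool:
    """sigma = 1_{2g} mod n, i.e. sigma lies in Gamma_g(n)."""
    return bool(np.all((sigma - np.eye(sigma.shape[0], dtype=int)) % n == 0))

def act(sigma: np.ndarray, tau: np.ndarray) -> np.ndarray:
    """sigma . tau = (A tau + B)(C tau + D)^{-1} for sigma in block form."""
    g = tau.shape[0]
    A, B = sigma[:g, :g], sigma[:g, g:]
    C, D = sigma[g:, :g], sigma[g:, g:]
    return (A @ tau + B) @ np.linalg.inv(C @ tau + D)

g = 2
S = symplectic_form(g)                                   # J itself is symplectic
tau = np.array([[1.2j, 0.1], [0.1, 0.7 + 1.5j]])
new_tau = act(S, tau)                                    # equals -tau^{-1}
print(is_symplectic(S), in_principal_congruence(S, 2))   # True False
print(np.allclose(new_tau, new_tau.T),                   # result is still symmetric
      bool(np.all(np.linalg.eigvalsh(new_tau.imag) > 0)))  # Im still positive definite
```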
We call a ppav decomposable if it is isomorphic to a product of two lower-dimensional ppav. Analytically, \(\tau\) is decomposable if and only if there exists \(\sigma\in\Gamma_{g}\) such that \[\sigma\cdot\tau=\left(\begin{smallmatrix}\tau_{1}&0\\ 0&\tau_{2}\end{smallmatrix}\right),\quad\text{with}\,\tau_{i}\in\mathbb{H}_{g_{i}},\ g=g_{1}+g_{2},\ g_{1},g_{2}>0\,.\] (classically, such ppav are called reducible). We denote by \[\mathcal{R}_{g}:=(\mathcal{A}_{1}\times\mathcal{A}_{g-1})\cup(\mathcal{A}_{2}\times\mathcal{A}_{g-2})\cup\dots\subset\mathcal{A}_{g}\] the locus of decomposable ppav, and denote \(\mathbb{R}_{g}:=p^{-1}(\mathcal{R}_{g})\subset\mathbb{H}_{g}\) its preimage in the Siegel space. We recall that \(\mathcal{D}_{g}=\mathcal{A}_{1}\times\dots\times\mathcal{A}_{1}\subset\mathcal{A}_{g}\) denotes the locus of products of elliptic curves, and \(\mathbb{D}_{g}:=p^{-1}(\mathcal{D}_{g})\subset\mathbb{H}_{g}\) denotes its preimage, of which \(\mathbb{I}_{g}\) is an irreducible component. We thus have \(\mathcal{D}_{g}\subset\mathcal{H}\mathcal{J}_{g}\subset\mathcal{J}_{g}\subset\mathcal{A}_{g}\) and \(\mathcal{D}_{g}\subset\mathcal{R}_{g}\subset\mathcal{A}_{g}\). The goal of this paper is to describe these loci locally near \(\mathcal{D}_{g}\), and the main tool will be analyzing the Taylor expansions of theta functions near \(\mathbb{I}_{g}\). Recall that the theta function with characteristics \(\varepsilon,\delta\in\mathbb{Z}_{2}^{g}\) is the function of \(\tau\in\mathbb{H}_{g}\) and \(z\in\mathbb{C}^{g}\) given by \[\theta\left[\begin{smallmatrix}\varepsilon\\ \delta\end{smallmatrix}\right](\tau,z):=\sum_{p\in\mathbb{Z}^{g}}\exp\pi i\left[{}^{t}\!\left(p+\tfrac{\varepsilon}{2}\right)\tau\left(p+\tfrac{\varepsilon}{2}\right)+2\,{}^{t}\!\left(p+\tfrac{\varepsilon}{2}\right)\left(z+\tfrac{\delta}{2}\right)\right]\,.\] We will write theta characteristics also as \(m=\left[\begin{smallmatrix}\varepsilon\\ \delta\end{smallmatrix}\right]\in\mathbb{Z}_{2}^{2g}\); we will usually write \(\varepsilon,\delta\) as rows (or sometimes columns, if notationally more convenient) of \(g\) zeroes and ones, and operate with them over \(\mathbb{Z}_{2}\) unless stated otherwise. In particular, \(m\) is called even or odd depending on whether the scalar product \(\varepsilon\cdot\delta\) is zero or one as an element of \(\mathbb{Z}_{2}\). As a function of \(z\), the theta function is even or odd, respectively. The theta constants are the values of theta functions at \(z=0\), and theta gradients are the values of the \(z\)-gradient of the theta function, evaluated at \(z=0\). We will drop the \(z\) variable from notation in both cases, and write \[\theta_{m}(\tau):=\theta_{m}(\tau,0)\in\mathbb{C};\qquad\operatorname{grad}\theta_{m}(\tau):=\left\{\tfrac{\partial}{\partial z_{a}}\theta_{m}(\tau,z)|_{z=0}\right\}_{a=1,\ldots,g}\in\mathbb{C}^{g}\,.\] Note that theta constants vanish identically for \(m\) odd, while theta gradients vanish identically for \(m\) even. Theta constants and theta gradients are examples of scalar- (resp. vector-) valued Siegel modular forms, i.e. are sections of a suitable line (resp. rank \(g\) vector) bundle on a suitable cover of \(\mathcal{A}_{g}\). In fact these are modular forms with non-trivial multiplier with respect to \(\Gamma_{g}(2)\). Moreover, \(\Gamma_{g}\) acts on theta characteristics, considered as elements of \(\mathbb{Z}_{2}^{2g}\), via an affine-linear action of its quotient \(\operatorname{Sp}(2g,\mathbb{Z}_{2})=\Gamma_{g}/\Gamma_{g}(2)\).
This action is given explicitly by \[\sigma\circ\left[\tfrac{\varepsilon}{\delta}\right]:=\left(\begin{smallmatrix} D&-B\\ -C&A\end{smallmatrix}\right)\left[\tfrac{\varepsilon}{\delta}\right]+\left[ \tfrac{\operatorname{diag}(c\,^{t}d)}{\operatorname{diag}(a\,^{t}b)}\right]\,. \tag{1}\] We refer to [14] for further details, and note that \(\Gamma_{g}(2)\) is precisely the subgroup of \(\Gamma_{g}\) that fixes every characteristic. We recall from [14, 15] that the orbits of \(\Gamma_{g}\) on tuples of characteristics are fully characterized by parity of characteristics, by the a/syzygy properties of triples of characteristics, and by linear relations with an even number of terms. We will not use the details of this except to note that the zero loci of \(\theta_{m}(\tau)\) and \(\operatorname{grad}\theta_{m}(\tau)\), \[\theta_{\operatorname{null}}\left[\tfrac{\varepsilon}{\delta} \right] :=\{\theta\left[\tfrac{\varepsilon}{\delta}\right](\tau)=0\}\subset \mathbb{H}_{g}\quad\text{and}\] \[\operatorname{grad}_{\operatorname{null}}\left[\tfrac{\varepsilon}{ \delta}\right] :=\{\operatorname{grad}\theta\left[\tfrac{\varepsilon}{\delta} \right](\tau)=0\}\subset\mathbb{H}_{g}\,,\] are invariant under the action of \(\Gamma_{g}(2)\), and thus are preimages of well-defined subvarieties in \(\mathcal{A}_{g}(2)\). For any \(2\leq k\leq g\) we define \(\theta_{\mathrm{null}}^{k}\left[\,\frac{\varepsilon}{\delta}\right]\subset\theta_{ \mathrm{null}}\left[\,\frac{\varepsilon}{\delta}\right]\) to be the locus where the rank of the Hessian matrix \((\partial_{z_{a}}\partial_{z_{b}}\theta\left[\,\frac{\varepsilon}{\delta} \right](\tau,z)|_{z=0})_{1\leq a,b\leq g}\) is at most \(k\); by abuse of notation, we will use this notation for both a subvariety of \(\mathcal{A}_{g}(2)\) and an analytic subset of \(\mathbb{H}_{g}\), when no confusion can arise. Since the action of \(\Gamma_{g}/\Gamma_{g}(2)\) permutes theta characteristics and the loci \(\theta_{\mathrm{null}}\left[\,\frac{\varepsilon}{\delta}\,\right]\) and respectively \(\mathrm{grad}_{\mathrm{null}}\left[\,\frac{\varepsilon}{\delta}\,\right]\) transitively, it follows that their images \[\vartheta_{\mathrm{null}}:=p(\theta_{\mathrm{null}}\left[\,\frac{\varepsilon }{\delta}\,\right])\subset\mathcal{A}_{g}\quad\text{and}\quad\mathcal{G}_{ \mathrm{null}}:=p(\mathrm{grad}_{\mathrm{null}}\left[\,\frac{\varepsilon}{ \delta}\,\right])\subset\mathcal{A}_{g}\] are independent of the choices of even or odd characteristic \(\left[\,\frac{\varepsilon}{\delta}\,\right]\), respectively. Geometrically, \(\vartheta_{\mathrm{null}}\) is the locus of ppav whose theta divisor has a singularity (necessarily of even multiplicity, at least \(2\)) at an even two-torsion point, while \(\mathcal{G}_{\mathrm{null}}\) is the locus of ppav whose theta divisor has a singularity (necessarily of odd multiplicity, at least \(3\)) at an odd two-torsion point of the abelian variety. The loci \(\vartheta_{\mathrm{null}}^{k}:=p(\theta_{\mathrm{null}}^{k}\left[\,\frac{ \varepsilon}{\delta}\,\right])\subset\mathcal{A}_{g}\) are similarly independent of the choice of characteristic \(\left[\,\frac{\varepsilon}{\delta}\,\right]\). 
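As a purely illustrative aside, the parity convention above, the resulting even/odd symmetry of \(\theta_{m}(\tau,z)\) in \(z\), and the standard counts \(2^{g-1}(2^{g}+1)\) and \(2^{g-1}(2^{g}-1)\) of even and odd characteristics can be checked with a small numerical sketch. It uses truncated theta sums in Python and is an assumption-laden illustration, not code from the paper.

```python
# Sketch: parity of characteristics, the counts of even/odd characteristics, and the
# symmetry theta_m(tau, -z) = (-1)^{eps.delta} theta_m(tau, z) via a truncated theta sum.
import itertools
import numpy as np

def parity(eps, delta):
    """0 for even, 1 for odd characteristics m = [eps; delta]."""
    return int(np.dot(eps, delta)) % 2

def theta(eps, delta, tau, z, N=8):
    """Truncated theta series with characteristic [eps; delta] in genus g = len(eps)."""
    g = len(eps)
    total = 0j
    for p in itertools.product(range(-N, N + 1), repeat=g):
        q = np.array(p, dtype=float) + np.array(eps) / 2.0
        phase = q @ tau @ q + 2.0 * q @ (np.asarray(z) + np.array(delta) / 2.0)
        total += np.exp(np.pi * 1j * phase)
    return total

g = 2
chars = list(itertools.product([0, 1], repeat=2 * g))
evens = [m for m in chars if parity(m[:g], m[g:]) == 0]
odds = [m for m in chars if parity(m[:g], m[g:]) == 1]
assert len(evens) == 2 ** (g - 1) * (2 ** g + 1)   # 10 even characteristics for g = 2
assert len(odds) == 2 ** (g - 1) * (2 ** g - 1)    # 6 odd characteristics for g = 2

tau = np.array([[0.3 + 1.1j, 0.2], [0.2, -0.1 + 1.4j]])
z = np.array([0.13, -0.07])
for eps, delta in [((0, 1), (1, 0)), ((1, 1), (1, 0))]:   # one even, one odd characteristic
    sign = (-1) ** parity(eps, delta)
    lhs, rhs = theta(eps, delta, tau, -z), sign * theta(eps, delta, tau, z)
    print(parity(eps, delta), np.allclose(lhs, rhs))       # symmetry holds in both cases
```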
For low genera these loci have a simple geometric interpretation, which is part of the motivation for studying them: \[\begin{array}{llll}g=2:&\vartheta_{\mathrm{null}}=\mathcal{R}_{2}= \mathcal{A}_{1}\times\mathcal{A}_{1}&\mathcal{G}_{\mathrm{null}}=\emptyset\\ g=3:&\vartheta_{\mathrm{null}}^{2}=\mathcal{R}_{3}&\vartheta_{\mathrm{null}} =\mathcal{H}\mathcal{J}_{3}&\mathcal{G}_{\mathrm{null}}=\mathcal{A}_{1} \times\mathcal{A}_{1}\times\mathcal{A}_{1}\\ g=4:&\vartheta_{\mathrm{null}}^{2}=\mathcal{R}_{4}&\vartheta_{\mathrm{null}} ^{3}=\mathcal{J}_{4}\cap\vartheta_{\mathrm{null}}&\mathcal{G}_{\mathrm{null} }=\mathcal{A}_{1}\times\mathcal{H}\mathcal{J}_{3}\\ g=5:&\mathcal{G}_{\mathrm{null}}=(\mathcal{A}_{1}\times\vartheta_{\mathrm{null }})\cup\mathcal{I}\mathcal{J}\,,\end{array}\] where in genera \(4\) and \(5\) the locus \(\vartheta_{\mathrm{null}}\) does not admit such a quick geometric description (while genus \(4\) curves with a theta-null are canonical curves that lie on a singular quadric, there is no similarly easy description for principally polarized abelian fourfolds with a theta-null), and \(\mathcal{I}\mathcal{J}\) denotes the closure of the locus of intermediate Jacobians of cubic threefolds; see [10] for more discussion of the cases \(g=4,5\). The locus \(\vartheta_{\mathrm{null}}\) was studied classically, and the first result in this study is that \(\vartheta_{\mathrm{null}}\) is always an irreducible divisor in \(\mathcal{A}_{g}\)[12, p. 88], while in [11] we conjectured that \(\mathcal{G}_{\mathrm{null}}\) is always of pure codimension \(g\) in \(\mathcal{A}_{g}\), and proved this for every irreducible component of \(\mathcal{G}_{\mathrm{null}}\) that intersects the boundary of the partial compactification of \(\mathcal{A}_{g}\). One can easily see that \(\mathcal{R}_{g}\subset\vartheta_{\mathrm{null}}^{2}\), while Riemann theta singularity theorem for Jacobians implies the inclusion \(\mathcal{J}_{g}\cap\vartheta_{\mathrm{null}}\subset\vartheta_{\mathrm{null}} ^{3}\). In [11] we proved the conjecture of H. Farkas that in genus \(4\) the equality \(\mathcal{J}_{4}^{\circ}\cap\vartheta_{\mathrm{null}}=\vartheta_{\mathrm{null} }^{3}\setminus\vartheta_{\mathrm{null}}^{2}\) holds. One of our main results is Theorem 4, extending this genus \(4\) statement, and the recent genus \(5\) result of Agostini and Chua to arbitrary genus, showing that \(\mathcal{J}_{g}^{\circ}\cap\vartheta_{\mathrm{null}}\) is an irreducible component of \(\theta_{\mathrm{null}}^{3}\) for any genus. One technical point that pervades our work is whether we work on \(\mathcal{A}_{g}\), \(\mathbb{H}_{g}\), or (as the above discussion shows is often useful) on \(\mathcal{A}_{g}(2)\). Note for example that while \(\vartheta_{\mathrm{null}}\subset\mathcal{A}_{g}\) is irreducible, \(\vartheta_{\mathrm{null}}(2)=p_{2}(p^{-1}(\vartheta_{\mathrm{null}}))=\cup \vartheta_{\mathrm{null}}\left[\,\frac{\varepsilon}{\delta}\,\right]\subset \mathcal{A}_{g}(2)\) has \(2^{g-1}(2^{g}+1)\) irreducible components, indexed by characteristics. However, by [14] for any \(g\geq 3\) the analytic spaces \(\theta_{\operatorname{null}}\left[\begin{smallmatrix}\varepsilon\\ \delta\end{smallmatrix}\right]\subset\mathbb{H}_{g}\) are irreducible for each \(\left[\begin{smallmatrix}\varepsilon\\ \delta\end{smallmatrix}\right]\). ## 2. 
The decomposable and hyperelliptic loci In this section we recall the known characterizations of \(\mathcal{R}_{g}\) and \(\mathcal{HJ}_{g}\) in terms of vanishing of certain sets of theta constants, and study the combinatorics of the relevant characteristics. This is equivalent to describing the irreducible components of the corresponding loci on the level covers. For further use we denote \[\langle\varepsilon,\delta\rangle:=\sum_{a=1}^{g}\varepsilon_{a}\cdot\delta_{ a}\in\mathbb{Z} \tag{2}\] the scalar product of \(\varepsilon\) and \(\delta\), considered in \(\mathbb{Z}\) (unlike the usual pairing \(\varepsilon\cdot\delta\in\mathbb{Z}_{2}\)). We further denote by \(\mathcal{E}\) the set of all even characteristics, and for any even \(\ell\in\mathbb{Z}_{\geq 0}\) denote by \(\mathcal{E}_{\ell}\subset\mathcal{E}\) the set of all even characteristics such that \(\langle\varepsilon,\delta\rangle=\ell\), and denote \(\mathcal{E}^{*}:=\mathcal{E}\setminus\mathcal{E}_{0}\). We will similarly decompose the set \(\mathcal{O}\) of odd characteristics as \(\mathcal{O}=\sqcup_{1\leq\ell\leq g,\ \ell\ \mathrm{odd}}\ \mathcal{O}_{\ell}\), and denote \(\mathcal{O}^{*}:=\mathcal{O}\setminus\mathcal{O}_{1}\) to exclude the "simplest" odd characteristics. ### The decomposable locus, and the diagonal Recall that the theta function near a block-diagonal period matrix factorizes as follows: \[\theta\left[\begin{smallmatrix}\varepsilon_{1}\varepsilon_{2}\\ \delta_{1}\delta_{2}\end{smallmatrix}\right]\left(\left(\begin{smallmatrix} \tau_{1}&0\\ 0&\tau_{2}\end{smallmatrix}\right),\begin{smallmatrix}z_{1}\\ z_{2}\end{smallmatrix}\right)=\theta\left[\begin{smallmatrix}\varepsilon_{1} \\ \delta_{1}\end{smallmatrix}\right]\left(\tau_{1},z_{1}\right)\cdot\theta\left[ \begin{smallmatrix}\varepsilon_{2}\\ \delta_{2}\end{smallmatrix}\right]\left(\tau_{2},z_{2}\right) \tag{3}\] for any \(\tau_{i}\in\mathbb{H}_{g_{i}}\) and \(z_{i}\in\mathbb{C}^{g_{i}}\) with \(g_{1}+g_{2}=g\). 
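As a purely illustrative aside, the factorization (3) can be verified numerically with truncated theta sums, together with the observation recalled in the Introduction that a block-diagonal period matrix yields a vanishing theta constant whose Hessian at the corresponding even two-torsion point has rank at most \(2\). The sketch below is an illustration under these assumptions (a genus \(3\) example with a \(1+2\) block decomposition), not part of the argument.

```python
# Sketch: numerical check of the factorization (3) for a block-diagonal period matrix,
# and of the rank <= 2 Hessian of theta at z = 0 for an even characteristic that is the
# direct sum of two odd ones.  Truncated sums; illustration only.
import itertools
import numpy as np

def theta(eps, delta, tau, z, N=6):
    g = len(eps)
    val = 0j
    for p in itertools.product(range(-N, N + 1), repeat=g):
        q = np.array(p, dtype=float) + np.array(eps) / 2.0
        val += np.exp(np.pi * 1j * (q @ tau @ q + 2 * q @ (np.asarray(z) + np.array(delta) / 2.0)))
    return val

def hessian_at_zero(eps, delta, tau, N=6):
    """Matrix of second z-derivatives of theta at z = 0, differentiating term by term."""
    g = len(eps)
    H = np.zeros((g, g), dtype=complex)
    for p in itertools.product(range(-N, N + 1), repeat=g):
        q = np.array(p, dtype=float) + np.array(eps) / 2.0
        term = np.exp(np.pi * 1j * (q @ tau @ q + 2 * q @ (np.array(delta) / 2.0)))
        H += (2j * np.pi) ** 2 * np.outer(q, q) * term
    return H

tau1 = np.array([[1.3j]])
tau2 = np.array([[0.4 + 1.1j, 0.2 + 0.1j], [0.2 + 0.1j, -0.3 + 1.6j]])
tau = np.block([[tau1, np.zeros((1, 2))], [np.zeros((2, 1)), tau2]])

# Factorization (3): theta of the block matrix is the product of the two factors.
e1, d1, e2, d2 = (1,), (1,), (1, 0), (1, 0)            # both factors odd, so m = m1 + m2 is even
z1, z2 = np.array([0.11]), np.array([0.05, -0.08])
lhs = theta(e1 + e2, d1 + d2, tau, np.concatenate([z1, z2]))
rhs = theta(e1, d1, tau1, z1) * theta(e2, d2, tau2, z2)
print("factorization holds:", np.allclose(lhs, rhs))

# At z = 0 the theta constant vanishes (odd times odd) and the Hessian has rank 2.
H = hessian_at_zero(e1 + e2, d1 + d2, tau)
print("theta constant ~ 0:", abs(theta(e1 + e2, d1 + d2, tau, np.zeros(3))) < 1e-10)
print("numerical rank of Hessian:", np.linalg.matrix_rank(H, tol=1e-8))
```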
By applying this formula recursively, we see that for a diagonal period matrix \(\tau_{0}=\left(\begin{smallmatrix}t_{1}&0&\ldots&0\\ 0&t_{2}&\ldots&0\\ \vdots&\vdots&\ddots&\vdots\\ 0&0&\ldots&t_{g}\end{smallmatrix}\right)\in\mathbb{I}_{g}\) the value of the theta constant is given by \[\theta\left[\begin{smallmatrix}\varepsilon\\ \delta\end{smallmatrix}\right]\left(\tau_{0}\right)=\theta\left[\begin{smallmatrix}\varepsilon_{1}\\ \delta_{1}\end{smallmatrix}\right]\left(t_{1}\right)\cdot\ldots\cdot\theta\left[\begin{smallmatrix}\varepsilon_{g}\\ \delta_{g}\end{smallmatrix}\right]\left(t_{g}\right).\] We recall the characterization of block-diagonal periods: **Proposition 7** ([14, Theorem 5]).: _A period matrix \(\tau\in\mathbb{H}_{g}\) lies in the \(\Gamma_{g}(2)\) orbit of the locus \(\mathbb{H}_{g_{1}}\times\mathbb{H}_{g_{2}}\) if and only if \(\theta\left[\begin{smallmatrix}\varepsilon_{1}&\varepsilon_{2}\\ \delta_{1}&\delta_{2}\end{smallmatrix}\right]\left(\tau\right)=0\) for all pairs of odd characteristics \(\left[\begin{smallmatrix}\varepsilon_{1}\\ \delta_{1}\end{smallmatrix}\right]\in\mathbb{Z}_{2}^{2g_{1}}\) and \(\left[\begin{smallmatrix}\varepsilon_{2}\\ \delta_{2}\end{smallmatrix}\right]\in\mathbb{Z}_{2}^{2g_{2}}\)._ By applying this proposition and using the factorization formula (3) recursively, one obtains **Corollary 8**.: _A period matrix \(\tau\in\mathbb{H}_{g}\) lies in the \(\Gamma_{g}(2)\) orbit of the locus \(\mathbb{I}_{g}\) if and only if all theta constants \(\theta\left[\begin{smallmatrix}\varepsilon\\ \delta\end{smallmatrix}\right]\) such that at least one column \(\left[\begin{smallmatrix}\varepsilon_{a}\\ \delta_{a}\end{smallmatrix}\right]\) of \(\left[\begin{smallmatrix}\varepsilon\\ \delta\end{smallmatrix}\right]\) is equal to \(\left[\begin{smallmatrix}1\\ 1\end{smallmatrix}\right]\) vanish at \(\tau\)._ In our notation, this corollary can be reformulated as the statement that \(\tau\in\Gamma_{g}(2)\circ\mathbb{I}_{g}\) if and only if \(\tau\in\theta_{m,\,\mathrm{null}}\) for all \(m\in\mathcal{E}^{*}\). Proposition 7 and Corollary 8 characterize the loci of block-diagonal and diagonal period matrices in \(\mathbb{H}_{g}\), and their images in \(\mathcal{A}_{g}(2)\); to obtain from these a characterization of the loci \(\mathcal{R}_{g}\) and \(\mathcal{D}_{g}\) in \(\mathcal{A}_{g}\) one needs to consider the orbits of the action of \(\Gamma_{g}\) on the locus of block-diagonal period matrices. The (setwise) stabilizers of such loci are known classically [10]: **Proposition 9**.: _The (setwise) stabilizer \(\operatorname{Stab}(\mathbb{H}_{g_{1}}\times\mathbb{H}_{g_{2}})\) is equal to the direct product \(Stab_{g_{1},g_{2}}:=\Gamma_{g_{1}}\times\Gamma_{g_{2}}\), except for the case \(g_{1}=g_{2}=g/2\), when the stabilizer is the semi-direct product \(Stab_{g/2,g/2}:=(\Gamma_{g/2}\times\Gamma_{g/2})\ltimes S_{2}\) with the involution interchanging the two blocks._ **Corollary 10**.: _The (setwise) stabilizer of the diagonal \(\operatorname{Stab}_{\mathbb{I}_{g}}\subset\Gamma_{g}\) is equal to the wreath product \(Stab_{\mathbb{I}_{g}}:=\Gamma_{1}\wr S_{g}\), i.e.
is the semidirect product of \((\Gamma_{1})^{\times g}=\operatorname{SL}(2,\mathbb{Z})^{\times g}\) and the permutation group \(S_{g}\) of \(g\) elements._ Here we think of the permutation group \(S_{g}\) as embedded into \(\Gamma_{g}\) as block matrices with two off-diagonal \(g\times g\) blocks equal to zero, and two on-diagonal \(g\times g\) blocks equal to each other, and each being a permutation matrix. For our purposes, we are interested in finding an explicit manageable subgroup acting transitively on the sets \(\mathcal{E}\) and \(\mathcal{O}\), and we will find such a subgroup that contains \(\operatorname{Stab}_{\mathbb{I}_{g}}\), but is slightly larger. This is a manifestation of the idea we'll use later in the paper: instead of just considering diagonal period matrices, we will allow some \(2\times 2\) blocks, and will enlarge the group correspondingly. We thus set \(G_{g}:=\Gamma_{2}\times(\Gamma_{1})^{\times(g-2)}\wr S_{g}\), where we think of the first factor as block-diagonal period matrices with one \(2\times 2\) block and \(g-2\) blocks of size \(1\times 1\), and \(S_{g}\) is embedded into \(\operatorname{Sp}(2g,\mathbb{Z})\) as before. **Lemma 11**.: _The group \(G_{g}\) acts transitively on each of the two sets of characteristics \(\mathcal{E}\) and \(\mathcal{O}\)._ Proof.: We do the even case, the odd case being completely analogous. We first permute the coordinates so that all the \(\ell\) columns of characteristic that are equal to \([\begin{smallmatrix}1\\ 1\end{smallmatrix}]\) appear first. Then since \(\Gamma_{1}\) acts transitively on the set of \(3\) even characteristics in genus one, acting by \(\Gamma_{1}\) on the characteristics in each even column maps them all to \([\begin{smallmatrix}0\\ 0\end{smallmatrix}]\). Altogether, this shows that for a fixed \(\ell\), the stabilizer \(\operatorname{Stab}_{\mathbb{I}_{g}}\subset\Gamma_{g}\) acts transitively on the set \(\mathcal{E}_{\ell}\). Since \(\operatorname{Stab}_{\mathbb{I}_{g}}\subset G_{g}\), it is thus enough to show that \(G_{g}\) can change \(\ell\) arbitrarily. For this, we observe that the element \[\sigma_{0}:=\left(\begin{smallmatrix}1&1&1&0\\ 1&1&0&1\\ 0&1&1&0\\ 1&0&0&1\end{smallmatrix}\right)\in\Gamma_{2}\] sends \([\begin{smallmatrix}00\\ 00\end{smallmatrix}]\) to \([\begin{smallmatrix}11\\ 11\end{smallmatrix}]\). Once the columns of a characteristic are permuted so that the first \(\ell\) ones are equal to \([\begin{smallmatrix}1\\ 1\end{smallmatrix}]\), we apply \(\sigma_{0}\) in the first two coordinates to make the first two columns equal to \([\begin{smallmatrix}0\\ 0\end{smallmatrix}]\), thus going from a characteristic in \(\mathcal{E}_{\ell}\) to a characteristic in \(\mathcal{E}_{\ell-2}\). Repeating this process shows that the \(G_{g}\) orbit of any characteristic in \(\mathcal{E}_{\ell}\) contains a characteristic in \(\mathcal{E}_{0}\). ### The hyperelliptic locus We recall from [13] and [14] that the irreducible components of \(\mathbb{H}\mathbb{J}_{g}\subset\mathbb{H}_{g}\) are in bijection with (and are in fact preimages of) the irreducible components of \(\mathcal{HJ}_{g}(2)\subset\mathcal{A}_{g}(2)\). We will describe one such component explicitly, which will suffice since \(\Gamma_{g}\) acts transitively on the set of irreducible components of \(\mathbb{HJ}_{g}\) or \(\mathcal{HJ}_{g}(2)\). 
We say that a set \(m_{0},\ldots,m_{2g}\) of characteristics is called an essential basis if any characteristic \(m\in\mathbb{Z}_{2}^{2g}\) can be written uniquely as a sum of an odd number of \(m_{i}\)'s. It follows from the description of the action that an element of \(\Gamma_{g}\) lies in \(\Gamma_{g}(2)\) (equivalently, fixes all characteristics) if and only if it fixes every element of a chosen essential basis. Recall that a special fundamental system of characteristics is a set of \(g\) odd characteristics and \(g+2\) even characteristics such that every triple of characteristics is azygetic. The description of the orbits of the action of \(\Gamma_{g}\) on tuples of characteristics implies that \(\Gamma_{g}\) acts transitively on the set of special fundamental systems. Moreover, the condition of being azygetic implies that any subsequence of a fundamental system is a sequence of essentially independent characteristics,i.e. the sum of an even number of characteristics is always different from \(0\), see [11]. As a consequence, eliminating any characteristic from any special fundamental system of characteristics gives an essential basis. We now fix the following special fundamental system: \[I:=(o_{1},\ldots,o_{g},e_{1},\ldots,e_{g+2})=\left(\begin{smallmatrix}1&0&0& \ldots&0&0&0&1&0&0&\ldots&0&0&0\\ 0&1&0&\ldots&0&0&0&0&1&0&\ldots&0&0&0\\ 0&0&1&\ldots&0&0&0&0&0&1&\ldots&0&0&0\\ \vdots&\vdots&\vdots&\ddots&\vdots&\vdots&\vdots&\vdots&\ddots&\vdots& \vdots&\vdots\\ 0&0&0&\ldots&1&0&0&0&0&0&\ldots&1&0&0\\ 0&0&0&\ldots&0&1&0&0&0&0&\ldots&0&1&0\\ \hline\end{smallmatrix}\right)\,, \tag{4}\] where we have denoted the \(g\) odd characteristics by \(o_{j}\), and the \(g+2\) even characteristics by \(e_{j}\). We further denote \[b^{g}:=o_{1}+\cdots+o_{g}=\left[\begin{smallmatrix}1&1&\cdots&1&1\\ \frac{1-(-1)^{g}}{2}&\frac{1-(-1)^{g-1}}{2}&\cdots&0&1\end{smallmatrix} \right]\equiv\left[\begin{smallmatrix}1&1&\cdots&1&1\\ g&g-1&\cdots&2&1\end{smallmatrix}\right]\mod 2 \tag{5}\] the sum of the odd characteristics in this special fundamental system. If we exclude \(e_{1}=\left[\begin{smallmatrix}0&\ldots&0\\ 0&\ldots&0\end{smallmatrix}\right]\), the remaining \(2g+1\) characteristics of the special fundamental system \(I\) form an essential basis. Thus any characteristic \(m\in\mathbb{Z}_{2}^{2g}\) can be written as a sum of an odd number among these \(2g+1\) characteristics. Since the sum of all characteristics in \(I\) is zero, the sum of any subset of characteristics in \(I\) is equal to the sum of the complementary subset of characteristics in \(I\). Thus altogether we see that every characteristic \(m\) can be written uniquely as a sum of at most \(g\) among the characteristics \(o_{1},\ldots,o_{g},e_{2},\ldots,e_{g+2}\). Suppose \(m\) is the sum of \(k\) among these characteristics; then it can be checked that the characteristic \(m+b^{g}\) is even if and only if \(k\equiv g\) or \(k\equiv g+1\mod 4\). We then have the following **Proposition 12** ([14], [15], [16]).: _There exists an irreducible component \(\mathbb{HJ}_{g}^{I}\) of \(\mathbb{HJ}_{g}\) that is defined by the equations_ \[\theta_{m+b^{g}}(\tau)=0 \tag{6}\] _for all \(m\) that are equal to a sum of strictly less than \(g\) among the characteristics \(o_{1},\dots,o_{g},e_{2},\dots,e_{g+2}\)._ **Remark 13**.: The actual result of Mumford is that if \(\tau\in\mathbb{H}_{g}\) is such that \(\theta_{m+b^{g}}(\tau)\neq 0\) if _and only if \(k=g\)_, then \(\tau\in\mathbb{HJ}_{g}^{\circ}\). 
In the above proposition we do not require the non-vanishing of those \(\theta_{m+b^{g}}(\tau)\) where \(m\) is the sum of precisely \(g\) elements of the special fundamental system. Thus clearly the locus described in the proposition contains an irreducible component of \(\mathbb{HJ}_{g}\). Furthermore, Poor [15] proved that the vanishing conditions (6) by themselves cut out an irreducible component of \(\mathbb{HJ}_{g}\setminus\mathbb{R}_{g}\). The action of \(\Gamma_{g}\) is transitive on the set of all special fundamental systems, and thus one has the following characterization of the hyperelliptic locus: **Proposition 14** ([14], [15]).: _An indecomposable period matrix \(\tau\in\mathbb{H}_{g}\setminus\mathbb{R}_{g}\) lies in \(\mathbb{HJ}_{g}\) if and only if there exists a special fundamental system \(o_{1}^{\prime},\dots,o_{g}^{\prime}\), \(e_{1}^{\prime},\dots,e_{g+2}^{\prime}\) such that, defining \(b^{\prime g}:=o_{1}^{\prime}+\dots+o_{g}^{\prime}\), the theta constant \(\theta_{m+b^{\prime g}}\) vanishes at \(\tau\) if and only if \(m\) can be written as a sum of strictly less than \(g\) elements of the special fundamental system._ **Remark 15**.: About the odd counterpart, we observe that, when \(g\geq 5\), the theta gradient with characteristic \(m_{I}=o_{1}+o_{2}+o_{3}+o_{4}+o_{5}=[\begin{smallmatrix}1111110\dots&0\\ 101010\dots&0\end{smallmatrix}]\) vanishes along the component \(\mathbb{HJ}_{g}^{I}\). Indeed, \[m_{I}=b^{g}+o_{6}+\dots+o_{g}\] is the sum of \(b^{g}\) and \(g-5\) elements of the special fundamental system, and this condition implies the vanishing of the gradient of the theta function at the hyperelliptic point \(\tau\), see [13]. **Remark 16**.: We will show that Theorem 1 applies to \(\mathbb{HJ}_{g}^{I}\), though in fact the irreducible component that Shepherd-Barron uses in [1] is a different one. Note that \(\mathbb{I}_{g}\) is contained in multiple irreducible components of \(\mathbb{HJ}_{g}\), see the discussion after the proof of Theorem 20. In [16], Tsuyumine studies the intersection of irreducible components of \(\mathcal{R}_{g}(2)\) and of \(\mathcal{HJ}_{g}(2)\). He also shows that the stabilizer of the component \(\mathcal{HJ}_{g}^{I}\) of \(\mathcal{HJ}_{g}(2)\) is isomorphic to the symmetric group \(S_{2g+2}\). Moreover, he also considers the boundary components of \(\mathcal{HJ}_{g}^{I}\) contained in \(\mathcal{R}_{g}(2)\). While his analysis again only covers decomposable ppav that are products of two indecomposable ones, it extends in full generality to yield the statement that all boundary components related to a decomposition \(g=g_{1}+\dots+g_{k}\) are conjugate under the stabilizer subgroup of \(\mathcal{HJ}_{g}^{I}\): **Lemma 17**.: _For any two irreducible components \(Z\) and \(W\) of \(\mathbb{D}_{g}\) contained in an irreducible component \(X\) of \(\mathbb{H}\mathbb{J}_{g}\), there exists \(\sigma\in Stab_{X}\) such that \(\sigma(Z)=W\)._ _For any two irreducible components \(X\) and \(Y\) of \(\mathbb{H}\mathbb{J}_{g}\) containing \(\mathbb{I}_{g}\), there exists \(\sigma\in Stab_{\mathbb{I}_{g}}\) such that \(\sigma(Y)=X\)._ Proof.: For the first statement, recall that as already discussed in Section 2.2, the irreducible components of \(\mathbb{H}\mathbb{J}_{g}\) are in bijection with those of \(\mathcal{H}\mathcal{J}_{g}(2)\). Hence the stabilizer of \(X\) acts transitively on the set of all its boundary components related to a decomposition \(g=1+\cdots+1\).
For the second statement, since the cover \(\mathbb{H}_{g}\to\mathcal{A}_{g}\) is Galois, being the quotient by \(\Gamma_{g}\), we know that \(\Gamma_{g}\) acts transitively on the set of irreducible components of \(\mathbb{H}\mathbb{J}_{g}\), and thus there exists some \(\sigma_{1}\in\Gamma_{g}\) such that \(X=\sigma_{1}(Y)\). Denoting \(\mathbb{I}_{g}^{\prime}:=\sigma_{1}(\mathbb{I}_{g})\) the irreducible component of \(\mathbb{D}_{g}\) that \(\mathbb{I}_{g}\) is mapped to, by the first statement there exists \(\sigma_{2}\in Stab_{X}\) such that \(\sigma_{2}(\mathbb{I}_{g}^{\prime})=\mathbb{I}_{g}\). Thus \(\sigma:=\sigma_{2}\circ\sigma_{1}\) satisfies \(\sigma(\mathbb{I}_{g})=\sigma_{2}(\mathbb{I}_{g}^{\prime})=\mathbb{I}_{g}\) and maps \(Y\) to \(X\), as required. ## 3. Expansions of theta functions near the diagonal Our main computational tool is working with Taylor expansions of defining equations of our loci near \(\mathbb{I}_{g}\). We will work in a sufficiently small analytic neighborhood \(U\) of \(\mathbb{I}_{g}\); that is, we will fix arbitrary generic \(t_{1},\ldots,t_{g}\in\mathbb{H}_{1}\), and assume that all \(\tau_{ab}\) with \(a<b\) satisfy \(|\tau_{ab}|<\varepsilon\) for some sufficiently small \(\varepsilon\) (small compared to all \(t_{i}\)). Since the diagonal period matrix \(\operatorname{diag}(t_{1},\ldots,t_{g})\) lies in the open set \(\mathbb{H}_{g}\), doing this for every \(t_{1},\ldots,t_{g}\) we get an open neighborhood \(\mathbb{I}_{g}\subset U\subset\mathbb{H}_{g}\). We can thus expand theta constants, theta gradients, etc. with respect to all the variables \(\tau_{ab}\) for \(1\leq a<b\leq g\) at a fixed generic point \(\operatorname{diag}(t_{1},\ldots,t_{g})\in\mathbb{I}_{g}\). The Taylor expansion of theta constants near \(\mathbb{I}_{g}\) was recently used in our work [10] with H. Farkas on the Schottky problem, and we now recall it. We also give the formulas for the Taylor expansions of theta gradients near \(\mathbb{I}_{g}\), and for the Hessian of the theta function. These are the formulas that will make all of our results work, and we introduce various conventions to be able to keep track of the formulas in a reasonable way. First of all, we recall that by (3) the theta constant near a diagonal period matrix in \(\mathbb{I}_{g}\) factorizes. Furthermore, the \(z\)-derivatives of the theta function factorize the same way. Thus recalling the heat equation satisfied by theta functions \[\frac{\partial\theta\left[\begin{smallmatrix}\varepsilon\\ \delta\end{smallmatrix}\right](\tau,z)}{\partial\tau_{jj}}=\frac{1}{4\pi i}\frac{\partial^{2}\theta\left[\begin{smallmatrix}\varepsilon\\ \delta\end{smallmatrix}\right](\tau,z)}{\partial z_{j}\partial z_{j}};\quad\frac{\partial\theta\left[\begin{smallmatrix}\varepsilon\\ \delta\end{smallmatrix}\right](\tau,z)}{\partial\tau_{jk}}=\frac{1}{2\pi i}\frac{\partial^{2}\theta\left[\begin{smallmatrix}\varepsilon\\ \delta\end{smallmatrix}\right](\tau,z)}{\partial z_{j}\partial z_{k}}\ \text{ for }j\neq k\] allows us to evaluate at a point of \(\mathbb{I}_{g}\) the derivatives of \(\theta\left[\begin{smallmatrix}\varepsilon\\ \delta\end{smallmatrix}\right]\) with respect to \(\tau\).
We will be expanding theta functions and their derivatives in Taylor series near \(\mathbb{I}_{g}\) with respect to the variables \(\tau_{ab}\) for all \(1\leq a<b\leq g\), with each term of the expansion being a function in the variables \(\tau_{11},\ldots,\tau_{gg}\), which we will denote \(t_{1},\dots,t_{g}\) to remember that they are elements of the upper half-plane. To shorten the formulas in the rest of the text, we adopt the following **Convention 1**.: For \(t_{j}\in\mathbb{H}_{1}\) and fixed \(\left[\begin{smallmatrix}\varepsilon\\ \delta\end{smallmatrix}\right]\) we denote by \[\vartheta_{j}:=\theta\left[\begin{smallmatrix}\varepsilon_{j}\\ \delta_{j}\end{smallmatrix}\right](t_{j});\quad\vartheta^{\prime}_{j}:=\frac{\partial}{\partial z}\theta\left[\begin{smallmatrix}\varepsilon_{j}\\ \delta_{j}\end{smallmatrix}\right](t_{j},z)|_{z=0};\quad\vartheta^{\prime\prime}_{j}:=\dots\] the one-variable theta functions and their \(z\)-derivatives evaluated at \(z=0\in\mathbb{C}\) (which may be identically zero depending on the parity of \(\left[\begin{smallmatrix}\varepsilon_{j}\\ \delta_{j}\end{smallmatrix}\right]\)). We write \(O(\varepsilon^{N})\) to signify a sum of monomials of total degree at least \(N\) in all the variables \(\tau_{ab}\), for all \(1\leq a<b\leq g\). Even with this notation, the formulas, as in [10], would get very complicated, so we introduce further conventions to make them more readable. For a characteristic of the form \(\left[\begin{smallmatrix}\varepsilon\\ \delta\end{smallmatrix}\right]=\left[\begin{smallmatrix}1\dots 1\,0\dots 0\\ 1\dots 1\,0\dots 0\end{smallmatrix}\right]\) with the first \(\ell\) columns equal to \(\left[\begin{smallmatrix}1\\ 1\end{smallmatrix}\right]\), we will use capital letters \(J,K,\dots\) to denote columns where the characteristic is \(\left[\begin{smallmatrix}1\\ 1\end{smallmatrix}\right]\), i.e. \(1\leq J\leq\ell\), and we will use small letters \(j,k,\dots\) to denote columns where the characteristic is \(\left[\begin{smallmatrix}0\\ 0\end{smallmatrix}\right]\), i.e. \(\ell+1\leq j\leq g\). We denote \(S_{2n}\) the permutation group on \(2n\) elements, and by \(T_{2n}\subset S_{2n}\) the set of permutations that can be written as products of \(n\) disjoint transpositions. For \(\sigma\in T_{2n}\), we denote by \(\mu\subset\sigma\) one of the transpositions \(\mu:\alpha_{\mu}\longleftrightarrow\beta_{\mu}\) whose product is \(\sigma\). Finally, for a set of an even number of possibly repeating indices \(a_{1},\dots,a_{2n}\in\{1,\dots,g\}\) we will denote \[[a_{1},\dots,a_{2n}]:=\frac{1}{(2\pi i)^{n}}\sum_{\sigma\in T_{2n}}n_{\sigma}\prod_{\mu\subset\sigma}\tau_{a_{\alpha_{\mu}}a_{\beta_{\mu}}}\,,\] where the combinatorial coefficient \(n_{\sigma}=a_{\sigma}\cdot b_{\sigma}\cdot c_{\sigma}\) is computed as follows. The factor \(a_{\sigma}\) is equal to \(0\) if there exists any \(\mu\subset\sigma\) such that \(a_{\alpha_{\mu}}=a_{\beta_{\mu}}\), and is equal to \(1\) otherwise. If there are precisely \(N\) elements \(\sigma=\sigma_{1},\sigma_{2},\dots,\sigma_{N}\in T_{2n}\) such that the resulting monomials are equal, then \(b_{\sigma}\) is set to be equal to \(1/N\) -- so that the result is essentially that each distinct monomial is counted exactly once in the sum. Finally, for a given \(\sigma\) we rewrite \[\prod_{\mu\subset\sigma}\tau_{a_{\alpha_{\mu}}a_{\beta_{\mu}}}=\prod_{1\leq i\leq j\leq g}\tau_{ij}^{d_{ij}}\] by gathering the powers of the same \(\tau_{ij}\) together, and then let \(c_{\sigma}:=1/\prod(d_{ij}!)\).
The reason for this last factor is that this is the coefficient with which \[\prod_{\mu\subset\sigma}\partial_{\tau_{a_{\alpha_{\mu}}a_{\beta_{\mu}}}}=\prod_{1\leq i\leq j\leq g}\partial_{\tau_{ij}}^{d_{ij}}\] appears in the Taylor expansion. Note that \([a_{1},\dots,a_{2n}]=O(\varepsilon^{n})\), as each summand is a degree \(n\) monomial in the \(\tau\)'s. To unravel this very useful notation, we give some examples: \[[1,1]=0;\quad[1,2]=\frac{1}{2\pi i}\tau_{12};\quad[1,1,2,3]=\frac{1}{(2\pi i)^{2}}\tau_{12}\cdot\tau_{13};\] \[[1,2,3,4]=\frac{1}{(2\pi i)^{2}}\left(\tau_{12}\cdot\tau_{34}+\tau_{13}\cdot\tau_{24}+\tau_{14}\cdot\tau_{23}\right),\] \[[1,1,2,2,3,4]=\frac{1}{(2\pi i)^{3}}\left(\frac{1}{2}\tau_{12}^{2}\tau_{34}+\tau_{12}\tau_{13}\tau_{24}+\tau_{12}\tau_{14}\tau_{23}\right)\,.\] The reason this notation is so useful for us is that in computing the terms of the expansion of theta functions and their derivatives we use the heat equation repeatedly. Each factor \(\tau_{ab}\) arises when the corresponding derivative \(\partial_{\tau_{ab}}\) is taken, so then \(\vartheta_{a}\) and \(\vartheta_{b}\) are differentiated, by the heat equation. Each summand of the Taylor expansion of the theta constant in variables \(\tau_{ab}\) for all \(1\leq a<b\leq g\) is thus of the form \(\prod_{\alpha=1\dots g}\left(\partial^{n_{\alpha}}\vartheta_{\alpha}\right)\) times the following polynomial in \(\tau\)'s: \[[\underbrace{1,\dots,1}_{n_{1}},\underbrace{2,\dots,2}_{n_{2}},\dots,\underbrace{g,\dots,g}_{n_{g}}]\] in our new notation. For the expansion of the derivative \(\partial_{\tau_{ab}}\theta_{m}\), the polynomial multiplying \(\prod_{\alpha=1\dots g}\partial^{n_{\alpha}}\vartheta_{\alpha}\) is similar, except that \(a\) and \(b\) will be included in the expression \(n_{a}-1\) and \(n_{b}-1\) times, respectively. As a warm-up, we write down in genus \(4\) the expansion of \(\theta_{m}=\theta\left[\begin{smallmatrix}1100\\ 1100\end{smallmatrix}\right]\) near \(\mathbb{I}_{4}\), using this notation: \[\theta_{m}=\theta_{m}(\operatorname{diag}(t_{1},t_{2},t_{3},t_{4}))+\sum_{1\leq a<b\leq g}\tau_{ab}\frac{\partial\theta_{m}}{\partial\tau_{ab}}(\operatorname{diag}(t_{1},t_{2},t_{3},t_{4}))+\dots\] \[=\vartheta_{1}^{\prime}\vartheta_{2}^{\prime}\vartheta_{3}\vartheta_{4}\cdot\left([1,2]+\frac{\vartheta_{1}^{\prime\prime\prime}}{\vartheta_{1}^{\prime}}[1,1,1,2]+\frac{\vartheta_{2}^{\prime\prime\prime}}{\vartheta_{2}^{\prime}}[2,2,1,2]+\frac{\vartheta_{3}^{\prime\prime}}{\vartheta_{3}}[3,3,1,2]+\frac{\vartheta_{4}^{\prime\prime}}{\vartheta_{4}}[4,4,1,2]\right)+O(\varepsilon^{3})\,. \tag{7}\] Of course all of the terms above can easily be written out explicitly; the terms \([1,1,1,2]\) and \([2,2,1,2]\) are in fact zero, but note that our conventions make the formula readable. We note in particular that the lowest order term of this expansion is \([1,2]=O(\varepsilon)\), while in general for \([\begin{smallmatrix}\varepsilon\\ \delta\end{smallmatrix}]\in\mathcal{E}_{\ell}\), the lowest order term of the expansion would be of order \(O(\varepsilon^{\ell/2})\). From now on, we denote \(s:=\ell/2\), for even characteristics.
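As a purely illustrative aside, the combinatorial bracket \([a_{1},\dots,a_{2n}]\) defined above can be implemented directly, and the implementation reproduces the worked examples just given. The code below is an illustration only (the overall prefactor \(1/(2\pi i)^{n}\) is left implicit), not part of the paper.

```python
# Direct implementation of the bracket [a_1, ..., a_{2n}] as a dictionary
# {monomial: coefficient}, with the prefactor 1/(2 pi i)^n left implicit.
from collections import Counter
from math import factorial

def perfect_matchings(positions):
    """All ways to split an even-size list of positions into disjoint pairs."""
    if not positions:
        yield []
        return
    first, rest = positions[0], positions[1:]
    for k, partner in enumerate(rest):
        remaining = rest[:k] + rest[k + 1:]
        for matching in perfect_matchings(remaining):
            yield [(first, partner)] + matching

def prod_factorial(mono):
    out = 1
    for multiplicity in Counter(mono).values():   # d_ij for each distinct tau_ij
        out *= factorial(multiplicity)
    return out

def bracket(indices):
    """[a_1, ..., a_{2n}]: distinct admissible monomials, each with coefficient 1/prod(d_ij!)."""
    monomials = set()
    for matching in perfect_matchings(list(range(len(indices)))):
        pairs = [tuple(sorted((indices[i], indices[j]))) for i, j in matching]
        if any(i == j for i, j in pairs):      # the factor a_sigma kills tau_{aa} terms
            continue
        monomials.add(tuple(sorted(pairs)))    # b_sigma: each distinct monomial counted once
    return {mono: 1.0 / prod_factorial(mono) for mono in monomials}

print(bracket([1, 1]))             # {}  -> the bracket [1,1] is zero
print(bracket([1, 2]))             # {((1, 2),): 1.0}
print(bracket([1, 1, 2, 3]))       # {((1, 2), (1, 3)): 1.0}
print(bracket([1, 2, 3, 4]))       # three monomials, each with coefficient 1.0
print(bracket([1, 1, 2, 2, 3, 4])) # tau_12^2 tau_34 with coefficient 1/2, the other two with 1.0
```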
**Convention 2**.: To make the formulas still nicer, we finally denote, for a given \(\ell\) with \(1\leq\ell\leq g\) (for thinking about even characteristics in \(\mathcal{E}_{\ell}\) or odd characteristics in \(\mathcal{O}_{\ell}\)) \[f_{\alpha}:=\begin{cases}\vartheta_{\alpha}^{\prime},&\text{if }1\leq\alpha \leq\ell\\ \vartheta_{\alpha},&\text{if }\ell+1\leq\alpha\leq g.\end{cases}\] and denote \[\phi_{\alpha}:=\frac{f_{\alpha}^{\prime\prime}}{f_{\alpha}};\qquad\psi_{\alpha}: =\frac{f_{\alpha}^{\prime\prime\prime\prime}}{f_{\alpha}}-\phi_{\alpha}^{2}\,,\] where we now use the index \(1\leq\alpha\leq g\). \(\triangle\) We are now ready to compute the two lowest order terms of the expansion of the theta constant with characteristics \([\begin{smallmatrix}1&\ldots&10\ldots\\ 1&\ldots&10\ldots&0\end{smallmatrix}]\) for arbitrary \(g\) and arbitrary \(2\leq\ell=2s\leq g\): \[\theta_{m}=\left([1,\ldots,\ell]+\sum_{\alpha}\phi_{\alpha}\cdot[\alpha, \alpha,1,\ldots,\ell]\right)\cdot\prod_{\alpha}f_{\alpha}+O(\varepsilon^{s+2}) \tag{8}\] For further use, we denote \[X_{\ell}:=[1,\ldots,\ell];\quad Y_{\ell}:=\sum_{\alpha}\phi_{\alpha}\cdot[ \alpha,\alpha,1,\ldots,\ell] \tag{9}\] these two leading terms (for \(\ell\) of any parity), so that the above becomes \(\theta_{m}=X_{\ell}+Y_{\ell}+O(\varepsilon^{s+2})\). Similarly, we can obtain formulas for the expansions of the derivatives, where recall we use indices \(1\leq J,K\leq\ell\) and \(\ell+1\leq j,k\leq g\). We first deal with the \(z\)-derivatives, to be used in our investigation of the locus \(\mathcal{G}_{\mathrm{null}}\). In this case the expansion of the theta gradient \(\mathrm{grad}\,\theta\,[\begin{smallmatrix}1&\ldots&10\ldots&0\\ 1&\ldots&10\ldots&0\end{smallmatrix}]\) where the characteristic lies in \(\mathcal{O}_{\ell}\), i.e. has an odd number \(\ell=2s+1\) of columns equal to \([\begin{smallmatrix}1\\ 1\end{smallmatrix}]\), is \[\frac{\partial\theta\,[\begin{smallmatrix}\varepsilon\\ \delta\end{smallmatrix}]}{\partial z_{J}}(\tau)=\left([1,\ldots,\widehat{J}, \ldots,\ell]+\sum_{\alpha}\phi_{\alpha}\cdot[\alpha,\alpha,1,\ldots,\widehat{ J},\ldots,\ell]\right)\prod f_{\alpha}+O(\varepsilon^{s+2}) \tag{10}\] and \[\begin{split}\frac{\partial\theta\,[\begin{smallmatrix} \varepsilon\\ \delta\end{smallmatrix}]}{\partial z_{j}}(\tau)&=(\phi_{j}\cdot[ j,1,\ldots,\ell]+\psi_{j}\cdot[j,j,j,1,\ldots,\ell]\\ &+\phi_{j}\sum_{\alpha}\phi_{\alpha}\cdot[\alpha,\alpha,j,1, \ldots,\ell]\right)\prod f_{\alpha}+O(\varepsilon^{s+3})\,,\end{split} \tag{11}\] where as usual the hat denotes omission of the index. In what follows we will actually only need to use the formulas above for \(\ell=3\), but setting \(\ell=3\) does not simplify the formula above much, so we have chosen to give the general expression. For investigating the rank of the Hessian of the theta function we will need to compute the second order derivatives of the theta function. While these formulas can also be written for arbitrary \(\ell\), in our arguments we will only need \(\ell=2\), and here this makes the formulas much shorter. 
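Before specializing to \(\ell=2\) in what follows, here is a purely illustrative numerical check of the leading term of (8) in the simplest case \(g=\ell=2\), where it predicts \(\theta_{m}\approx\vartheta_{1}^{\prime}\vartheta_{2}^{\prime}\,\tau_{12}/(2\pi i)\) near the diagonal. Truncated sums and the specific sample values of \(t_{1},t_{2}\) are assumptions of this sketch, which is not part of the argument.

```python
# Numerical check (illustration only) of the leading term of (8) for g = 2, l = 2,
# m = [11;11]: near the diagonal, theta_m(tau) ~ (tau_12 / (2 pi i)) * theta'_1 * theta'_2.
import itertools
import numpy as np

def theta2(tau, z, eps=(1, 1), delta=(1, 1), N=10):
    """Truncated genus-2 theta with characteristic [eps; delta]."""
    val = 0j
    for p in itertools.product(range(-N, N + 1), repeat=2):
        q = np.array(p, dtype=float) + np.array(eps) / 2.0
        val += np.exp(np.pi * 1j * (q @ tau @ q + 2 * q @ (np.asarray(z) + np.array(delta) / 2.0)))
    return val

def theta1_prime(t, N=10):
    """d/dz of the genus-1 odd theta [1;1](t, z) at z = 0, differentiating term by term."""
    val = 0j
    for n in range(-N, N + 1):
        q = n + 0.5
        val += (2j * np.pi * q) * np.exp(np.pi * 1j * (q * q * t + 2 * q * 0.5))
    return val

t1, t2 = 0.2 + 1.3j, -0.4 + 1.7j
x = 1e-3                                            # small off-diagonal entry tau_12
tau = np.array([[t1, x], [x, t2]])
leading = (x / (2j * np.pi)) * theta1_prime(t1) * theta1_prime(t2)
print(theta2(tau, np.zeros(2)) / leading)           # expect approximately 1, up to O(x^2)
```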
So from now on we take \(m=m_{0}=\left[\begin{smallmatrix}110\ldots 0\\ 110\ldots 0\end{smallmatrix}\right]\), and first compute for \(J=1,2\) \[\partial_{\tau_{JJ}}\theta_{m_{0}}=\frac{\prod f_{\alpha}}{2}\cdot(\phi_{J}\cdot X_{2}+\phi_{J}\cdot Y_{2}+\psi_{J}\cdot X_{2})+O(\varepsilon^{3})\,,\] where the extra factor of \(\frac{1}{2}\) is due to \(\frac{1}{4\pi i}\) appearing instead of \(\frac{1}{2\pi i}\) in the heat equation, for differentiating with respect to \(\tau_{JJ}\). Next, we compute \[\partial_{\tau_{12}}\theta_{m_{0}}=\prod f_{\alpha}\cdot\left(1+\sum\phi_{\alpha}\cdot[\alpha,\alpha]+\psi_{1}\cdot[1,1]+\psi_{2}\cdot[2,2]\right)+O(\varepsilon^{2})\,;\] \[\partial_{\tau_{1j}}\theta_{m}=\phi_{j}\cdot[j,2]\cdot\prod_{\alpha}f_{\alpha}+O(\varepsilon^{2})\,;\] \[\partial_{\tau_{2j}}\theta_{m}=\phi_{j}\cdot[j,1]\cdot\prod_{\alpha}f_{\alpha}+O(\varepsilon^{2})\,;\] \[\partial_{\tau_{jk}}\theta_{m}=\phi_{j}\cdot\phi_{k}\cdot[j,k,1,2]\cdot\prod_{\alpha}f_{\alpha}+O(\varepsilon^{3})\,,\] and finally \[\partial_{\tau_{jj}}\theta_{m}=\frac{\prod f_{\alpha}}{2}\cdot\left(\phi_{j}\cdot[1,2]+\psi_{j}\cdot[j,j,1,2]+\phi_{j}\cdot\sum_{\alpha}\phi_{\alpha}\cdot[\alpha,\alpha,1,2]\right)+O(\varepsilon^{3})=\frac{\prod f_{\alpha}}{2}\cdot\left(\phi_{j}\cdot(X_{2}+Y_{2})+\psi_{j}\cdot[j,j,1,2]\right)+O(\varepsilon^{3})\,,\] where of course in each of these formulas further terms in the expansion can also be easily written down, though the formulas become very lengthy. We give an example: the principal \(4\times 4\) minor of the Hessian of \(\theta_{m_{0}}\), formed by the rows and columns numbered \(1,2,j,k\) (for \(3\leq j<k\leq g\)) is given by \(\prod f_{\alpha}\) multiplied by \[\begin{array}{cccc}\frac{1}{2}\phi_{1}\cdot(X_{2}+Y_{2})&1+\sum_{\alpha,\beta}\phi_{\alpha}\phi_{\beta}\cdot[\alpha,\alpha,\beta,\beta]&\phi_{j}\cdot[2,j]+\phi_{j}\sum\phi_{\alpha}\cdot[\alpha,\alpha,2,j]&\phi_{k}\cdot[2,k]+\phi_{k}\cdot\sum\phi_{\alpha}\cdot[\alpha,\alpha,2,k]\\ *&\frac{1}{2}\phi_{2}\cdot(X_{2}+Y_{2})&\phi_{j}\cdot[1,j]+\phi_{j}\sum\phi_{\alpha}\cdot[\alpha,\alpha,1,j]&\phi_{k}\cdot[1,k]+\phi_{k}\sum\phi_{\alpha}\cdot[\alpha,\alpha,1,k]\\ *&*&\frac{1}{2}\phi_{j}\cdot(X_{2}+Y_{2})+\frac{1}{2}\psi_{j}\cdot[j,j,1,2]&\phi_{j}\cdot\phi_{k}\cdot[j,k,1,2]\\ *&*&*&\frac{1}{2}\phi_{k}\cdot(X_{2}+Y_{2})+\frac{1}{2}\psi_{k}\cdot[k,k,1,2]\end{array} \tag{12}\] where the \(*\)'s below the diagonal signify the fact that the matrix is symmetric, and we have expanded the matrix up to dropping the \(\varepsilon^{3}\) terms.

## 4. The local form of the hyperelliptic locus

In this section we describe the hyperelliptic locus near \(\mathbb{I}_{g}\), proving Theorem 20, which is a slightly stronger version of Theorem 1. We first check that using the special fundamental system (4), the resulting irreducible component \(\mathbb{HJ}_{g}^{I}\) of \(\mathbb{HJ}_{g}\subset\mathbb{H}_{g}\) given by Proposition 12 contains the locus \(\mathbb{I}_{g}\) of diagonal period matrices. This follows from the following **Lemma 18**.: _If \(m\in\mathcal{E}\) can be written as a sum of strictly less than \(g\) among the characteristics \(o_{1},\dots,o_{g},e_{2},\dots,e_{g+2}\) of the special fundamental system (4), then \(m+b^{g}\in\mathcal{E}^{*}\)._ Proof.: We proceed by induction on \(g\), with the lemma being easily true for \(g=1,2\).
Then observe that deleting the characteristics \(o_{g}\) and \(e_{g+2}\) from the chosen special fundamental system, and forgetting the \(g\)-th column of each characteristic, gives the special fundamental system of genus \(g-1\). Note now that the last column of \(b^{g}\) is equal to \(\left[\begin{smallmatrix}1\\ 1\end{smallmatrix}\right]\). Thus unless the sum for \(m\) includes one of the characteristics that have a non-zero \(g\)-th column, the characteristic \(m+b^{g}\) also has the last column \(\left[\begin{smallmatrix}1\\ 1\end{smallmatrix}\right]\), and thus does not lie in \(\mathcal{E}_{0}\). The only three characteristics among the special fundamental system that have a non-zero \(g\)-th column are \(o_{g},e_{g+1},e_{g+2}\). If \(m\) is a sum of characteristics including \(o_{g}\), then we observe that \(b^{g}+o_{g}=b^{g-1}\oplus\left[\begin{smallmatrix}0\\ 0\end{smallmatrix}\right]\), and thus the statement follows from the inductive assumption, by ignoring the \(g\)-th column of characteristics (since if any of the first \(g-1\) columns of a genus \(g\) characteristic is equal to \(\left[\begin{smallmatrix}1\\ 1\end{smallmatrix}\right]\), it is not in \(\mathcal{E}_{0}\), and we are now using one less characteristic in the sum for \(m\)). If \(m\) is a sum of characteristics not including \(o_{g}\), but including exactly one of \(e_{g+1}\) or \(e_{g+2}\), we note that \(b^{g}+e_{g+1}=b^{g-1}\oplus\left[\begin{smallmatrix}0\\ 1\end{smallmatrix}\right]\) and \(b^{g}+e_{g+2}=b^{g-1}\oplus\left[\begin{smallmatrix}1\\ 0\end{smallmatrix}\right]\), and the same argument applies as for the previous case of \(o_{g}\). Finally, if both \(e_{g+1}\) and \(e_{g+2}\) are used in this sum representation, we note that since the bottom characteristic of \(e_{g+1}+e_{g+2}\) is equal to the zero vector in \(\mathbb{Z}_{2}^{g}\), we can again proceed by induction. **Corollary 19**.: _All theta constants \(\theta_{m+b^{g}}\) vanishing along the hyperelliptic component \(\mathbb{H}\mathbb{J}_{g}^{I}\) also vanish along \(\mathbb{I}_{g}\)._ We are now ready to prove the most precise version of our result on the hyperelliptic locus. **Theorem 20**.: _For any irreducible component \(\mathbb{X}\) of the hyperelliptic locus \(\mathbb{H}\mathbb{J}_{g}\subset\mathbb{H}_{g}\), such that \(\mathbb{X}\supset\mathbb{I}_{g}\), the tangent space to \(\mathbb{X}\) at any point of \(\mathbb{I}_{g}\) is the set of period matrices satisfying the set of equations \(\{\tau_{\pi(i)\pi(j)}=0\}_{\forall 1\leq i,j\leq g,|i-j|>1}\), where \(\pi\in S_{g}\) is the permutation that is the image in \(S_{g}\) of the element \(\sigma\in Stab_{\mathbb{I}_{g}}\) that sends \(\mathbb{H}\mathbb{J}_{g}^{I}\) to \(\mathbb{X}\)._ Proof.: We first use Lemma 18 to prove Theorem 1, showing that the tangent space to \(\mathbb{H}\mathbb{J}_{g}^{I}\) at any point of \(\mathbb{I}_{g}\) is given by equations \(\tau_{jk}=0\) for all \(|j-k|>1\). Indeed, recall that \(b^{g}=o_{1}+\cdots+o_{g}\), and consider a characteristic of the form \[m=b^{g}+o_{1}+\cdots+\widehat{o_{j_{1}}}+\cdots+\widehat{o_{j_{2}}}+\cdots+\widehat{o_{j_{3}}}+\cdots+o_{g}=o_{j_{1}}+o_{j_{2}}+o_{j_{3}}\,,\] with \(1\leq j_{1}<j_{2}<j_{3}\leq g\). From the first expression for \(m\) it follows that \(\theta_{m}\) vanishes identically on \(\mathbb{H}\mathbb{J}_{g}^{I}\).
Writing the columns of \(m\) as \(\left[\begin{smallmatrix}\varepsilon_{i}\\ \delta_{i}\end{smallmatrix}\right]\), we compute that \[\varepsilon_{j_{1}}=\varepsilon_{j_{2}}=\varepsilon_{j_{3}}=\delta_{j_{1}}= \delta_{j_{3}}=1\] and that all other \(\varepsilon_{i}\) and \(\delta_{i}\) are zero (note that this includes \(\delta_{j_{2}}=0\)). Thus \(m\in\mathcal{E}_{2}\), in agreement with Lemma 18, and in particular the lowest order term of the expansion of \(\theta_{m}\) near \(\mathbb{I}_{g}\) is linear, given explicitly by (8) (where we use \(\ell=2\), see the genus \(4\) example there) as \[\theta_{m}(\tau)=\frac{1}{2\pi i}\tau_{j_{1}j_{3}}\vartheta_{j_{1}}^{\prime} \vartheta_{j_{3}}^{\prime}\prod_{h\neq j_{1},j_{3}}\vartheta_{h} \tag{13}\] in our conventions. Going over all possible choices of \(1\leq j_{1}<j_{2}<j_{3}\leq g\) means that in the above expressions there appear all \(\tau_{j_{1}j_{3}}\) such that \(j_{3}-j_{1}>1\). Recalling that we are working within the space of symmetric matrices, it follows that the tangent space to \(\mathbb{H}\mathbb{J}^{I}\) at a point in \(\mathbb{I}_{g}\) is contained in the locus of tridiagonal matrices given by equations \(\{\tau_{jk}=0\}_{\forall|k-j|>1}\). Since the hyperelliptic locus \(\mathcal{H}\mathcal{J}_{g}\) is of dimension \(2g-1\), any irreducible component of its preimage \(\mathbb{H}\mathbb{J}_{g}\) is also of dimension \(2g-1\). Thus the tangent space to \(\mathbb{H}\mathbb{J}_{g}^{I}\) at any point of \(\mathbb{I}_{g}\) must be of dimension at least \(2g-1\). Since this tangent space is contained in the space of tridiagonal matrices, which also has dimension \(2g-1\), it must be equal to it, and Theorem 1 is thus proven. We now combine this with the study of the action of \(\Gamma_{g}\) on irreducible components of \(\mathbb{H}\mathbb{J}_{g}\) to obtain the full statement. By Lemma 17, for any irreducible component \(\mathbb{X}\) of \(\mathbb{H}\mathbb{J}_{g}\) containing \(\mathbb{I}_{g}\), there exists \(\sigma\in Stab_{\mathbb{I}_{g}}\) such that \(\sigma(\mathbb{H}\mathbb{J}_{g}^{I})=\mathbb{X}\). Thus the tangent space to \(\mathbb{X}\) is the image of the tangent space to \(\mathbb{H}\mathbb{J}_{g}^{I}\) under the action of \(\sigma\). Since clearly the action of \(\Gamma_{1}\) on each column of the characteristic does not change the linear term of the expansion (13), it follows that the action of \(\sigma\) on the linear equations for the tangent space to \(\mathbb{H}\mathbb{J}_{g}^{I}\) along \(\mathbb{I}_{g}\) is simply by permuting the columns of the period matrix according to the image of the permutation \(\sigma\) under the surjection \(Stab_{\mathbb{I}_{g}}\to S_{g}\). **Remark 21**.: From the proof of the theorem we see that multiple irreducible components of \(\mathbb{H}\mathbb{J}_{g}\) contain \(\mathbb{I}_{g}\), and many of them may have the same tangent space along \(\mathbb{I}_{g}\). This can already be seen in the first interesting case of \(g=3\). Recall that \(\mathbb{H}\mathbb{J}_{3}\) has \(36\) irreducible components, each one being the zero locus of one of the \(36\) even genus \(3\) theta constants. 
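As a small computational illustration of the counting in this proof (ours, not part of the text): collecting, for every triple \(1\leq j_{1}<j_{2}<j_{3}\leq g\), the linear equation \(\tau_{j_{1}j_{3}}=0\) produced by (13) recovers exactly the tridiagonal conditions \(\{\tau_{jk}=0\}_{|j-k|>1}\), of the expected codimension.

```python
from itertools import combinations

def equations_from_triples(g):
    # each triple (j1 < j2 < j3) contributes the linear equation tau_{j1 j3} = 0
    return {(j1, j3) for j1, j2, j3 in combinations(range(1, g + 1), 3)}

def tridiagonal_equations(g):
    return {(j, k) for j in range(1, g + 1) for k in range(j + 1, g + 1) if k - j > 1}

for g in range(3, 12):
    assert equations_from_triples(g) == tridiagonal_equations(g)
    # the tridiagonal locus has dimension g + (g - 1) = 2g - 1, the dimension of the hyperelliptic locus
    assert g * (g + 1) // 2 - len(tridiagonal_equations(g)) == 2 * g - 1
```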
Nine of these \(36\) components, those with characteristics in \(\mathcal{E}^{*}=\mathcal{E}_{2}\), contain \(\mathbb{I}_{3}\): namely these correspond to characteristics \[[\begin{smallmatrix}110\\ 110\end{smallmatrix}]\,,\ [\begin{smallmatrix}111\\ 110\end{smallmatrix}]\,,\ [\begin{smallmatrix}110\\ 111\end{smallmatrix}]\,,\ [\begin{smallmatrix}101\\ 101\end{smallmatrix}]\,,\ [\begin{smallmatrix}111\\ 101\end{smallmatrix}]\,,\ [\begin{smallmatrix}101\\ 111\end{smallmatrix}]\,,\ [\begin{smallmatrix}011\\ 011\end{smallmatrix}]\,,\ [\begin{smallmatrix}111\\ 011\end{smallmatrix}]\,,\ [\begin{smallmatrix}011\\ 111\end{smallmatrix}]\,.\] The irreducible components of \(\mathbb{H}\mathbb{J}_{3}\) corresponding to each of the first three characteristics have local equation \(\tau_{12}=0\) near \(\mathbb{I}_{3}\); the next three components have local equation \(\tau_{13}=0\), and the final three cases have local equation \(\tau_{23}=0\). For the next, more interesting, case of \(g=4\), a similar analysis can be performed, noting that here \(10\) even theta constants vanish on every irreducible component of \(\mathbb{H}\mathbb{J}_{4}\). As a side remark, notice that no matter how the columns are permuted, the three local defining equations for the tangent space of a component of \(\mathbb{H}\mathbb{J}_{4}\) near \(\mathbb{I}_{4}\) cannot be of the form \(\tau_{12}=\tau_{13}=\tau_{14}=0\). Indeed, this corresponds to the fact that the locus of products \(\mathbb{H}_{1}\times\mathbb{H}_{3}\) (which locally along \(\mathbb{I}_{g}\) is given by precisely these three equations) is not contained in \(\mathbb{H}\mathbb{J}_{4}\), as \(\mathbb{H}\mathbb{J}_{3}\subsetneq\mathbb{H}_{3}\). In arbitrary genus, we see that the only non-trivial element of \(S_{g}\) that fixes the set of equations \(\{\tau_{jk}=0\}_{\forall\,|j-k|>1}\) is the product of transpositions \(\pi=(1,g)(2,g-1)\cdots\), i.e. the order-reversing permutation \(j\mapsto g+1-j\). Thus there are altogether \(g!/2\) possible different tangent spaces along \(\mathbb{I}_{g}\) to the different irreducible components of \(\mathbb{H}\mathbb{J}_{g}\) containing \(\mathbb{I}_{g}\).

## 5. The theta-null divisor and the vanishing gradient locus

We now proceed to investigate the loci \(\vartheta_{\text{null}}\) and \(\mathcal{G}_{\text{null}}\) locally near \(\mathcal{D}_{g}\). The method we use here, and then also for dealing with the Hessian rank loci, will be different, and more general, than what we have done for the hyperelliptic locus. The outline of our argument is as follows. First we use the expansion of theta functions near \(\mathbb{I}_{g}\), as computed in Section 3, to determine the dimensions of the tangent spaces near \(\mathbb{I}_{g}\) to the loci given by equations \(\theta_{m_{0}}(z)=0\) (resp. \(\operatorname{grad}\theta_{m_{0}}(z)=0\)), where \(m_{0}\) is the simplest even (resp. odd) characteristic, i.e. a characteristic that lies in \(\mathcal{E}_{2}\) (resp. \(\mathcal{O}_{3}\)) for which the expansions are given. This is done by intersecting the Taylor expansions of the defining equations with a suitable "transverse" subvariety, to simplify computations. As one can already see from the formulas in Section 3, and from the proofs of these statements, such expansions near \(\mathbb{I}_{g}\) for characteristics in \(\mathcal{E}_{\ell}\) or \(\mathcal{O}_{\ell}\) with \(\ell\gg 2\) can become very complicated -- essentially as the corresponding locus contains the diagonal \(\mathbb{I}_{g}\) with high multiplicity.
Thus to deal with arbitrary characteristics, which is necessary to understand the loci \(\vartheta_{\text{null}},\mathcal{G}_{\text{null}}\subset\mathcal{A}_{g}\), we will act by \(\Gamma_{g}\), and will have to use the fact that the loci of interest contain the "big" diagonal \(\mathbb{L}_{g}^{e}\) or \(\mathbb{L}_{g}^{o}\) defined below, which consists of block-diagonal matrices that have two to four \(1\times 1\) blocks, while the remaining blocks are \(2\times 2\). Since the setwise stabilizer \(Stab_{\mathbb{L}_{g}}\) contains suitable subgroups of \(G_{g}\), which by Lemma 11 permute all even (resp. odd) characteristics, it will turn out that the statement for an arbitrary characteristic \(m\) will reduce to the statement for \(m_{0}\) -- and Proposition 25 is a general statement to this effect.

### Local structure of \(\theta_{\text{m}_{0},\,\text{null}}\) and \(\operatorname{grad}_{m_{0},\,\text{null}}\) near the diagonal

We will start by explicit computations for the characteristics in \(\mathcal{E}_{2}\) and \(\mathcal{O}_{3}\). The even case is straightforward, using the first term of the Taylor expansion. **Proposition 22**.: _Denote \(m_{0}:=\left[\begin{smallmatrix}110\ldots 0\\ 110\ldots 0\end{smallmatrix}\right]\in\mathcal{E}_{2}\). Then in a sufficiently small neighborhood of \(\mathbb{I}_{g}\), the locus_ \[\theta_{\text{m}_{0},\,\text{null}}=\{\tau\in\mathbb{H}_{g}\colon\theta_{m_{0}}(\tau)=0\}\] _is smooth, of codimension \(1\) in \(\mathbb{H}_{g}\)._ Proof.: Since we are dealing with one non-trivial equation, it is clear that the locus is of codimension one, so the point of the statement is smoothness. For this, we observe that by (8) the local defining equation admits the expansion \[\theta_{m_{0}}(\tau)=c\tau_{12}+O(\varepsilon^{2})\] for some non-zero \(c\). Thus clearly the zero locus is smooth, of codimension one. Of course by acting by \(Stab_{2}\) we obtain the same statement for \(\theta_{\text{m},\,\text{null}}\) for any \(m=\left[\begin{smallmatrix}\varepsilon\\ \delta\end{smallmatrix}\right]\in\mathcal{E}_{2}\) -- and in this case the local lowest order defining equation is \(\tau_{ab}=0\), where \(a\) and \(b\) are the two \(\left[\begin{smallmatrix}1\\ 1\end{smallmatrix}\right]\) columns of the characteristic. The odd case is much more elaborate, as there are \(g\) components of the gradient giving the \(g\) defining equations of \(\mathcal{G}_{\mathrm{null}}\), and we want to check that these equations are locally independent. This is hard to do directly, and for bounding the dimension of local irreducible components of various loci near \(\mathbb{I}_{g}\), we will use the following well-known statement. **Lemma 23**.: _Let \(X=\operatorname{Spec}\left(\mathbb{C}[x_{1},\dots,x_{N}]/\langle F_{1},\dots,F_{k}\rangle\right)\) be an irreducible affine subscheme of \(\mathbb{A}^{N}=\operatorname{Spec}\mathbb{C}[x_{1},\dots,x_{N}]\), of dimension \(n=\dim X\). Suppose \(x\in X\subset\mathbb{A}^{N}\), and let \(\mathfrak{M}_{x}\subset\mathcal{O}_{\mathbb{A}^{N},x}\) be the maximal ideal of the local ring. For any \(h\geq 1\) let \(N(h)\) denote the number of algebraically independent elements among the images of \(F_{1},\dots,F_{k}\) modulo \((\mathfrak{M}_{x})^{h}\). Then the inequality \(n\leq N-N(h)\) holds for any \(h\)._ We note that of course if \(X\) is a not necessarily irreducible affine scheme, the lemma shows that every irreducible component of \(X\) that contains \(x\) must have dimension at most \(N-N(h)\).
Proof.: We denote by \(\mathfrak{m}_{x}\subset\mathcal{O}_{X,x}\) the maximal ideal; since \(X\) is \(n\)-dimensional, we of course have \(\dim\left(\mathcal{O}_{X,x}/(\mathfrak{m}_{x}^{h})\right)=O(h^{n})\) as \(h\to\infty\). Note now that the map \(\mathcal{O}_{\mathbb{A}^{N},x}\to\mathcal{O}_{X,x}\) maps the ideal \(\langle F_{1},\dots,F_{k}\rangle\) to \(0\), and maps \(\mathfrak{M}_{x}\) onto \(\mathfrak{m}_{x}\). Thus we have the bound \[\dim\left(\mathcal{O}_{\mathbb{A}^{N},x}/\langle F_{1},\dots,F_{k},(\mathfrak{M}_{x})^{h}\rangle\right)\geq\dim\left(\mathcal{O}_{X,x}/(\mathfrak{m}_{x})^{h}\right)\,,\] while the map of the completions \[\widehat{\mathcal{O}_{\mathbb{A}^{N},x}}/\langle F_{1},\dots,F_{k}\rangle\to\widehat{\mathcal{O}_{X,x}}\] is surjective. Since the function \(N(h)\) is monotone, and bounded above by \(N\), it must have a limit, and from the above inequalities we thus obtain \[n\leq N-\lim_{h\to\infty}N(h)\leq N-N(h)\] for any \(h\). We now obtain the dimension result for odd characteristics in \(\mathcal{O}_{3}\). **Proposition 24**.: _Denote \(m_{0}:=\left[\begin{smallmatrix}1110...0\\ 1110...0\end{smallmatrix}\right]\). Then in a sufficiently small neighborhood of \(\mathbb{I}_{g}\), the locus_ \[\operatorname{grad}_{m_{0},\,\mathrm{null}}=\{\tau\in\mathbb{H}_{g}\colon\,\operatorname{grad}\theta_{m_{0}}(\tau)=0\}\] _is of codimension \(g\) in \(\mathbb{H}_{g}\)._ Proof.: We will apply Lemma 23 for \(X=\operatorname{grad}_{m_{0},\,\mathrm{null}}\), and consider the intersection of \(X\) with the locus \(Y\) given by equations \[\{\tau_{jk}=0\quad\forall\,4\leq j<k\leq g\}\qquad\text{and}\qquad\{\tau_{1j}=\tau_{2j}=\tau_{3j}\quad\forall\,4\leq j\leq g\}\,.\] These are altogether \[\frac{(g-3)(g-4)}{2}+2(g-3)=\frac{g(g-3)}{2}\] equations, which are clearly independent, and all of which are of course satisfied on \(\mathbb{I}_{g}\). Thus \(\mathbb{I}_{g}\subset Y\), and \(\operatorname{codim}_{\mathbb{H}_{g}}Y=\frac{g(g-3)}{2}\). We claim that in a sufficiently small neighborhood \(U\) of \(\mathbb{I}_{g}\) we have \(X\cap Y\cap U=\mathbb{I}_{g}\). Since \(\dim\mathbb{I}_{g}=g\), this would imply that \[\dim(X\cap U)\leq\dim\mathbb{I}_{g}+\operatorname{codim}_{\mathbb{H}_{g}}Y=g+\frac{g(g-3)}{2}=\frac{g(g-1)}{2}\,,\] and thus that the codimension of \(X\cap U\) in \(U\) is at least \(g\). Since \(X\subset\mathbb{H}_{g}\) is given by \(g\) equations, it follows that \(\operatorname{codim}_{\mathbb{H}_{g}}X\cap U=g\). To see that \(X\cap Y\cap U=\mathbb{I}_{g}\), we simply plug in the defining equations of \(Y\) into the expansion (10) of the theta gradient computed above, always excluding the common factor of \(\prod f_{\alpha}\), which is non-zero at a generic point of \(\mathbb{I}_{g}\): \[\partial_{z_{1}}\theta_{m_{0}}\equiv[2,3]+\sum_{\alpha}\phi_{\alpha}\cdot[\alpha,\alpha,2,3]\equiv\tau_{23}+\phi_{1}\tau_{12}\tau_{13}+\sum_{j\geq 4}\phi_{j}\tau_{2j}\tau_{3j}\mod\mathfrak{m}^{3}\,,\] where we recall that \(t_{1},\ldots,t_{g}\) are considered fixed, so that \(\mathfrak{m}=\langle\{\tau_{ab}\}_{1\leq a<b\leq g}\rangle\).
Now substituting into this the defining equations for \(Y\) we obtain \[\partial_{z_{1}}\theta_{m}|_{Y}\equiv\tau_{23}+\phi_{1}\tau_{12}\tau_{13}+\sum_{j\geq 4}\phi_{j}\tau_{1j}^{2}\mod\mathfrak{m}^{3}\,,\] and of course the expressions for \(\partial_{z_{2}}\theta_{m}|_{Y}\) and \(\partial_{z_{3}}\theta_{m}|_{Y}\) are completely analogous: \[\partial_{z_{2}}\theta_{m}|_{Y}\equiv\tau_{13}+\phi_{2}\tau_{12}\tau_{23}+\sum_{j\geq 4}\phi_{j}\tau_{1j}^{2}\mod\mathfrak{m}^{3}\,,\] \[\partial_{z_{3}}\theta_{m}|_{Y}\equiv\tau_{12}+\phi_{3}\tau_{13}\tau_{23}+\sum_{j\geq 4}\phi_{j}\tau_{1j}^{2}\mod\mathfrak{m}^{3}\,.\] For the partial \(z\)-derivatives in the other directions we obtain from (11) \[\partial_{z_{j}}\theta_{m}\equiv\phi_{j}\cdot\left([j,1,2,3]+\sum\phi_{\alpha}\cdot[\alpha,\alpha,j,1,2,3]\right)+\psi_{j}\cdot[j,j,j,1,2,3]\equiv\phi_{j}\cdot\left(\tau_{1j}\tau_{23}+\tau_{2j}\tau_{13}+\tau_{3j}\tau_{12}+\sum\phi_{\alpha}\cdot[\alpha,\alpha,j,1,2,3]\right)+\psi_{j}\tau_{1j}\tau_{2j}\tau_{3j}\mod\mathfrak{m}^{4}\,.\] We compute \([1,1,j,1,2,3]|_{Y}=\tau_{12}\tau_{13}\tau_{1j}\), \([j,j,j,1,2,3]|_{Y}=\tau_{1j}^{3}\), and \[[k,k,j,1,2,3]|_{Y}=\tau_{1j}\tau_{2k}\tau_{3k}+\tau_{2j}\tau_{1k}\tau_{3k}+\tau_{3j}\tau_{1k}\tau_{2k}=3\tau_{1j}\tau_{1k}^{2}\,,\] so that we can finally evaluate \[\partial_{z_{j}}\theta_{m}|_{Y}\equiv\phi_{j}\tau_{1j}\cdot\left(\tau_{23}+\tau_{13}+\tau_{12}+\phi_{1}\tau_{12}\tau_{13}+\phi_{2}\tau_{12}\tau_{23}+\phi_{3}\tau_{13}\tau_{23}\right)+3\phi_{j}\tau_{1j}\sum_{k\geq 4}\phi_{k}\cdot\tau_{1k}^{2}+\psi_{j}\tau_{1j}^{3}\mod\mathfrak{m}^{4}\,.\] We are interested in the locus \(Y\cap\operatorname{grad}_{\operatorname{null}}\left[\begin{smallmatrix}\varepsilon\\ \delta\end{smallmatrix}\right]\), and thus we can substitute the expansions of the equations \(\partial_{z_{1}}\theta_{m}|_{Y}=\partial_{z_{2}}\theta_{m}|_{Y}=\partial_{z_{3}}\theta_{m}|_{Y}=0\) in the last equation, obtaining \[\partial_{z_{j}}\theta_{m}|_{Y}\equiv\phi_{j}\tau_{1j}\cdot(\partial_{z_{1}}\theta_{m}|_{Y}+\partial_{z_{2}}\theta_{m}|_{Y}+\partial_{z_{3}}\theta_{m}|_{Y})+\tau_{1j}^{3}(\psi_{j}-2\phi_{j}^{2})\mod\mathfrak{m}^{4}\,.\] Since \(\psi_{j}-2\phi_{j}^{2}\) is not identically zero as a function on \(\mathbb{H}_{1}\), it does not vanish at a generic point of \(\mathbb{I}_{g}\), and thus the vanishing of \(\operatorname{grad}\theta_{m_{0}}\mod\mathfrak{m}^{4}\) implies \(\tau_{1j}=0\) for all \(4\leq j\leq g\), which together with the defining equations for \(Y\) implies that \(\tau\) is diagonal except possibly for the entries \(\tau_{12},\tau_{23},\tau_{13}\). However, the vanishing of the \(z_{1},z_{2},z_{3}\) derivatives \(\mod\mathfrak{m}^{2}\) gives the equations \(\tau_{12}=\tau_{23}=\tau_{13}=0\). Thus altogether it follows that every irreducible component of \(\operatorname{grad}_{m_{0},\,\operatorname{null}}\) that contains \(\mathbb{I}_{g}\) has codimension at least \(g\) in \(\mathbb{H}_{g}\), and thus codimension precisely \(g\).

### Theta-null and gradient loci for arbitrary characteristics

Note that the computations above for \(m_{0}\in\mathcal{O}_{3}\) are already quite involved. If we wanted to deal with the locus \(\theta_{m}(\tau)=0\) or \(\operatorname{grad}\theta_{m}(\tau)=0\) for \(\ell\gg 3\), the computation would be daunting, as we would need to consider the Taylor expansion to order \(\ell/2\), see e.g. (8). Thus instead we will use the action of \(G_{g}\), and the fact that the loci we are interested in contain the big diagonal, defined below.
The loci we deal with will be defined in terms of the geometry of the theta divisor. While just thinking of the locus of ppav such that the theta divisor satisfies some geometric condition (eg has a singular point with some properties) only defines a locus _set-theoretically_ as a _subset_ of \(\mathcal{A}_{g}\), of course the loci we are interested in are in fact algebraic. However, thinking of them as _subvarieties_ of \(\mathcal{A}_{g}\), by arguing that the conditions that define them are _algebraic_ is also insufficient for our purposes. Indeed, for us it is important to consider these as _subschemes_ of \(\mathcal{A}_{g}\) -- as it is for Mumford's computation [12] of the class of the Andreotti-Mayer divisor, and in general for thinking about the Andreotti-Mayer loci. The way we think of the scheme structure on these loci is as follows: recall that the universal cover of the universal family \(\mathcal{X}_{g}\to\mathcal{A}_{g}\) of ppav is \(\mathbb{H}_{g}\times\mathbb{C}^{g}\), with the covering group being the semidirect product \(\operatorname{Sp}(2g,\mathbb{Z})\rtimes\mathbb{Z}^{2g}\). The theta function is a global holomorphic function on \(\mathbb{H}_{g}\times\mathbb{C}^{g}\), and various geometric conditions on the singularities of the theta divisor can then be interpreted as various analytic equations on \(\mathbb{H}_{g}\times\mathbb{C}^{g}\) involving the theta function and its partial derivatives. For the purposes of obtaining the results below, we will only be interested in the geometry of the theta divisor at the two-torsion points of the ppav, and these conditions can be defined analytically over \(\mathbb{H}_{g}\) in terms of theta constants with characteristics and their derivatives. The loci thus defined naturally come with an analytic defining ideal on \(\mathbb{H}_{g}\); the defining ideal is invariant under the action of \(\Gamma_{g}\) (or \(\Gamma_{g}(2)\) depending on the context), and thus images of these in \(\mathcal{A}_{g}\) (or \(\mathcal{A}_{g}(2)\)) have natural defining algebraic equations, and thus a natural scheme structure. This explains our care in describing the following general setup. For any \(m=[\,\raisebox{0.86pt}{\tiny$\frac{\varepsilon}{\delta}$}\,]\in\mathcal{E}\) we set \(\theta_{\operatorname{m,null}}:=\theta_{\operatorname{null}}[\,\raisebox{0.86pt} {\tiny$\frac{\varepsilon}{\delta}$}\,]\), and for any \(m=[\,\raisebox{0.86pt}{\tiny$\frac{\varepsilon}{\delta}$}\,]\in\mathcal{O}\) we set \(\operatorname{grad}\theta_{\operatorname{m,null}}:=\operatorname{grad}\theta_{ \operatorname{null}}[\,\raisebox{0.86pt}{\tiny$\frac{\varepsilon}{\delta}$}\,]\). Recall that by definition \(\theta_{\operatorname{null}}=\cup_{m\in\mathcal{E}}\theta_{\operatorname{m, null}}\). Given any analytic subspace \(\mathbb{X}\subset\mathbb{H}_{g}\), which will be contained in either \(\theta_{\operatorname{null}}\) or \(\operatorname{grad}_{\operatorname{null}}\), depending on the context, we decompose it as \(\mathbb{X}=\cup_{m}\mathbb{X}_{m}\), where \(\mathbb{X}_{m}:=\mathbb{X}\cap\theta_{\operatorname{m,null}}\) (or respectively \(\mathbb{X}_{m}:=\mathbb{X}\cap\operatorname{grad}\theta_{\operatorname{m, null}}\)). We note that \(\mathbb{X}_{m}\cap\mathbb{X}_{n}\) may be non-empty. 
Assume \(\mathbb{X}\subset\theta_{\operatorname{null}}\) (or \(\mathbb{X}\subset\operatorname{grad}_{\operatorname{null}}\)) is an analytic subspace of \(\mathbb{H}_{g}\) satisfying \(\Gamma_{g}\circ\mathbb{X}=\mathbb{X}\) (as a set), so in particular \(\Gamma_{g}\) acts transitively on the set of \(\mathbb{X}_{m}\) for all \(m\in\mathcal{E}\), and for any \(m\) the setwise stabilizer \(\Gamma_{m}\) of \(\mathbb{X}_{m}\) contains \(\Gamma_{g}(2)\). Denoting \(X:=p(\mathbb{X})\subset\mathcal{A}_{g}\) the image, observe that \(p^{-1}(X)=\mathbb{X}\), and there exists a well-defined scheme structure on \(X\) induced by the defining equations of \(\mathbb{X}\). We will be interested in computing the dimension of irreducible components of \(X\) containing \(\mathcal{D}_{g}\). This is related to computing the dimension of irreducible components of \(\mathbb{X}\) containing \(\mathbb{I}_{g}\), which we will approach via Taylor expansions in the neighborhood of \(\mathbb{I}_{g}\). The difficulty is that a priori the scheme \(X\) may have embedded components containing \(\mathcal{D}_{g}\), and thus thinking of \(X\) as a subvariety may not suffice. Essentially the difficulty is that if \(\mathbb{I}_{g}\subset\mathbb{X}_{m}\), and we can determine irreducible components of \(\mathbb{X}_{m}\) containing \(\mathbb{I}_{g}\), it could be that also \(\mathbb{I}_{g}\subset\mathbb{X}_{n}\) for some other \(n\), and the image in \(\mathcal{A}_{g}\) of an irreducible component of \(\mathbb{X}_{n}\) containing \(\mathbb{I}_{g}\) may be strictly contained in the image in \(\mathcal{A}_{g}\) of an irreducible component of \(\mathbb{X}_{m}\) containing \(\mathbb{I}_{g}\). We will deal with this by explicitly imposing the additional assumption that a component contains the big diagonal that we now define (this condition will hold for all those loci that we are interested in). For the even case (i.e. when we are interested in \(\mathbb{X}\subset\theta_{\operatorname{null}}\)), we define the big diagonal as \[\mathbb{L}_{g}^{e}:=\mathbb{H}_{1}\times\mathbb{H}_{1}\times\mathbb{H}_{2}\times\cdots\times\mathbb{H}_{2}\quad\text{or}\quad\mathbb{L}_{g}^{e}:=\mathbb{H}_{1}\times\mathbb{H}_{1}\times\mathbb{H}_{2}\times\cdots\times\mathbb{H}_{2}\times\mathbb{H}_{1}\,,\] (where the presence of the last factor depends on the parity of \(g\)) as the locus of period matrices that have one less than the maximal possible number of \(2\times 2\) blocks along the diagonal. The direct product \(L_{g}^{e}:=\Gamma_{1}\times\Gamma_{1}\times\Gamma_{2}\times\cdots\times\Gamma_{2}\) or \(L_{g}^{e}:=\Gamma_{1}\times\Gamma_{1}\times\Gamma_{2}\times\cdots\times\Gamma_{2}\times\Gamma_{1}\) (where the presence of the last factor depends on the parity of \(g\)) is clearly contained in the stabilizer \(Stab_{\mathbb{L}_{g}^{e}}\). Finally, in the even case we set \(m_{0}:=\left[\begin{smallmatrix}110\ldots 0\\ 110\ldots 0\end{smallmatrix}\right]\in\mathcal{E}_{2}\), so that \(\theta_{\operatorname{m_{0},\,null}}\supset\mathbb{I}_{g}\). Similarly for the odd case of \(\mathbb{X}\subset\operatorname{grad}_{\operatorname{null}}\) we define the big diagonal to be \(\mathbb{L}_{g}^{o}:=\mathbb{H}_{1}\times\mathbb{L}_{g-1}^{e}\), and note that its stabilizer contains the direct product \(L_{g}^{o}:=\Gamma_{1}\times L_{g-1}^{e}\). In this case we set \(m_{0}:=\left[\begin{smallmatrix}1110\ldots 0\\ 1110\ldots 0\end{smallmatrix}\right]\in\mathcal{O}_{3}\), so that again \(\operatorname{grad}_{m_{0},\,null}\supset\mathbb{I}_{g}\).
We will use these to investigate irreducible components locally near \(\mathbb{I}_{g}\) by applying the following statement. **Proposition 25**.: _Let \(\mathbb{X}\subset\theta_{\rm null}\) (resp. \(\mathbb{X}\subset\operatorname{grad}_{\rm null}\)) be an analytic subspace of \(\mathbb{H}_{g}\) containing \(\mathbb{I}_{g}\), such that \(\mathbb{X}=\cup\mathbb{X}_{m}\) is invariant under \(\Gamma_{g}\). Let \(\mathbb{Y}\subset\mathbb{X}_{m_{0}}\) be an irreducible component of \(\mathbb{X}_{m_{0}}\) containing \(\mathbb{I}_{g}\)._ _If \(\mathbb{L}_{g}^{e}\subset\mathbb{Y}\) (resp. \(\mathbb{L}_{g}^{o}\subset\mathbb{Y}\)), then for each \(m\in\mathcal{E}^{*}\) (resp. \(m\in\mathcal{O}^{*}\)) there exists an element \(\sigma_{m}\in G_{g}\) mapping \(m_{0}\) to \(m\), such that \(\sigma_{m}(\mathbb{Y})\) is an irreducible component of \(\mathbb{X}_{m}\) containing \(\mathbb{I}_{g}\)._ Proof.: We give the argument for the even case; the argument for the odd case being completely analogous, using \(\mathbb{L}_{g}^{o}\) and \(L_{g}^{o}\) instead of \(\mathbb{L}_{g}^{e}\) and \(L_{g}^{e}\). We first observe that if \(m\in\mathcal{E}_{2}\) (resp. \(m\in\mathcal{O}_{3}\)), then, since \(Stab_{\mathbb{I}_{g}}\) acts transitively on \(\mathcal{E}_{2}\) (resp. \(\mathcal{O}_{3}\)), there exists \(\sigma_{m}\in Stab_{\mathbb{I}_{g}}\) sending \(m_{0}\) to \(m\). Since \(\sigma_{m}(\mathbb{I}_{g})=\mathbb{I}_{g}\) (as a set), the image \(\sigma_{m}(\mathbb{Y})\) contains \(\mathbb{I}_{g}\). Since \(\sigma_{m}(\mathbb{X}_{m_{0}})=\mathbb{X}_{m}\) by the \(\Gamma_{g}\)-invariance of \(\mathbb{X}\) and by the definition of \(\mathbb{X}_{m}\), it follows that \(\sigma_{m}(\mathbb{Y})\) is an irreducible component of \(\mathbb{X}_{m}\), containing \(\mathbb{I}_{g}\). To deal with the case of \(m\in\mathcal{E}_{\ell}\) with \(\ell\geq 4\), we first observe that since \(Stab_{\mathbb{I}_{g}}\) acts transitively on \(\mathcal{E}_{\ell}\), it is enough to deal with the case of \(m=\left[\begin{smallmatrix}1\ldots 10\ldots 0\\ 1\ldots 10\ldots 0\end{smallmatrix}\right]\) with \(\ell\) columns equal to \(\left[\begin{smallmatrix}1\\ 1\end{smallmatrix}\right]\). By the proof of Lemma 11 there exists an element \(\sigma_{m}\in L_{g}^{e}\subset Stab_{\mathbb{L}_{g}^{e}}\) such that \(\sigma_{m}\cdot m_{0}=m\). Since \(\mathbb{L}_{g}^{e}\subset\mathbb{Y}\subset\mathbb{X}_{m_{0}}\) by assumption, and since \(\mathbb{X}\) is \(\Gamma_{g}\)-invariant, it follows that \(\sigma_{m}(\mathbb{L}_{g}^{e})=\mathbb{L}_{g}^{e}\subset\sigma_{m}(\mathbb{Y})\subset\mathbb{X}_{m}\), and in particular \(\sigma_{m}(\mathbb{Y})\supset\mathbb{L}_{g}^{e}\supset\mathbb{I}_{g}\). If \(\sigma_{m}(\mathbb{Y})\) were not an irreducible component of \(\mathbb{X}_{m}\), i.e. if there existed an irreducible component \(\mathbb{W}\) of \(\mathbb{X}_{m}\) strictly containing \(\sigma_{m}(\mathbb{Y})\), then by invariance of \(\mathbb{X}\) under \(\Gamma_{g}\), the preimage \(\sigma_{m}^{-1}(\mathbb{W})\) would be an irreducible component of \(\mathbb{X}_{m_{0}}\) containing \(\mathbb{Y}\), giving a contradiction. The proposition can be rephrased as a statement on subvarieties of \(\mathcal{A}_{g}\): **Corollary 26**.: _Let \(\mathcal{X}\subset\vartheta_{\rm null}\subset\mathcal{A}_{g}\) (resp. \(\mathcal{X}\subset\mathcal{G}_{\rm null}\subset\mathcal{A}_{g}\)) be an algebraic subvariety containing \(p(\mathbb{L}_{g}^{e})\supset\mathcal{D}_{g}\) (resp. \(p(\mathbb{L}_{g}^{o})\supset\mathcal{D}_{g}\)).
Denote \(\mathbb{X}:=p^{-1}(\mathcal{X})\subset\mathbb{H}_{g}\), and let \(\mathbb{Y}\subset\mathbb{X}_{m}:=\mathbb{X}\cap\theta_{\rm m,\,null}\) (resp. \(\mathbb{X}\cap\operatorname{grad}_{m,\,null}\)) be an irreducible component containing \(\mathbb{L}_{g}^{e}\) (resp. \(\mathbb{L}_{g}^{o}\)). Then \(p(\mathbb{Y})\) is an irreducible component of \(\mathcal{X}\) containing \(\mathcal{D}_{g}\)._ What this proposition essentially rules out is the situation discussed above, where \(\mathbb{I}_{g}\subset\mathbb{Y}\subset\mathbb{X}_{m_{0}}\), but where also \(\mathbb{I}_{g}\subset\mathbb{W}\subset\mathbb{X}_{m}\) such that \(p(\mathbb{Y})\subsetneq p(\mathbb{W})\subset\mathcal{A}_{g}\). Notice that since \(\mathbb{I}_{g}\subset\mathbb{X}_{m}\) if and only if \(m\in\mathcal{E}^{*}\) (resp. \(\mathcal{O}^{*}\)), the characteristics \(m\in\mathcal{E}^{0}\) (resp. \(\mathcal{O}^{1}\)) do not occur in the above discussion. **Remark 27**.: The above proposition also holds in a more general context. For example, let \(M=(m_{1},\ldots,m_{k})\) be a sequence of even characteristics, and let \(\mathcal{E}_{M}\) be the set of all ordered \(k\)-tuples of characteristics that form the \(\Gamma_{g}\) orbit of \(M\). Then we can also decompose \(\mathbb{X}\subset\mathbb{H}_{g}\) as \[\mathbb{X}=\cup_{(n_{1},\ldots,n_{k})\in\mathcal{E}_{M}}\mathbb{X}_{n_{1},\ldots,n_{k}}\,,\] where \(\mathbb{X}_{n_{1},\ldots,n_{k}}:=\mathbb{X}\cap\theta_{n_{1},\,{\rm null}}\cap\cdots\cap\theta_{n_{k},\,{\rm null}}\). In this case we obtain similar statements under the assumption \(\mathbb{L}_{g}\subset\mathbb{X}_{(m_{1},\ldots,m_{k})}\). In particular this applies to the hyperelliptic case discussed in the previous section, giving an alternative approach to the results there. We will now apply this setup for the vanishing theta gradient loci. Proof of Theorem 5.: We apply Proposition 25 for \[\mathbb{X}=\operatorname{grad}_{\operatorname{null}}=\cup_{m}\operatorname{grad}_{m,\,\operatorname{null}}\subset\mathbb{H}_{g}\,.\] To avoid confusion, denote \(n_{0}:=\left[\begin{smallmatrix}110\ldots 0\\ 110\ldots 0\end{smallmatrix}\right]\) the even characteristic in genus \(g-1\). Note that the locus \(\mathbb{Y}=\mathbb{H}_{1}\times\theta_{\operatorname{n_{0},\,null}}\subset\mathbb{H}_{1}\times\mathbb{H}_{g-1}\) is irreducible and has codimension \(g\) in \(\mathbb{H}_{g}\), thus is an irreducible component of \(\operatorname{grad}_{m_{0},\,\operatorname{null}}\), containing \(\mathbb{L}_{g}^{o}\). Then Corollary 26 implies that \(p(\mathbb{Y})=\mathcal{A}_{1}\times\vartheta_{\operatorname{null}}\) is an irreducible component of \(\mathcal{G}_{\operatorname{null}}\), which contains \(\mathcal{D}_{g}\) (and in fact of course contains \(p(\mathbb{L}_{g}^{o})\)). For components containing the hyperelliptic locus, suppose \(\mathbb{Y}\subset\operatorname{grad}_{m,\,\operatorname{null}}\) is an irreducible component of \(\operatorname{grad}_{m,\,\operatorname{null}}\) containing some component \(\mathbb{X}\) of the hyperelliptic locus \(\mathbb{H}\mathbb{J}_{g}\), such that \(\mathbb{X}\supset\mathbb{I}_{g}\). By Lemma 17, there exists \(\sigma\in\Gamma_{g}\) which lies in \(Stab_{\mathbb{I}_{g}}\) and maps \(m\) to \(m_{0}\). We can now apply Corollary 26 for \(\sigma(\mathbb{Y})\subset\operatorname{grad}_{m_{0},\,\operatorname{null}}\), yielding the result.
## 6. The Hessian rank loci \(\vartheta^{2}_{\operatorname{null}}\) and \(\vartheta^{3}_{\operatorname{null}}\)

In this section we investigate the geometry of the loci with given rank of the Hessian of the theta function near \(\mathbb{I}_{g}\), proving Theorems 3 and 4. Using Corollary 26, at the end of the day it will suffice to study the Hessian of \(\theta_{m_{0}}\) near \(\mathbb{I}_{g}\).

### The rank two locus

Our first goal is to prove Theorem 3, that \(\mathcal{A}_{1}\times\mathcal{A}_{g-1}\) is an irreducible component of \(\vartheta^{2}_{\operatorname{null}}\). By applying Corollary 26, it will suffice to show that \(\mathbb{H}_{1}\times\mathbb{H}_{g-1}\) is an irreducible component of \(\theta^{2}_{\operatorname{m_{0},\,null}}\). For this, similarly to how we dealt with the locus \(\mathcal{G}_{\operatorname{null}}\), working locally near \(\mathbb{I}_{g}\) we will compute the intersection of \(\theta^{2}_{\operatorname{m_{0},\,null}}\) with the locus \(Z\subset\mathbb{H}_{g}\) given by the equations \(\tau_{jk}=0\) for all \(3\leq j<k\leq g\) and \(\tau_{1j}=\tau_{2j}\) for all \(3\leq j\leq g\). Note that \((\mathbb{H}_{1}\times\mathbb{H}_{g-1})\cap Z=\mathbb{I}_{g}\) by definition. Since \(\mathbb{H}_{1}\times\mathbb{H}_{g-1}\subset\theta^{2}_{\operatorname{m_{0},\,null}}\), the following proposition will suffice to prove Theorem 3. **Proposition 28**.: _For a sufficiently small neighborhood \(U\) of \(\mathbb{I}_{g}\) the equality \(\theta^{2}_{\operatorname{m_{0},\,null}}\cap Z\cap U=\mathbb{I}_{g}\) holds._ Proof.: Indeed, we will plug in the defining equations of \(Z\) into the defining equations of \(\theta^{2}_{\operatorname{m_{0},\,null}}\) and check that they imply \(\tau_{1j}=0\) for all \(1<j\leq g\). To see this, it will be sufficient to consider the principal \(3\times 3\) minors of the Hessian of \(\theta_{m_{0}}\) that include the first and second rows. Using the expansion (12), for the \(3\times 3\) principal minor obtained by taking rows and columns \(1,2,j\) for some \(3\leq j\leq g\), we compute the determinant to be \[D_{12j}:=\det\left(\begin{smallmatrix}\tfrac{1}{2}\phi_{1}\cdot(X_{2}+Y_{2})&1&\phi_{j}\tau_{2j}+\phi_{j}\sum\phi_{\alpha}\tau_{2\alpha}\tau_{j\alpha}\\ *&\tfrac{1}{2}\phi_{2}\cdot(X_{2}+Y_{2})&\phi_{j}\tau_{1j}+\phi_{j}\sum\phi_{\alpha}\tau_{1\alpha}\tau_{j\alpha}\\ *&*&\tfrac{1}{2}\phi_{j}\cdot(X_{2}+Y_{2})+\tfrac{1}{2}\psi_{j}\tau_{1j}\tau_{2j}\end{smallmatrix}\right)=-\frac{1}{2}\phi_{j}\cdot(X_{2}+Y_{2})+(2\phi_{j}^{2}-\psi_{j}/2)\tau_{1j}\tau_{2j}+O(\varepsilon^{3})\,.\] We also recall the expansion of \(\theta_{m_{0}}\) itself, given by equation (8). Using Lemma 23, we will work with the expansions of the theta constants and the determinants of the \(3\times 3\) minors of the Hessian up to \(O(\varepsilon^{3})\), intersected with \(Z\).
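Before carrying out this intersection, the leading terms of \(D_{12j}\) claimed above can be checked mechanically; the sympy sketch below (ours) keeps only the lowest-order part of each entry, with \(S\) standing for \(X_{2}+Y_{2}\) and a bookkeeping parameter \(\varepsilon\), since the dropped corrections only affect the \(O(\varepsilon^{3})\) terms.

```python
from sympy import symbols, Rational, Matrix, expand

eps = symbols("eps")
phi1, phi2, phij, psij = symbols("phi1 phi2 phi_j psi_j")
S, t1j, t2j = symbols("S tau_1j tau_2j")  # S = X_2 + Y_2; S, tau_1j, tau_2j are each O(eps)

a11 = Rational(1, 2) * phi1 * S * eps
a22 = Rational(1, 2) * phi2 * S * eps
a33 = Rational(1, 2) * phij * S * eps + Rational(1, 2) * psij * t1j * t2j * eps**2
a13 = phij * t2j * eps   # entry (1,j), lowest order
a23 = phij * t1j * eps   # entry (2,j), lowest order

M = Matrix([[a11, 1, a13],
            [1, a22, a23],
            [a13, a23, a33]])

det = expand(M.det())
claimed = expand(-Rational(1, 2) * phij * S * eps
                 + (2 * phij**2 - psij / 2) * t1j * t2j * eps**2)

# the determinant agrees with the claimed expression up to O(eps^3)
diff = expand(det - claimed)
assert all(diff.coeff(eps, k) == 0 for k in range(3))
```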
We thus compute \[\theta_{m_{0}}|_{Z}=([1,2]+Y_{2})\,|_{Z}+O(\varepsilon^{3})=\tfrac{1}{2\pi i}\tau_{12}+\tfrac{1}{(2\pi i)^{2}}\sum_{j\geq 3}\phi_{j}\tau_{1j}^{2}+O(\varepsilon^{3})\,.\] Substituting this into \(D_{12j}\) and dropping the common and generically non-zero factor of \(\prod f_{\alpha}\) gives \[D_{12j}|_{Z\cap\theta_{m_{0},\,\mathrm{null}}}=\,-\tfrac{1}{2}\phi_{j}\cdot(\tau_{12}+Y_{2})+(2\phi_{j}^{2}-\tfrac{\psi_{j}}{2})\tau_{1j}\tau_{2j}\Big{|}_{Z\cap\theta_{m_{0},\,\mathrm{null}}}+O(\varepsilon^{3})=O(\varepsilon^{3})+(2\phi_{j}^{2}-\psi_{j}/2)\tau_{1j}^{2}\,.\] Since the expression \(2\phi_{j}^{2}-\psi_{j}/2\) is not identically zero in \(t_{j}\), for a generic value of \(t_{j}\) the vanishing of \(D_{12j}|_{Z\cap\theta_{m_{0},\,\mathrm{null}}}\) implies \(\tau_{1j}^{2}=O(\varepsilon^{3})\), and then substituting this back, the vanishing of \(\theta_{m_{0}}|_{Z}\) implies \(\tau_{12}=O(\varepsilon^{3})\), so that altogether we get precisely the vanishing of \(\tau_{12},\tau_{13}=\tau_{23},\dots,\tau_{1g}=\tau_{2g}\) up to higher order, as required. Proof of Theorem 3.: We observe that by the factorization of theta functions the big diagonal \(\mathbb{L}_{g}^{e}\subset\theta_{\mathrm{m}_{0},\,\mathrm{null}}^{2}\subset\theta_{\mathrm{m}_{0},\,\mathrm{null}}\). The above computation, using Lemma 23, shows that \(\mathbb{H}_{1}\times\mathbb{H}_{g-1}\) is an irreducible component of \(\theta_{\mathrm{m}_{0},\,\mathrm{null}}^{2}\) containing \(\mathbb{I}_{g}\), and since the defining equations of \(\vartheta_{\mathrm{null}}^{2}\) are \(\Gamma_{g}\) invariant, by Corollary 26 it follows that \(\mathcal{A}_{1}\times\mathcal{A}_{g-1}\) is an irreducible component of \(\vartheta_{\mathrm{null}}^{2}\).

### The rank three locus

We now deal with the locus \(\vartheta_{\mathrm{null}}^{3}\); here our goal is to show that the locus of Jacobians with a vanishing theta-null is an irreducible component. Recall that the locus \(\theta_{\mathrm{null}}\left[\begin{smallmatrix}\varepsilon\\ \delta\end{smallmatrix}\right]\cap\mathbb{J}_{g}\) is purely of dimension \(3g-4\), and in fact by [13], both \(\mathbb{J}_{g}\subset\mathbb{H}_{g}\) and \(\theta_{\mathrm{null}}\left[\begin{smallmatrix}\varepsilon\\ \delta\end{smallmatrix}\right]\subset\mathbb{H}_{g}\) are irreducible; moreover, the intersection \(\vartheta_{\mathrm{null}}\cap\mathcal{J}_{g}\subset\mathcal{A}_{g}\) is irreducible by [12]. Thus we can apply Corollary 26, so that it will again suffice to work with the Taylor expansions of the Hessian of \(\theta_{m_{0}}\), and not with an arbitrary characteristic. The additional complication in this case is that, for high genus, the dimension of \(\vartheta_{\mathrm{null}}\cap\mathcal{J}_{g}\) is smaller than the dimension of a number of irreducible components of \(\mathcal{R}_{g}\) that are contained in \(\vartheta_{\mathrm{null}}^{3}\) and contain \(\mathcal{D}_{g}\) and \(\mathbb{L}_{g}^{o}\) (for example, for the component \(\mathcal{A}_{g}\times\mathcal{A}_{g-2}\)). We demonstrate this issue in detail for the various components of interest in low genus. For \(g=5\) we have \[\dim(\mathcal{A}_{3}\times\mathcal{A}_{2})=9<\dim(\mathcal{J}_{5}\cap\vartheta_{\mathrm{null}})=11=\dim(\mathcal{A}_{4}\times\mathcal{A}_{1})\,,\] and indeed \(\mathcal{A}_{3}\times\mathcal{A}_{2}\) is contained in the closure of \(\mathcal{J}_{5}\cap\vartheta_{\mathrm{null}}\) since \(\mathcal{J}_{3}=\mathcal{A}_{3}\).
However, already for genus \(6\) we have \[\dim(\mathcal{A}_{4}\times\mathcal{A}_{2})=13<\dim(\mathcal{J}_{6}\cap\vartheta_{\mathrm{null}})=14<\dim(\mathcal{A}_{5}\times\mathcal{A}_{1})=16\,,\] and thus \(\mathcal{A}_{4}\times\mathcal{A}_{2}\) must be contained in an irreducible component of \(\vartheta_{\mathrm{null}}^{3}\) that has dimension at least \(14\); moreover, note that \(\mathcal{J}_{6}\cap\vartheta_{\mathrm{null}}\not\supset\mathcal{A}_{4}\times\mathcal{A}_{2}\). This discussion makes the following statement more surprising, in that we can show that components of \(\vartheta_{\mathrm{null}}^{3}\) not contained in the decomposable locus have expected dimension. **Theorem 29**.: _For any genus \(g\), the locus \(\theta_{\mathrm{m}_{0},\,\mathrm{null}}^{3}\setminus\mathbb{R}_{g}\) locally near \(\mathbb{I}_{g}\) has dimension \(3g-4\)._ As a consequence, all irreducible components of \(\vartheta_{\mathrm{null}}^{3}\) containing \(\mathcal{D}_{g}\) and not contained in the decomposable locus \(\mathcal{R}_{g}\) must have dimension \(\leq 3g-4\). **Corollary 30**.: _For any genus \(g\geq 3\), the locus \(\vartheta_{\mathrm{null}}^{3}\setminus\mathcal{R}_{g}\) locally near \(\mathcal{D}_{g}\) has dimension equal to \(3g-4\)._ Proof.: Indeed, we observe that \(\mathbb{L}_{g}^{e}\subset\theta_{\mathrm{m}_{0},\,\mathrm{null}}\cap\mathbb{J}_{g}\subset\theta_{\mathrm{m}_{0},\,\mathrm{null}}^{3}\). The locus \(\vartheta_{\mathrm{null}}^{3}\) is by definition \(\Gamma_{g}\)-invariant, and hence the conditions of Corollary 26 are satisfied. An immediate consequence is that the \((3g-4)\)-dimensional irreducible locus \(\mathcal{J}_{g}\cap\vartheta_{\mathrm{null}}=p(\theta_{\mathrm{m}_{0},\,\mathrm{null}}\cap\mathbb{J}_{g})\), contained in \(\vartheta_{\mathrm{null}}^{3}\), is an irreducible component of \(\vartheta_{\mathrm{null}}^{3}\). This finishes the proof of Theorem 4, once we obtain the local dimension statement. Proof of Theorem 29.: As above, we will be working in a sufficiently small neighborhood \(U\supset\mathbb{I}_{g}\) of the diagonal, using the expansions (8) for \(\theta_{m_{0}}\) and the expansion (12) for the \(4\times 4\) minors of its Hessian. We first note that the vanishing of \(\theta_{m_{0}}=X_{2}+Y_{2}+O(\varepsilon^{3})\) implies that \(\tau_{12}=O(\varepsilon^{2})\) and moreover that \(X_{2}+Y_{2}=O(\varepsilon^{3})\). Since we are interested in the locus where the rank of the Hessian is equal to \(3\), and not \(2\), and since the Hessian is symmetric, there must exist a _principal_ \(3\times 3\) minor of the Hessian with a non-zero determinant. Notice that the \(2\times 2\) principal minor of the Hessian formed by rows and columns \(1\) and \(2\) becomes, after plugging in \(X_{2}+Y_{2}=O(\varepsilon^{3})\) from the vanishing of \(\theta_{m_{0}}\), equal to \(\left(\begin{smallmatrix}O(\varepsilon^{3})&1+O(\varepsilon^{2})\\ 1+O(\varepsilon^{2})&O(\varepsilon^{3})\end{smallmatrix}\right)\), so that its determinant is equal to \(-1+O(\varepsilon^{2})\), and thus non-zero. Since this is a non-degenerate principal \(2\times 2\) minor, for the matrix to have rank equal to \(3\), it must be contained in a non-degenerate \(3\times 3\) minor. Moreover, since the matrix is symmetric, there must exist such a non-degenerate principal \(3\times 3\) minor, and by renumbering the coordinates, we thus assume without loss of generality that this non-degenerate minor is made up of rows and columns number \(1,2,3\).
Similarly to the proof of Theorem 3, for convenience we will intersect, in a neighborhood \(U\supset\mathbb{I}_{g}\), the locus \(\theta^{3}_{\operatorname{m_{0},\,null}}\), with the codimension \(g-2\) subvariety \(Z\subset\mathbb{H}_{g}\) given by equations \(\tau_{1j}=\tau_{2j}\) for all \(3\leq j\leq g\). Our goal is to prove that \(U\cap Z\cap\theta^{3}_{\operatorname{m_{0},\,null}}\) has dimension at most \((3g-4)-(g-2)=2g-2\). Indeed, since we know that \(\mathbb{J}_{g}\cap\theta_{\operatorname{m_{0},\,null}}\) has dimension precisely \(3g-4\), the locus \(U\cap\mathbb{J}_{g}\cap\theta_{\operatorname{m_{0},\,null}}\cap Z\), which is contained in \(\theta^{3}_{\operatorname{m_{0},\,null}}\cap Z\), has dimension at least \(2g-2\); this will then imply Theorem 29. To bound from above the dimension of \(U\cap Z\cap\theta^{3}_{\operatorname{m_{0},\,null}}\), we will look at determinants \(D_{jk}\) of the \(4\times 4\) minors of the Hessian made up of rows \((123j)\) and columns \((123k)\), for any \(4\leq j\leq k\leq g\). We will see that on \(U\cap Z\cap\theta_{\operatorname{m_{0},\,null}}\), for \(j=k\) the vanishing of \(D_{jj}\) will require \(\tau_{1j}=\tau_{2j}\) to vanish to higher order, and then we will see that the vanishing of \(D_{jk}\) for \(j<k\) determines \(\tau_{jk}\) in terms of the other variables, up to higher order. Thus altogether, up to higher order, the point of \(U\cap Z\cap\theta^{3}_{\operatorname{m_{0},\,null}}\) will be determined by the values of the diagonal period matrix elements \(t_{1}=\tau_{11},\ldots,t_{g}=\tau_{gg}\), together with \(\tau_{13}=\tau_{23}\), and \(\tau_{34},\ldots,\tau_{3g}\) (recall that \(\tau_{12}\) is determined in terms of other coordinates, up to higher order, from the vanishing of \(\theta_{m_{0}}\)). Thus altogether by applying Lemma 23, we will see that the dimension of \(U\cap Z\cap\theta^{3}_{\operatorname{m_{0},\,null}}\) is equal to the number of these coordinates, i.e. \(g+1+(g-3)=2g-2\). We now inspect these \(4\times 4\) minors in detail. First, recall from the proof of Theorem 3 that the determinant \(D_{123}\) of the \(3\times 3\) minor of the Hessian formed by the first \(3\) rows and columns is equal, up to higher order terms and a generically non-vanishing factor, to \(\tau_{13}^{2}\). Since \(D_{123}\neq 0\) by assumption, this means that \(\tau_{13}\neq 0\).
Now, for a principal \(4\times 4\) minor \(D_{jj}\), we plug in \(X_{2}+Y_{2}=O(\varepsilon^{3})\) into (12), and see that the lowest order entries of the minor are as follows: \[\left(\begin{smallmatrix}O(\varepsilon^{3})&1+O(\varepsilon^{2})&\phi_{3}\cdot[2,3]+O(\varepsilon^{2})&\phi_{j}\cdot[2,j]+O(\varepsilon^{2})\\ *&O(\varepsilon^{3})&\phi_{3}\cdot[1,3]+O(\varepsilon^{2})&\phi_{j}\cdot[1,j]+O(\varepsilon^{2})\\ *&*&\frac{1}{2}\psi_{3}\cdot[3,3,1,2]+O(\varepsilon^{3})&\phi_{3}\cdot\phi_{j}\cdot[3,j,1,2]+\ldots\\ *&*&*&\frac{1}{2}\psi_{j}\cdot[j,j,1,2]+O(\varepsilon^{3})\end{smallmatrix}\right)\,.\] Thus the lowest order term that could appear in the determinant of this matrix is of order \(O(\varepsilon^{4})\), and we write it explicitly in terms of the entries of the period matrix (noting, importantly, that in \([j,k,1,2]\) the term \(\tau_{jk}\tau_{12}\) is higher order, and using Maple to compute safely) \[D_{jj}=-\tau_{13}^{2}\tau_{1j}^{2}(4\phi_{3}^{2}-\psi_{3})(4\phi_{j}^{2}-\psi_{j})/4+O(\varepsilon^{5}) \tag{14}\] For generic values of \(t_{3},t_{j}\) the expressions depending on them are non-zero, and thus the vanishing of such a determinant implies, since \(\tau_{13}\neq 0\), that \(\tau_{1j}=0\), up to higher order terms. We now inspect the determinant \(D_{jk}\) of the \(4\times 4\) minor formed by rows \((123j)\) and columns \((123k)\) for \(j<k\); all the terms can again be read off from (12), so that the corresponding \(4\times 4\) minor is as follows (where to make the formula fit we dropped the \(1/2\pi i\) factors in front of each \(\tau\), coming from the bracket expressions, and we recalled \(\tau_{1a}=\tau_{2a}\) for \(a=3,j,k\)). Note that the fourth row and column of the minor are no longer symmetric. \[\left(\begin{smallmatrix}O(\varepsilon^{3})&1+O(\varepsilon^{2})&\phi_{3}\tau_{13}+O(\varepsilon^{2})&\phi_{k}\tau_{1k}+O(\varepsilon^{2})\\ 1+O(\varepsilon^{2})&O(\varepsilon^{3})&\phi_{3}\tau_{13}+O(\varepsilon^{2})&\phi_{k}\tau_{1k}+O(\varepsilon^{2})\\ \phi_{3}\tau_{13}+O(\varepsilon^{2})&\phi_{3}\tau_{13}+O(\varepsilon^{2})&\frac{1}{2}\psi_{3}\cdot[3,3,1,2]+O(\varepsilon^{3})&\phi_{3}\cdot\phi_{k}\cdot[3,k,1,2]+\ldots\\ \phi_{j}\tau_{1j}+O(\varepsilon^{2})&\phi_{j}\tau_{1j}+O(\varepsilon^{2})&\phi_{3}\cdot\phi_{j}\cdot[3,j,1,2]+\ldots&\phi_{j}\cdot\phi_{k}\cdot[j,k,1,2]+\ldots\end{smallmatrix}\right)\,.\] Notice, however, that this formula does not really give the lowest order terms of the expansion, as indeed by the vanishing of the determinants \(D_{jj}\) and \(D_{kk}\) of the principal minors we know that \(\tau_{1j},\tau_{1k}=O(\varepsilon^{2})\). Thus in fact the entries \((1,k),(2,k),(j,1),(j,2)\) of the minor, which contain these variables, are themselves of order \(O(\varepsilon^{2})\), while the correction term to \(\phi_{j}\tau_{1j}\) is actually of higher order, as all the brackets involved will contain \(\tau_{1j}\) or \(\tau_{12}\), and are thus of order at least one higher than their degree in \(\tau\)'s. What we want to determine is the dependence of \(D_{jk}\) on \(\tau_{jk}\); more precisely, we want to determine the lowest order term that contains \(\tau_{jk}\). By inspection, we see that \[[j,k,1,2]=\tau_{jk}\tau_{12}+2\tau_{1j}\tau_{2k}=\tau_{jk}\tau_{12}+O(\varepsilon^{4})\] appearing in the \((j,k)\) entry of the minor above is the only entry where \(\tau_{jk}\) appears.
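As a quick sanity check of the leading term (14): restricting to \(Z\), dropping the \(2\pi i\) factors and absorbing the \(\tau_{12}\)-terms into the higher order, the lowest-order entries of the principal minor reduce to the simplified forms below, and the determinant indeed reproduces the stated product. The following sympy sketch (ours, under these simplifications) verifies this.

```python
from sympy import symbols, Rational, Matrix, expand

phi3, phij, psi3, psij, t13, t1j = symbols("phi3 phi_j psi3 psi_j tau13 tau1j")

a = phi3 * t13                      # entries (1,3), (2,3) on Z, lowest order
b = phij * t1j                      # entries (1,j), (2,j) on Z, lowest order
c = Rational(1, 2) * psi3 * t13**2  # entry (3,3): psi_3 * [3,3,1,2] / 2 on Z
e = Rational(1, 2) * psij * t1j**2  # entry (j,j): psi_j * [j,j,1,2] / 2 on Z
d = 2 * phi3 * phij * t13 * t1j     # entry (3,j): phi_3 phi_j [3,j,1,2] on Z,
                                    # with the tau_12 * tau_3j part of higher order

M = Matrix([[0, 1, a, b],
            [1, 0, a, b],
            [a, a, c, d],
            [b, b, d, e]])

claimed = -t13**2 * t1j**2 * (4 * phi3**2 - psi3) * (4 * phij**2 - psij) / 4
assert expand(M.det() - claimed) == 0
```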
We recall that from the vanishing of \(\theta_{\mathrm{m}_{0},\,\mathrm{null}}\), given by expansion (8), we have (again, up to all the \(\pm 2\pi i\) factors) \[X_{2}+Y_{2}=O(\varepsilon^{3})=\tau_{12}+\sum_{a>2}\phi_{a}\tau_{1a}\tau_{2a} =\tau_{12}+\tau_{13}^{2}+\sum_{j>3}\tau_{1j}^{2}=\tau_{12}+\tau_{13}^{2}+O( \varepsilon^{4})\,,\] since \(\tau_{1j}=O(\varepsilon^{2})\). Thus we see that \(\tau_{12}=-\tau_{13}^{2}+O(\varepsilon^{3})\), and is of order precisely \(\varepsilon^{2}\), as \(\tau_{13}\) is non-zero due to the assumed non-vanishing of the determinant \(D_{123}\). Thus the \((j,k)\) entry of the minor above contributes \(\phi_{j}\phi_{k}(-\tau_{jk}\tau_{13}^{2}+O(\varepsilon^{4}))\cdot D_{123}\) to \(D_{jk}\), where we expanded \(D_{jk}\) using the last row. By assumption \(D_{123}\) is non-zero, and in fact of order \(O(\varepsilon^{2})\) as discussed above. By inspection of the minor, the only other places where \(\tau_{jk}\) appears in the minor are when expanding brackets of \(6\) terms, and then at least two of these terms would be of order \(O(\varepsilon^{2})\), so we have found that the only dependence of \(D_{jk}\) modulo \(O(\varepsilon^{6})\) on \(\tau_{jk}\) is the term \(-\phi_{j}\phi_{k}\tau_{jk}\tau_{13}^{2}\cdot D_{123}\). Thus requiring \(D_{jk}\) to vanish modulo \(O(\varepsilon^{6})\) expresses \(\tau_{jk}\) in terms of the other variables, modulo \(O(\varepsilon^{2})\). Thus altogether each \(\tau_{1j}=\tau_{2j}\) must be of order \(O(\varepsilon^{2})\) by the vanishing of \(D_{jj}\), while each \(\tau_{jk}\) is expressed in terms of the rest of the entries in the first \(3\) rows of the period matrix, and the diagonal entries, by computing the \(O(\varepsilon^{5})\) term of \(D_{jk}\) and requiring it to vanish. Altogether we see that the local dimension of the locus \(\theta_{\mathrm{m}_{0},\,\mathrm{null}}^{3}\cap U\cap Z\) is as claimed.
2302.01680
Two-Stage Constrained Actor-Critic for Short Video Recommendation
The wide popularity of short videos on social media poses new opportunities and challenges to optimize recommender systems on the video-sharing platforms. Users sequentially interact with the system and provide complex and multi-faceted responses, including watch time and various types of interactions with multiple videos. On the one hand, the platform aims at optimizing the users' cumulative watch time (main goal) in the long term, which can be effectively optimized by Reinforcement Learning. On the other hand, the platform also needs to satisfy the constraint of accommodating the responses of multiple user interactions (auxiliary goals) such as like, follow, share, etc. In this paper, we formulate the problem of short video recommendation as a Constrained Markov Decision Process (CMDP). We find that traditional constrained reinforcement learning algorithms cannot work well in this setting. We propose a novel two-stage constrained actor-critic method: At stage one, we learn individual policies to optimize each auxiliary signal. At stage two, we learn a policy to (i) optimize the main signal and (ii) stay close to policies learned at the first stage, which effectively guarantees the performance of this main policy on the auxiliaries. Through extensive offline evaluations, we demonstrate the effectiveness of our method over alternatives in both optimizing the main goal as well as balancing the others. We further show the advantage of our method in live experiments of short video recommendations, where it significantly outperforms other baselines in terms of both watch time and interactions. Our approach has been fully launched in the production system to optimize user experiences on the platform.
Qingpeng Cai, Zhenghai Xue, Chi Zhang, Wanqi Xue, Shuchang Liu, Ruohan Zhan, Xueliang Wang, Tianyou Zuo, Wentao Xie, Dong Zheng, Peng Jiang, Kun Gai
2023-02-03T12:02:54Z
http://arxiv.org/abs/2302.01680v3
# Two-Stage Constrained Actor-Critic for Short Video Recommendation ###### Abstract. The wide popularity of short videos on social media poses new opportunities and challenges to optimize recommender systems on the video-sharing platforms. Users sequentially interact with the system and provide complex and multi-faceted responses, including WatchTime and various types of interactions with multiple videos. On the one hand, the platforms aim at optimizing the users' cumulative WatchTime (main goal) in the long term, which can be effectively optimized by Reinforcement Learning. On the other hand, the platforms also need to satisfy the constraint of accommodating the responses of multiple user interactions (auxiliary goals) such as Like, Follow, Share, etc. In this paper, we formulate the problem of short video recommendation as a Constrained Markov Decision Process (CMDP). We find that traditional constrained reinforcement learning algorithms fail to work well in this setting. We propose a novel two-stage constrained actor-critic method: At stage one, we learn individual policies to optimize each auxiliary signal. In stage two, we learn a policy to (i) optimize the main signal and (ii) stay close to policies learned in the first stage, which effectively guarantees the performance of this main policy on the auxiliaries. Through extensive offline evaluations, we demonstrate the effectiveness of our method over alternatives in both optimizing the main goal as well as balancing the others. We further show the advantage of our method in live experiments of short video recommendations, where it significantly outperforms other baselines in terms of both WatchTime and interactions. Our approach has been fully launched in the production system to optimize user experiences on the platform.
## 1. Introduction

Users interact with the platform by scrolling up and down and watching multiple videos as shown in Figure 1(a). Users provide multi-dimensional responses at each video. As shown in the left part of Figure 1(b), potential responses from a user after consuming a video include WatchTime (the time spent on watching the video), and several types of interactions: Follow (follow the author of the video), Like (Like this video), Comment (provide comments on the video), Collect (Collect this video), Share (share this video with his/her friends), etc. On the one hand, the main goal of the platform is to optimize the cumulative WatchTime of multiple videos, as WatchTime reflects user attention and is highly related to daily active users (DAU).
Recently, a growing literature has focused on applying reinforcement learning (RL) to recommender systems, due to its ability to improve cumulative reward [1, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32]. In particular, WatchTime can be effectively maximized in a cumulative manner with RL approaches, increasing the time users spend across multiple videos. On the other hand, other responses such as Like/Follow/Share also reflect user satisfaction levels. Thus the platform needs to satisfy the constraints of user interactions. Thereby, established recommender systems that exclusively optimize a single objective (such as gross merchandise volume for e-commerce platforms [12]) are no longer sufficient--the applied systems should take all aspects of responses into consideration to optimize user experiences. In this paper, we model the problem of short video recommendation as a Constrained Markov Decision Process: users serve as the environments, and the recommendation algorithm is the agent; at each time step the agent takes an action (recommends a video to the user), and the environment sends multiple rewards (responses) to the agent. The objective of the agent is to maximize the cumulative WatchTime (main goal) subject to the constraints of the other interaction responses (auxiliary goals). Our aim is different from that of Pareto optimization, which seeks a Pareto optimal solution [1, 1, 10, 11] and may not prioritize the main goal of the system. The problem of this constrained policy optimization is much more challenging as compared to its unconstrained counterpart. A natural idea would be applying standard constrained reinforcement learning algorithms that maximize the Lagrangian with pre-specified multipliers [16]. However, such methods cannot apply to our setting for the following two reasons. First, it is not sufficient to use a single policy evaluation model to estimate the Lagrangian dual objective, due to the different types of responses from the user. Combining the responses into a single reward is not adequate, particularly for responses with their own discount factors--the formulation of the temporal difference error in value-based models only allows for a single discount value. In scenarios where one discount factor suffices, it can still be difficult for a single value model to evaluate the policy accurately, especially when different responses are observed at various frequencies, as is typical for short video recommendation. The WatchTime response is dense and observed from each video view, while interaction signals such as Like/Follow/Share are much sparser and may not be provided within dozens of views. The signal from the sparse responses will be weakened by the dense responses when naively summing them up together. To address this multi-response evaluation difficulty, we separately evaluate each response via its own value model, which allows for response-specific discount factors and mitigates the interference of one response on the evaluation of another. Experiments in Section 4.1 validate the effectiveness of this method. Second, different from [16], where only one constraint is considered, multiple constraints exist in recommender systems, especially in short video systems. We find that it is more difficult for algorithms that maximize the Lagrangian to optimize, due to the larger search space of multi-dimensional Lagrangian multipliers.
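To make the first difficulty concrete, the following small numerical sketch (our illustration, not the paper's code; the session data and all numbers are hypothetical) contrasts a single value target built from the summed reward with per-response targets that keep their own discount factors:

```python
import numpy as np

# Toy session of 10 video views: a dense response (watch time, in seconds) and a
# sparse binary interaction response (e.g. Like). Numbers are hypothetical.
rng = np.random.default_rng(0)
watch_time = rng.uniform(5.0, 60.0, size=10)
interaction = (rng.random(10) < 0.2).astype(float)

def monte_carlo_targets(rewards, gamma):
    """Discounted return of one response, used as a critic's regression target."""
    g, targets = 0.0, []
    for r in reversed(rewards):
        g = r + gamma * g
        targets.append(g)
    return np.array(targets[::-1])

# Joint evaluation: one critic, one discount factor, summed reward.
joint = monte_carlo_targets(watch_time + interaction, gamma=0.95)

# Separate evaluation: one critic per response, each with its own discount factor.
watch = monte_carlo_targets(watch_time, gamma=0.95)
inter = monte_carlo_targets(interaction, gamma=0.99)

print("joint targets       :", np.round(joint[:3], 2))
print("watch-time targets  :", np.round(watch[:3], 2))
print("interaction targets :", np.round(inter[:3], 4))
# In the joint target the sparse interaction signal is swamped by watch time, and
# there is no way to give it its own (here larger) discount factor.
```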
It is time-consuming to grid-search the Lagrangian multipliers, as the training of reinforcement learning algorithms takes a long time. On account of this, we propose to first learn policies to optimize each auxiliary response and then "softly" regularize the policy of the main response to be close to the others, instead of searching for optimal values of the Lagrangian multipliers. We theoretically prove the closed form of the optimal solution. We demonstrate empirically that our approach can better maximize the main response and balance other responses in both offline and live experiments. Together, we summarize our contributions as follows: * **Constrained Optimization in Short Video Recommendations**: We formalize the problem of constrained policy learning in short video recommendations, where different responses may be observed at various frequencies, and the agent maximizes one with the constraint of balancing the others. * **Two-Stage Constrained Actor-Critic Algorithm**: We propose a novel two-stage constrained actor-critic algorithm that effectively tackles the challenge: (1) Multi-Critic Policy Estimation: To better evaluate the policy on multiple responses that may differ in discount factors and observation frequencies, we propose to separately learn a value model to evaluate each response. (2) Two-Stage Actor Learning: We propose a two-stage actor learning method which first learns a policy to optimize each auxiliary response and then softly regularizes the policy of the main response to be not far from the others, which we demonstrate to be a more effective way to handle constrained optimization with multiple constraints as compared with other alternatives. * **Significant Gains in Offline and Live Experiments**: We demonstrate the effectiveness of our method in both offline and live experiments. * **Deployment in a real-world short video application**: We fully launch our method in a popular short video platform. Figure 1. An example of a popular short video (TikTok, Kuaishou, etc.) platform. ## 2. Related Work **Reinforcement Learning for Recommendation.** There is a growing literature on applying RL to recommender systems, for its ability to optimize user long-term satisfaction (Afsar et al., 2021). Value-based approaches estimate user satisfaction of being recommended an item from the available candidate set and then select the one with the largest predicted satisfaction (Chen et al., 2018; Liu and Yang, 2019; Nemati et al., 2016; Zhao et al., 2018). Policy-based methods directly learn the policy (which item to recommend) and optimize it in the direction of increasing user satisfaction (Chen et al., 2019, 2019; Ma et al., 2020; Xian et al., 2019). Recently, growing attention has been paid to adapting reinforcement learning for more complex recommendation applications beyond optimizing one single objective, such as promoting equal exposure opportunities for content items (Ge et al., 2021), increasing diversity and novelty of recommendations (Stamenkovic et al., 2021), and characterizing more comprehensive user dynamics with representational reward shaping (Chen et al., 2021); we view our work as complementary to the third line. In the face of multi-faceted user responses, the system in real applications often has preferences on different types of user responses, for which we propose the constrained optimization problem in contrast to pursuing Pareto optimality as proposed in (Chen et al., 2021) and (Ge et al., 2022).
**Constrained Reinforcement Learning.** Our work is also closely related to the literature on constrained reinforcement learning, where the sequential decision-making problem is formulated as a constrained Markov Decision Process (Sutton and Barto, 2018), and the policy learning procedure is expected to respect the constraints (Chow et al., 2017; Dalal et al., 2018; Garcia and Fernandez, 2015; Liu et al., 2021; Tessler et al., 2018). As an example, (Tessler et al., 2018) propose to update the policy and the Lagrangian multiplier alternately and prove the convergence of their algorithm to a fixed point. This approach, however, only models one constraint and cannot scale well to problems with multiple constraints. In contrast, for each auxiliary response, we learn a policy to maximize it specifically, and then we "softly" regularize the main policy to be close to the others. We show empirically that this is a more effective way for constrained policy learning when dealing with multiple responses in recommender systems. Different from (Nair et al., 2020), which studies offline RL and regularizes the learned policy to be near a single behavior policy, we softly restrict the policy to stay within the policies maximizing the other auxiliary responses. **Multi-objective Optimization.** We also discuss a relevant line of work on multi-objective optimization. To trade off different objectives, methods in this field can be broadly categorized into two classes: Pareto optimization and joint optimization with pre-specified weights. The goal of Pareto optimization is to find a solution such that no other solution can concurrently improve all objectives, known as _Pareto optimality_ (Chen et al., 2021; Ge et al., 2022; Nguyen et al., 2020; Sener and Koltun, 2018). However, a Pareto optimal solution may not prioritize the objective that is most valued in applications. The other class combines different objectives into a single one via pre-specified weights (Mossalam et al., 2016; White et al., 1980). However, it is difficult to quantify weights that accurately reflect preferences in real applications (Tessler et al., 2018). ## 3. Constrained Markov Decision Process for Short Video Recommendation We start by formulating the problem of short video recommendation, which is shown in Figure 2. When a user \(u\) opens the app, a new _session_ starts. A session consists of multiple _requests_. At each request \(t\) the recommender system (agent) takes an _action_ \(a_{t}\) that recommends a video to the user based on the user's current _state_. Then the user provides _multi-faceted_ responses (such as WatchTime, Like, Share, and Follow) on the shown video, which are received by the agent as a vector-valued _reward_ signal. After the user leaves the app, the session ends. The goal of the recommender system is to optimize the cumulative reward of the main response (_e.g._, WatchTime), with the constraint of not sacrificing the others much.
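The request-level data just described can be pictured with a minimal sketch (our illustration, not the production system; all names and numbers are hypothetical):

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class Transition:
    state: List[float]            # user representation at the request
    video_id: int                 # recommended video (the action)
    responses: Dict[str, float]   # multi-faceted reward, e.g. WatchTime, Like, Share

def discounted_sum(session: List[Transition], response: str, gamma: float) -> float:
    """Cumulative discounted reward of one response over a session."""
    return sum((gamma ** t) * tr.responses[response] for t, tr in enumerate(session))

# Hypothetical two-request session.
session = [
    Transition([0.1, 0.3], 42, {"WatchTime": 35.0, "Like": 1.0, "Share": 0.0}),
    Transition([0.2, 0.1], 7, {"WatchTime": 12.0, "Like": 0.0, "Share": 0.0}),
]

main_value = discounted_sum(session, "WatchTime", gamma=0.95)  # objective to maximize
aux_values = {r: discounted_sum(session, r, gamma=0.99) for r in ("Like", "Share")}
print(main_value, aux_values)  # the constraints formalized below require each auxiliary value to stay above a threshold
```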
We model the above procedure as a Constrained Markov Decision Process (CMDP) (Sutton and Barto, 2018) \((S,A,P,R,C,\rho_{0},\Gamma)\), where \(S\) is the state space of the user's current representation \(s_{t}\), \(A\) is the action space (and each action \(a_{t}\) corresponds to a recommended video for one request), \(P:S\times A\to\Delta(S)\) captures the state transition, \(R:S\times A\to\mathbb{R}^{m}\) defines the vector-valued reward function that yields \(m\) different rewards \(r(s_{t},a_{t})=\left(r_{1}(s_{t},a_{t}),\ldots,r_{m}(s_{t},a_{t})\right)\), \(\rho_{0}\) is the initial state distribution, and \(\Gamma=(\gamma_{1},\ldots,\gamma_{m})\in(0,1)^{m}\) denotes the vector of discount factors for the reward of each response. \(C\) specifies the constraints on the auxiliary responses, which denotes the lower bounds on the total numbers of the signals of the other objectives. Define the vector-valued discounted cumulative reward \(R_{t}\) as \(R_{t}=\sum_{t^{\prime}=t}^{T}\Gamma^{t^{\prime}-t}\cdot r(s_{t^{\prime}},a_{t^{\prime}})\), where \(T\) is the session length (i.e., the number of requests), \(\Gamma^{b}=\left(\gamma_{1}^{b},\ldots,\gamma_{m}^{b}\right)\), and \(\mathbf{x}\cdot\mathbf{y}\) denotes the point-wise product. Let \(V^{\pi}(s)=\left(V_{1}^{\pi}(s),\ldots,V_{m}^{\pi}(s)\right)\) be the state value \(E_{\pi}[R_{t}|s_{t}=s]\) under actions sampled in accordance with policy \(\pi\), and let \(Q^{\pi}(s,a)=\left(Q_{1}^{\pi}(s,a),\ldots,Q_{m}^{\pi}(s,a)\right)\) be its state-action value \(E_{\pi}[R_{t}|s_{t}=s,a_{t}=a]\). Denote \(\rho_{\pi}\) as the state distribution induced by policy \(\pi\). Without loss of generality, we set the first response as our main response. The goal is to learn a recommendation policy \(\pi(\cdot|s)\) to solve the following optimization problem: \[\begin{split}\max_{\pi}& E_{\rho_{\pi}}\left[V_{1}^{\pi}(s) \right]\\ \text{s.t.}& E_{\rho_{\pi}}\left[V_{i}^{\pi}(s) \right]\geq C_{i},\quad i=2,\ldots,m\end{split} \tag{1}\] where \(C_{i}\) is the constraint on the _auxiliary_ response \(i\). Figure 2. The MDP of short video recommendation. ## 4. Two-stage constrained actor-critic In this section, we propose a novel two-stage constrained actor-critic method, addressing the learning challenges in the context of short video recommendation: **Multi-Critic Policy Estimation**: We propose to estimate the responses separately to better estimate dense and sparse signals. **Stage One**: For each auxiliary response, we learn a policy to optimize its cumulative reward. **Stage Two**: For the main response, we learn a policy to optimize its cumulative reward, while softly limiting it to be close to the other policies that are learned to optimize the auxiliaries. We first discuss the advantage of evaluating different policies separately over estimating them jointly. Secondly, we elaborate our method in the setting of online learning with stochastic policies in Sections 4.2 and 4.3. We then discuss its extensions to the offline setting and to deterministic policies. ### Multi-Critic Policy Estimation We showcase the advantage of a separate evaluation for each response over a joint evaluation of the summed responses. Specifically, we consider two types of responses from each video view: WatchTime and interactions (an indicator of whether an interaction happens during the view). * For the joint evaluation, we learn a value model \(V_{joint}\) with the reward defined as the sum of WatchTime and interactions.
* For the separate evaluation, we learn two value models \(V_{w}\) and \(V_{i}\) with the reward being WatchTime and interactions, respectively. Define the value of the separate evaluation as \(V_{separate}=V_{w}+V_{i}\). For a fair comparison, we share the same discount factor \(0.95\) for all value models and train them on the same data, collected from a popular short video platform for one day. To evaluate the accuracy of the value models in terms of WatchTime and interactions, we compute the correlation of the model values \(V_{joint}\) and \(V_{separate}\) with the Monte Carlo value of the sum of the corresponding responses in each session. As compared to \(V_{joint}\), \(V_{separate}\) is more correlated with WatchTime and interactions by \(0.19\%\) and \(0.14\%\) respectively (a \(0.1\%\) improvement on WatchTime and interactions is significant), demonstrating that the separate evaluation learns the different reward responses better than joint learning. ### Stage One: Policy Learning for Auxiliary Responses At this stage, we learn policies to optimize the cumulative reward of each auxiliary response separately. For completeness, we write out our procedure for stochastic policies (Williams, 1992). Considering response \(i\), let the learned actor and critic be parameterized by \(\pi_{\theta_{i}}\) and \(V_{\phi_{i}}\) respectively. At iteration \(k\), we observe a sample \((s,a,s^{*})\) collected by \(\pi_{\theta_{i}^{(k)}}\), _i.e._, \(s\sim\rho_{\pi_{\theta_{i}^{(k)}}}\), \(a\sim\pi_{\theta_{i}^{(k)}}(\cdot|s)\) and \(s^{*}\sim P(\cdot|s,a)\). We update the critic to minimize the Bellman error: \[\phi_{i}^{(k+1)}\leftarrow\arg\min_{\phi}E_{\pi_{\theta_{i}^{(k)}}}\left[\left(r_{i}(s,a)+\gamma_{i}V_{\phi_{i}^{(k)}}(s^{*})-V_{\phi}(s)\right)^{2}\right]. \tag{2}\] We update the actor to maximize the advantage: \[\begin{split}&\theta_{i}^{(k+1)}\leftarrow\arg\max_{\theta}E_{\pi_{\theta_{i}^{(k)}}}\left[A_{i}^{(k)}\log\left(\pi_{\theta}(a|s)\right)\right]\\ &\text{where}\quad A_{i}^{(k)}=r_{i}(s,a)+\gamma_{i}V_{\phi_{i}^{(k)}}(s^{*})-V_{\phi_{i}^{(k)}}(s).\end{split} \tag{3}\] ### Stage Two: Softly Constrained Optimization of the Main Response After pre-training the policies \(\pi_{\theta_{2}},\ldots,\pi_{\theta_{m}}\) that optimize the auxiliary responses, we now move on to the second stage of learning the policy to optimize the main response. We propose a new constrained policy optimization method with multiple constraints. Let the actor and the critic be \(\pi_{\theta_{1}}\) and \(V_{\phi_{1}}\) respectively. At iteration \(k\), we similarly update the critic to minimize the Bellman error: \[\phi_{1}^{(k+1)}\leftarrow\arg\min_{\phi}E_{\pi_{\theta_{1}^{(k)}}}\left[\left(r_{1}(s,a)+\gamma_{1}V_{\phi_{1}^{(k)}}(s^{*})-V_{\phi}(s)\right)^{2}\right]. \tag{4}\] The principle of updating the actor is two-fold: (i) maximizing the advantage; (ii) restricting the policy to a domain that is not far from the other policies. The optimization is formalized below: \[\begin{split}\max_{\pi}& E_{\pi}[A_{1}^{(k)}]\\ \text{s.t.}& D_{KL}(\pi||\pi_{\theta_{i}})\leq\epsilon_{i},\quad i=2,\ldots,m,\\ \text{where}& A_{1}^{(k)}=r_{1}(s,a)+\gamma_{1}V_{\phi_{1}^{(k)}}(s^{*})-V_{\phi_{1}^{(k)}}(s).\end{split} \tag{5}\] We give the closed-form solution of the Lagrangian of Eq. (5) in the following theorem. We omit the proof due to lack of space; please refer to Appendix A. Theorem 1.: _The Lagrangian of Eq.
(5) has the closed form solution_ \[\pi^{*}(a|s)\propto\prod_{i=2}^{m}\left(\pi_{\theta_{i}}(a|s)\right)^{\frac{\lambda_{i}}{\sum_{j=2}^{m}\lambda_{j}}}\exp\left(\frac{A_{1}^{(k)}}{\sum_{j=2}^{m}\lambda_{j}}\right), \tag{6}\] _where \(\lambda_{i}\) with \(i=2,\ldots,m\) are Lagrangian multipliers._ Given data collected by \(\pi_{\theta_{1}^{(k)}}\), we learn the policy \(\pi_{\theta_{1}}\) by minimizing its KL divergence from the optimal policy \(\pi^{*}\): \[\begin{split}&\theta_{1}^{(k+1)}\leftarrow\arg\min_{\theta}E_{\pi_{\theta_{1}^{(k)}}}\left[D_{KL}(\pi^{*}(a|s)||\pi_{\theta}(a|s))\right]\\ =&\arg\max_{\theta}E_{\pi_{\theta_{1}^{(k)}}}\left[\frac{\prod_{i=2}^{m}\left(\pi_{\theta_{i}}(a|s)\right)^{\frac{\lambda_{i}}{\sum_{j=2}^{m}\lambda_{j}}}}{\pi_{\theta_{1}^{(k)}}(a|s)}\exp\left(\frac{A_{1}^{(k)}}{\sum_{j=2}^{m}\lambda_{j}}\right)\log\pi_{\theta}(a|s)\right].\end{split} \tag{7}\] The procedure of the two-stage constrained actor-critic algorithm is shown in Appendix B, and we name it TSCAC for short. We here provide some intuition behind the actor update in (7). The term \(\pi_{\theta_{i}}(a|s)\) denotes the probability of the action being selected by policy \(i\) and serves as an importance weight, which softly regularizes the learned policy \(\pi_{\theta_{1}}\) to be close to the other policies \(\pi_{\theta_{i}}\). Smaller Lagrangian multipliers \(\lambda_{i}\) indicate weaker constraints, and when \(\lambda_{i}=0\), we allow the learned policy \(\pi_{\theta_{1}}\) to be independent of the constraint policy \(\pi_{\theta_{i}}\). Note that we set the values of all \(\lambda_{i}\) to be the same, which is more practical for the production system. The performance of TSCAC would be better if we fine-tuned it with different Lagrangian multiplier values. But the effectiveness of TSCAC with the same value of \(\lambda\) is validated in both offline and live experiments, as we will see in the following sections. **Offline Learning** We now discuss adapting our constrained actor-critic method to the offline setting, i.e., a fixed dataset. The main change when moving from online learning to offline learning is the bias correction on the policy gradient. The actor is no longer updated on data collected by the current policy but on data collected by a behavior policy \(\pi_{\beta}\), which may induce a data distribution different from that of the policy being updated. To address the distribution mismatch when estimating the policy gradient, a common strategy is to apply a bias-correction ratio via importance sampling (Precup, 2000; Precup et al., 2001). Given a trajectory \(\tau=(s_{1},a_{1},s_{2},a_{2},\dots)\), the bias-correction ratio on the policy gradient for policy \(\pi_{\theta_{i}}\) is \(w(s_{t},a_{t})=\prod_{t^{\prime}=1}^{t}\frac{\pi_{\theta_{i}}(a_{t^{\prime}}|s_{t^{\prime}})}{\pi_{\beta}(a_{t^{\prime}}|s_{t^{\prime}})}\), which gives an unbiased estimation, but the variance can be huge. Therefore, we suggest using a first-order approximation, and using the current action-selection ratio when optimizing the actors of the auxiliary responses, \[\theta_{i}^{(k+1)}\leftarrow\arg\max_{\theta}E_{\pi_{\beta}}\bigg[\frac{\pi_{\theta_{i}^{(k)}}(a|s)}{\pi_{\beta}(a|s)}A_{i}^{(k)}\log(\pi_{\theta}(a|s))\bigg].
\tag{8}\] When updating the actor of the main response, we have \[\theta_{1}^{(k+1)}\leftarrow\arg\max_{\theta}E_{\pi_{\beta}}\bigg[\frac{\prod_{i=2}^{m}\Big(\pi_{\theta_{i}}(a|s)\Big)^{\frac{\lambda_{i}}{\sum_{j=2}^{m}\lambda_{j}}}}{\pi_{\beta}(a|s)}\] \[\times\exp\bigg(\frac{A_{1}^{(k)}}{\sum_{j=2}^{m}\lambda_{j}}\bigg)\log(\pi_{\theta}(a|s))\bigg]. \tag{9}\] **Deterministic Policies** We now discuss the extension of TSCAC to deterministic policies (Lillicrap et al., 2015), inspired by the updating rule for the actor of the constrained policy discussed in (7). Similarly, at stage one, for each auxiliary response \(i\), we learn separate critic models \(Q_{\phi_{i}}(s,a)\) and actor models \(\pi_{\theta_{i}}(s)\). At stage two, for the main response, we learn the critic \(Q_{\phi_{1}}(s,a)\) via temporal difference learning, and for the actor \(\pi_{\theta_{1}}(s)\), the updating rule follows the form: \[\max_{\theta}\quad\prod_{i=2}^{m}\bigg(h(\pi_{\theta}(s),\pi_{\theta_{i}}(s))\bigg)^{\lambda_{i}}Q_{\phi_{1}}(s,\pi_{\theta}(s)), \tag{10}\] where \(h(a_{1},a_{2})\) scores high when two actions \(a_{1},a_{2}\) are close to each other and scores low vice versa, so that \(h(\pi_{\theta}(s),\pi_{\theta_{i}}(s))\) scores high when the actions selected by the learned policy \(\pi_{\theta_{1}}\) and by \(\pi_{\theta_{i}}\) are close. \(\lambda_{i}\geq 0\) plays a similar role as the constraint Lagrangian multiplier--a larger \(\lambda_{i}\) denotes a stronger constraint. As an example, given an \(n\)-dimensional action space, one can choose \(h(a_{1},a_{2})=\sum_{d=1}^{n}\exp\big(-\frac{(a_{1d}-a_{2d})^{2}}{2}\big)\). The deterministic version of TSCAC can apply to settings with continuous actions, such as an embedding of the user preference. ## 5. Offline Experiments In this section, we evaluate our method on a public dataset about short video recommendation via extensive offline learning simulations. We demonstrate the effectiveness of our approach as compared to existing baselines in both achieving the main goal and balancing the auxiliaries. We also test the versatility of our method on another public recommendation dataset; please refer to Appendix C due to lack of space. ### Setup **Dataset.** We consider a public dataset for short video recommendation named _KuaiRand_ ([https://kuairand.com/](https://kuairand.com/)) (Gao et al., 2022), which is collected from a famous video-sharing mobile app and is suitable for the offline evaluation of RL methods as it is unbiased. This dataset collects not only the overall WatchTime of the videos, but also the interaction behavior of the users, including Click, Like, Comment and Hate. The statistics of the dataset are illustrated in Table 1. It shows that Like, Comment, and Hate are sparse signals; note that Hate is extremely sparse. Logs provided by the same user are concatenated to form a trajectory; we choose the top 150 videos that are most frequently viewed. **MDP.** * state \(s_{t}\): a 1044-dimensional vector, which is a concatenation of user features (user property), the features of the last 20 videos viewed by the user (user history), and all 150 candidate video features (context). * action \(a_{t}\): the video ID to be recommended currently. * reward \(r_{t}\): a vector of five scores the user provided for the viewed video in terms of Click, Like, Comment, Hate, and WatchTime. * episode: a sequence of a user's video viewing history. * discount factor \(\gamma\): 0.99 * objective: We set the main goal to be maximizing the video WatchTime, and treat the others as auxiliaries.
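Before turning to the evaluation protocol, the two-stage update of Eqs. (2)-(7) can be made concrete with a small tabular sketch (our illustration only; the softmax policies, Lagrangian multipliers, and advantages below are hypothetical, not the production models):

```python
import numpy as np

rng = np.random.default_rng(1)
n_actions = 5

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

# Stage one output: one policy per auxiliary response (two auxiliaries here).
aux_policies = [softmax(rng.normal(size=n_actions)) for _ in range(2)]
lambdas = np.array([0.5, 0.5])   # same value for all constraints, as in the paper

# Current main policy pi_{theta_1^{(k)}}(.|s) and the main-response advantage A_1^{(k)}(s, a).
main_policy = softmax(rng.normal(size=n_actions))
advantage = rng.normal(size=n_actions)

def stage_two_weight(a):
    """Weight multiplying log pi_theta(a|s) in the stage-two actor objective, Eq. (7)."""
    lam_sum = lambdas.sum()
    prior = np.prod([p[a] ** (lam / lam_sum) for p, lam in zip(aux_policies, lambdas)])
    return prior / main_policy[a] * np.exp(advantage[a] / lam_sum)

weights = np.array([stage_two_weight(a) for a in range(n_actions)])
print(np.round(weights, 3))
# Actions that carry a high main-response advantage and are also likely under the
# auxiliary policies receive the largest weights; setting lambda_i = 0 removes the
# pull towards the i-th auxiliary policy.
```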
**Evaluation.** We use the _Normalised Capped Importance Sampling_ (NCIS) approach to evaluate different policies, which is a standard offline evaluation approach for RL methods in recommender systems (Zou et al., 2019). We also evaluate our method in terms of other metrics; please refer to Appendix D. \begin{table} \begin{tabular}{c|c|c} \hline Dimension & Number & Sparse Ratio \\ \hline users & 26,858 & - \\ items & 10,221,515 & - \\ samples & 68,148,288 & - \\ click & 25,693,008 & 37.70\% \\ like & 1,094,434 & 1.61\% \\ comment & 163,977 & 0.24\% \\ hate & 32,449 & 0.048\% \\ \hline \end{tabular} \end{table} Table 1. The statistics of KuaiRand. The NCIS score is defined as: \[N(\pi)=\frac{\sum_{s,a\in D}w(s,a)r(s,a)}{\sum_{s,a\in D}w(s,a)},\quad w(s,a)=\min\Big\{c,\frac{\pi(a|s)}{\pi_{\beta}(a|s)}\Big\}, \tag{11}\] where \(D\) is the dataset, \(w(s,a)\) is the clipped importance sampling ratio, \(\pi_{\beta}\) denotes the behavior policy, and \(c\) is a positive constant. **Baselines.** We compare TSCAC with the following baselines. * **BC**: A supervised behavior-cloning policy \(\pi_{\beta}\) that mimics the recommendation policy in the dataset; it inputs the user state and outputs the video ID. * **Wide&Deep** (Cheng et al., 2016): A supervised model which utilizes wide and deep layers to balance both memorization and generalization; it inputs the user state and outputs the item ID, and the weight of each sample is set to the weighted sum of all responses of this item. * **DeepFM** (Guo et al., 2017): A supervised recommendation model which combines a deep neural network and a factorization machine; it inputs the user state and outputs the item ID, and the weight of each sample is set to the weighted sum of all responses of this item. * **RCPO** (Tessler et al., 2018): A constrained actor-critic approach called reward-constrained policy optimization, which optimizes the policy to maximize the Lagrangian dual function of the constrained program. Specifically, the reward function is defined as \(r=r_{0}+\sum_{i=1}^{n}\lambda_{i}r_{i}\), where \(r_{0}\) is the main objective (WatchTime), \(r_{i}\) denotes the other feedback, and \(\lambda_{i}\) is the Lagrangian multiplier. * **RCPO-Multi-Critic**: An improved version of RCPO with multiple critics. We separately learn multiple critic models to evaluate the cumulative rewards of each feedback. Then, when optimizing the actor, we maximize a linear combination of the critics, weighted by the Lagrangian multipliers. * **Pareto** (Chen et al., 2021): A multi-objective RL algorithm that finds a Pareto optimal solution for recommender systems. * **TSCAC**: our two-stage constrained actor-critic algorithm. ### Overall Performance Table 2 presents the performance of the different algorithms in terms of the five scores. We can see that our TSCAC algorithm significantly outperforms the other algorithms, including both constrained reinforcement learning and supervised learning methods: for the main goal (WatchTime), TSCAC achieves the highest performance, \(13.14\) (\(2.23\%\)); for the auxiliary goals, TSCAC also ranks highest for \(3\) out of \(4\) scores (Click, Like, Comment). Note that TSCAC outperforms BC and RCPO at each dimension. The Pareto algorithm indeed learns a Pareto optimal solution that achieves the best performance at Hate, but it gets the lowest WatchTime performance, \(11.90\) (\(-7.4\%\)); i.e., it does not fit the setting where the main goal is to optimize WatchTime.
The RCPO algorithm achieves the second highest performance at WatchTime, \(13.07\) (\(1.70\%\)), but its score at Hate is the worst, as the sparse signals are dominated by the dense signals in a single evaluation model. Compared with RCPO, RCPO-Multi-Critic achieves a much better score at Hate, which demonstrates the effectiveness of the multi-critic policy estimation method. TSCAC also outperforms RCPO-Multi-Critic at each dimension, which shows the ability of our two-stage actor learning method to deal with multiple responses. ### Ablation Study We investigate how the value of the Lagrangian multiplier affects the performance. As we set the value of \(\lambda\) of all constraints to be the same in the second stage, we vary \(\lambda\) across \([1e-1,1e-2,1e-3,1e-4,1e-5]\) and present the performance of TSCAC in terms of all responses. Recall that a larger \(\lambda\) denotes stronger constraints on the auxiliary responses. Figure 3 shows that as \(\lambda\) increases, the main goal, WatchTime, decreases, since the constraints of the auxiliary responses become stronger. As also shown in Figure 3, the performance on interactions drops with a small \(\lambda=1e-5\), as the constraints are weak. Interestingly, the performance on interactions also decreases with larger \(\lambda\), which shows that too strong constraints affect the learning of the policy. The value \(1e-4\) achieves the best performance at interactions and improves WatchTime significantly compared with other baselines. ## 6. Live Experiments To demonstrate the effectiveness of our algorithm, we test its performance, as well as that of other alternatives, via live experiments on a popular short video platform. Algorithms are embodied in a candidate-ranking system used in production at a popular short video platform; that is, when a user arrives, these algorithms are expected to rank the candidate videos, and the system will recommend the top video to the user. We show that the proposed TSCAC algorithm is able to learn a policy that maximizes the main goal while also effectively balancing the auxiliary goal; in particular, we set the main one as maximizing WatchTime and the auxiliary one as improving the interactions between users and videos. ### Setup **Evaluation metrics.** We use online metrics to evaluate policy performance. For the main goal, we look at the total amount of time users spend on the videos, referred to as WatchTime. For the auxiliary goal, users can interact with videos in multiple ways, such as sharing the video with friends, downloading it, or providing comments. Here, we focus on the three online metrics associated with user-video interactions--the total numbers of Share, Download, and Comment interactions. **MDP.** Following the formulation in Section 3, we present the details of the Constrained MDP for short video recommendation. * state \(s_{t}\): user historical interactions (the list of items recommended to the user at previous rounds and the corresponding user feedback), user properties (such as device and location), and the features (the embeddings and statistics) of candidate videos at time \(t\). * action \(a_{t}\): a vector embedding of algorithm-predicted user preferences on different video topics, which determines the actual recommendation action (the video to be recommended) via a ranking function described below: **the ranking function**: for each candidate video, this function calculates the dot product between the predicted user preference vector (\(a_{t}\)) and the video embedding (representing its topic and quality) as in [10].
Then the video with the largest score is recommended. * reward \(r_{t}=(l_{t},i_{t})\): after each recommendation, the system observes how long the user spent on the video, WatchTime, denoted as \(l_{t}\), and whether the user has interacted with the video (Share/Download/Comment), denoted as \(i_{t}\). * episode: a trajectory starts when a user opens the app and ends when the user leaves. * policy: we choose to learn a Gaussian policy in the live experiments. Specifically, the action \(a_{t}\) is sampled from a multivariate Gaussian distribution whose mean and variance are the output of the actor model. **Workflow.** As shown in Figure 4, RL runs as follows: * **Inference**: When the user arrives, the user state is sent to the actor network, and the actor network samples an action from the Gaussian distribution. Then the ranking function takes both the action and the embeddings of the candidates as input, calculates the dot product between the action and the video embeddings as scores, and outputs the item with the highest score to the user. After that, (state, action, rewards, next state) is saved in the replay buffer. * **Training**: The actor and the critic networks are trained with a mini-batch of (state, action, rewards, next state) tuples sampled from the replay buffer. \begin{table} \begin{tabular}{c|c|c|c|c|c} \hline \hline Algorithm & Click\(\uparrow\) & Like\(\uparrow\)(e-2) & Comment\(\uparrow\)(e-3) & Hate\(\downarrow\)(e-4) & WatchTime\(\uparrow\) \\ \hline BC & 0.5338 & 1.231 & 3.225 & 2.304 & 12.85 \\ \hline Wide\&Deep & 0.5544 & 1.244 & 3.344 & 2.011 & 12.84 \\ & 3.86\% & 1.07\% & 3.69\% & \(-\)12.7\% & \(-\)0.08\% \\ \hline DeepFM & 0.5549* & 1.388* & 3.310 & 2.112 & 12.92 \\ & 3.95\%* & 12.76\%* & 2.64\% & \(-\)8.31\% & 0.53\% \\ \hline RCPO & 0.5510 & 1.386 & 3.628* & 2.951 & 13.07* \\ & 3.23\% & 12.57\% & 12.5\%* & 28.1\% & 1.70\%* \\ \hline RCPO-Multi-Critic & 0.5519 & 1.367 & 3.413 & 2.108 & 13.00 \\ & 3.41\% & 11.04\% & 5.83\% & \(-\)8.49\% & 1.14\% \\ \hline Pareto & 0.5438 & 1.171 & 3.393 & 0.991* & 11.90 \\ & 1.87\% & \(-\)4.85\% & 5.22\% & \(-\)56.96\% & \(-\)7.4\% \\ \hline TSCAC & **0.5570** & **1.462** & **3.728** & 1.870 & **13.14** \\ & **4.35\%** & **18.80\%** & **15.6**\% & \(-\)18.83\% & **2.23** \\ \hline \hline \end{tabular} The number in the bracket stands for the unit of the column; the number in the first row of each algorithm is the NCIS score. The percentage in the second row is the performance gap between the algorithm and the BC algorithm. The numbers with \(*\) denote the best performance among all baseline methods in each response dimension. The last row is marked in bold font where TSCAC achieves the best performance in a response dimension. \end{table} Table 2. Performance of different algorithms on KuaiRand. Figure 4. The workflow of RL in the production system. Figure 3. Effect of the value of the Lagrangian multiplier on the performance. _Compared algorithms._ We complement our evaluation with a supervised learning-to-rank (LTR) baseline, which is the default model run on the platform. * **RCPO**: Following (Tessler et al., 2018), we define a combined reward \(l_{t}+\lambda i_{t}\) and learn a policy to maximize the cumulative combined reward with discount factor 0.95, where \(\lambda\) is the Lagrangian multiplier. * **TSCAC**: We first learn a policy \(\pi_{2}\) to optimize the auxiliary goal. Then we learn a policy \(\pi_{1}\) to optimize the main goal with the soft constraint that \(\pi_{1}\) is close to \(\pi_{2}\).
* **Interaction-AC**: At the first stage, we learn a policy \(\pi_{2}\) to maximize the interaction reward, with the critic update following (2) and the actor update following (3). * At the second stage, we learn a main policy \(\pi_{1}\) to maximize the cumulative reward of WatchTime and softly regularize \(\pi_{1}\) to be close to \(\pi_{2}\), with the critic update following (4) and the actor update following (7). * **LTR (Baseline)**: The learning-to-rank model (Liu et al., 2009) that takes the user state embedding and the video embedding as input and fits the sum of responses. _Experimental details._ To test the different algorithms, we randomly split users on the platform into several buckets. The first bucket runs the baseline LTR model, and the remaining buckets run the models RCPO, Interaction-AC, and TSCAC. Models are trained for a couple of days and then are fixed to test performance within one day. ### Results Table 3 shows the performance improvement of the different algorithms compared with the LTR baseline regarding the metrics WatchTime, Share, Download, and Comment. As we can see, RCPO can learn to improve WatchTime as compared to the baseline; but the interaction signals are too sparse relative to WatchTime, such that when combining these responses together, it cannot effectively balance the interactions. The performance of the Interaction-AC algorithm is as expected: with signal from only the interaction reward, it learns to improve the interaction-related metrics (Share, Download, Comment); such interactions between users and videos also improve the user WatchTime, since more interesting videos with a high potential of invoking interactions are recommended, which optimizes the user's whole experience. Finally, the TSCAC algorithm achieves the best performance: as compared to RCPO, it has better WatchTime and does much better on the interaction metrics, thanks to the effective soft regularization during training that keeps it not too far from the Interaction-AC policy. Note that a 0.1% improvement of WatchTime and a 1% improvement of interactions are statistically significant on the short video platform. That is, the performance improvement of our proposed method over the baselines is significant. The universal drop of Comment for all RL methods is due to the natural trade-off between WatchTime and Comment. To understand how the TSCAC algorithm learns to balance the main and auxiliary goals, Figure 5 plots the online performance gap of the second stage over the LTR baseline on both WatchTime and interactions. As shown, the algorithm quickly learns to improve the interaction metrics Share and Comment at the beginning, under the constraint of the Interaction-AC policy. Then, gradually, the model learns to improve WatchTime over time while sacrificing interactions a little. Note that the live performance of TSCAC outperforms RCPO significantly at each dimension, which demonstrates the effectiveness of our method. ## 7. Conclusion In this paper, we study the problem of optimizing the main cumulative response with multiple auxiliary sparse constraints on short video platforms. To tackle the challenge of multiple constraints, we propose a novel constrained reinforcement learning method, called TSCAC, that optimizes the main goal while balancing the others for short video platforms. Our method consists of multi-critic policy estimation and two learning stages. At stage one, for each auxiliary response, we learn a policy to optimize its cumulative reward.
At stage two, we learn the main policy to optimize the cumulative main response, with a soft constraint that restricts the policy to be close to the policies learned to maximize the other responses. We demonstrate the advantages of our method over existing alternatives via extensive offline evaluations as well as live experiments. For future work, it is promising to apply our method to other recommender systems. It would also be interesting to study the performance of the deterministic version of TSCAC. \begin{table} \begin{tabular}{l|c|c|c|c} \hline \hline Algorithm & WatchTime & Share & Download & Comment \\ \hline RCPO & +0.309\% & \(-0.707\%\) & \(0.153\%\) & \(-1.313\%\) \\ Interaction-AC & +0.117\% & +5.008\% & +1.952\% & \(-0.101\%\) \\ TSCAC & +0.379\% & +3.376\% & +1.733\% & \(-0.619\%\) \\ \hline \hline \end{tabular} \end{table} Table 3. Performance comparison of different algorithms with the LTR baseline in live experiments. Figure 5. Online performance gap of TSCAC over the LTR baseline on each day.
2307.04126
Compactness of sequences of warped product circles over spheres with nonnegative scalar curvature
Gromov and Sormani conjectured that a sequence of three dimensional Riemannian manifolds with nonnegative scalar curvature and some additional uniform geometric bounds should have a subsequence which converges in some sense to a limit space with generalized notion of nonnegative scalar curvature. In this paper, we study the pre-compactness of a sequence of three dimensional warped product manifolds with warped circles over standard $\mathbb{S}^2$ that have nonnegative scalar curvature, a uniform upper bound on the volume, and a positive uniform lower bound on the MinA, which is the minimum area of closed minimal surfaces in the manifold. We prove that such a sequence has a subsequence converging to a $W^{1, p}$ Riemannian metric for all $p<2$, and that the limit metric has nonnegative scalar curvature in the distributional sense as defined by Lee-LeFloch.
Wenchuan Tian, Changliang Wang
2023-07-09T08:42:01Z
http://arxiv.org/abs/2307.04126v2
# Compactness of sequences of warped product circles over spheres with nonnegative scalar curvature ###### Abstract. Gromov and Sormani conjectured that a sequence of three dimensional Riemannian manifolds with nonnegative scalar curvature and some additional uniform geometric bounds should have a subsequence which converges in some sense to a limit space with some generalized notion of nonnegative scalar curvature. In this paper, we study the precompactness of a sequence of three dimensional warped product manifolds with warped circles over standard \(\mathbb{S}^{2}\) that have nonnegative scalar curvature, a uniform upper bound on the volume, and a positive uniform lower bound on the MinA, which is the minimum area of closed minimal surfaces in the manifold. We prove that such a sequence has a subsequence converging to a \(W^{1,p}\) Riemannian metric for all \(p<2\), and that the limit metric has nonnegative scalar curvature in the distributional sense as defined by Lee-LeFloch. ###### Contents * 1 Introduction * 2 Consequences of the geometric hypotheses on \(\mathbb{S}^{2}\times_{f}\mathbb{S}^{1}\) * 2.1 Basic consequences of the hypotheses * 2.2 Spherical mean inequality * 2.3 Ball average monotonicity * 3 \(W^{1,p}\) limit of warping function for \(1\leq p<2\) * 3.1 \(W^{1,p}\) limit function for \(p<2\) * 3.2 Lower semi-continuous representative of the limit function * 4 Positivity of the limit warping functions * 4.1 \(W^{1,2}\) regularity of limit of truncated warping functions * 4.2 A \(1\)-sweepout of the warped product manifold \(\mathbb{S}^{2}\times_{f}\mathbb{S}^{1}\) * 4.3 Bound MinA from above by \(L^{1}\)-norm of warping function * 4.4 Positivity of the limit of warping functions * 4.5 Uniform systole positive lower bound * 5 Nonnegative distributional scalar curvature of limit metric * 5.1 \(W^{1,p}\) limit Riemannian metric \(g_{\infty}\) * 5.2 Nonnegative distributional scalar curvature of \(g_{\infty}\) **Definition 1.2**.: Let \(\{(\mathbb{S}^{2}\times\mathbb{S}^{1},g_{j})\}_{j=1}^{\infty}\) be a sequence of Riemannian manifold such that \[g_{j}=g_{\mathbb{S}^{2}}+f_{j}^{2}g_{\mathbb{S}^{1}}=dr^{2}+\sin(r)^{2}d\theta^{ 2}+f_{j}^{2}d\varphi^{2},\ \text{for}\ j=1,2,3,... \tag{3}\] where \(g_{\mathbb{S}^{2}}\) and \(g_{\mathbb{S}^{1}}\) are the standard metrics on \(\mathbb{S}^{2}\) and \(\mathbb{S}^{1}\) respectively, and the function \(f_{j}:\mathbb{S}^{2}\to(0,\infty)\) is smooth for each \(j\). Here \(r\) and \(\theta\) are the geodesic polar coordinate for \(\mathbb{S}^{2}\). We also use the notation \(\mathbb{S}^{2}\times_{f_{j}}\mathbb{S}^{1}\) to denote \((\mathbb{S}^{2}\times\mathbb{S}^{1},g_{j})\). We consider the convergence of the warping function and prove the sharp regularity of the limit warping function in the following theorem: **Theorem 1.3**.: _Let \(\{\mathbb{S}^{2}\times_{f_{j}}\mathbb{S}^{1}\}_{j=1}^{\infty}\) be a sequence of warped product Riemannian manifolds such that each \(\mathbb{S}^{2}\times_{f_{j}}\mathbb{S}^{1}\) has non-negative scalar curvature. If we assume that_ \[\operatorname{Vol}(\mathbb{S}^{2}\times_{f_{j}}\mathbb{S}^{1})\leq V\ \text{and}\ \operatorname{MinA}(\mathbb{S}^{2}\times_{f_{j}}\mathbb{S}^{1})\geq A>0,\ \ \forall j\in\mathbb{N}, \tag{4}\] _then we have the following:_ 1.
_After passing to a subsequence if needed, the sequence of warping functions_ \(\{f_{j}\}_{j=1}^{\infty}\) _converges to some limit function_ \(f_{\infty}\) _in_ \(L^{q}(\mathbb{S}^{2})\) _for all_ \(q\in[1,\infty)\)_._ 2. _The limit function_ \(f_{\infty}\) _is in_ \(W^{1,p}(\mathbb{S}^{2})\)_, for all_ \(p\) _such that_ \(1\leq p<2\)_._ 3. _The essential infimum of_ \(f_{\infty}\) _is strictly positive, i.e._ \(\inf\limits_{\mathbb{S}^{2}}f_{\infty}>0\)_._ 4. _If we allow_ \(+\infty\) _as a limit, then the limit_ (5) \[\overline{f_{\infty}}(x):=\lim_{r\to 0}\fint_{B_{r}(x)}f_{\infty}\] _exists for every_ \(x\in\mathbb{S}^{2}\)_. Moreover,_ \(\overline{f_{\infty}}\) _is lower semi-continuous and strictly positive everywhere on_ \(\mathbb{S}^{2}\)_, and_ \(\overline{f_{\infty}}=f_{\infty}\) _a.e. on_ \(\mathbb{S}^{2}\)_._ The definition of essential infimum is given in Definition 4.6. In the proof of convergence properties in items (i) and (ii) in Theorem 1.3, we only need nonnegative scalar curvature condition and volume uniform upper bound condition. In the proof of part (iii) of Theorem 1.3, we make essential use of MinA condition combined with the spherical mean inequality [Proposition 2.4], Min-Max minimal surface theory and a covering argument. This is an interesting new way of applying the MinA condition to prevent collapsing. Then the part (iv) follows from (iii) and an interesting ball average monotonicity property [Proposition 2.6]. The ball average monotonicity is obtained from spherical mean inequality by using the trick as in the proof of Bishop-Gromov volume comparison theorem. **Remark 1.4**.: The extreme example constructed by Sormani and authors in [19] shows that the \(W^{1,p}\) regularity for \(1\leq p<2\) is sharp for the limit warping function \(f_{\infty}\). By applying Theorem 1.3 and the spherical mean inequality [Proposition 2.4], we obtain:. **Proposition 1.5**.: _Let \(\{\mathbb{S}^{2}\times_{f_{j}}\mathbb{S}^{1}\}_{j=1}^{\infty}\) be a sequence of warped product manifolds such that each \(\mathbb{S}^{2}\times_{f_{j}}\mathbb{S}^{1}\) has non-negative scalar curvature, and the sequence satisfies conditions in (4). Then there exists \(j_{0}\in\mathbb{N}\) such that \(f_{j}(x)\geq\frac{e_{\infty}}{4}>0\), for all \(j\geq j_{0}\) and \(x\in\mathbb{S}^{2}\), where \(e_{\infty}=\inf_{\mathbb{S}^{2}}f_{\infty}>0\) obtained in Theorem 1.3._ As an application of Proposition 1.5, we have: **Corollary 1.6**.: _Let \(\{\mathbb{S}^{2}\times_{f_{j}}\mathbb{S}^{1}\}_{j=1}^{\infty}\) be a sequence of warped product manifolds such that each \(\mathbb{S}^{2}\times_{f_{j}}\mathbb{S}^{1}\) has non-negative scalar curvature, and the sequence satisfies conditions in (4). Then the systoles of \(\mathbb{S}^{2}\times_{f_{j}}\mathbb{S}^{1}\), for all \(j\in\mathbb{N}\), have a uniform positive lower bound given by \(\min\left\{2\pi,\frac{e_{\infty}}{2}\pi\right\}\), where \(e_{\infty}:=\inf_{\mathbb{S}^{2}}f_{\infty}>0\) obtained in Theorem 1.3._ The systole of a Riemannian manifold is defined to be the length of the shortest closed geodesic in the manifold [Definition 4.16]. 
In order to estimate systole of warped product manifolds: \(\mathbb{S}^{2}\times_{f_{j}}\mathbb{S}^{1}\), in Lemma 4.18 we establish an interesting dichotomy property for closed geodesics in a general warped product manifold \(N\times_{f}\mathbb{S}^{1}\) with \(\mathbb{S}^{1}\) as a typical fiber, with metric tensor as \(g=g_{N}+f^{2}g_{\mathbb{S}^{1}}\), where \((N,g_{N})\) is a \(n\)-dimensional complete Riemannian manifold without boundary and \(f\) is a positive smooth function on \(N\). The dichotomy property in Lemma 4.18 has its own interests independently, and shall be useful in other studies of closed geodesics in such warped product manifolds. The convergence of the warping functions in Theorem 1.3 leads to the convergence of the Riemannian metrics, we prove the following: **Theorem 1.7**.: _Let \(\{\mathbb{S}^{2}\times_{f_{j}}\mathbb{S}^{1}\}_{j=1}^{\infty}\) be a sequence of warped product Riemannian manifolds such that each \(\mathbb{S}^{2}\times_{f_{j}}\mathbb{S}^{1}\) has non-negative scalar curvature. If we assume that_ \[\operatorname{Vol}(\mathbb{S}^{2}\times_{f_{j}}\mathbb{S}^{1})\leq V\text{ and }\operatorname{Min}\!\operatorname{A}(\mathbb{S}^{2}\times_{f_{j}}\mathbb{S}^{1}) \geq A>0,\ \ \forall j\in\mathbb{N}, \tag{6}\] _Then there exists a subsequence \(g_{j_{k}}\) and a (weak) warped product Riemannian metric \(g_{\infty}\in W^{1,p}(\mathbb{S}^{2}\times\mathbb{S}^{1},g_{0})\) for \(p\in[1,2)\) such that_ \[g_{j_{k}}\to g_{\infty}\ \ \ \text{in}\ \ L^{q}(\mathbb{S}^{2}\times\mathbb{S}^{1},g_{0}),\ \ \forall q\in[1,\infty). \tag{7}\] Theorem 1.7 is proved in SS5.1. The definition of a (weak) warped product Riemannian metric is given in Definition 5.1, and the spaces \(L^{q}(\mathbb{S}^{2}\times\mathbb{S}^{1},g_{0})\) and \(W^{1,p}(\mathbb{S}^{2}\times\mathbb{S}^{1},g_{0})\) are defined in Definition 5.3. The MinA condition is used to prevent \(g_{j_{k}}\) converging to a non-metric tensor in \(W^{1,p}(\mathbb{S}^{2}\times\mathbb{S}^{1},g_{0})\), with the help of the non-collapsing property of \(f_{\infty}\) in the item (iii) in Theorem 1.3. In the limit space we calculate the scalar curvature as a distribution using the definition by Lee and LeFloch [10], and we prove the following: **Theorem 1.8**.: _The limit metric \(g_{\infty}\) obtained in Theorem 1.7 has nonnegative distributional scalar curvature on \(\mathbb{S}^{2}\times\mathbb{S}^{1}\) in the sense of Lee-LeFloch. [10]. Moreover, the total scalar curvatures of \(g_{j}\) converge to the distributional total scalar curvature of \(g_{\infty}\)._ Theorem 1.8 is proved in SS5.2. In general, it is still an interesting and difficult problem to formulate suitable notions of generalized (or weak) nonnegative scalar curvature in Conjecture 1.1. A natural candidate is the volume-limit notion of nonnegative scalar curvature. But recently Kazara and Xu constructed a sequence of warped product metrics on \(\mathbb{S}^{2}\times\mathbb{S}^{1}\) whose limit space does not have nonnegative scalar curvature in the sense of volume-limit in Theorem 1.3 in [9]. There are other candidates, like Gromov's polyhedron comparison notion [7, 12] and Burkhardt-Guim's Ricci flow notion [4] of nonnegative scalar curvature for \(C^{0}\)-metrics. However, as mentioned in Remark 1.4, the \(W^{1,p}\) regularity, for \(1\leq p<2\), is the best regularity for our limit metrics, and in general our limit metrics are not continuous. Lee and Lefloch [10] defined the scalar curvature distribution for \(W^{1,2}_{loc}\)-metrics. 
Our limit metric \(g_{\infty}\) obtained in Theorem 1.7 does not satisfy the regularity requirement in [10], but when we add up different terms in the integrand, the divergent terms cancel with each other and the scalar curvature is still well defined as a distribution. This is discussed in detail in Remark 5.18. Interestingly, we obtain the continuity of distributional total scalar curvature in Theorem 1.8. More importantly, the scalar curvature distribution of Lee-LeFloch enables us to see the concentration of scalar curvature on the singular set, see SS4.4 in [19]. In Appendix A, we study pre-compactness of the sequence of warped product spheres over circle \((M^{3}_{j},g_{j})\), that is, \(M^{3}_{j}\) are diffeomorphic to \(\mathbb{S}^{1}\times\mathbb{S}^{2}\) with warped product metric tensors \[g_{j}=g_{\mathbb{S}^{1}}+h^{2}_{j}g_{\mathbb{S}^{2}},\ \ \text{where}\ \ h_{j}:\mathbb{S}^{1}\to(0,\infty). \tag{8}\] The study of this case is similar to the rotationally symmetric case studied in [15]. The key is to obtain a uniform bound for the norm of gradient of \(h_{j}\) from nonnegative scalar curvature condition [Lemma A.4]. By combining this with uniform diameter upper bound and the MinA condition, we prove that a subsequence of \(\{h_{j}\}_{j=1}^{\infty}\) converges in \(C^{0}\) and \(W^{1,2}\) sense to a bounded positive Lipschitz function \(h_{\infty}:\mathbb{S}^{1}\to(0,\infty)\) [Theorem A.1]. Moreover, we prove that the limit \(W^{1,2}\) Riemannian metric \(g_{\infty}=g_{\mathbb{S}^{1}}+h_{\infty}^{2}g_{\mathbb{S}^{2}}\) has nonnegative distributional scalar curvature in the sense of Lee-LeFloch [Theorem A.2]. The proof of Theorem A.1 is similar to that of Theorems 4.1 and 4.8 in [15]. We include it here to show the difference with the rotationally symmetric case and the difference with Theorem 1.3 and Theorem 1.7. The proof of Theorem A.2 shows that in this case the regularity requirement in Lee-LeFloch [10] is essential for the definition of the scalar curvature as a distribution. This provides an interesting contrast with the proof of Theorem 1.8. The article is organized as follows: in Section 2, we derive several analysis properties of warping functions \(f_{j}\) from the uniform geometric bounds of metric \(g_{j}\) as in (3). In particular, we show that metrics \(g_{j}\) in (3) have nonnegative scalar curvature if and only if the warping functions \(f_{j}\) satisfy the differential inequality [Lemma 2.1]: \[\Delta f_{j}\leq f_{j},\ \ \text{on}\ \ \mathbb{S}^{2}, \tag{9}\] where \(\Delta\) is the Lapacian on the standard round sphere \(\mathbb{S}^{2}\), taken to be the trace of the Hessian. Moreover, a positive number \(V\) is a uniform upper bound of volumes of metrics \(g_{j}\) in (3) if and only if \(f_{j}\) satisfy [Lemma 2.2] \[\int_{\mathbb{S}^{2}}f_{j}d\text{vol}_{g_{\mathbb{S}^{2}}}\leq\frac{V}{2\pi}. \tag{10}\] It is well-known that the spherical mean property of (sub, sup)-harmonic functions plays important roles in the study of these functions. Inspired by this, we prove a spherical mean inequality for functions \(f_{j}\) satisfying the differential inequality (9) [Proposition 2.4]. It turns out that the spherical mean inequality is very important in the proof of non-collapsing property in Section 4, in particular, in the proof of Proposition 4.10. 
Furthermore, by employing the trick in the proof of the Bishop-Gromov volume comparison theorem, we prove a ball average monotonicity property for \(f_{j}\) [Proposition 2.6], which helps us to obtain lower semi-continuity of the limit warping function \(f_{\infty}\) in Proposition 3.7. In Section 3, we study the convergence of a sequence \(\{f_{j}\}_{j=1}^{\infty}\) of positive functions on \(\mathbb{S}^{2}\) satisfying (9) and (10). We prove that there exists a subsequence of such a sequence \(\{f_{j}\}\) and a function \(f_{\infty}\in W^{1,p}(\mathbb{S}^{2})\,(1\leq p<2)\) such that the subsequence converges to \(f_{\infty}\) in \(L^{q}(\mathbb{S}^{2})\) for any \(q\geq 1\) [Proposition 3.5]. The proof of this convergence result is very different from that in the cases of warped product metrics as in [15] and in (8). Because the warping functions \(h_{j}\) in [15] and in (8) have one variable, whereas \(f_{j}\) in (3) have two variables, it is more difficult to obtain sub-convergence of \(\{f_{j}\}\), and we make use of the Moser-Trudinger inequality in (25) in [14]. The regularity of the limit function \(f_{\infty}\) is weaker than that of \(h_{\infty}\). The extreme example constructed by Sormani and the authors in [19] shows that the \(W^{1,p}\) regularity for \(1\leq p<2\) is sharp for \(f_{\infty}\). In Section 4, we use the MinA condition to show that the limit function \(f_{\infty}\) has positive essential infimum [Theorem 4.13] and that the warping functions \(f_{j}\) have a positive uniform lower bound [Proposition 4.15]. This enables us to define the weak warped product Riemannian metric \(g_{\infty}\) on \(\mathbb{S}^{2}\times\mathbb{S}^{1}\) in Definition 5.1, and is crucial in the study of geometric convergence of warped product circles over the sphere with metric tensors as in (3). Moreover, as a consequence of Proposition 4.15, we obtain a positive uniform lower bound for the systole of the warped product manifolds \(\mathbb{S}^{2}\times_{f_{j}}\mathbb{S}^{1}\) [Proposition 4.20]. The MinA condition can be viewed as a noncollapsing condition. As shown in [15] and in Lemma A.6 below, it is not difficult to see this in the cases of metric tensors as in [15] and (8). In the case of metric tensors as in (3), however, the implication of the MinA condition is much more complicated. We need to use the Min-Max minimal surface theory of Marques and Neves (see e.g. [13]), the maximum principle for weak solutions (Theorem 8.19 in [6]), and the spherical mean inequality obtained in Proposition 2.4, in order to obtain noncollapsing from the MinA condition. In Section 5, we prove that a subsequence of \(\{g_{j}\}_{j=1}^{\infty}\), with \(g_{j}\) as in (3) having nonnegative scalar curvature, uniformly upper bounded volumes, and satisfying the MinA condition, converges to a weak metric tensor \(g_{\infty}\in W^{1,p}(\mathbb{S}^{2}\times\mathbb{S}^{1},g_{0})\,(1\leq p<2)\) in the sense of \(L^{q}(\mathbb{S}^{2}\times\mathbb{S}^{1},g_{0})\) for all \(q\geq 1\) [Theorem 5.5]. Moreover, we prove that the limit metric \(g_{\infty}\) has nonnegative distributional scalar curvature in the sense of Lee-LeFloch [Theorem 5.11]. Note that in the case of metric tensors as in [15] and (8), we need the uniform diameter upper bound condition in addition to the nonnegative scalar curvature condition and the MinA condition for getting convergence [Theorem 1.3 in [15] and Theorem A.1], whereas in the case of metric tensors as in (3), we need the uniform volume upper bound condition instead of the uniform diameter upper bound condition [Theorem 5.5].
**Acknowledgements:** The authors would like to thank the Fields Institute for hosting the _Summer School on Geometric Analysis_ in July 2017, where we met Professor Christina Sormani and she began to guide our work on the project concerning compactness of manifolds with nonnegative scalar curvature. We are grateful to Professor Sormani for her constant encouragement and inspiring discussions. In particular, Professor Sormani suggested to us the method of spherical means, which turns out to be very useful in the study of the warping functions in Theorem 1.3. We thank Brian Allen for discussions and his interest in this work. Wenchuan Tian was partially supported by the AMS Simons Travel Grant. Changliang Wang was partially supported by the Fundamental Research Funds for the Central Universities and the Shanghai Pilot Program for Basic Research.

## 2. Consequences of the geometric hypotheses on \(\mathbb{S}^{2}\times_{f}\mathbb{S}^{1}\)

In this section we prove several consequences of the uniform geometric bounds. In Subsection 2.1, we derive the differential inequality satisfied by the warping functions \(f_{j}\) and prove that the uniform volume bound on the sequence of Riemannian manifolds implies a uniform bound on the \(L^{1}\) norms of the warping functions. In Subsection 2.2, we prove the spherical mean inequality for the warping function \(f\) [Proposition 2.4], which is our main analytic tool. In Subsection 2.3, we prove a ball average monotonicity property for the warping function \(f\) [Proposition 2.6]. The implication of the MinA condition is more complicated; we discuss it in Section 4.

### Basic consequences of the hypotheses

**Lemma 2.1** (Non-negative scalar curvature condition).: _The scalar curvature of the warped product manifold \(\mathbb{S}^{2}\times_{f_{j}}\mathbb{S}^{1}\) is given by_
\[\operatorname{Scalar}_{j}=2-2\frac{\Delta f_{j}}{f_{j}}, \tag{11}\]
_where \(\Delta\) is the Laplacian on \(\mathbb{S}^{2}\) with respect to the standard metric \(g_{\mathbb{S}^{2}}\), taken to be the trace of the Hessian (without the negative sign)._

_Thus \(\mathbb{S}^{2}\times_{f_{j}}\mathbb{S}^{1}\) has nonnegative scalar curvature if and only if_
\[\Delta f_{j}\leq f_{j}. \tag{12}\]

Proof.: By using the Ricci curvature formula for warped product metrics as in Proposition 9.106 of [3], we can easily obtain the scalar curvature of \(\mathbb{S}^{2}\times_{f_{j}}\mathbb{S}^{1}\) as \(\operatorname{Scalar}_{j}=2-2\frac{\Delta f_{j}}{f_{j}}\). Then the second claim follows directly, since \(f_{j}>0\).

**Lemma 2.2** (Volume upper bound condition).: _The warped product manifold \(\mathbb{S}^{2}\times_{f_{j}}\mathbb{S}^{1}\) has volume \(\operatorname{Vol}(\mathbb{S}^{2}\times_{f_{j}}\mathbb{S}^{1})\leq V\) if and only if_
\[\int_{\mathbb{S}^{2}}f_{j}d\operatorname{vol}_{\mathbb{S}^{2}}\leq\frac{V}{2\pi}. \tag{13}\]

Proof.: The Riemannian volume measure of \(g_{j}\) is given by
\[d\mathrm{vol}_{g_{j}}=f_{j}d\mathrm{vol}_{g_{\mathbb{S}^{2}}}d\mathrm{vol}_{g_{\mathbb{S}^{1}}}. \tag{14}\]
Thus the volume of \(\mathbb{S}^{2}\times_{f_{j}}\mathbb{S}^{1}\) is given by
\[\mathrm{Vol}(\mathbb{S}^{2}\times_{f_{j}}\mathbb{S}^{1})=\int_{\mathbb{S}^{2}\times\mathbb{S}^{1}}f_{j}d\mathrm{vol}_{g_{\mathbb{S}^{2}}}d\mathrm{vol}_{g_{\mathbb{S}^{1}}}=2\pi\int_{\mathbb{S}^{2}}f_{j}d\mathrm{vol}_{g_{\mathbb{S}^{2}}}. \tag{15}\]
Then the claim follows directly.
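The following elementary examples, which are ours and are not used in the sequel, illustrate Lemmas 2.1 and 2.2. For constant warping functions \(f_{j}\equiv c>0\) we have \(\Delta f_{j}=0\leq f_{j}\), so by (11) the product metric has scalar curvature \(\operatorname{Scalar}_{j}\equiv 2\), and by (15) its volume equals \(8\pi^{2}c\); thus (13) holds precisely when \(c\leq V/(8\pi^{2})\). A non-constant example is obtained from a first spherical harmonic: if \(x_{1}\) denotes a linear coordinate function restricted to \(\mathbb{S}^{2}\), so that \(\Delta x_{1}=-2x_{1}\), then for \(0<\varepsilon\leq\frac{1}{3}\) the function \(f=1+\varepsilon x_{1}\) is positive and satisfies
\[\operatorname{Scalar}=2-2\frac{\Delta f}{f}=\frac{2(1+3\varepsilon x_{1})}{1+\varepsilon x_{1}}\geq 0,\qquad\int_{\mathbb{S}^{2}}f\,d\mathrm{vol}_{g_{\mathbb{S}^{2}}}=4\pi,\]
so \(\mathbb{S}^{2}\times_{f}\mathbb{S}^{1}\) has nonnegative scalar curvature and volume \(8\pi^{2}\).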
### Spherical mean inequality

In this subsection, we prove a spherical mean inequality [Proposition 2.4] for smooth functions \(f\) on \(\mathbb{S}^{2}\) satisfying the differential inequality \(\Delta f\leq f\). By Lemma 2.1, this is equivalent to studying the warping function of a warped product manifold \(\mathbb{S}^{2}\times_{f}\mathbb{S}^{1}\) with nonnegative scalar curvature. The spherical mean inequality plays an important role in the proof of Proposition 4.10. The derivation of the spherical mean inequality is similar to that of the mean value property of harmonic functions. We start with the following lemma.

**Lemma 2.3**.: _Let \(f\) be a smooth function on \(\mathbb{S}^{2}\). Consider the spherical mean given by_
\[\phi(r):=\fint_{\partial B_{r}(p)}fds, \tag{16}\]
_where \(B_{r}(p)\) is the geodesic ball in the standard \(\mathbb{S}^{2}\) with center \(p\) and radius \(r\). The derivative of \(\phi(r)\) satisfies_
\[\frac{d}{dr}\phi(r)=\frac{1}{2\pi\sin r}\int_{B_{r}(p)}\Delta fd\mathrm{vol}_{\mathbb{S}^{2}}. \tag{17}\]

Proof.: Using geodesic polar coordinates \((r,\theta)\) on \(\mathbb{S}^{2}\) centered at \(p\), one can write \(\phi(r)\) as
\[\phi(r)=\frac{\int_{0}^{2\pi}f(r,\theta)\sin rd\theta}{2\pi\sin r}=\frac{\int_{0}^{2\pi}f(r,\theta)d\theta}{2\pi}. \tag{18}\]
Then taking the derivative with respect to \(r\) gives
\[\phi^{\prime}(r) = \frac{1}{2\pi}\int_{0}^{2\pi}\frac{\partial f}{\partial r}d\theta \tag{19}\]
\[= \frac{1}{2\pi}\int_{0}^{2\pi}\langle\nabla f,\partial_{r}\rangle d\theta \tag{20}\]
\[= \frac{1}{2\pi\sin r}\int_{0}^{2\pi}\langle\nabla f,\partial_{r}\rangle\sin rd\theta \tag{21}\]
\[= \frac{1}{2\pi\sin r}\int_{\partial B_{r}(p)}\langle\nabla f,\partial_{r}\rangle ds \tag{22}\]
\[\stackrel{\text{Stokes}}{=} \frac{1}{2\pi\sin r}\int_{B_{r}(p)}\Delta fd\mathrm{vol}_{\mathbb{S}^{2}}. \tag{23}\]

Now we use Lemma 2.3 to prove the spherical mean inequality.

**Proposition 2.4**.: _Let \(f\) be a smooth function on \(\mathbb{S}^{2}\) satisfying \(\Delta f\leq f\). Then for any fixed \(p\in\mathbb{S}^{2}\) and \(0<r_{0}<r_{1}\leq\frac{\pi}{2}\), one has_
\[\fint_{\partial B_{r_{1}}(p)}fds-\fint_{\partial B_{r_{0}}(p)}fds\leq\frac{\|f\|_{L^{2}(\mathbb{S}^{2})}}{\sqrt{2\pi}}(r_{1}-r_{0}), \tag{24}\]
_where \(B_{r}(p)\) is the geodesic ball in the standard \(\mathbb{S}^{2}\) with center \(p\) and radius \(r\)._

_Moreover, by taking the limit as \(r_{0}\to 0\), one has_
\[\fint_{\partial B_{r}(p)}fds-f(p)\leq\frac{\|f\|_{L^{2}(\mathbb{S}^{2})}}{\sqrt{2\pi}}r, \tag{25}\]
_for any \(0<r\leq\frac{\pi}{2}\)._

Proof.: By Lemma 2.3 and the assumption \(\Delta f\leq f\), one has
\[\phi^{\prime}(r)\leq\frac{1}{2\pi\sin r}\int_{B_{r}(p)}fd\mathrm{vol}_{\mathbb{S}^{2}}. \tag{26}\]
Integrating this differential inequality for \(r\) from \(r_{0}\) to \(r_{1}\), and using the Cauchy-Schwarz inequality together with \(\mathrm{Area}(B_{r}(p))=2\pi(1-\cos r)\), gives
\[\phi(r_{1})-\phi(r_{0}) \leq \int_{r_{0}}^{r_{1}}\left(\frac{1}{2\pi\sin r}\int_{B_{r}(p)}fd\mathrm{vol}_{\mathbb{S}^{2}}\right)dr \tag{27}\]
\[\leq \int_{r_{0}}^{r_{1}}\left(\frac{1}{2\pi\sin r}\|f\|_{L^{2}(\mathbb{S}^{2})}\sqrt{\mathrm{Area}(B_{r}(p))}\right)dr \tag{28}\]
\[= \frac{\|f\|_{L^{2}(\mathbb{S}^{2})}}{\sqrt{2\pi}}\int_{r_{0}}^{r_{1}}\frac{\sqrt{1-\cos r}}{\sin r}dr \tag{29}\]
\[= \frac{\|f\|_{L^{2}(\mathbb{S}^{2})}}{\sqrt{2\pi}}\int_{r_{0}}^{r_{1}}\frac{1}{\sqrt{1+\cos r}}dr \tag{30}\]
\[\leq \frac{\|f\|_{L^{2}(\mathbb{S}^{2})}}{\sqrt{2\pi}}\int_{r_{0}}^{r_{1}}1dr\qquad\left(0<r_{0}<r_{1}\leq\frac{\pi}{2}\right) \tag{31}\]
\[= \frac{\|f\|_{L^{2}(\mathbb{S}^{2})}}{\sqrt{2\pi}}(r_{1}-r_{0}). \tag{32}\]
### Ball average monotonicity

In this subsection, we further derive a ball average monotonicity property [Proposition 2.6] for a smooth function on \(\mathbb{S}^{2}\) satisfying \(\Delta f\leq f\). The proof uses the spherical mean inequality [Proposition 2.4] and the trick from the proof of the Bishop-Gromov volume comparison theorem. This ball average monotonicity is used in Proposition 3.7 to prove that the ball average limit as \(r\to 0\) exists everywhere for the limit function.

**Lemma 2.5**.: _Let \(f\) be a smooth function on \(\mathbb{S}^{2}\) satisfying \(\Delta f\leq f\) and \(\|f\|_{L^{2}(\mathbb{S}^{2})}\leq C\sqrt{2\pi}\), where \(C\) is a positive constant. For any fixed \(x\in\mathbb{S}^{2}\), the spherical mean_
\[\fint_{\partial B_{r}(x)}(f-Cr)=\frac{\int_{\partial B_{r}(x)}(f-Cr)}{2\pi\sin r} \tag{33}\]
_is a non-increasing function of \(r\) for \(r\in(0,\frac{\pi}{2}]\)._

Proof.: The spherical mean inequality in Proposition 2.4 says that for any \(x\in\mathbb{S}^{2}\) and \(0<r_{0}<r_{1}\leq\frac{\pi}{2}\),
\[\fint_{\partial B_{r_{1}}(x)}f-\fint_{\partial B_{r_{0}}(x)}f\leq\frac{\|f\|_{L^{2}(\mathbb{S}^{2})}}{\sqrt{2\pi}}(r_{1}-r_{0})\leq C(r_{1}-r_{0}). \tag{34}\]
By rearranging this inequality, we obtain that for any fixed \(x\in\mathbb{S}^{2}\),
\[\fint_{\partial B_{r_{1}}(x)}(f-Cr_{1})\leq\fint_{\partial B_{r_{0}}(x)}(f-Cr_{0}),\quad\forall 0<r_{0}\leq r_{1}\leq\frac{\pi}{2}. \tag{35}\]
This completes the proof.

Combining this spherical mean monotonicity with the trick from the proof of the Bishop-Gromov volume comparison theorem, we obtain the following ball average monotonicity.

**Proposition 2.6**.: _Let \(f\) be a smooth function on \(\mathbb{S}^{2}\) satisfying \(\Delta f\leq f\) and \(\|f\|_{L^{2}(\mathbb{S}^{2})}\leq C\sqrt{2\pi}\). Then for all \(0<r<R\leq\frac{\pi}{2}\),_
\[\fint_{B_{R}(x)}(f(y)-Cd(y,x))\,d\mathrm{vol}(y)\leq\fint_{B_{r}(x)}(f(y)-Cd(y,x))\,d\mathrm{vol}(y), \tag{36}\]
_where \(d(y,x)\) is the distance between \(y\) and \(x\) in the standard \(\mathbb{S}^{2}\)._

Proof.: **Step 1**.
\[\int_{B_{r}(x)}(f(y)-Cd(y,x))\,d\mathrm{vol}(y)\]
\[= \int_{0}^{r}\left(\int_{\partial B_{s}(x)}(f-Cs)d\sigma\right)ds \tag{37}\]
\[= \int_{0}^{r}(2\pi\sin s)\left(\fint_{\partial B_{s}(x)}(f-Cs)d\sigma\right)ds \tag{38}\]
\[\geq \fint_{\partial B_{r}(x)}(f-Cr)d\sigma\cdot\int_{0}^{r}2\pi\sin sds\quad\text{(by Lemma 2.5 and }s\leq r\text{)} \tag{39}\]
\[= \mathrm{Vol}(B_{r}(x))\fint_{\partial B_{r}(x)}(f-Cr)d\sigma. \tag{40}\]
So
\[\fint_{B_{r}(x)}(f(y)-Cd(y,x))d\mathrm{vol}(y)\geq\fint_{\partial B_{r}(x)}(f-Cr)d\sigma. \tag{42}\]

**Step 2**. Let \(A_{r,R}(x)=B_{R}(x)\setminus B_{r}(x)\). Arguing as in Step 1, we have
\[\int_{A_{r,R}(x)}(f(y)-Cd(y,x))d\mathrm{vol}(y) \tag{43}\]
\[= \int_{r}^{R}\left(\int_{\partial B_{s}(x)}(f-Cs)d\sigma\right)ds \tag{44}\]
\[= \int_{r}^{R}(2\pi\sin s)\left(\fint_{\partial B_{s}(x)}(f-Cs)d\sigma\right)ds \tag{45}\]
\[\leq \fint_{\partial B_{r}(x)}(f-Cr)d\sigma\cdot\int_{r}^{R}(2\pi\sin s)ds\quad\text{(by Lemma 2.5 and }s\geq r\text{)} \tag{46}\]
\[= \mathrm{Vol}(A_{r,R}(x))\fint_{\partial B_{r}(x)}(f-Cr)d\sigma. \tag{47}\]
So
\[\fint_{A_{r,R}(x)}(f(y)-Cd(y,x))d\mathrm{vol}(y)\leq\fint_{\partial B_{r}(x)}(f-Cr)d\sigma. \tag{48}\]

**Step 3**. By combining (42) and (48), we obtain that for \(0<r<R\leq\frac{\pi}{2}\),
\[\fint_{A_{r,R}(x)}(f(y)-Cd(y,x))d\mathrm{vol}(y)\leq\fint_{B_{r}(x)}(f(y)-Cd(y,x))d\mathrm{vol}(y). \tag{49}\]

**Step 4**.
\[\int_{B_{R}(x)}(f(y)-Cd(y,x))d\mathrm{vol}(y) \tag{50}\]
\[= \int_{B_{r}(x)}(f(y)-Cd(y,x))d\mathrm{vol}(y)+\int_{A_{r,R}(x)}(f(y)-Cd(y,x))d\mathrm{vol}(y) \tag{51}\]
\[\leq \int_{B_{r}(x)}(f(y)-Cd(y,x))d\mathrm{vol}(y) \tag{52}\]
\[\quad+\mathrm{Vol}(A_{r,R}(x))\cdot\fint_{B_{r}(x)}(f(y)-Cd(y,x))d\mathrm{vol}(y)\qquad\text{(by (49))} \tag{53}\]
\[= \left(\mathrm{Vol}(B_{r}(x))+\mathrm{Vol}(A_{r,R}(x))\right)\fint_{B_{r}(x)}(f(y)-Cd(y,x))d\mathrm{vol}(y) \tag{54}\]
\[= \mathrm{Vol}(B_{R}(x))\fint_{B_{r}(x)}(f(y)-Cd(y,x))d\mathrm{vol}(y). \tag{55}\]
This completes the proof.

## 3. \(W^{1,p}\) limit of warping function for \(1\leq p<2\)

In this section, we study the \(L^{q}\) pre-compactness of a sequence of positive smooth functions \(f_{j}\) satisfying the inequalities
\[\Delta f_{j}\leq f_{j},\quad\int_{\mathbb{S}^{2}}f_{j}d\mathrm{vol}_{\mathbb{S}^{2}}\leq\frac{V}{2\pi},\quad\forall j\in\mathbb{N}. \tag{56}\]
Here \(V\) is a positive constant. By Lemmas 2.1 and 2.2, the inequalities in (56) are equivalent to the requirements that the Riemannian manifolds \(\mathbb{S}^{2}\times_{f_{j}}\mathbb{S}^{1}\) have nonnegative scalar curvature and a uniform volume upper bound.

In Subsection 3.1, we prove that a sequence of positive smooth functions \(f_{j}\) on \(\mathbb{S}^{2}\) satisfying the requirements in (56) has a convergent subsequence in \(L^{q}(\mathbb{S}^{2})\) for any \(1\leq q<+\infty\), and that the limit function is in \(W^{1,p}(\mathbb{S}^{2})\) for any \(1\leq p<2\) [Proposition 3.5]. In Subsection 3.2, we apply the ball average monotonicity property obtained in Proposition 2.6 to prove that the limit function has a lower semicontinuous representative [Proposition 3.7, Remark 3.8].

### \(W^{1,p}\) limit function for \(p<2\)

We first derive a gradient estimate for the sequence of functions \(\ln f_{j}\) in Lemma 3.1, which is then used to obtain an \(L^{p}\) estimate for \(f_{j}\) via the Moser-Trudinger inequality in Lemma 3.2.

**Lemma 3.1**.: _Let \(\{f_{j}\}_{j=1}^{\infty}\) be a sequence of positive functions on \(\mathbb{S}^{2}\) satisfying_
\[\Delta f_{j}\leq f_{j},\quad\forall j\in\mathbb{N}. \tag{57}\]
_We have_
\[\left\|\nabla\ln f_{j}\right\|_{L^{2}(\mathbb{S}^{2})}^{2}\leq\mathrm{Vol}(\mathbb{S}^{2}),\quad\forall j\in\mathbb{N}. \tag{58}\]

Proof.: Note that
\[\Delta\ln f_{j}=\frac{\Delta f_{j}}{f_{j}}-\frac{|\nabla f_{j}|^{2}}{f_{j}^{2}}. \tag{59}\]
By equation (59) and the assumption, we have
\[|\nabla\ln f_{j}|^{2}=\frac{|\nabla f_{j}|^{2}}{f_{j}^{2}}=\frac{\Delta f_{j}}{f_{j}}-\Delta\ln f_{j}\leq 1-\Delta\ln f_{j}. \tag{60}\]
Integrating this over \(\mathbb{S}^{2}\) and using Stokes' theorem, we get
\[\|\nabla\ln f_{j}\|^{2}_{L^{2}(\mathbb{S}^{2})}=\int_{\mathbb{S}^{2}}|\nabla\ln f_{j}|^{2}\leq\mathrm{Vol}(\mathbb{S}^{2}). \tag{61}\]

**Lemma 3.2**.: _Let \(\{f_{j}\}_{j=1}^{\infty}\) be a sequence of positive functions on \(\mathbb{S}^{2}\) satisfying_
\[\Delta f_{j}\leq f_{j},\quad\int_{\mathbb{S}^{2}}f_{j}d\mathrm{vol}_{\mathbb{S}^{2}}\leq\frac{V}{2\pi},\quad\forall j\in\mathbb{N}. \tag{62}\]
_Then we have_
\[\|f_{j}\|^{p}_{L^{p}(\mathbb{S}^{2})}\leq 4\pi\exp\left(\frac{Vp}{8\pi^{2}}+\frac{p^{2}}{4}\right), \tag{63}\]
_for all \(j\in\mathbb{N}\) and \(p\in[1,+\infty)\)._

Proof.: By the Moser-Trudinger inequality (inequality (25) in [14]), for any smooth function \(\psi:\mathbb{S}^{2}\to\mathbb{R}\) we have
\[\int_{\mathbb{S}^{2}}e^{\psi}d\mathrm{vol}_{\mathbb{S}^{2}}\leq 4\pi\exp\left(\frac{1}{4\pi}\int_{\mathbb{S}^{2}}\left(\psi+\frac{1}{4}|\nabla\psi|^{2}\right)d\mathrm{vol}_{\mathbb{S}^{2}}\right). \tag{64}\]
Here \(\nabla\) is the Levi-Civita connection of the standard metric \(g_{\mathbb{S}^{2}}\) and \(d\mathrm{vol}_{\mathbb{S}^{2}}\) is the volume form on \(\mathbb{S}^{2}\) with respect to the standard metric \(g_{\mathbb{S}^{2}}\).

Take \(\psi=p\ln f_{j}\); then we have
\[\|f_{j}\|^{p}_{L^{p}(\mathbb{S}^{2})} = \int_{\mathbb{S}^{2}}f_{j}^{p}d\mathrm{vol}_{\mathbb{S}^{2}} \tag{65}\]
\[= \int_{\mathbb{S}^{2}}e^{p\ln f_{j}}d\mathrm{vol}_{\mathbb{S}^{2}} \tag{66}\]
\[\leq 4\pi\exp\left(\frac{1}{4\pi}\int_{\mathbb{S}^{2}}\left(p\ln f_{j}+\frac{p^{2}}{4}|\nabla\ln f_{j}|^{2}\right)d\mathrm{vol}_{\mathbb{S}^{2}}\right). \tag{67}\]
By the fact that \(\ln x\leq x,\forall x>0\), we have
\[\int_{\mathbb{S}^{2}}\ln f_{j}\leq\int_{\mathbb{S}^{2}}f_{j}\leq\frac{V}{2\pi}. \tag{68}\]
On the other hand, by Lemma 3.1 we have
\[\int_{\mathbb{S}^{2}}|\nabla\ln f_{j}|^{2}\leq\mathrm{Vol}(\mathbb{S}^{2})=4\pi. \tag{69}\]
This completes the proof.

Next, we show that such a sequence of functions is uniformly bounded in \(W^{1,p}(\mathbb{S}^{2})\) for \(p\in[1,2)\).

**Lemma 3.3**.: _Let \(\{f_{j}\}_{j=1}^{\infty}\) be a sequence of positive functions on \(\mathbb{S}^{2}\) satisfying_
\[\Delta f_{j}\leq f_{j},\quad\int_{\mathbb{S}^{2}}f_{j}d\mathrm{vol}_{\mathbb{S}^{2}}\leq\frac{V}{2\pi},\quad\forall j\in\mathbb{N}. \tag{70}\]
_Then the sequence is uniformly bounded in \(W^{1,p}(\mathbb{S}^{2})\) for \(p\in[1,2)\), i.e. for each \(p\in[1,2)\), there exists a constant \(C(p)\) such that_
\[\|f_{j}\|_{W^{1,p}(\mathbb{S}^{2})}\leq C(p),\quad\forall j\in\mathbb{N}. \tag{71}\]

Proof.: For any \(1\leq p<2\),
\[|\nabla f_{j}|^{p}=|\nabla\ln f_{j}|^{p}\cdot|f_{j}|^{p}. \tag{72}\]
Hölder's inequality (with exponents \(\frac{2}{p}\) and \(\frac{2}{2-p}\)) implies that
\[\|\nabla f_{j}\|_{L^{p}(\mathbb{S}^{2})}\]
\[= \left(\int_{\mathbb{S}^{2}}|\nabla\ln f_{j}|^{p}\cdot|f_{j}|^{p}\right)^{\frac{1}{p}} \tag{73}\]
\[\leq \|\nabla\ln f_{j}\|_{L^{2}(\mathbb{S}^{2})}\cdot\|f_{j}\|_{L^{\frac{2p}{2-p}}(\mathbb{S}^{2})} \tag{74}\]
\[\leq \|\nabla\ln f_{j}\|_{L^{2}(\mathbb{S}^{2})}\cdot\left(\|f_{j}\|_{L^{\frac{2p}{2-p}}(\mathbb{S}^{2})}+\mathrm{Vol}(\mathbb{S}^{2})\right) \tag{75}\]
\[\leq \left(\mathrm{Vol}(\mathbb{S}^{2})\right)^{\frac{1}{2}}\left((4\pi)^{\frac{2-p}{2p}}\exp\left(\frac{V}{8\pi^{2}}+\frac{p}{2(2-p)}\right)+\mathrm{Vol}(\mathbb{S}^{2})\right). \tag{76}\]
Here in the last step, we used Lemma 3.1 and Lemma 3.2. Moreover, by Lemma 3.2 again, for each \(p\in[1,2)\), \(\|f_{j}\|_{L^{p}(\mathbb{S}^{2})}\) is uniformly bounded for all \(j\in\mathbb{N}\). Hence for each \(p\in[1,2)\), \(\|f_{j}\|_{W^{1,p}(\mathbb{S}^{2})}\) is uniformly bounded for all \(j\in\mathbb{N}\).

We use the uniform \(W^{1,p}(\mathbb{S}^{2})\) bound to prove convergence in the following lemma.

**Lemma 3.4**.: _Let \(\{f_{j}\}_{j=1}^{\infty}\) be a sequence of positive functions on \(\mathbb{S}^{2}\) satisfying_
\[\Delta f_{j}\leq f_{j},\quad\int_{\mathbb{S}^{2}}f_{j}d\mathrm{vol}_{\mathbb{S}^{2}}\leq\frac{V}{2\pi},\quad\forall j\in\mathbb{N}. \tag{78}\]
_Then for each fixed \(p\in[1,2)\), there exist a subsequence \(\{f_{j_{k}^{(p)}}\}_{k=1}^{\infty}\) and \(f_{\infty,p}\in W^{1,p}(\mathbb{S}^{2})\) such that_
\[f_{j_{k}^{(p)}}\to f_{\infty,p},\quad\text{in}\;\;L^{q}(\mathbb{S}^{2}), \tag{79}\]
_for each \(1\leq q<\frac{2p}{2-p}\)._

_Moreover, for any \(\varphi\in C^{\infty}(\mathbb{S}^{2})\),_
\[\int_{\mathbb{S}^{2}}\left(f_{j_{k}^{(p)}}\varphi+\langle\nabla f_{j_{k}^{(p)}},\nabla\varphi\rangle\right)d\mathrm{vol}_{g_{\mathbb{S}^{2}}}\to\int_{\mathbb{S}^{2}}\left(f_{\infty,p}\varphi+\langle\nabla f_{\infty,p},\nabla\varphi\rangle\right)d\mathrm{vol}_{g_{\mathbb{S}^{2}}},\]
_as \(j_{k}^{(p)}\to\infty\), where \(\nabla f_{\infty,p}\) is the weak gradient of \(f_{\infty,p}\)._

Proof.: For each fixed \(p\in[1,2)\), by the Rellich-Kondrachov compactness theorem, the uniform estimate of Sobolev norms in Lemma 3.3 implies that there exists a subsequence of \(\{f_{j}\}\), which is still denoted by \(\{f_{j}\}\), converging to \(f_{\infty,p}\) in \(L^{q}(\mathbb{S}^{2})\) for \(1\leq q<\frac{2p}{2-p}\). Then by the weak compactness in \(L^{p}\) spaces (see, e.g. Theorem 1.42 in [5]), we can obtain that \(f_{\infty,p}\in W^{1,p}(\mathbb{S}^{2})\). Indeed, \(\|f_{j}\|_{W^{1,p}(\mathbb{S}^{2})}\leq C\) for all \(j\in\mathbb{N}\) implies that \(\|f_{j}\|_{L^{p}(\mathbb{S}^{2})}\) and \(\|\nabla f_{j}\|_{L^{p}(\mathbb{S}^{2})}\) are both uniformly bounded. Then the weak compactness in \(L^{p}\) spaces implies that there exist a further subsequence, denoted by \(f_{j_{k}^{(p)}}\), and \(X\in L^{p}(\mathbb{S}^{2},\mathrm{TS}^{2})\) such that
\[\nabla f_{j_{k}^{(p)}}\rightharpoonup X\quad\text{in }\ L^{p}(\mathbb{S}^{2},\mathrm{TS}^{2}), \tag{80}\]
i.e.
\[\int_{\mathbb{S}^{2}}\langle\nabla f_{j_{k}^{(p)}},Y\rangle d\mathrm{vol}_{g_{\mathbb{S}^{2}}}\to\int_{\mathbb{S}^{2}}\langle X,Y\rangle d\mathrm{vol}_{g_{\mathbb{S}^{2}}},\quad\forall Y\in C^{\infty}(\mathbb{S}^{2},\mathrm{TS}^{2}). \tag{81}\]
On the other hand, integration by parts gives
\[\int_{\mathbb{S}^{2}}\langle\nabla f_{j_{k}^{(p)}},Y\rangle d\mathrm{vol}_{g_{\mathbb{S}^{2}}}=-\int_{\mathbb{S}^{2}}f_{j_{k}^{(p)}}\mathrm{div}Yd\mathrm{vol}_{g_{\mathbb{S}^{2}}}\to-\int_{\mathbb{S}^{2}}f_{\infty,p}\mathrm{div}Yd\mathrm{vol}_{g_{\mathbb{S}^{2}}}, \tag{82}\]
since \(f_{j_{k}^{(p)}}\to f_{\infty,p}\) in \(L^{p}\). Thus,
\[-\int_{\mathbb{S}^{2}}f_{\infty,p}\mathrm{div}Yd\mathrm{vol}_{g_{\mathbb{S}^{2}}}=\int_{\mathbb{S}^{2}}\langle X,Y\rangle d\mathrm{vol}_{g_{\mathbb{S}^{2}}},\quad\forall Y\in C^{\infty}(\mathbb{S}^{2},\mathrm{TS}^{2}). \tag{83}\]
Therefore, \(X=\nabla f_{\infty,p}\) is the gradient of \(f_{\infty,p}\) in the sense of distributions, and so \(f_{\infty,p}\in W^{1,p}(\mathbb{S}^{2},g_{\mathbb{S}^{2}})\).

For any \(\varphi\in C^{\infty}(\mathbb{S}^{2})\), by taking \(Y=\nabla\varphi\) in (81) and using the \(L^{1}\) convergence \(f_{j_{k}^{(p)}}\to f_{\infty,p}\), we obtain
\[\int_{\mathbb{S}^{2}}\left(f_{j_{k}^{(p)}}\varphi+\langle\nabla f_{j_{k}^{(p)}},\nabla\varphi\rangle\right)d\mathrm{vol}_{g_{\mathbb{S}^{2}}}\to\int_{\mathbb{S}^{2}}\left(f_{\infty,p}\varphi+\langle\nabla f_{\infty,p},\nabla\varphi\rangle\right)d\mathrm{vol}_{g_{\mathbb{S}^{2}}}. \tag{84}\]

Now we use Lemma 3.4 and a diagonal argument to find a subsequence converging in \(L^{q}\) for all \(q\geq 1\) and prove the following proposition:

**Proposition 3.5**.: _Let \(\{f_{j}\}_{j=1}^{\infty}\) be a sequence of positive functions on \(\mathbb{S}^{2}\) satisfying_
\[\Delta f_{j}\leq f_{j},\quad\int_{\mathbb{S}^{2}}f_{j}d\mathrm{vol}_{\mathbb{S}^{2}}\leq\frac{V}{2\pi},\quad\forall j\in\mathbb{N}. \tag{85}\]
_Then there exist a subsequence \(\{f_{j_{k}}\}_{k=1}^{\infty}\) and \(f_{\infty}\in W^{1,p}(\mathbb{S}^{2})\) for all \(p\in[1,2)\), such that_
\[f_{j_{k}}\to f_{\infty},\quad\text{in }\ L^{q}(\mathbb{S}^{2}),\ \ \forall q\in[1,\infty). \tag{86}\]
_Moreover, for any \(\varphi\in C^{\infty}(\mathbb{S}^{2})\),_
\[\int_{\mathbb{S}^{2}}\left(f_{j_{k}}\varphi+\langle\nabla f_{j_{k}},\nabla\varphi\rangle\right)d\mathrm{vol}_{g_{\mathbb{S}^{2}}}\to\int_{\mathbb{S}^{2}}\left(f_{\infty}\varphi+\langle\nabla f_{\infty},\nabla\varphi\rangle\right)d\mathrm{vol}_{g_{\mathbb{S}^{2}}}, \tag{87}\]
_as \(j_{k}\to\infty\), where \(\nabla f_{\infty}\) is the weak gradient of \(f_{\infty}\)._

Proof.: The proof is a diagonal argument. We apply Lemma 3.4 for \(p=2-\frac{1}{n+1},n=1,2,3,\dots\).

For \(n=1\), by applying Lemma 3.4 to \(\left\{f_{j}\right\}_{j=1}^{\infty}\) and \(p=2-\frac{1}{2}\), we obtain a subsequence, denoted by \(f_{j_{k}^{(1)},1}\), and \(f_{\infty,1}\in W^{1,2-\frac{1}{2}}\) such that
\[f_{j_{k}^{(1)},1}\to f_{\infty,1}\ \ \text{in}\ \ L^{q}(\mathbb{S}^{2}),\ \ \forall 1\leq q<6,\ \ \text{as}\ \ k\to\infty. \tag{88}\]
For \(n=2\), by applying Lemma 3.4 to the subsequence \(\left\{f_{j_{k}^{(1)},1}\right\}_{k=1}^{\infty}\) and \(p=2-\frac{1}{3}\), we obtain a subsequence, \(\left\{f_{j_{k}^{(2)},2}\right\}_{k=1}^{\infty}\subset\left\{f_{j_{k}^{(1)},1}\right\}_{k=1}^{\infty}\), and \(f_{\infty,2}\in W^{1,2-\frac{1}{3}}\) such that
\[f_{j_{k}^{(2)},2}\to f_{\infty,2}\ \ \text{in}\ \ L^{q}(\mathbb{S}^{2}),\ \ \forall 1\leq q<10,\ \ \text{as}\ \ k\to\infty. \tag{89}\]
Then by repeating this process for \(n=3,4,5,\dots\), we obtain a nested family of subsequences \(\left\{f_{j_{k}^{(n)},n}\right\}_{k=1}^{\infty}\subset\left\{f_{j_{k}^{(n-1)},n-1}\right\}_{k=1}^{\infty}\) and \(f_{\infty,n}\in W^{1,2-\frac{1}{n+1}}\) for all \(n\in\mathbb{N}\), such that for each fixed \(n\in\mathbb{N}\)
\[f_{j_{k}^{(n)},n}\to f_{\infty,n}\ \ \text{in}\ \ L^{q}(\mathbb{S}^{2}),\ \ \forall 1\leq q<4n+2,\ \ \text{as}\ \ k\to\infty. \tag{90}\]
Now we take the diagonal subsequence \(\left\{f_{j_{k}}:=f_{j_{k}^{(k)},k}\mid k\in\mathbb{N}\right\}\). By the construction of \(f_{j_{k}}\) and since \(4k+2\to+\infty\) as \(k\to+\infty\), the sequence \(\left\{f_{j_{k}}\right\}\) is a Cauchy sequence in \(L^{q}(\mathbb{S}^{2})\) for all \(q\in[1,\infty)\). Thus there exists \(f_{\infty}\in L^{q}(\mathbb{S}^{2})\) such that
\[f_{j_{k}}\to f_{\infty}\ \ \text{in}\ \ L^{q}(\mathbb{S}^{2}),\ \ \text{as}\ \ k\to\infty,\ \ \forall q\in[1,\infty). \tag{91}\]
Then by the uniqueness of the \(L^{2}\) limit, \(f_{\infty}=f_{\infty,n}\) in \(L^{2}(\mathbb{S}^{2})\) for all \(n\in\mathbb{N}\). Furthermore, because \(f_{\infty,n}\in W^{1,2-\frac{1}{n+1}}(\mathbb{S}^{2})\) and \(2-\frac{1}{n+1}\to 2^{-}\) as \(n\to\infty\), we see that the \(L^{p}\) norm of the weak derivative of \(f_{\infty}\) is bounded for any \(p\in[1,2)\). Thus \(f_{\infty}\in W^{1,p}(\mathbb{S}^{2})\) for all \(p\in[1,2)\).

Finally, the last claim in (87) follows from the fact that \(\left\{f_{j_{k}}\right\}_{k=1}^{\infty}\subset\left\{f_{j_{k}^{(1)},1}\right\}_{k=1}^{\infty}\) together with the corresponding convergence in Lemma 3.4 for \(p=2-\frac{1}{2}\), in particular for the subsequence \(\left\{f_{j_{k}^{(1)},1}\right\}_{k=1}^{\infty}\).
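For the reader's convenience, we record where the exponents \(6\), \(10\), and \(4n+2\) in the proof above come from: on the two-dimensional sphere, the Rellich-Kondrachov theorem gives a compact embedding of \(W^{1,p}\) into \(L^{q}\) for \(q<\frac{2p}{2-p}\), and for \(p=2-\frac{1}{n+1}\) one computes
\[\frac{2p}{2-p}=\frac{2\left(2-\frac{1}{n+1}\right)}{\frac{1}{n+1}}=2(2n+1)=4n+2,\]
which equals \(6\) for \(n=1\) and \(10\) for \(n=2\).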
**Remark 3.6**.: The extreme example constructed by Christina Sormani and the authors in [19] shows that \(W^{1,p}\) regularity for \(p<2\) is the best regularity we can expect for \(f_{\infty}\) in general (see Lemma 3.4 in [19]).

### Lower semi-continuous representative of the limit function

For the limit function \(f_{\infty}\) obtained in Proposition 3.5, the Lebesgue-Besicovitch differentiation theorem implies that
\[\lim_{r\to 0}\fint_{B_{r}(x)}f_{\infty}d\mathrm{vol}_{g_{\mathbb{S}^{2}}}=f_{\infty}(x) \tag{92}\]
holds for a.e. \(x\in\mathbb{S}^{2}\) with respect to the volume measure \(d\mathrm{vol}_{g_{\mathbb{S}^{2}}}\). In Proposition 3.7, by applying the ball average monotonicity property in Proposition 2.6, we will show that the limit of the ball averages in (92) actually exists for all \(x\in\mathbb{S}^{2}\), and that the limit produces a lower semi-continuous function.

**Proposition 3.7**.: _Let \(\{f_{j}\}_{j=1}^{\infty}\) be a sequence of smooth positive functions on \(\mathbb{S}^{2}\) satisfying_
\[\Delta f_{j}\leq f_{j},\quad\int_{\mathbb{S}^{2}}f_{j}d\mathrm{vol}_{\mathbb{S}^{2}}\leq\frac{V}{2\pi},\quad\forall j\in\mathbb{N}. \tag{93}\]
_Then the limit function, \(f_{\infty}\), obtained in Proposition 3.5, has the following properties._

1. _For each fixed_ \(x\in\mathbb{S}^{2}\)_, the ball average_ (94) \[\fint_{B_{r}(x)}\left(f_{\infty}(y)-Cd(y,x)\right)d\mathrm{vol}(y)\] _is non-increasing in_ \(r\in\left(0,\frac{\pi}{2}\right)\)_, where_ \(C\) _is a positive real number such that_ \(\sup_{j\in\mathbb{N}}\|f_{j}\|_{L^{2}(\mathbb{S}^{2})}\leq C\sqrt{2\pi}\)_. Note that the existence of such_ \(C\) _is guaranteed by Lemma_ 3.2_._
2. _Consequently, the limit_ (95) \[\overline{f_{\infty}}(x):=\lim_{r\to 0}\fint_{B_{r}(x)}f_{\infty}=\lim_{r\to 0}\fint_{B_{r}(x)}\left(f_{\infty}(y)-Cd(y,x)\right)d\mathrm{vol}(y)\] _exists, allowing_ \(+\infty\) _as a limit, for every_ \(x\in\mathbb{S}^{2}\)_. Moreover,_ \(\overline{f_{\infty}}\) _is a lower semi-continuous function on_ \(\mathbb{S}^{2}\)_._

Proof.: By Lemma 3.2, there exists \(C\in\mathbb{R}\) such that
\[\|f_{j}\|_{L^{2}(\mathbb{S}^{2})}\leq C\sqrt{2\pi},\ \ \forall j\in\mathbb{N}. \tag{96}\]
Then by applying Proposition 2.6 to the functions \(f_{j}\), we obtain that for any fixed \(x\in\mathbb{S}^{2}\),
\[\fint_{B_{R}(x)}(f_{j}(y)-Cd(y,x))d\mathrm{vol}(y)\leq\fint_{B_{r}(x)}(f_{j}(y)-Cd(y,x))d\mathrm{vol}(y) \tag{97}\]
holds for any \(0<r<R<\frac{\pi}{2}\) and all \(j\in\mathbb{N}\). By Proposition 3.5, \(f_{j}\to f_{\infty}\) in \(L^{1}(\mathbb{S}^{2})\). Then for any fixed \(x\in\mathbb{S}^{2}\) and any fixed \(0<r<R<\frac{\pi}{2}\), by taking the limit as \(j\to+\infty\), we obtain
\[\fint_{B_{R}(x)}(f_{\infty}(y)-Cd(y,x))\,d\mathrm{vol}(y)\leq\fint_{B_{r}(x)}(f_{\infty}(y)-Cd(y,x))\,d\mathrm{vol}(y). \tag{98}\]
So for each fixed \(x\in\mathbb{S}^{2}\), the ball average
\[\fint_{B_{r}(x)}(f_{\infty}(y)-Cd(y,x))\,d\mathrm{vol}(y) \tag{99}\]
is non-increasing for \(r\in\left(0,\frac{\pi}{2}\right)\). Therefore, for any \(x\in\mathbb{S}^{2}\) the limit
\[\lim_{r\to 0}\fint_{B_{r}(x)}(f_{\infty}(y)-Cd(y,x))\,d\mathrm{vol}(y) \tag{100}\]
exists as a finite number or \(+\infty\). On the other hand, by direct calculation
\[\fint_{B_{r}(x)}d(y,x)d\mathrm{vol}(y)=\frac{\int_{0}^{r}2\pi s\sin sds}{\int_{0}^{r}2\pi\sin sds}=\frac{\sin r-r\cos r}{1-\cos r}\to 0, \tag{101}\]
as \(r\to 0\).
Thus the limit
\[\overline{f_{\infty}}(x):=\lim_{r\to 0}\fint_{B_{r}(x)}f_{\infty}=\lim_{r\to 0}\fint_{B_{r}(x)}(f_{\infty}(y)-Cd(y,x))\,d\mathrm{vol}(y) \tag{102}\]
exists for all \(x\in\mathbb{S}^{2}\).

For each fixed \(0<r<\frac{\pi}{2}\), we have that \(\fint_{B_{r}(x)}(f_{\infty}(y)-Cd(y,x))d\mathrm{vol}(y)\) is a continuous function of \(x\in\mathbb{S}^{2}\), since \(f_{\infty}\in L^{2}(\mathbb{S}^{2})\), \(Cd(y,x)\leq C\pi\), and \(\mathrm{Area}(B_{r}(x))=2\pi(1-\cos r)\) is independent of \(x\in\mathbb{S}^{2}\). Then by the monotonicity in (98), we have
\[\overline{f_{\infty}}(x)=\sup_{r>0}\fint_{B_{r}(x)}(f_{\infty}(y)-Cd(y,x))\,d\mathrm{vol}(y). \tag{103}\]
In other words, \(\overline{f_{\infty}}\) is the supremum of a family of continuous functions. Thus \(\overline{f_{\infty}}\) is lower semi-continuous.

**Remark 3.8**.: Recall that by (92), \(\lim_{r\to 0}\fint_{B_{r}(x)}f_{\infty}d\mathrm{vol}_{g_{\mathbb{S}^{2}}}=f_{\infty}(x)\) holds for a.e. \(x\in\mathbb{S}^{2}\), thus \(\overline{f_{\infty}}(x)=f_{\infty}(x)\) holds for a.e. \(x\in\mathbb{S}^{2}\). So, as a \(W^{1,p}\) function, \(f_{\infty}\) has a lower semi-continuous representative \(\overline{f_{\infty}}\).

## 4. Positivity of the limit warping functions

In this section, we prove that the limit warping function \(f_{\infty}\) has a positive essential infimum, provided that the Riemannian manifolds \(\mathbb{S}^{2}\times_{f_{j}}\mathbb{S}^{1}\) satisfy both requirements in (56) and the MinA condition [Theorem 4.13]. The main tools we use in the proof of Theorem 4.13 include the maximum principle, the Min-Max minimal surface theory of Marques and Neves, and the spherical mean inequality obtained in Proposition 2.4.

The maximum principle for weak solutions (Theorem 8.19 in [6]) requires \(W^{1,2}\) regularity, but in general we only have \(f_{\infty}\in W^{1,p}(\mathbb{S}^{2})\) for \(p<2\) [Remark 3.6]. To overcome this difficulty, in Subsection 4.1 we consider the truncations \(\bar{f}^{K}_{j}\) of the warping functions as defined in Definition 4.1, and obtain a \(W^{1,2}(\mathbb{S}^{2})\) limit function \(\bar{f}^{K}_{\infty}\) for the sequence of truncated functions \(\bar{f}^{K}_{j}\) [Lemma 4.4]. This enables us to apply the maximum principle for weak solutions (Theorem 8.19 in [6]) to \(\bar{f}^{K}_{\infty}\), and to prove that either \(\inf\bar{f}^{K}_{\infty}>0\) or \(\bar{f}^{K}_{\infty}\equiv 0\) on \(\mathbb{S}^{2}\) [Proposition 4.7]. In Subsection 4.2, we recall the necessary background from geometric measure theory and construct a \(1\)-sweepout of \(\mathbb{S}^{2}\times_{f}\mathbb{S}^{1}\) [Lemma 4.9]. In Subsection 4.3, we use the Min-Max minimal surface theory of Marques and Neves and the spherical mean inequality in Proposition 2.4 to obtain an upper bound for \(\operatorname{MinA}(\mathbb{S}^{2}\times_{f}\mathbb{S}^{1})\) in terms of the \(L^{1}\) norm of the warping function \(f\), provided that the \(L^{2}\) norm of \(f\) is sufficiently small [Proposition 4.10]. In Subsection 4.4, we use Proposition 4.7 and Proposition 4.10 to prove Theorem 4.13. Moreover, as an application of Theorem 4.13, we obtain a positive uniform lower bound for the warping functions \(f_{j}\), provided the warped product manifolds \(\mathbb{S}^{2}\times_{f_{j}}\mathbb{S}^{1}\) satisfy the requirements in (56) and the MinA condition [Proposition 4.15].

### \(W^{1,2}\) regularity of limit of truncated warping functions

We first define the truncation of a function:

**Definition 4.1**.: _Let \(f:\mathbb{S}^{2}\to\mathbb{R}\) be a positive smooth function.
Let \(K>0\) be a real number. For each \(x\in\mathbb{S}^{2}\), we define_
\[\bar{f}^{K}(x)=\begin{cases}f(x),&\text{if}\ \ \ f(x)<K,\\ K,&\text{if}\ \ \ f(x)\geq K.\end{cases} \tag{104}\]
_Then \(\bar{f}^{K}\) is a positive continuous function on \(\mathbb{S}^{2}\) with maximal value not greater than \(K\)._

From the definition we can prove the following lemma:

**Lemma 4.2**.: _Let \(f:\mathbb{S}^{2}\to\mathbb{R}\) be a positive smooth function, and let \(K>0\) be a regular value of the function \(f\). If_
\[\Delta f\leq f, \tag{105}\]
_then for all \(u\in W^{1,2}(\mathbb{S}^{2})\) such that \(u\geq 0\) we have_
\[-\int_{\mathbb{S}^{2}}\langle\nabla u,\nabla\bar{f}^{K}\rangle\leq\int_{\mathbb{S}^{2}}u\bar{f}^{K}. \tag{106}\]

Proof.: By Theorem 4.4 from [5], we have for all \(K>0\)
\[\nabla\bar{f}^{K}=\begin{cases}\nabla f,&\text{a.e. on }\{f(x)<K\},\\ 0,&\text{a.e. on }\{f(x)\geq K\}.\end{cases} \tag{107}\]
As a result we have
\[-\int_{\mathbb{S}^{2}}\langle\nabla u,\nabla\bar{f}^{K}\rangle =-\int_{\{f<K\}}\langle\nabla u,\nabla f\rangle\]
\[=\int_{\{f<K\}}u\Delta f-\int_{\partial\{f<K\}}u\partial_{\nu}f. \tag{108}\]
Here, since \(K\) is a regular value of \(f\), the Regular Level Set Theorem implies that the level set \(\{f=K\}=\partial\{f<K\}\) is an embedded submanifold of dimension \(1\) in \(\mathbb{S}^{2}\). Hence we can apply Stokes' theorem to get the last step. Moreover, since \(\nu\) is the outer unit normal vector on the boundary of the set \(\{f<K\}\), we have
\[\partial_{\nu}f\geq 0. \tag{109}\]
Hence we can drop the boundary term to get the inequality
\[-\int_{\mathbb{S}^{2}}\langle\nabla u,\nabla\bar{f}^{K}\rangle\leq\int_{\{f<K\}}u\Delta f. \tag{110}\]
Since
\[\Delta f\leq f, \tag{111}\]
we have
\[-\int_{\mathbb{S}^{2}}\langle\nabla u,\nabla\bar{f}^{K}\rangle\leq\int_{\{f<K\}}u\Delta f\leq\int_{\{f<K\}}uf\leq\int_{\mathbb{S}^{2}}u\bar{f}^{K}. \tag{112}\]
This finishes the proof.

We can prove similar results for a sequence of functions:

**Lemma 4.3**.: _Let \(\{f_{j}\}_{j=1}^{\infty}\) be a sequence of smooth positive functions defined on \(\mathbb{S}^{2}\). If_
\[\Delta f_{j}\leq f_{j},\ \ \forall j\in\mathbb{N}, \tag{113}\]
_then there exists \(K>0\) such that for all \(u\in W^{1,2}(\mathbb{S}^{2})\) with \(u\geq 0\) we have_
\[-\int_{\mathbb{S}^{2}}\langle\nabla u,\nabla\bar{f}^{K}_{j}\rangle\leq\int_{\mathbb{S}^{2}}u\bar{f}^{K}_{j}\ \ \ \ \forall j\in\mathbb{N}. \tag{114}\]
_Moreover, we can choose \(K\) as large as we want._

Proof.: Note that if \(0<K\leq\inf_{x\in\mathbb{S}^{2}}f_{j}(x)\) for some \(j\), then \(\bar{f}^{K}_{j}(x)\equiv K\). On the other hand, if \(\sup_{x\in\mathbb{S}^{2}}f_{j}(x)\leq K\) for some \(j\), then \(\bar{f}^{K}_{j}(x)=f_{j}(x)\). In either case the inequality (114) holds for that \(j\).

In general, by Sard's theorem, for each function \(f_{j}\) the set of critical values of \(f_{j}\) has measure zero, and the union over \(j\) of these sets also has measure zero. As a result, there exists \(K>0\) such that for each \(f_{j}\) either \(K\) is a regular value or \(f_{j}^{-1}(\{K\})=\emptyset\). By Lemma 4.2 we get inequality (114). Moreover, we can choose \(K\) as large as we want. This finishes the proof.

Next we prove similar results for the limit function; before that, we need to consider the regularity of the limit function:

**Lemma 4.4**.: _Let \(K>0\) be a real number.
Let \(\{f_{j}\}_{j=1}^{\infty}\) be a sequence of positive smooth functions on \(\mathbb{S}^{2}\) satisfying_
\[\Delta f_{j}\leq f_{j},\quad\forall j\in\mathbb{N}. \tag{115}\]
_Then the sequence \(\{\bar{f}_{j}^{K}\}_{j=1}^{\infty}\) is uniformly bounded in \(W^{1,2}(\mathbb{S}^{2})\):_
\[\|\bar{f}_{j}^{K}\|_{W^{1,2}(\mathbb{S}^{2})}\leq 2K\mathrm{Vol}(\mathbb{S}^{2}). \tag{116}\]
_As a result, after passing to a subsequence (not relabeled), there exists \(\bar{f}_{\infty}^{K}\in W^{1,2}(\mathbb{S}^{2})\) such that \(\bar{f}_{j}^{K}\) converges to \(\bar{f}_{\infty}^{K}\) in \(L^{2}(\mathbb{S}^{2})\) and \(\bar{f}_{j}^{K}\) converges to \(\bar{f}_{\infty}^{K}\) weakly in \(W^{1,2}(\mathbb{S}^{2})\)._

Proof.: By the definition of the truncation in Definition 4.1, we get
\[\|\bar{f}_{j}^{K}\|_{L^{2}(\mathbb{S}^{2})}\leq K\sqrt{\mathrm{Vol}(\mathbb{S}^{2})}. \tag{117}\]
By Theorem 4.4 from [5], we have for all \(K>0\) and for each \(j\)
\[\nabla\bar{f}_{j}^{K}=\begin{cases}\nabla f_{j},&\text{a.e. on }\{f_{j}(x)<K\},\\ 0,&\text{a.e. on }\{f_{j}(x)\geq K\}.\end{cases} \tag{118}\]
Hence
\[\begin{split}\|\nabla\bar{f}_{j}^{K}\|_{L^{2}(\mathbb{S}^{2})}^{2}&=\int_{\{f_{j}<K\}}|\nabla f_{j}|^{2}\\ &=\int_{\{f_{j}<K\}}|f_{j}|^{2}|\nabla\ln f_{j}|^{2}\\ &\leq K^{2}\int_{\{f_{j}<K\}}|\nabla\ln f_{j}|^{2}\\ &\leq K^{2}\|\nabla\ln f_{j}\|_{L^{2}(\mathbb{S}^{2})}^{2}\\ &\leq K^{2}\mathrm{Vol}(\mathbb{S}^{2}),\end{split} \tag{119}\]
where the last step follows from Lemma 3.1. Combining inequalities (117) and (119), we get the desired results.

Now we prove the following lemma concerning the limit function:

**Lemma 4.5**.: _Let \(\{f_{j}\}_{j=1}^{\infty}\) be a sequence of positive smooth functions on \(\mathbb{S}^{2}\) satisfying_
\[\Delta f_{j}\leq f_{j},\quad\forall j\in\mathbb{N}. \tag{120}\]
_Let \(K>0\) be a real number that satisfies the requirement in Lemma 4.3. Let \(\bar{f}^{K}_{\infty}\in W^{1,2}(\mathbb{S}^{2})\) be the limit function as in Lemma 4.4. Then \(\bar{f}^{K}_{\infty}\) satisfies the inequality_
\[-\int_{\mathbb{S}^{2}}\langle\nabla u,\nabla\bar{f}^{K}_{\infty}\rangle\leq\int_{\mathbb{S}^{2}}u\bar{f}^{K}_{\infty}, \tag{121}\]
_for all \(u\in W^{1,2}(\mathbb{S}^{2})\) such that \(u\geq 0\)._

Proof.: By Lemma 4.4 we know that \(\bar{f}^{K}_{j}\) converges to \(\bar{f}^{K}_{\infty}\) in \(L^{2}(\mathbb{S}^{2})\) and that \(\bar{f}^{K}_{j}\) converges to \(\bar{f}^{K}_{\infty}\) weakly in \(W^{1,2}(\mathbb{S}^{2})\). As a result, for any \(u\in W^{1,2}(\mathbb{S}^{2})\) we have that
\[\int_{\mathbb{S}^{2}}u\bar{f}^{K}_{j}\to\int_{\mathbb{S}^{2}}u\bar{f}^{K}_{\infty},\ \ \ \text{as}\ \ \ j\to\infty, \tag{122}\]
and that
\[\int_{\mathbb{S}^{2}}\langle\nabla u,\nabla\bar{f}^{K}_{j}\rangle\to\int_{\mathbb{S}^{2}}\langle\nabla u,\nabla\bar{f}^{K}_{\infty}\rangle,\ \ \ \text{as}\ j\to\infty. \tag{123}\]
As a result, by (114) we have for all \(u\in W^{1,2}(\mathbb{S}^{2})\) such that \(u\geq 0\)
\[-\int_{\mathbb{S}^{2}}\langle\nabla u,\nabla\bar{f}^{K}_{\infty}\rangle\leq\int_{\mathbb{S}^{2}}u\bar{f}^{K}_{\infty}. \tag{124}\]
This finishes the proof. In Proposition 4.7 below, we combine this inequality with the maximum principle for weak solutions (Theorem 8.19 in [6]) to conclude that either the essential infimum of \(\bar{f}^{K}_{\infty}\) is bounded away from zero or \(\bar{f}^{K}_{\infty}\) is the zero function.

We need the definition of the essential infimum of a function:

**Definition 4.6**.: _Consider the standard \(\mathbb{S}^{2}\) and use \(m\) to denote the standard volume measure on \(\mathbb{S}^{2}\). Let \(U\) be an open subset of \(\mathbb{S}^{2}\). Let \(f:U\to\mathbb{R}\) be measurable.
Define the set_
\[U^{ess}_{f}=\{a\in\mathbb{R}:m(f^{-1}(-\infty,a))=0\}. \tag{125}\]
_We use \(\inf_{U}f\) to denote the essential infimum of \(f\) in \(U\) and define_
\[\inf_{U}f=\sup U^{ess}_{f}. \tag{126}\]

Finally, we apply the maximum principle for weak solutions to prove the following property of the essential infimum of \(f_{\infty}\).

**Proposition 4.7**.: _Let \(\{f_{j}\}_{j=1}^{\infty}\) be a sequence of positive smooth functions on \(\mathbb{S}^{2}\) satisfying_
\[\Delta f_{j}\leq f_{j},\ \ \ \forall j\in\mathbb{N}. \tag{127}\]
_If we further assume that \(f_{j}\to f_{\infty}\) in \(L^{2}(\mathbb{S}^{2})\) for some \(f_{\infty}\), then either the essential infimum of \(f_{\infty}\) is bounded away from zero or \(f_{\infty}=0\) a.e. on \(\mathbb{S}^{2}\)._

Proof.: Since \(\|f_{j}-f_{\infty}\|_{L^{2}(\mathbb{S}^{2})}\to 0\) as \(j\to\infty\), after passing to a subsequence if needed, we have \(f_{j}\to f_{\infty}\) pointwise almost everywhere in \(\mathbb{S}^{2}\). Let \(K>0\) be a real number that satisfies the requirement in Lemma 4.3. Construct the truncated sequence \(\{\bar{f}^{K}_{j}\}_{j=1}^{\infty}\) as in Definition 4.1. By Lemma 4.4, after passing to a further subsequence if needed, there exists \(\bar{f}^{K}_{\infty}\in W^{1,2}(\mathbb{S}^{2})\) such that \(\bar{f}^{K}_{j}\) converges to \(\bar{f}^{K}_{\infty}\) in the \(L^{2}(\mathbb{S}^{2})\) norm. As a result, after passing to a subsequence if needed, we have \(\bar{f}^{K}_{j}\to\bar{f}^{K}_{\infty}\) pointwise almost everywhere in \(\mathbb{S}^{2}\).

It suffices to show that if the essential infimum \(\inf_{\mathbb{S}^{2}}f_{\infty}=0\), then \(\bar{f}^{K}_{\infty}=f_{\infty}=0\) a.e. on \(\mathbb{S}^{2}\). We assume that \(\inf_{\mathbb{S}^{2}}f_{\infty}=0\). Since for each \(j\) we have \(0<\bar{f}^{K}_{j}\leq f_{j}\), we have \(\bar{f}^{K}_{\infty}\leq f_{\infty}\) a.e., and hence \(0\leq\inf_{\mathbb{S}^{2}}\bar{f}^{K}_{\infty}\leq\inf_{\mathbb{S}^{2}}f_{\infty}=0\). This implies that for any \(\delta,\delta^{\prime}>0\), we have
\[m\left(\left(\bar{f}^{K}_{\infty}\right)^{-1}(-\infty,\delta)\right)>0, \tag{128}\]
and
\[m\left(\left(\bar{f}^{K}_{\infty}\right)^{-1}(-\infty,-\delta^{\prime})\right)=0. \tag{129}\]
Let \(N\) be the north pole of \(\mathbb{S}^{2}\), and \(S\) be the south pole. \(B_{\frac{\pi}{2}}(N)\) and \(B_{\frac{\pi}{2}}(S)\) are the upper and lower hemispheres respectively. Then either
\[\inf_{B_{\frac{\pi}{2}}(N)}\bar{f}^{K}_{\infty}=0, \tag{130}\]
or
\[\inf_{B_{\frac{\pi}{2}}(S)}\bar{f}^{K}_{\infty}=0. \tag{131}\]
Without loss of generality we assume that \(\inf_{B_{\frac{\pi}{2}}(N)}\bar{f}^{K}_{\infty}=0\). Since \(\bar{f}^{K}_{\infty}\geq 0\) in \(\mathbb{S}^{2}\), for any \(r>\frac{\pi}{2}\) and \(\epsilon>0\) such that \(r+\epsilon<\pi\) we have
\[\inf_{B_{r}(N)}\bar{f}^{K}_{\infty}=\inf_{B_{r+\epsilon}(N)}\bar{f}^{K}_{\infty}=0. \tag{132}\]
Now by Lemma 4.5, \(\bar{f}^{K}_{\infty}\) satisfies
\[(\Delta-1)\bar{f}^{K}_{\infty}\leq 0, \tag{133}\]
on \(B_{r+\epsilon}(N)\) in the weak sense. Hence by the strong maximum principle for weak solutions (see Theorem 8.19 in [6]), the equality in (132) implies that \(\bar{f}^{K}_{\infty}\) is constant on \(B_{r}(N)\). This is true for any \(r>\frac{\pi}{2}\), thus \(\bar{f}^{K}_{\infty}\equiv 0\) on \(\mathbb{S}^{2}\). Moreover, since \(K>0\), for almost every \(x\in\mathbb{S}^{2}\) we have
\[\lim_{j\to\infty}\bar{f}^{K}_{j}=\lim_{j\to\infty}f_{j}=0, \tag{134}\]
and hence \(f_{\infty}=0\) a.e. on \(\mathbb{S}^{2}\). This finishes the proof.
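Both alternatives in Proposition 4.7 can occur; the following elementary examples are ours and are included only for illustration. The constant functions \(f_{j}\equiv\frac{1}{j}\) satisfy \(\Delta f_{j}=0\leq f_{j}\) and converge in \(L^{2}(\mathbb{S}^{2})\) to \(f_{\infty}\equiv 0\), realizing the second alternative. In this case the totally geodesic tori \(\{\text{great circle}\}\times\mathbb{S}^{1}\) in \(\mathbb{S}^{2}\times_{f_{j}}\mathbb{S}^{1}\) have area
\[2\pi\cdot 2\pi f_{j}=\frac{4\pi^{2}}{j}\to 0,\]
so \(\operatorname{MinA}(\mathbb{S}^{2}\times_{f_{j}}\mathbb{S}^{1})\to 0\); this collapsing behaviour is exactly what the MinA condition rules out in Theorem 4.13 below. On the other hand, \(f_{j}\equiv 1+\frac{1}{j}\) converges to \(f_{\infty}\equiv 1\), whose essential infimum equals \(1>0\), realizing the first alternative.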
### A \(1\)-sweepout of the warped product manifold \(\mathbb{S}^{2}\times_{f}\mathbb{S}^{1}\)

Because we will apply the Min-Max minimal surface theory to get an upper bound for MinA in §4.3, in this subsection we briefly recall some basic notions in geometric measure theory, following Marques and Neves [13], and construct a \(1\)-sweepout of \(\mathbb{S}^{2}\times_{f}\mathbb{S}^{1}\), which will be used in the proof of Lemma 4.11. For an excellent survey and more details about these materials we refer to [13] and the references therein.

A \(k\)_-current_ \(T\) on \(\mathbb{R}^{J}\) is a continuous linear functional on the space of compactly supported smooth \(k\)-forms \(\mathcal{D}^{k}(\mathbb{R}^{J})\). Its boundary \(\partial T\) is the \((k-1)\)-current defined by \(\partial T(\phi):=T(d\phi)\) for \(\phi\in\mathcal{D}^{k-1}(\mathbb{R}^{J})\). A \(k\)-current \(T\) is said to be an _integer multiplicity \(k\)-current_ if it can be written as
\[T(\phi)=\int_{S}\langle\phi(x),\tau(x)\rangle\theta(x)d\mathcal{H}^{k},\quad\phi\in\mathcal{D}^{k}(\mathbb{R}^{J}), \tag{135}\]
where \(S\) is an \(\mathcal{H}^{k}\)-measurable countably \(k\)-rectifiable set, that is, \(S\subset S_{0}\cup\bigcup_{j\in\mathbb{N}}S_{j}\) with \(\mathcal{H}^{k}(S_{0})=0\) and each \(S_{j}\) an embedded \(k\)-dimensional \(C^{1}\)-submanifold, \(\theta\) is an \(\mathcal{H}^{k}\)-integrable \(\mathbb{N}\)-valued function, and \(\tau\) is an orientation, i.e. \(\tau(x)\) is a unit simple \(k\)-vector spanning the tangent space \(T_{x}S\) at those \(x\) where a \(k\)-dimensional tangent space \(T_{x}S\) is well-defined. Note that this tangent space \(T_{x}S\) is well-defined for \(\mathcal{H}^{k}\)-a.e. \(x\in S\), provided \(\mathcal{H}^{k}(S\cap K)<+\infty\) for every compact set \(K\subset\mathbb{R}^{J}\). Also note that \(\tau\) gives an orientation for \(T_{x}S\).

The _mass_ of an integer multiplicity \(k\)-current \(T\) is defined as
\[\mathbf{M}(T):=\sup\{T(\phi)\mid\phi\in\mathcal{D}^{k}(\mathbb{R}^{J}),\ \ |\phi|\leq 1\}, \tag{136}\]
where \(|\phi|\) is the pointwise maximal norm of the form \(\phi\). In particular, a \(k\)-dimensional embedded smooth submanifold of \(\mathbb{R}^{J}\) can be viewed as an integer multiplicity \(k\)-current by integrating \(k\)-forms over it. Its current boundary is given by its usual boundary, and its mass is the \(k\)-dimensional volume of the submanifold.

Let \(M\) be a manifold embedded in \(\mathbb{R}^{J}\). The space of _integral \(k\)-currents_ on \(M\), denoted by \(\mathbf{I}_{k}(M)\), is defined to be the space of \(k\)-currents \(T\) such that both \(T\) and \(\partial T\) are integer multiplicity currents with finite mass and support contained in \(M\). The space of \(k\)_-cycles_, denoted by \(\mathcal{Z}_{k}(M)\), is defined to be the space of those \(T\in\mathbf{I}_{k}(M)\) such that \(T=\partial Q\) for some \(Q\in\mathbf{I}_{k+1}(M)\).

A _rectifiable \(k\)-varifold_ \(\mathrm{V}\) is defined to be a certain Radon measure on \(\mathbb{R}^{J}\times G_{k}(\mathbb{R}^{J})\), where \(G_{k}(\mathbb{R}^{J})\) is the Grassmannian of \(k\)-planes in \(\mathbb{R}^{J}\). An integral \(k\)-current \(T\in\mathbf{I}_{k}(M)\) given as in (135) naturally associates a rectifiable \(k\)-varifold, denoted by \(|T|\), via
\[|T|(A)=\int_{S\cap\pi(TS\cap A)}\theta(x)d\mathcal{H}^{k}. \tag{137}\]
Here \(\pi\) is the natural projection map from \(\mathbb{R}^{J}\times G_{k}(\mathbb{R}^{J})\) to \(\mathbb{R}^{J}\), and \(TS\) is the rank-\(k\) tangent bundle of \(S\) consisting of the tangent spaces \(T_{x}S\) at those \(x\in S\) where the \(k\)-dimensional tangent plane is well defined. Note that in the varifold expression (137) of \(|T|\) we forget the orientation of \(S\) determined by \(\tau\) in the current expression (135) of \(T\).

The space \(\mathbf{I}_{k}(M)\) can be endowed with various metrics, which induce different topologies. Given \(T,S\in\mathbf{I}_{k}(M)\), the _flat metric_ is defined by
\[\mathcal{F}(T,S):=\inf\left\{\mathbf{M}(Q)+\mathbf{M}(R)\mid T-S=Q+\partial R,\ \ Q\in\mathbf{I}_{k}(M),\ \ R\in\mathbf{I}_{k+1}(M)\right\}\]
and induces the _flat topology_ on \(\mathbf{I}_{k}(M)\). We also denote \(\mathcal{F}(T):=\mathcal{F}(T,0)\) and have
\[\mathcal{F}(T)\leq\mathbf{M}(T),\quad\forall T\in\mathbf{I}_{k}(M). \tag{138}\]
For \(T,S\in\mathbf{I}_{k}(M)\), the \(\mathbf{F}\)_-metric_ is defined by Pitts in [16] as:
\[\mathbf{F}(S,T):=\mathcal{F}(S-T)+\mathbf{F}(|S|,|T|), \tag{139}\]
where \(\mathbf{F}(|S|,|T|)\) is the \(\mathbf{F}\)-metric on the associated varifolds defined on page 66 in [16] as:
\[\mathbf{F}(|S|,|T|):=\sup\left\{|S|(f)-|T|(f)\mid f\in C_{c}(G_{k}(\mathbb{R}^{J})),\ \ |f|\leq 1,\ \ \text{Lip}(f)\leq 1\right\}.\]
Recall that (see page 66 in [16])
\[\mathbf{F}(|S|,|T|)\leq\mathbf{M}(S-T), \tag{140}\]
and hence
\[\mathbf{F}(S,T)\leq 2\mathbf{M}(S-T),\quad\forall S,T\in\mathbf{I}_{k}(M). \tag{141}\]

For the Min-Max theory for minimal surfaces, the spaces of mod 2 integral \(k\)-currents and mod 2 \(k\)-cycles are also needed. They are denoted by \(\mathbf{I}_{k}(M;\mathbb{Z}_{2})\) and \(\mathcal{Z}_{k}(M;\mathbb{Z}_{2})\), respectively, and are defined via the equivalence relation \(T\equiv S\) if \(T-S=2Q\) for \(T,S,Q\in\mathbf{I}_{k}(M)\). The notions of boundary, mass and the metrics defined above for \(\mathbf{I}_{k}(M)\) extend to \(\mathbf{I}_{k}(M;\mathbb{Z}_{2})\). For an \(n\)-dimensional manifold \(M\), the Constancy Theorem (Theorem 26.27 in [17]) says that if \(T\in\mathbf{I}_{n}(M;\mathbb{Z}_{2})\) has \(\partial T=0\), then either \(T=M\) or \(T=0\).

Next we recall some basic facts about the topology of \(\mathcal{Z}_{k}(M;\mathcal{F};\mathbb{Z}_{2})\), that is, \(\mathcal{Z}_{k}(M;\mathbb{Z}_{2})\) endowed with the flat metric. Their proofs can be found in [13]; see also [1]. Let \(n\) be the dimension of the manifold \(M\). Then \(\mathbf{I}_{n}(M;\mathcal{F};\mathbb{Z}_{2})\) is contractible and the continuous map
\[\partial:\mathbf{I}_{n}(M;\mathcal{F};\mathbb{Z}_{2})\to\mathcal{Z}_{n-1}(M;\mathcal{F};\mathbb{Z}_{2}) \tag{142}\]
is a 2-fold covering map. The homotopy groups are:
\[\pi_{k}\left(\mathcal{Z}_{n-1}(M;\mathcal{F};\mathbb{Z}_{2}),0\right)=\begin{cases}0,&\text{when}\ \ k\geq 2,\\ \mathbb{Z}_{2},&\text{when}\ \ k=1.\end{cases} \tag{143}\]
For the calculation of the fundamental group, one notes that the map
\[P:\pi_{1}\left(\mathcal{Z}_{n-1}(M;\mathcal{F};\mathbb{Z}_{2}),0\right) \to \{0,M\} \tag{144}\]
\[\left[\gamma\right] \mapsto \tilde{\gamma}(1) \tag{145}\]
is an isomorphism. Here \(\gamma\) is a loop in \(\mathcal{Z}_{n-1}(M;\mathcal{F};\mathbb{Z}_{2})\) with \(\gamma(0)=\gamma(1)=0\), and \(\tilde{\gamma}\) is the unique lift to \(\mathbf{I}_{n}(M;\mathcal{F};\mathbb{Z}_{2})\) with \(\tilde{\gamma}(0)=0\).
Then by applying the Hurewicz Theorem, one obtains:
\[H^{1}\left(\mathcal{Z}_{n-1}(M;\mathcal{F};\mathbb{Z}_{2});\mathbb{Z}_{2}\right)=\mathbb{Z}_{2}=\{0,\bar{\lambda}\}. \tag{146}\]
The action of the fundamental cohomology class \(\bar{\lambda}\) on the homology class induced by a loop is nonzero if and only if the loop is homotopically nontrivial.

We take the following definition of a \(1\)-sweepout from [13].

**Definition 4.8**.: _A continuous map \(\Phi:\mathbb{S}^{1}\to\mathcal{Z}_{n-1}(M;\mathbf{F};\mathbb{Z}_{2})\) is called a \(1\)-sweepout if \(\Phi^{*}(\bar{\lambda})\neq 0\in H^{1}(\mathbb{S}^{1},\mathbb{Z}_{2})\)._

Here \(\mathcal{Z}_{n-1}(M;\mathbf{F};\mathbb{Z}_{2})\) is the space \(\mathcal{Z}_{n-1}(M;\mathbb{Z}_{2})\) endowed with the \(\mathbf{F}\)-metric given in (139).

Now we return to our warped product manifold \(\mathbb{S}^{2}\times_{f}\mathbb{S}^{1}\), that is, \(\mathbb{S}^{2}\times\mathbb{S}^{1}\) with Riemannian metric
\[g=g_{\mathbb{S}^{2}}+f^{2}g_{\mathbb{S}^{1}}. \tag{147}\]
For each fixed \(x\in\mathbb{S}^{2}\), we construct a \(1\)-sweepout of \(\mathbb{S}^{2}\times_{f}\mathbb{S}^{1}\) consisting of the tori \(\{\Sigma_{x,r}:=\partial B_{r}(x)\times\mathbb{S}^{1}\mid 0\leq r\leq\pi\}\), where \(B_{r}(x)\) denotes the geodesic ball on \(\mathbb{S}^{2}\) centered at \(x\) with radius \(r\). In other words, we consider the map
\[\Phi:[0,\pi] \to\mathcal{Z}_{2}(\mathbb{S}^{2}\times_{f}\mathbb{S}^{1};\mathbf{F};\mathbb{Z}_{2}),\]
\[r \mapsto\partial\left(B_{r}(x)\times\mathbb{S}^{1}\right)=\partial B_{r}(x)\times\mathbb{S}^{1}. \tag{148}\]

**Lemma 4.9**.: _The map \(\Phi\) given in (148) provides a \(1\)-sweepout of \(\mathbb{S}^{2}\times_{f}\mathbb{S}^{1}\) as in Definition 4.8._

Proof.: Clearly, \(\Phi(0)=\Phi(\pi)=0\), and hence \(\Phi\) can be viewed as a map from \(\mathbb{S}^{1}\) to \(\mathcal{Z}_{2}(\mathbb{S}^{2}\times_{f}\mathbb{S}^{1};\mathbf{F};\mathbb{Z}_{2})\) by identifying the end points of the interval \([0,\pi]\).

Now we show the continuity of the map \(\Phi\) on \([0,\pi]\). This is clear for \(r\in(0,\pi)\), since \(\partial B_{r}(x)\) varies smoothly for \(r\in(0,\pi)\). The continuity at \(r=0\) then follows from the inequality in (141) and the estimate:
\[\mathbf{M}(\Phi(r)-\Phi(0))=\mathbf{M}(\Phi(r))=\mathbf{M}\left(\partial B_{r}(x)\times\mathbb{S}^{1}\right)=2\pi\int_{\partial B_{r}(x)}fds\leq\left(\max_{\mathbb{S}^{2}}f\right)4\pi^{2}\sin r\to 0, \tag{149}\]
as \(r\to 0\), since the warping function \(f\) is smooth on \(\mathbb{S}^{2}\). The continuity at \(r=\pi\) follows similarly, since \(\sin r\to 0\) as \(r\to\pi\).

Because, by definition, the flat metric is bounded above by the \(\mathbf{F}\)-metric, \(\Phi\) is also continuous when \(\mathcal{Z}_{2}(M;\mathbb{Z}_{2})\) is endowed with the flat metric. So \(\Phi\) is a loop in \(\mathcal{Z}_{2}(\mathbb{S}^{2}\times_{f}\mathbb{S}^{1};\mathcal{F};\mathbb{Z}_{2})\), and represents a non-trivial element:
\[[\Phi]\neq 0\in\pi_{1}\left(\mathcal{Z}_{2}(\mathbb{S}^{2}\times_{f}\mathbb{S}^{1};\mathcal{F};\mathbb{Z}_{2})\right). \tag{150}\]
This is because, by the definition of the map \(\Phi\), the unique lift \(\tilde{\Phi}\) of \(\Phi\) with \(\tilde{\Phi}(0)=0\) is given by
\[\begin{split}\tilde{\Phi}:[0,\pi]&\to\mathbf{I}_{3}(\mathbb{S}^{2}\times_{f}\mathbb{S}^{1};\mathcal{F};\mathbb{Z}_{2}),\\ r&\mapsto B_{r}(x)\times\mathbb{S}^{1},\end{split} \tag{151}\]
and has \(\tilde{\Phi}(\pi)=\mathbb{S}^{2}\times\mathbb{S}^{1}\). Consequently, \(\Phi^{*}(\bar{\lambda})\neq 0\), and so \(\Phi\) is a 1-sweepout.
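As a simple sanity check (this computation is ours and is only meant as an illustration), consider the product case \(f\equiv 1\). Then the masses along the sweepout are
\[\mathbf{M}(\Phi(r))=2\pi\int_{\partial B_{r}(x)}1\,ds=4\pi^{2}\sin r,\qquad r\in[0,\pi],\]
which vanish at \(r=0,\pi\) and are maximized at \(r=\frac{\pi}{2}\), where \(\Phi(\frac{\pi}{2})\) is the totally geodesic torus \(\{\text{equator}\}\times\mathbb{S}^{1}\) of area \(4\pi^{2}\). In the notation of the next subsection, one may therefore take \(r_{x}=\frac{\pi}{2}\) for every \(x\in\mathbb{S}^{2}\) in this case.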
### Bound \(\mathrm{MinA}\) from above by the \(L^{1}\)-norm of the warping function

In this subsection, we derive an upper bound for \(\mathrm{MinA}(\mathbb{S}^{2}\times_{f}\mathbb{S}^{1})\) in terms of \(\|f\|_{L^{1}(\mathbb{S}^{2})}\), provided that \(\|f\|_{L^{2}(\mathbb{S}^{2})}\) is small relative to \(\mathrm{MinA}(\mathbb{S}^{2}\times_{f}\mathbb{S}^{1})\).

**Proposition 4.10**.: _Let \(\mathbb{S}^{2}\times_{f}\mathbb{S}^{1}\) be a warped product Riemannian manifold with metric tensor as in (3) that has nonnegative scalar curvature and \(\mathrm{MinA}(\mathbb{S}^{2}\times_{f}\mathbb{S}^{1})\geq A>0\). If \(\|f\|_{L^{2}(\mathbb{S}^{2})}<\frac{A}{2^{\frac{3}{2}}\pi^{\frac{5}{2}}}\), then we have \(\|f\|_{L^{1}(\mathbb{S}^{2})}\geq\frac{A}{100\pi}\)._

Recall that \(\mathrm{MinA}(\mathbb{S}^{2}\times_{f}\mathbb{S}^{1})\) is the infimum of the areas of closed embedded minimal surfaces in \(\mathbb{S}^{2}\times_{f}\mathbb{S}^{1}\). Proposition 4.10 is crucial in the proof of Theorem 4.13 below. In order to prove Proposition 4.10, we first prove the following two lemmas.

First of all, we use the Min-Max minimal surface theory of Marques and Neves to bound \(\mathrm{MinA}(\mathbb{S}^{2}\times_{f}\mathbb{S}^{1})\) from above by the areas of certain tori in \(\mathbb{S}^{2}\times_{f}\mathbb{S}^{1}\).

**Lemma 4.11**.: _Let \(\mathbb{S}^{2}\times_{f}\mathbb{S}^{1}\) be a warped product Riemannian manifold with metric tensor as in (3). For each \(x\in\mathbb{S}^{2}\), there exists a torus \(\Sigma_{x,r_{x}}=\partial B_{r_{x}}(x)\times\mathbb{S}^{1}\subset\mathbb{S}^{2}\times_{f}\mathbb{S}^{1}\), \(0<r_{x}<\pi\), whose area is not less than \(\mathrm{MinA}(\mathbb{S}^{2}\times_{f}\mathbb{S}^{1})\), i.e._
\[\mathrm{Area}(\Sigma_{x,r_{x}})\geq\mathrm{MinA}(\mathbb{S}^{2}\times_{f}\mathbb{S}^{1}), \tag{152}\]
_where \(B_{r_{x}}(x)\) is the geodesic ball in the standard \(\mathbb{S}^{2}\) centered at \(x\) with radius \(r_{x}\)._

Proof.: We will use the Min-Max minimal surface theory of Marques and Neves to prove the lemma. For each fixed point \(x\in\mathbb{S}^{2}\), by Lemma 4.9, the map \(\Phi\) in (148) gives a 1-sweepout of \(\mathbb{S}^{2}\times_{f}\mathbb{S}^{1}\) as in Definition 4.8. For \(r\in[0,\pi]\), the images \(\Phi(r)=\partial B_{r}(x)\times\mathbb{S}^{1}=:\Sigma_{x,r}\) are tori in \(\mathbb{S}^{2}\times_{f}\mathbb{S}^{1}\) with mass:
\[\mathbf{M}(\Phi(r))=\mathrm{Area}(\Sigma_{x,r})=2\pi\int_{\partial B_{r}(x)}fds. \tag{153}\]
Clearly, \(\mathbf{M}(\Phi(r))\) is a continuous function of \(r\) on \([0,\pi]\) with \(\mathbf{M}(\Phi(0))=\mathbf{M}(\Phi(\pi))=0\). Thus there exists \(r_{x}\in(0,\pi)\) such that
\[\mathbf{M}(\Phi(r_{x}))=\max\{\mathbf{M}(\Phi(r))\mid 0\leq r\leq\pi\}. \tag{154}\]
Let \(\Pi\) be the homotopy class of the \(1\)-sweepout \(\Phi\), which consists of all continuous maps \(\Phi^{\prime}:[0,\pi]\to\mathcal{Z}_{2}(\mathbb{S}^{2}\times_{f}\mathbb{S}^{1};\mathbf{F};\mathbb{Z}_{2})\) with \(\Phi^{\prime}(0)=\Phi^{\prime}(\pi)\) such that \(\Phi\) and \(\Phi^{\prime}\) are homotopic to each other in the flat topology. By Lemma 2.2.6 in [13], the width
\[\mathbf{L}(\Pi)=\inf_{\Phi^{\prime}\in\Pi}\sup_{r\in[0,\pi]}\{\mathbf{M}(\Phi^{\prime}(r))\}>0, \tag{155}\]
since \(\Phi\) is a \(1\)-sweepout and so \(\Pi\) is a non-trivial homotopy class. Then the Min-Max Theorem of Marques-Neves (see Theorem 2.2.7 in [13]) implies that there exists a smooth embedded minimal surface \(\Sigma\) in \(\mathbb{S}^{2}\times_{f}\mathbb{S}^{1}\) achieving the width, i.e.
\(\mathrm{Area}(\Sigma)=\mathbf{L}(\Pi)>0\). Finally, by the definitions of the width in (155) and of MinA, and by the choice of \(\Sigma_{x,r_{x}}\), we have
\[\mathrm{Area}(\Sigma_{x,r_{x}})\geq\mathbf{L}(\Pi)=\mathrm{Area}(\Sigma)\geq\mathrm{MinA}(\mathbb{S}^{2}\times_{f}\mathbb{S}^{1}). \tag{156}\]
Because \(x\) is an arbitrary point on \(\mathbb{S}^{2}\), this completes the proof.

Next, we apply Lemma 4.11 and the spherical mean inequality from Proposition 2.4 to prove the following lemma.

**Lemma 4.12**.: _Let \(\mathbb{S}^{2}\times_{f}\mathbb{S}^{1}\) be a warped product Riemannian manifold with metric tensor as in (3) that has non-negative scalar curvature and \(\mathrm{MinA}(\mathbb{S}^{2}\times_{f}\mathbb{S}^{1})\geq A>0\). If \(\|f\|_{L^{2}(\mathbb{S}^{2})}<\frac{A}{2^{\frac{3}{2}}\pi^{\frac{5}{2}}}\), then there exists a set \(\mathcal{H}\subset\mathbb{S}^{2}\) satisfying that for each \(x\in\mathcal{H}\) there exists \(0<r_{x}\leq\frac{\pi}{2}\) such that_

1. \(\mathrm{Area}\left(\underset{x\in\mathcal{H}}{\cup}B_{\frac{r_{x}}{10}}(x)\right)\geq\frac{1}{2}\mathrm{Area}(\mathbb{S}^{2})\)_,_
2. _and_ (157) \[\fint_{\partial B_{r}(x)}fds\geq\frac{A}{2(2\pi)^{2}}\] _holds for all_ \(r\in[0,r_{x}]\)_._

Proof.: For any point \(x\in\mathbb{S}^{2}\), we denote its antipodal point by \(\bar{x}\). By Lemma 4.11, for any \(x\in\mathbb{S}^{2}\), there exists \(0<r_{x}<\pi\) such that the torus \(\Sigma_{x,r_{x}}=\partial B_{r_{x}}(x)\times\mathbb{S}^{1}\) in \(\mathbb{S}^{2}\times_{f}\mathbb{S}^{1}\) has area
\[\mathrm{Area}(\Sigma_{x,r_{x}})\geq\mathrm{MinA}(\mathbb{S}^{2}\times_{f}\mathbb{S}^{1})\geq A. \tag{158}\]
Since \(\mathrm{Area}(\Sigma_{x,r_{x}})=2\pi\int_{\partial B_{r_{x}}(x)}fds\), we have
\[2\pi\int_{\partial B_{r_{x}}(x)}fds\geq A. \tag{159}\]
Thus, we have
\[\int_{\partial B_{r_{x}}(x)}fds\geq\frac{A}{2\pi}. \tag{160}\]
Now if \(0<r_{x}\leq\frac{\pi}{2}\), then we include the point \(x\) in the set \(\mathcal{H}\), and if \(r_{x}>\frac{\pi}{2}\), then we include its antipodal point \(\bar{x}\) in the set \(\mathcal{H}\), and we set \(r_{\bar{x}}=\pi-r_{x}<\frac{\pi}{2}\). Then we still have
\[\int_{\partial B_{r_{\bar{x}}}(\bar{x})}fds=\int_{\partial B_{r_{x}}(x)}fds\geq\frac{A}{2\pi}, \tag{161}\]
since \(\partial B_{r_{\bar{x}}}(\bar{x})=\partial B_{r_{x}}(x)\).

By the construction of the set \(\mathcal{H}\subset\mathbb{S}^{2}\), \(\mathcal{H}\) contains at least one of any pair of antipodal points on \(\mathbb{S}^{2}\), and for any \(x\in\mathcal{H}\), there exists \(0<r_{x}\leq\frac{\pi}{2}\) such that
\[\int_{\partial B_{r_{x}}(x)}fds\geq\frac{A}{2\pi}. \tag{162}\]
Then we have that the area of the open set \(\underset{x\in\mathcal{H}}{\cup}B_{\frac{r_{x}}{10}}(x)\) is at least half of the area of the whole sphere \(\mathbb{S}^{2}\), i.e.
\[\operatorname{Area}\left(\underset{x\in\mathcal{H}}{\cup}B_{\frac{r_{x}}{10}}(x)\right)\geq\frac{1}{2}\operatorname{Area}(\mathbb{S}^{2}). \tag{163}\]
Indeed, otherwise, since the antipodal map is an isometry, we would have
\[\operatorname{Area}\left(\underset{x\in\mathcal{H}}{\cup}B_{\frac{r_{x}}{10}}(\bar{x})\right)=\operatorname{Area}\left(\underset{x\in\mathcal{H}}{\cup}B_{\frac{r_{x}}{10}}(x)\right)<\frac{1}{2}\operatorname{Area}(\mathbb{S}^{2}). \tag{164}\]
On the other hand, because for each \(x\in\mathbb{S}^{2}\) either \(x\) or \(\bar{x}\) is contained in \(\mathcal{H}\), we have
\[\mathbb{S}^{2}=\left(\underset{x\in\mathcal{H}}{\cup}B_{\frac{r_{x}}{10}}(x)\right)\cup\left(\underset{x\in\mathcal{H}}{\cup}B_{\frac{r_{x}}{10}}(\bar{x})\right). \tag{165}\]
\tag{165}\] So \[\operatorname{Area}(\mathbb{S}^{2}) = \operatorname{Area}\left(\left(\underset{x\in\mathcal{H}}{\cup}B_{\frac{r_{x}}{10}}(x)\right)\cup\left(\underset{x\in\mathcal{H}}{\cup}B_{\frac{r_{x}}{10}}(\bar{x})\right)\right) \tag{166}\] \[\leq \operatorname{Area}\left(\underset{x\in\mathcal{H}}{\cup}B_{\frac{r_{x}}{10}}(x)\right)+\operatorname{Area}\left(\underset{x\in\mathcal{H}}{\cup}B_{\frac{r_{x}}{10}}(\bar{x})\right) \tag{167}\] \[< \frac{1}{2}\operatorname{Area}(\mathbb{S}^{2})+\frac{1}{2}\operatorname{Area}(\mathbb{S}^{2})=\operatorname{Area}(\mathbb{S}^{2}). \tag{168}\] This gives a contradiction. So we have \(\operatorname{Area}\left(\underset{x\in\mathcal{H}}{\cup}B_{\frac{r_{x}}{10}}(x)\right)\geq\frac{1}{2}\operatorname{Area}(\mathbb{S}^{2})\). Because \(\mathbb{S}^{2}\times_{f}\mathbb{S}^{1}\) has non-negative scalar curvature, by Lemma 2.1, we have \(\Delta f\leq f\). Then by the spherical mean inequality in Proposition 2.4, for any \(x\in\mathcal{H}\subset\mathbb{S}^{2}\) and any \(0\leq r\leq r_{x}(\leq\frac{\pi}{2})\) we have that \[\fint_{\partial B_{r_{x}}(x)}fds-\fint_{\partial B_{r}(x)}fds\leq\frac{\|f\|_{L^{2}(\mathbb{S}^{2})}}{\sqrt{2\pi}}(r_{x}-r)\leq\frac{A}{2(2\pi)^{2}}, \tag{169}\] since \(\|f\|_{L^{2}(\mathbb{S}^{2})}<\frac{A}{2^{\frac{3}{2}}\pi^{\frac{5}{2}}}\) and \(r_{x}-r\leq\frac{\pi}{2}\). Rearranging the inequality, we obtain that for any \(x\in\mathcal{H}\) and any \(0\leq r\leq r_{x}\), \[\fint_{\partial B_{r}(x)}fds \geq \fint_{\partial B_{r_{x}}(x)}fds-\frac{A}{2(2\pi)^{2}} \tag{170}\] \[= \frac{1}{2\pi\sin r_{x}}\int_{\partial B_{r_{x}}(x)}fds-\frac{A}{2(2\pi)^{2}} \tag{171}\] \[\geq \frac{1}{2\pi}\int_{\partial B_{r_{x}}(x)}fds-\frac{A}{2(2\pi)^{2}} \tag{172}\] \[\geq \frac{A}{(2\pi)^{2}}-\frac{A}{2(2\pi)^{2}}=\frac{A}{2(2\pi)^{2}}. \tag{173}\]

We now apply Lemma 4.12 and the Vitali covering theorem to prove Proposition 4.10:

Proof of Proposition 4.10.: By Lemma 4.12, there exists a set \(\mathcal{H}\subset\mathbb{S}^{2}\) such that \[\text{Area}(\mathop{\cup}_{x\in\mathcal{H}}B_{\frac{r_{x}}{10}}(x))\geq\frac{1}{2}\text{Area}(\mathbb{S}^{2}), \tag{174}\] and for any \(x\in\mathcal{H}\), there exists \(r_{x}\leq\frac{\pi}{2}\) such that \[\fint_{\partial B_{r}(x)}fds\geq\frac{A}{2(2\pi)^{2}} \tag{175}\] holds for all \(r\in[0,r_{x}]\). By the Vitali covering theorem, there exists a countable sequence of points \(\{x_{i}\mid i\in\mathbb{N}\}\subset\mathcal{H}\) such that the balls \(\{B_{\frac{r_{x_{i}}}{10}}(x_{i})\}\) are pairwise disjoint and \[\mathop{\cup}_{x\in\mathcal{H}}B_{\frac{r_{x}}{10}}(x)\subset\mathop{\cup}_{i\in\mathbb{N}}B_{\frac{r_{x_{i}}}{2}}(x_{i}). \tag{176}\] By Lemma 4.12 we have \[\frac{A}{8\pi^{2}}\leq\fint_{\partial B_{r}(x_{i})}fds=\frac{1}{2\pi\sin r}\int_{\partial B_{r}(x_{i})}fds,\quad\forall r\in[0,r_{x_{i}}]. \tag{177}\] As a result, we have \[\frac{A}{4\pi}\sin r\leq\int_{\partial B_{r}(x_{i})}fds,\quad\forall r\in[0,r_{x_{i}}]. \tag{178}\] Integrating this inequality from \(0\) to \(\frac{r_{x_{i}}}{10}\) gives \[\frac{A}{8\pi^{2}}\operatorname{Area}(B_{\frac{r_{x_{i}}}{10}}(x_{i})) = \frac{A}{8\pi^{2}}\int_{0}^{\frac{r_{x_{i}}}{10}}2\pi\sin r\,dr \tag{180}\] \[\leq \int_{0}^{\frac{r_{x_{i}}}{10}}\left(\int_{\partial B_{r}(x_{i})}fds\right)dr \tag{181}\] \[= \int_{B_{\frac{r_{x_{i}}}{10}}(x_{i})}f\,d\mathrm{vol}_{\mathbb{S}^{2}}.
\tag{179}\] Then by summing the above inequalities for \(i\in\mathbb{N}\) together, we obtain \[\frac{A}{8\pi^{2}}\sum_{i=1}^{+\infty}\operatorname{Area}(B_{\frac{r_{x_{j}}}{1 0}})\leq\sum_{i=1}^{+\infty}\int_{B_{\frac{r_{x_{j}}}{10}}(x_{i})}f\mathrm{vol }_{\mathbb{S}^{2}}\leq\|f\|_{L^{1}(\mathbb{S}^{2})}, \tag{182}\] since \(\{B_{\frac{r_{x_{j}}}{10}}(x_{i})\mid i\in\mathbb{N}\}\) are disjoint balls. In the standard \(\mathbb{S}^{2}\) we have \[\operatorname{Area}\left(B_{\frac{r_{x_{j}}}{10}}(x_{i})\right)\geq\frac{1}{ 25}\operatorname{Area}\left(B_{\frac{r_{x_{j}}}{2}}(x_{i})\right). \tag{183}\] As a result, we have (184) \[\|f\|_{L^{1}(\mathbb{S}^{2})} \geq \frac{A}{8\pi^{2}}\sum_{i=1}^{+\infty}\operatorname{Area}\left(B _{\frac{r_{x_{j}}}{10}}\right)\] (185) \[\geq \frac{A}{200\pi^{2}}\sum_{i=1}^{+\infty}\operatorname{Area}\left( B_{\frac{r_{x_{j}}}{2}}(x_{i})\right)\] (186) \[\geq \frac{A}{200\pi^{2}}\operatorname{Area}\left(\cup_{x\in\mathcal{ H}}B_{\frac{r_{x}}{10}}(x)\right)\] (187) \[\geq \frac{A}{200\pi^{2}}\frac{1}{2}\operatorname{Area}(\mathbb{S}^{2 })=\frac{A}{100\pi}.\] (188) This completes the proof. ### Positivity of the limit of warping functions In this subsection, we use Proposition 4.7 and Proposition 4.10 to prove Theorem 1.3, we restate it here for the convenience of the reader **Theorem 4.13**.: _Let \(\{\mathbb{S}^{2}\times_{f_{j}}\mathbb{S}^{1}\}_{j=1}^{\infty}\) be a sequence of warped product manifolds such that each \(\mathbb{S}^{2}\times_{f_{j}}\mathbb{S}^{1}\) has non-negative scalar curvature. If we assume that_ \[\mathrm{Vol}(\mathbb{S}^{2}\times_{f_{j}}\mathbb{S}^{1})\leq V\text{ and } \operatorname{MinA}(\mathbb{S}^{2}\times_{f_{j}}\mathbb{S}^{1})\geq A>0,\forall j \in\mathbb{N}, \tag{189}\] _then we have the following:_ 1. _After passing to a subsequence if needed, the sequence of warping functions_ \(\{f_{j}\}_{j=1}^{\infty}\) _converges to some limit function_ \(f_{\infty}\) _in_ \(L^{q}(\mathbb{S}^{2})\) _for all_ \(q\in[1,\infty)\)_._ 2. _The limit function_ \(f_{\infty}\) _is in_ \(W^{1,p}(\mathbb{S}^{2})\)_, for all_ \(p\) _such that_ \(1\leq p<2\)_._ 3. _The essential infimum of_ \(f_{\infty}\) _is strictly positive, i.e._ \(\inf\limits_{\mathbb{S}^{2}}f_{\infty}>0\)_._ 4. _If we allow_ \(+\infty\) _as a limit, then the limit_ (190) \[\overline{f_{\infty}}(x):=\lim_{r\to 0}\fint_{B_{r}(x)}f_{\infty}\] _exists for every_ \(x\in\mathbb{S}^{2}\)_. Moreover,_ \(\overline{f_{\infty}}\) _is lower semi-continuous and strictly positive everywhere on_ \(\mathbb{S}^{2}\)_, and_ \(\overline{f_{\infty}}=f_{\infty}\) _a.e. on_ \(\mathbb{S}^{2}\)_._ Proof.: (\(i\)) By Lemma 2.1 and Lemma 2.2, the nonnegative scalar curvature condition and \(\operatorname{Vol}(\mathbb{S}^{2}\times_{f_{j}}\mathbb{S}^{2})\leq V\) imply that the sequence of warping functions \(\{f_{j}\}_{j=1}^{\infty}\) satisfies the hypothesis in Proposition 3.5. By applying Proposition 3.5, we get the desired convergence. (\(ii\)) By applying Proposition 3.5 we get that \(f_{\infty}\in W^{1,p}(\mathbb{S}^{2})\), for all \(p\in[1,2)\). (\(iii\)) We prove \(\inf\limits_{\mathbb{S}^{2}}f_{\infty}>0\) by contradiction. Recall that \(\inf\limits_{\mathbb{S}^{2}}f_{\infty}\) is the essential infimum of \(f_{\infty}\) as defined in Definition 4.6. First note that \(f_{\infty}\geq 0\), since \(f_{j}>0,\forall j\in\mathbb{N}\). 
Assume that \(\inf\limits_{\mathbb{S}^{2}}f_{\infty}=0\). Then by Proposition 4.7 we have \(f_{\infty}=0\) almost everywhere in \(\mathbb{S}^{2}\) and hence \[f_{j}\to 0\ \ \text{in}\ \ L^{2}(\mathbb{S}^{2}),\ \ \text{as}\ \ j\to+\infty. \tag{191}\] Therefore, for all sufficiently large \(j\), we have \(\|f_{j}\|_{L^{2}(\mathbb{S}^{2})}<\frac{A}{2^{\frac{3}{2}}\pi^{\frac{5}{2}}}\). Then by Proposition 4.10, we have \(\|f_{j}\|_{L^{1}(\mathbb{S}^{2})}\geq\frac{A}{100\pi}>0\) for all sufficiently large \(j\in\mathbb{N}\). This contradicts the fact that \(f_{j}\to 0\) in \(L^{2}(\mathbb{S}^{2})\) as \(j\to+\infty\) in (191). This finishes the proof of part (\(iii\)).

(\(iv\)) Because the warping functions \(f_{j}\) satisfy the requirements in Proposition 3.7, the existence of the limit \[\overline{f_{\infty}}(x):=\lim_{r\to 0}\fint_{B_{r}(x)}f_{\infty}, \tag{192}\] the lower semi-continuity of \(\overline{f_{\infty}}\) and \(\overline{f_{\infty}}=f_{\infty}\) a.e. on \(\mathbb{S}^{2}\) directly follow from Proposition 3.7. Thus we only need to prove that \(\overline{f_{\infty}}(x)>0\) for all \(x\in\mathbb{S}^{2}\). Let \[e_{\infty}:=\inf\limits_{\mathbb{S}^{2}}f_{\infty}>0. \tag{193}\] By the continuity of the distance function \(d(y,x)\), there exists \(0<r_{0}<\frac{\pi}{2}\) such that for all \(x\in\mathbb{S}^{2}\) we have \[f_{\infty}(y)-Cd(y,x)>\frac{e_{\infty}}{2},\ \ \ \text{for a.e.}\ \ y\in B_{r_{0}}(x). \tag{194}\] As a result, we have \[\fint_{B_{r_{0}}(x)}\left(f_{\infty}(y)-Cd(y,x)\right)d\text{vol}(y)>\frac{e_{\infty}}{2},\ \ \forall x\in\mathbb{S}^{2}. \tag{195}\] Then because in Proposition 3.7 we proved that for each fixed \(x\in\mathbb{S}^{2}\) the ball average \(\fint_{B_{r}(x)}\left(f_{\infty}(y)-Cd(y,x)\right)d\text{vol}(y)\) is non-increasing in \(r\in\left(0,\frac{\pi}{2}\right)\), and \[\lim_{r\to 0}\fint_{B_{r}(x)}f_{\infty}=\lim_{r\to 0}\fint_{B_{r}(x)}\left(f_{\infty}(y)-Cd(y,x)\right)d\text{vol}(y), \tag{196}\] we have that for each fixed \(x\in\mathbb{S}^{2}\), \[\overline{f_{\infty}}(x) := \lim_{r\to 0}\fint_{B_{r}(x)}f_{\infty} \tag{197}\] \[= \sup_{0<r<\frac{\pi}{2}}\fint_{B_{r}(x)}\left(f_{\infty}(y)-Cd(y,x)\right)d\text{vol}(y) \tag{198}\] \[\geq \fint_{B_{r_{0}}(x)}\left(f_{\infty}(y)-Cd(y,x)\right)d\text{vol}(y) \tag{199}\] \[> \frac{e_{\infty}}{2}>0. \tag{200}\] This completes the proof of the theorem.

**Remark 4.14**.: Theorem 4.13 implies that the limit function \(f_{\infty}\) has an everywhere positive lower semi-continuous representative \(\overline{f_{\infty}}\) as a function in \(W^{1,p}(\mathbb{S}^{2})\) for \(1\leq p<2\). For the rest of the paper, \(f_{\infty}\in W^{1,p}(\mathbb{S}^{2})\) will always denote this everywhere positive lower semi-continuous representative.

We end this section with Proposition 4.15 below. The proof of Proposition 4.15 uses Theorem 4.13 and the spherical mean inequality from Proposition 2.4. The uniform positive lower bound for the warping functions \(f_{j}\) obtained in Proposition 4.15 is important in proving geometric convergence of the sequence of warped product manifolds \(\{\mathbb{S}^{2}\times_{f_{j}}\mathbb{S}^{1}\}_{j=1}^{\infty}\) in our next paper.

**Proposition 4.15**.: _Let \(\{\mathbb{S}^{2}\times_{f_{j}}\mathbb{S}^{1}\}_{j=1}^{\infty}\) be a sequence of warped product manifolds with metric tensors as in (3) that have non-negative scalar curvature and satisfy_ \[\text{Vol}(\mathbb{S}^{2}\times_{f_{j}}\mathbb{S}^{1})\leq V\text{ and }\ \text{MinA}(\mathbb{S}^{2}\times_{f_{j}}\mathbb{S}^{1})\geq A>0,\forall j\in\mathbb{N}.
\tag{201}\] _Let \(e_{\infty}:=\inf_{\mathbb{S}^{2}}f_{\infty}>0\). Then there exists \(j_{0}\in\mathbb{N}\) such that \(f_{j}(x)\geq\frac{e_{\infty}}{4}>0\), for all \(j\geq j_{0}\) and all \(x\in\mathbb{S}^{2}\)._ Proof.: By Lemma 2.1, the non-negativity of scalar curvature of \(\mathbb{S}^{2}\times_{f_{l}}\mathbb{S}^{1}\) implies that \[\Delta f_{j}\leq f_{j},\quad\forall j\in\mathbb{N}. \tag{202}\] Therefore, by the spherical mean inequality in Proposition 2.4, we have \[f_{j}(x)\geq\fint_{\partial B_{s}(x)}f_{j}ds-\frac{\|f_{j}\|_{L^{2}(\mathbb{S} ^{2})}}{\sqrt{2\pi}}s,\quad\forall s\in\left(0,\frac{\pi}{2}\right),x\in \mathbb{S}^{2},j\in\mathbb{N}. \tag{203}\] Then multiplying the inequality by \(\operatorname{Area}(\partial B_{s}(x))=2\pi\sin(s)\) gives us \[2\pi\sin(s)f_{j}(x)\geq\int_{\partial B_{s}(x)}f_{j}ds-\frac{\|f_{j}\|_{L^{2}( \mathbb{S}^{2})}}{\sqrt{2\pi}}2\pi\sin(s)s, \tag{204}\] for all \(s\in\left(0,\frac{\pi}{2}\right),x\in\mathbb{S}^{2}\) and \(j\in\mathbb{N}\). Let \[V(r):=\operatorname{vol}(B_{r}(x))=\int_{0}^{r}2\pi\sin sds=2\pi(1-\cos r), \tag{205}\] and let \(e_{\infty}:=\inf_{\mathbb{S}^{2}}f_{\infty}\) denote the essential infimum of the limit function \(f_{\infty}\) which is strictly positive by Theorem 4.13. Now integrating the inequality (204) with respect to \(s\) from \(0\) to \(r<\frac{\pi}{2}\) gives us \[V(r)f_{j}(x) \geq \int_{B_{r}(x)}f_{j}d\operatorname{vol}_{\mathbb{S}^{2}}-\frac{\| f_{j}\|_{L^{2}(\mathbb{S}^{2})}}{\sqrt{2\pi}}\int_{0}^{r}2\pi s\sin sds \tag{207}\] \[\geq \int_{B_{r}(x)}f_{\infty}d\operatorname{vol}_{\mathbb{S}^{2}}-\| f_{\infty}-f_{j}\|_{L^{1}(\mathbb{S}^{2})}\] (208) \[-\sqrt{2\pi}\|f_{j}\|_{L^{2}(\mathbb{S}^{2})}(\sin r-r\cos r)\] (209) \[\geq e_{\infty}V(r)-\|f_{\infty}-f_{j}\|_{L^{1}(\mathbb{S}^{2})}\] (210) \[-\sqrt{2\pi}\|f_{j}\|_{L^{2}(\mathbb{S}^{2})}(\sin r-r\cos r). \tag{206}\] Then by dividing the inequality by \(V(r)\) we obtain \[f_{j}(x)\geq e_{\infty}-\frac{\|f_{\infty}-f_{j}\|_{L^{1}(\mathbb{S}^{2})}}{V( r)}-\frac{\|f_{j}\|_{L^{2}(\mathbb{S}^{2})}}{\sqrt{2\pi}}\frac{\sin r-r\cos r}{1- \cos r}, \tag{211}\] for all \(0<r<\frac{\pi}{2},x\in\mathbb{S}^{2}\) and \(j\in\mathbb{N}\). By Lemma 3.2 we have \(\sup_{j}\|f_{j}\|_{L^{2}(\mathbb{S}^{2})}<\infty\), and by direct calculation we have that \[\lim_{r\to 0}\frac{\sin r-r\cos r}{1-\cos r}=0, \tag{212}\] we can choose \(0<r_{1}<\frac{\pi}{2}\) such that \[\left|\frac{\|f_{j}\|_{L^{2}(\mathbb{S}^{2})}}{\sqrt{2\pi}}\frac{\sin r_{1}-r_{1 }\cos r_{1}}{1-\cos r_{1}}\right|<\frac{e_{\infty}}{2},\quad\forall j\in\mathbb{ N}. \tag{213}\] Moreover, because \(f_{j}\to f_{\infty}\) in \(L^{1}(\mathbb{S}^{2})\), we can choose \(j_{0}\in\mathbb{N}\) such that \[\frac{\|f_{\infty}-f_{j}\|_{L^{1}(\mathbb{S}^{2})}}{V(r_{1})}\leq\frac{e_{ \infty}}{4},\quad\forall j\geq j_{0}. \tag{214}\] Finally by combining (211), (213) and (214) together, we conclude that \(f_{j}(x)\geq\frac{e_{\infty}}{4}>0\) for all \(j\geq j_{0}\) and \(x\in\mathbb{S}^{2}\). ### Uniform systole positive lower bound In this subsection, as an application of non-collapsing of warping functions \(f_{j}\) obtained in Proposition 4.15, we derive a uniform positive lower bound for the systole of the sequence of warped product manifolds \(\mathbb{S}^{2}\times_{f_{i}}\mathbb{S}^{1}\) satisfying assumptions in Proposition 4.15. 
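Before turning to the precise definitions, the following elementary special case may help orient the reader; it is an illustration only and is not used in the proofs below. Suppose the warping function is constant, \(f\equiv c>0\), so that \(\mathbb{S}^{2}\times_{f}\mathbb{S}^{1}\) is the Riemannian product of the unit sphere and a circle of length \(2\pi c\). A closed geodesic then either projects to a single point of the \(\mathbb{S}^{1}\) factor, in which case it is a closed geodesic of the round \(\mathbb{S}^{2}\) (a great circle, possibly traversed several times) of length at least \(2\pi\), or it wraps around the \(\mathbb{S}^{1}\) fiber at least once and has length at least \(2\pi c\). Hence every closed geodesic of \(\mathbb{S}^{2}\times_{c}\mathbb{S}^{1}\) has length at least \[\min\{2\pi,\,2\pi c\},\] which is exactly the type of bound that Lemma 4.18 and Lemma 4.19 below establish for a general positive warping function \(f\).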
**Definition 4.16** (Systole).: _The systole of a Riemannian manifold \((M,g)\), which is denoted by \(sys(M,g)\) is defined to be the length of the shortest closed geodesic in \(M\)._ **Remark 4.17**.: People may usually consider so-called \(\pi_{1}\)-systole that is the length of a shortest _non-contractible_ closed geodesic. But in the study of compactness problem of manifolds with nonnegative scalar curvature, we also need to take into account contractible closed geodesic, for example, in a dumbell, which is diffeomorphic to \(\mathbb{S}^{3}\), we may have a short contractible closed geodesic. First of all we derive an interesting dichotomy property for closed geodesics in warped product manifolds: \(N\times_{f}\mathbb{S}^{1}\), that is, the product manifold \(N\times\mathbb{S}^{1}\) endowed with the metric \(g=g_{N}+f^{2}g_{\mathbb{S}^{1}}\), where \((N,g_{N})\) is a \(n\)-dimensional (either compact or complete non-compact) Riemannian manifold without boundary, and \(f\) is a positive smooth function on \(N\). **Lemma 4.18**.: _There is a dichotomy for closed geodesics in \(N\times_{f}\mathbb{S}^{1}\), that is, a closed geodesic in \(N\times_{f}\mathbb{S}^{1}\) either wraps around the fiber \(\mathbb{S}^{1}\), or is a geodesic in the base \(N\)._ Proof.: Let \(\varphi\in[0,2\pi]\) is a coordinate on the fiber \(\mathbb{S}^{1}\). The warped product metric \(g\) then can be written as \[g=g_{N}+f^{2}d\varphi^{2}. \tag{215}\] Let \[\gamma(t)=(\gamma_{N}(t),\varphi(t))\ \ t\in[0,1] \tag{216}\] be a closed geodesic in \(\mathbb{S}^{2}\times_{f}\mathbb{S}^{1}\), and without loss of generality, we assume \(\varphi(0)=0\). We have two possible cases as following: **Case 1**: \(\varphi([0,1])=[0,2\pi]\). In this case, clearly, the geodesic wraps around the fiber \(\mathbb{S}^{1}\). **Case 2**: \(\varphi([0,1])\neq[0,2\pi]\). In this case, we show that \(\varphi([0,1])=\{0\}\) by a proof by contradiction, and then clearly, \(\gamma\) is a closed geodesic on base \(N\cong N\times\{\varphi=0\}\). Otherwise, we have \[0<\varphi_{0}:=\max\{\varphi(t)\mid t\in[0,1]\}<2\pi. \tag{217}\] Moreover, there exists \(0<t_{0}<1\) such that \(\varphi(t_{0})=\varphi_{0}\), since \(\varphi(1)=\varphi(0)=0\) due to the closeness of the geodesic \(\gamma\). Consequently, \(t_{0}\) is a critical point of the function \(\varphi(t)\), i.e. \(\varphi^{\prime}(t_{0})=0\). As a result, the tangent vector of the geodesic at \(t_{0}\), \(\gamma^{\prime}(t_{0})=(\gamma^{\prime}_{N}(t_{0}),0)\), is tangent to \(N\times\{\varphi=\varphi_{0}\}\). On the other hand, there is a geodesic contained in \(N\times\{\varphi=\varphi_{0}\}\) that passes through the point \((\gamma_{N}(t_{0}),\varphi_{0})\) and is tangent to \((\gamma^{\prime}_{N}(t_{0}),0)\) at this point. Then by the uniqueness of the geodesic with given tangent vector at a point, and the fact that base \(N\) is totally geodesic in the warped product manifold \(N\times_{f}\mathbb{S}^{1}\), which can be seen easily by Koszul's formula, or see Proposition 9.104 in [3], we can obtain \(\varphi([0,1])=\{\varphi_{0}\}\), and this contradicts with \(\varphi(0)=0\). By the dichotomy of closed geodesics in Lemma 4.18, we can obtain a lower bound estimate for the systole of \(N\times_{f}\mathbb{S}^{1}\). 
**Lemma 4.19**.: _The systole of the warped product Riemannian manifold \(N\times_{f}\mathbb{S}^{1}\) is greater than or equal to \(\min\left\{sys(N,g_{N}),2\pi\min_{\mathbb{S}^{2}}f\right\}\)._ Proof.: Let \(\gamma(t)=(r(t),\theta(t),\varphi(t)),t\in[0,1]\), is a closed geodesic in \(\mathbb{S}^{2}\times_{f}\mathbb{S}^{1}\). By Lemma 4.18, \(\gamma\) either wraps around the fiber \(\mathbb{S}^{1}\), or \(\gamma\) is a closed geodesic in the base manifold \((N,g_{N})\). If \(\gamma\) wraps around the fiber \(\mathbb{S}^{1}\), then \(\varphi([0,1])=[0,2\pi]\), and so the length of \(\gamma\): \[L(\gamma)=\int_{0}^{1}|\gamma^{\prime}(t)|_{g}dt \geq \int_{0}^{1}f(\gamma(t))|\varphi^{\prime}(t)|dt \tag{219}\] \[\geq \min_{\mathbb{S}^{2}}f\int_{0}^{1}|\varphi^{\prime}(t)|dt\] (220) \[\geq 2\pi\min_{\mathbb{S}^{2}}f. \tag{218}\] If \(\gamma\) is a closed geodesic in the base \((N,g_{N})\), then by the definition of systole, the length of \(\gamma\) is greater than or equal to \(sys(N,g_{N})\). These estimates of length of closed geodesics imply the lower bound of systole in the conclusion. By combining the lower bound estimate of systole in Lemma 4.19 and Proposition 4.15, we immediately have the following uniform lower bound for systoles. **Proposition 4.20**.: _Let \(\{\mathbb{S}^{2}\times_{f_{j}}\mathbb{S}^{1}\}_{j=1}^{\infty}\) be a sequence of warped product manifolds with metric tensors as in (3) that have non-negative scalar curvature and satisfy_ \[\operatorname{Vol}(\mathbb{S}^{2}\times_{f_{j}}\mathbb{S}^{1})\leq V\text{ and }\operatorname{Min}\!\operatorname{A}(\mathbb{S}^{2}\times_{f_{j}}\mathbb{S}^{1}) \geq A>0,\forall j\in\mathbb{N}. \tag{221}\] _Let \(e_{\infty}:=\inf\limits_{\mathbb{S}^{2}}f_{\infty}>0\). Then the systoles of \(\mathbb{S}^{2}\times_{f_{j}}\mathbb{S}^{1}\), for all \(j\in\mathbb{N}\), have a uniform positive lower bound given by \(\min\left\{2\pi,\frac{e_{\infty}}{2}\pi\right\}\)._ Proof.: First note that the base manifold of the sequence of the warped product manifolds is the standard \(2\)-sphere, and its systole is equal to \(2\pi\), since the image of a closed geodesic in \((\mathbb{S}^{2},g_{\mathbb{S}^{2}})\) is always a great circle. Then note that \(e_{\infty}>0\) follows from the item \((iii)\) in Theorem 4.13. For each \(j\in\mathbb{N}\), by Lemma 4.19, the systole of \(\mathbb{S}^{2}\times_{f_{j}}\mathbb{S}^{1}\) has a lower bound given by \(\min\left\{2\pi,2\pi\min\limits_{\mathbb{S}^{2}}f_{j}\right\}\). Then by Proposition 4.15, \(\min\limits_{\mathbb{S}^{2}}f_{j}\geq\frac{e_{\infty}}{4}\) holds for all \(j\in\mathbb{N}\). Hence the conclusion follows and we complete the proof. ## 5. Nonnegative distributional scalar curvature of limit metric Now we use the positive limit function \(f_{\infty}\) obtained in Theorem 4.13 to define a weak warped product metrics: **Definition 5.1**.: _Let \(f_{\infty}\) be a function defined on \(\mathbb{S}^{2}\) such that it is almost everywhere positive and finite on \(\mathbb{S}^{2}\). We further assume that \(f_{\infty}\in W^{1,p}(\mathbb{S}^{2})\) for \(1\leq p<2\). 
Define_ \[g_{\infty}:=g_{\mathbb{S}^{2}}+f_{\infty}^{2}g_{\mathbb{S}^{1}}, \tag{222}\] _to be a (weak) warped product Riemannian metric on \(\mathbb{S}^{2}\times\mathbb{S}^{1}\) in the sense of defining an inner product on the tangent space at (almost) every point of \(\mathbb{S}^{2}\times\mathbb{S}^{1}\)._ **Remark 5.2**.: In general, \(g_{\infty}\) is only defined almost everywhere in \(\mathbb{S}^{2}\times\mathbb{S}^{1}\) with respect to the standard product volume measure \(d\mathsf{vol}_{g_{\mathbb{S}^{2}}}d\mathsf{vol}_{g_{\mathbb{S}^{1}}}\), since \(f_{\infty}\) may have value as \(+\infty\) on a measure zero set in \(\mathbb{S}^{2}\). Note that we allow \(+\infty\) as ball average limit in Proposition 3.7. For example, in the extreme example constructed by Christina Sormani and authors in [19], the limit warping function equal to \(+\infty\) at two poles of \(\mathbb{S}^{2}\). In Subsection 5.1, we show \(W^{1,p}\) regularity of the weak metric tensor \(g_{\infty}\) defined in Definition 5.1 for \(1\leq p<2\) [Proposition 5.4], and prove that the warped product metrics \(g_{j}=g_{\mathbb{S}^{2}}+f_{j}^{2}g_{\mathbb{S}^{1}}\) converge to \(g_{\infty}\) in the \(L^{q}\) sense for any \(1\leq q<+\infty\) [Theorem 5.5]. In Subsection 5.2, we show that the limit weak metric \(g_{\infty}\) has nonnegative distributional scalar curvature in the sense of Lee-LeFloch [Theorem 5.11]. ### \(W^{1,p}\) limit Riemannian metric \(g_{\infty}\) we prove the regularity of the metric tensor. Before that we need the following definition: **Definition 5.3**.: _We define \(L^{p}(\mathbb{S}^{2}\times\mathbb{S}^{1},g_{0})\) as the set of all tensors defined almost everywhere on \(\mathbb{S}^{2}\times\mathbb{S}^{1}\) such that its \(L^{p}\) norm measured in terms of \(g_{0}\) is finite where \(g_{0}\) is the isometric product metric_ \[g_{0}=g_{\mathbb{S}^{2}}+g_{\mathbb{S}^{1}}\text{ on }\mathbb{S}^{2}\times \mathbb{S}^{1}. \tag{223}\] _We define \(W^{1,p}(\mathbb{S}^{2}\times\mathbb{S}^{1},g_{0})\) as the set of all tensors, \(h\), defined almost everywhere on \(\mathbb{S}^{2}\times\mathbb{S}^{1}\) such that both the \(L^{p}\) norm of \(h\) and the \(L^{p}\) norm of \(\overline{\nabla}h\) measured in terms of \(g_{0}\) are finite where \(\overline{\nabla}\) is the connection corresponding to the metric \(g_{0}\)._ Now we prove the regularity of the metric tensor \(g_{\infty}\) defined in Definition 5.1: **Proposition 5.4** (Regularity of the metric tensor).: _The Riemannian metric tensor \(g_{\infty}\) as in Definition 5.1 satisfies_ \[g_{\infty}\in W^{1,p}(\mathbb{S}^{2}\times\mathbb{S}^{1},g_{0}) \tag{224}\] _for all \(p\in[1,2)\) in the sense of Definition 5.3._ Proof.: Using the background metric, \(g_{0}\), we have \[\|g_{\infty}\|_{L^{p}(\mathbb{S}^{2}\times\mathbb{S}^{1},g_{0})} = (2\pi)^{\frac{1}{p}}\|(2+f_{\infty}^{4})^{\frac{1}{2}}\|_{L^{p}( \mathbb{S}^{2})} \tag{226}\] \[\leq (2\pi)^{\frac{1}{p}}\|\sqrt{2}+f_{\infty}^{2}\|_{L^{p}(\mathbb{S }^{2})}\] (227) \[\leq (2\pi)^{\frac{1}{p}}\left(\sqrt{2}(4\pi)^{\frac{1}{p}}+\|f_{ \infty}\|_{L^{2p}(\mathbb{S}^{2})}^{2}\right) \tag{225}\] is finite, since by the assumption, \(f_{\infty}\in W^{1,p}(\mathbb{S}^{2})\) for any \(p\in[1,2)\), and Sobolev embedding theorem, we have \(f_{\infty}\in L^{2p}(\mathbb{S}^{2})\) for any \(p\in[1,\infty)\). Now for the gradient estimate, we fix an arbitrary \(p\in[1,2)\). We use \(\overline{\nabla}\) to denote the connection of the background metric \(g_{0}\). 
Clearly, we have \[\overline{\nabla}g_{\infty}=\overline{\nabla}g_{\mathbb{S}^{2}}+\overline{\nabla}f_{\infty}^{2}\otimes g_{\mathbb{S}^{1}}+f_{\infty}^{2}\overline{\nabla}g_{\mathbb{S}^{1}}, \tag{228}\] and \[\overline{\nabla}g_{\mathbb{S}^{2}}=0,\text{ and }\overline{\nabla}g_{\mathbb{S}^{1}}=0. \tag{229}\] Moreover, since \(\overline{\nabla}f_{\infty}^{2}=2f_{\infty}\nabla f_{\infty}\) we have \[\overline{\nabla}g_{\infty}=2f_{\infty}\nabla f_{\infty}\otimes g_{\mathbb{S}^{1}}, \tag{230}\] where \(\nabla f_{\infty}\) is the gradient of \(f_{\infty}\) on \((\mathbb{S}^{2},g_{\mathbb{S}^{2}})\). As a result, by the Hölder inequality, we have \[\|\overline{\nabla}g_{\infty}\|_{L^{p}(\mathbb{S}^{2}\times\mathbb{S}^{1},g_{0})}^{p} = 2\pi\int_{\mathbb{S}^{2}}2^{p}f_{\infty}^{p}|\nabla f_{\infty}|^{p}d\mathrm{vol}_{g_{\mathbb{S}^{2}}} \tag{231}\] \[\leq 2^{p+1}\pi\|f_{\infty}\|^{p}_{L^{pq^{*}}(\mathbb{S}^{2},g_{\mathbb{S}^{2}})}\cdot\|\nabla f_{\infty}\|^{p}_{L^{pq}(\mathbb{S}^{2})}, \tag{232}\] where \(q>1\) is chosen so that \(pq<2\), and \(q^{*}=\frac{q}{q-1}\). Then again by the Sobolev embedding theorem we have \(f_{\infty}\in L^{s}(\mathbb{S}^{2})\) for any \(s\in[1,\infty)\), and \(\nabla f_{\infty}\in L^{pq}(\mathbb{S}^{2})\) since \(pq<2\); thus we obtain that \(\|\overline{\nabla}g_{\infty}\|_{L^{p}(\mathbb{S}^{2}\times\mathbb{S}^{1},g_{0})}\) is finite for any \(p\in[1,2)\). This completes the proof.

Then we apply Proposition 3.5 to prove Theorem 1.7, which concerns the \(L^{q}\) pre-compactness of warped product circles over the sphere with non-negative scalar curvature. We restate Theorem 1.7 as follows:

**Theorem 5.5**.: _Let \(\{g_{j}=g_{\mathbb{S}^{2}}+f_{j}^{2}g_{\mathbb{S}^{1}}\mid j\in\mathbb{N}\}\) be a sequence of warped Riemannian metrics on \(\mathbb{S}^{2}\times\mathbb{S}^{1}\) satisfying the requirements in (4). Then there exists a subsequence \(g_{j_{k}}\) and a (weak) warped Riemannian metric \(g_{\infty}\in W^{1,p}(\mathbb{S}^{2}\times\mathbb{S}^{1},g_{0})\) for \(p\in[1,2)\) as in Definition 5.1 such that_ \[g_{j_{k}}\to g_{\infty}\ \ \text{in}\ \ L^{q}(\mathbb{S}^{2}\times\mathbb{S}^{1},g_{0}),\ \ \forall q\in[1,\infty). \tag{233}\]

Proof.: By Lemma 2.1 and Lemma 2.2, the assumptions in (4) for \(g_{j}\) imply that the warping functions \(f_{j}\) satisfy the assumptions in Proposition 3.5. Thus, by applying Proposition 3.5, we have that there exists a subsequence \(f_{j_{k}}\) of warping functions and \(f_{\infty}\in W^{1,p}(\mathbb{S}^{2})\) for all \(1\leq p<2\), such that \[f_{j_{k}}\to f_{\infty},\ \ \text{in}\ \ L^{q}(\mathbb{S}^{2}),\ \ \forall q\in[1,\infty). \tag{234}\] Let \(g_{\infty}:=g_{\mathbb{S}^{2}}+f_{\infty}^{2}g_{\mathbb{S}^{1}}\). Then by Proposition 5.4, we have \[g_{\infty}\in W^{1,p}(\mathbb{S}^{2}\times\mathbb{S}^{1},g_{0})\ \ \forall 1\leq p<2. \tag{235}\] Moreover, because \[g_{j}-g_{\infty}=(f_{j}^{2}-f_{\infty}^{2})g_{\mathbb{S}^{1}}, \tag{236}\] we have that for any \(q\in[1,\infty)\), \[\|g_{j_{k}}-g_{\infty}\|_{L^{q}(\mathbb{S}^{2}\times\mathbb{S}^{1},g_{0})} \tag{237}\] \[= (2\pi)^{\frac{1}{q}}\|f_{j_{k}}^{2}-f_{\infty}^{2}\|_{L^{q}(\mathbb{S}^{2})} \tag{238}\] \[= (2\pi)^{\frac{1}{q}}\|(f_{j_{k}}-f_{\infty})\cdot(f_{j_{k}}+f_{\infty})\|_{L^{q}(\mathbb{S}^{2})} \tag{239}\] \[\leq (2\pi)^{\frac{1}{q}}\|f_{j_{k}}-f_{\infty}\|_{L^{2q}(\mathbb{S}^{2})}\cdot\|f_{j_{k}}+f_{\infty}\|_{L^{2q}(\mathbb{S}^{2})} \tag{240}\] \[\to 0,\ \ \text{as}\ \ j_{k}\to\infty, \tag{241}\] since \(f_{j_{k}}\to f_{\infty}\) in \(L^{2q}(\mathbb{S}^{2})\) for any \(q\in[1,\infty)\).
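For the reader's convenience, we spell out the elementary estimate used in the last display of the proof above; it is nothing more than the Cauchy-Schwarz inequality and adds no new information. For measurable functions \(\phi,\psi\) on \(\mathbb{S}^{2}\) and any \(q\in[1,\infty)\), \[\|\phi\psi\|_{L^{q}(\mathbb{S}^{2})}=\left(\int_{\mathbb{S}^{2}}|\phi|^{q}|\psi|^{q}\,d\mathrm{vol}_{g_{\mathbb{S}^{2}}}\right)^{\frac{1}{q}}\leq\|\phi\|_{L^{2q}(\mathbb{S}^{2})}\,\|\psi\|_{L^{2q}(\mathbb{S}^{2})},\] applied with \(\phi=f_{j_{k}}-f_{\infty}\) and \(\psi=f_{j_{k}}+f_{\infty}\). Moreover, the second factor remains bounded along the subsequence, since by the triangle inequality \[\|f_{j_{k}}+f_{\infty}\|_{L^{2q}(\mathbb{S}^{2})}\leq\|f_{j_{k}}-f_{\infty}\|_{L^{2q}(\mathbb{S}^{2})}+2\|f_{\infty}\|_{L^{2q}(\mathbb{S}^{2})}\to 2\|f_{\infty}\|_{L^{2q}(\mathbb{S}^{2})}<\infty,\] so the right-hand side of the last display indeed tends to zero.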
**Remark 5.6**.: As showed by the example constructed by Christina Sormani and authors in [19], \(g_{\infty}\in W^{1,p}(\mathbb{S}^{2}\times\mathbb{S}^{1},g_{0})\) for \(1\leq p<2\) is the best regularity we can expect in general for the limit weak Riemannian metric \(g_{\infty}\), see Proposition 3.6 and Remark 3.8 in [19]. ### Nonnegative distributional scalar curvature of \(g_{\infty}\) Building upon work of Mardare-LeFloch [11], Dan Lee and Philippe LeFloch defined a notion of distributional scalar curvature for smooth manifolds that have a metric tensor which is only \(L^{\infty}_{loc}\cap W^{1,2}_{loc}\). See Definition 2.1 of [10] which we review below in Definition 5.7. In Theorem 5.5 we proved that if a sequence of smooth warped product circles over the sphere \(\{\mathbb{S}^{2}\times_{f_{j}}\mathbb{S}^{1}\}\) with non-negative scalar curvature have uniform bounded volumes, then a subsequence of the smooth warped product metric \(g_{j}=g_{\mathbb{S}^{2}}+f_{j}^{2}g_{\mathbb{S}^{1}}\) converges to a weak warped product metric \(g_{\infty}=g_{\mathbb{S}^{2}}+f_{\infty}^{2}g_{\mathbb{S}^{1}}\in W^{1,p}( \mathbb{S}^{2}\times\mathbb{S}^{1},g_{0})(1\leq p<2)\) in the sense of \(L^{q}(\mathbb{S}^{2}\times\mathbb{S}^{1},g_{0})\) for any \(q\geq 1\). For the rest of this section, we use \(g_{\infty}\) to denote such limit metric. We use \(g_{0}=g_{\mathbb{S}^{2}}+g_{\mathbb{S}^{1}}\) as a background metric. In Theorem 5.11, we prove that this limit (weak) metric \(g_{\infty}\) has non-negative distributional scalar curvature in the sense of Lee-LeFloch. In Remarks 5.9-5.10, we discuss how the metric tensors studied by Lee and LeFloch have stronger regularity than the regularity of \(g_{\infty}\) but their definition of distributional scalar curvature is still valid in our case. First we recall Definition 2.1 in the work of Lee-LeFloch [10]. In their paper, they assume that **Definition 5.7** (Lee-LeFloch).: Let \(M\) be a smooth manifold endowed with a smooth background metric, \(g_{0}\). Let \(g\) be a metric tensor defined on \(M\) with \(L^{\infty}_{loc}\cap W^{1,2}_{loc}\) regularity and locally bounded inverse \(g^{-1}\in L^{\infty}_{loc}\). The _scalar curvature distribution_\(\operatorname{Scalar}_{g}\) is defined as a distributions in \(M\) such that for every test function \(u\in C^{\infty}_{0}(M)\) \[\langle\operatorname{Scalar}_{g},u\rangle:=\int_{M}\left(-V\cdot\overline{ \nabla}\left(u\frac{d\mu_{g}}{d\mu_{g_{0}}}\right)+Fu\frac{d\mu_{g}}{d\mu_{0} }\right)\,d\mu_{0}, \tag{242}\] where the dot product is taken using the metric \(g_{0}\), \(\overline{\nabla}\) is the Levi-Civita connection of \(g_{0}\), \(d\mu_{g}\) and \(d\mu_{g_{0}}\) are volume measure with respect to \(g\) and \(g_{0}\) respectively, \(V\) is a vector field given by \[V^{k}:=g^{ij}\Gamma^{k}_{ij}-g^{ik}\Gamma^{j}_{ji}, \tag{243}\] where \[\Gamma^{k}_{ij}:=\frac{1}{2}g^{kl}\left(\overline{\nabla}_{i}g_{jl}+\overline {\nabla}_{j}g_{il}-\overline{\nabla}_{l}g_{ij}\right), \tag{244}\] \[F:=\overline{R}-\overline{\nabla}_{k}g^{ij}\Gamma^{k}_{ij}+\overline{\nabla}_ {k}g^{ik}\Gamma^{j}_{ji}+g^{ij}\left(\Gamma^{k}_{kl}\Gamma^{l}_{ij}-\Gamma^{k }_{jl}\Gamma^{l}_{ik}\right), \tag{245}\] and \[\overline{R}:=g^{ij}\left(\partial_{k}\overline{\Gamma}_{ij}^{k}-\partial_{i} \overline{\Gamma}_{kj}^{k}+\overline{\Gamma}_{ij}^{l}\overline{\Gamma}_{kl}^{k }-\overline{\Gamma}_{kj}^{l}\overline{\Gamma}_{il}^{k}\right). 
\tag{246}\] The Riemannian metric \(g\) has _nonnegative distributional scalar curvature_, if \(\langle\operatorname{Scalar}_{g},u\rangle\geq 0\) for every nonnegative test function \(u\) in the integral in (242). **Definition 5.8** (Distributional total scalar curvature).: _For a weak metric \(g\) having the regularity as in Definition 5.7, we define the distributional total scalar curvature of \(g\) to be \(\langle\operatorname{Scalar}_{g},1\rangle\), which is obtained by setting the test function \(u\equiv 1\) in the integration in (242)._ Note that for a \(C^{2}\)-metric, the distributional total scalar curvature is exactly the usual total scalar curvature. **Remark 5.9**.: By the regularity assumption for the Riemannian metric \(g\) in the work of Lee-LeFloch [10], one has the regularity \(\Gamma_{ij}^{k}\in L^{2}_{loc}\), \(V\in L^{2}_{loc}\), \(F\in L^{1}_{loc}\), and the density of volume measure \(d\mu_{g}\) with respect to \(d\mu_{0}\) is \[\tfrac{d\mu_{g}}{d\mu_{0}}\in L^{\infty}_{loc}\cap W^{1,2}_{loc}. \tag{247}\] Thus \[FirstInt_{g}=\int_{M}\left(-V\cdot\overline{\nabla}\left(u\frac{d\mu_{g}}{d \mu_{g_{0}}}\right)\right)d\mu_{0} \tag{248}\] and \[SecondInt_{g}=\int_{M}\left(Fu\frac{d\mu_{g}}{d\mu_{0}}\right)d\mu_{0}. \tag{249}\] are both finite. **Remark 5.10**.: Our limit metric is less regular than the metrics studied by Lee-LeFloch in [10]. Recall that in Proposition 5.4 we showed \(g_{\infty}\in W^{1,p}(\mathbb{S}^{2}\times\mathbb{S}^{1},g_{0})\) for \(1\leq p<2\), and as shown by the extreme example constructed in [19], in general \(g_{\infty}\notin W^{1,2}_{loc}(\mathbb{S}^{2}\times\mathbb{S}^{1},g_{0})\), see Proposition 3.6 in [19]. In Remark 5.18 below we show that in genenral both integrals in (248) and (249) may be divergent. However, in Theorem 5.11 below, we show that in our case the sum of (248) and (249) is still well-defined since the singularity cancels out when we add them up. We are ready to prove Theorem 1.8. We restate it as follows: **Theorem 5.11**.: _The limit metric \(g_{\infty}\) obtained in Theorem 5.5 has nonnegative distributional scalar curvature on \(\mathbb{S}^{2}\times\mathbb{S}^{1}\) in the sense of Lee-LeFloch as in Definition 5.7. In particular, (242) is finite and nonnegative for any nonnegative test function, \(u\in C^{\infty}(\mathbb{S}^{2}\times\mathbb{S}^{1})\). Moreover, the total scalar curvatures of \(g_{j}\) converge to the distributional total scalar curvature of \(g_{\infty}\)._ The proof of Theorem 5.11 consists of straightforward but technical calculations. For the convenience of readers, we provide some details of the calculations in the following lemmas. We use \(g_{0}=g_{\mathbb{S}^{2}}+g_{\mathbb{S}^{1}}\) as background metric, and use coordinate \(\{r,\theta,\varphi\}\) on \(\mathbb{S}^{2}\times\mathbb{S}^{1}\), where \((r,\theta)\) is a polar coordinate on \(\mathbb{S}^{2}\) and \(\varphi\) is a coordinate on \(\mathbb{S}^{1}\). The corresponding local frame of the tangent bundle is \(\{\partial_{r},\partial_{\theta},\partial_{\varphi}\}\). In this coordinate system, both \(g_{0}\) and \(g_{\infty}\) are diagonal and given as \[g_{0}=\begin{pmatrix}1&0&0\\ 0&\sin^{2}r&0\\ 0&0&1\end{pmatrix}\text{ and }g_{\infty}=\begin{pmatrix}1&0&0\\ 0&\sin^{2}r&0\\ 0&0&f_{\infty}^{2}(r,\theta)\end{pmatrix}. 
\tag{250}\] First of all, by the formula of Christoffel symbols: \[\overline{\Gamma}^{i}_{jk}=\frac{1}{2}(g_{0})^{il}\left(\frac{\partial(g_{0} )_{il}}{\partial x^{k}}+\frac{\partial(g_{0})_{lk}}{\partial x^{j}}-\frac{ \partial(g_{0})_{jk}}{\partial x^{l}}\right), \tag{251}\] one can easily obtain the following lemma: **Lemma 5.12**.: _The Christoffel symbols of the Levi-Civita connection \(\overline{\nabla}\) of the background metric \(g_{0}=g_{\mathbb{S}^{2}}+g_{\mathbb{S}^{1}}\), in the coordinate \(\{r,\theta,\varphi\}\), all vanish except_ \[\overline{\Gamma}^{r}_{\theta\theta}=-\sin r\cos r, \tag{252}\] _and_ \[\overline{\Gamma}^{\theta}_{r\theta}=\overline{\Gamma}^{\theta}_{\theta r}= \frac{\cos r}{\sin r}. \tag{253}\] Then by Lemma 5.12, the formula \[\overline{\nabla}_{i}(g_{\infty})_{jl}=\partial_{i}\left((g_{\infty})_{jl} \right)-\overline{\Gamma}^{p}_{ij}(g_{\infty})_{pl}-\overline{\Gamma}^{q}_{ il}(g_{\infty})_{jq}, \tag{254}\] and the diagonal expression of \(g_{\infty}\) in (250), one can obtain the following lemma: **Lemma 5.13**.: _For the limit metric, \(g_{\infty}\), with the background metric, \(g_{0}\), the Christoffel symbols defined by Lee-LeFloch as in (244), in the coordinate \(\{r,\theta,\varphi\}\), all vanish except_ \[\Gamma^{r}_{\varphi\varphi}=-f_{\infty}\partial_{r}f_{\infty},\quad\Gamma^{ \theta}_{\varphi\varphi}=-\frac{1}{\sin^{2}r}f_{\infty}\partial_{\theta}f_{ \infty}, \tag{255}\] _and_ \[\Gamma^{\varphi}_{r\varphi}=\Gamma^{\varphi}_{\varphi r}=\frac{\partial_{r}f_{ \infty}}{f_{\infty}},\quad\Gamma^{\varphi}_{\theta\varphi}=\Gamma^{\varphi}_{ \varphi\theta}=\frac{\partial_{\theta}f_{\infty}}{f_{\infty}}. \tag{256}\] Note also that **Lemma 5.14**.: _Note that the volume forms are:_ \[d\mu_{0}=\,dr\wedge\sin(r)\,d\theta\wedge\,d\varphi \tag{257}\] _and_ \[d\mu_{\infty}=dr\wedge\sin(r)\,d\theta\wedge f_{\infty}(r,\theta)\,d\varphi \tag{258}\] _which are both defined almost everywhere. In particular,_ \[\frac{d\mu_{\infty}}{d\mu_{0}}=f_{\infty}(r,\theta) \tag{259}\] _is in \(W^{1,p}(\mathbb{S}^{2}\times\mathbb{S}^{1},g_{0})\) for \(p<2\)._ Proof.: The first claim holds away from \(r=0\) and \(r=\pi\) by the definition of volume form, and the second claim holds almost everywhere on \((\mathbb{S}^{2}\times\mathbb{S}^{1},g_{0})\). So \(d\mu_{\infty}=f_{\infty}d\mu_{0}\) almost everywhere which gives us the third claim. The rest follows from Proposition 3.5. Now we are ready to compute the vector field \(V\) and the function \(F\) defined by Lee-LeFloch as in (243) and (245). **Lemma 5.15**.: _For the limit metric \(g_{\infty}\) with the background metric \(g_{0}\), the vector field \(V\) defined in (243), in the local frame \(\{\partial_{r},\partial_{\theta},\partial_{\varphi}\}\), is given by_ \[V=\left(-2\frac{\partial_{r}f_{\infty}}{f_{\infty}},-\frac{2}{\sin^{2}r}\frac {\partial_{\theta}f_{\infty}}{f_{\infty}},0\right). \tag{260}\] _Furthermore_ \[-V\cdot\overline{\nabla}\left(u\frac{d\mu_{\infty}}{d\mu_{0}}\right)=2\frac{ \partial_{r}f_{\infty}}{f_{\infty}}\partial_{r}(uf_{\infty})+\frac{2}{\sin^{2 }r}\frac{\partial_{\theta}f_{\infty}}{f_{\infty}}\partial_{\theta}(uf_{\infty}). 
\tag{261}\] Proof.: By plugging the non-vanishing Christoffel symbols in Lemma 5.13 into \[V^{k}:=g_{\infty}^{ij}\Gamma_{ij}^{k}-g_{\infty}^{ik}\Gamma_{ji}^{j}, \tag{262}\] we get \[V^{r} = g_{\infty}^{\varphi\varphi}\Gamma_{\varphi\varphi}^{r}-g_{\infty }^{rr}\Gamma_{\varphi r}^{\varphi} \tag{264}\] \[= \frac{1}{(f_{\infty})^{2}}(-f_{\infty}\partial_{r}f_{\infty})- \frac{\partial_{r}f_{\infty}}{f_{\infty}}=-2\frac{\partial_{r}f_{\infty}}{f_{ \infty}}. \tag{263}\] Also \[V^{\theta} = g_{\infty}^{\varphi\varphi}\Gamma_{\varphi\varphi}^{\theta}-g_{ \infty}^{\theta\theta}\Gamma_{\varphi\theta}^{\varphi} \tag{266}\] \[= \frac{1}{f_{\infty}^{2}}\left(-\frac{1}{\sin^{2}r}f_{\infty} \partial_{\theta}f_{\infty}\right)-\frac{1}{\sin^{2}r}\frac{\partial_{\theta} f_{\infty}}{f_{\infty}}=-\frac{2}{\sin^{2}r}\frac{\partial_{\theta}f_{\infty}}{f_{ \infty}}.\] (267) \[V^{\varphi}=g_{\infty}^{ij}\Gamma_{ij}^{\varphi}-g_{\infty}^{ \varphi\varphi}\Gamma_{j\varphi}^{j}=0. \tag{265}\] By Lemma A.8, we now see that, \[\overline{\nabla}\left(u\frac{d\mu_{\infty}}{d\mu_{0}}\right) = \overline{\nabla}\left(uf_{\infty}\right) \tag{269}\] \[= \partial_{r}(uf_{\infty})\frac{\partial}{\partial r}+\frac{1}{ \sin^{2}r}\partial_{\theta}(uf_{\infty})\frac{\partial}{\partial\theta}+ \partial_{\varphi}(uf_{\infty})\frac{\partial}{\partial\varphi} \tag{268}\] Thus \[-V\cdot\overline{\nabla}\left(u\frac{d\mu_{\infty}}{d\mu_{0}}\right)=2\frac{ \partial_{r}f_{\infty}}{f_{\infty}}\partial_{r}(uf_{\infty})+\frac{2}{\sin^{2 }r}\frac{\partial_{\theta}f_{\infty}}{f_{\infty}}\partial_{\theta}(uf_{ \infty}) \tag{270}\] **Lemma 5.16**.: _For the limit metric \(g_{\infty}\) with the background metric \(g_{0}\), the function \(F\) defined in (245) is given by_ \[F=2-2\left(\frac{\partial_{r}f_{\infty}}{f_{\infty}}\right)^{2}-\frac{2}{\sin ^{2}r}\left(\frac{\partial_{\theta}f_{\infty}}{f_{\infty}}\right)^{2}=2-2 \frac{1}{(f_{\infty})^{2}}|\nabla f_{\infty}|^{2}. \tag{271}\] _Furthermore,_ \[\left(Fu\frac{d\mu_{\infty}}{d\mu_{0}}\right)=2uf_{\infty}-2\frac{u}{f_{ \infty}}|\nabla f_{\infty}|^{2}. \tag{272}\] _Here \(|\nabla f_{\infty}|\) is the norm of weak gradient of \(f_{\infty}\) with respect to the standard metric \(g_{\mathbb{S}^{2}}\)._ Proof.: First note that from the expression of \(\overline{R}\) in (246) and the Christofell symbols calculated in Lemma 5.12, one can easily see that \[\overline{R}=R_{g_{\mathbb{S}^{2}}}=2. \tag{273}\] Also recall that \[\overline{\nabla}_{i}g_{\infty}^{jl}=\partial_{i}(g_{\infty}^{jl})+\overline {\Gamma}_{ip}^{j}g_{\infty}^{pl}+\overline{\Gamma}_{iq}^{j}g_{\infty}^{jq}. 
\tag{274}\] Then by Lemmas 5.12 and 5.13, one has \[F := \overline{R}-(\overline{\nabla}_{k}g^{ij})\Gamma_{ij}^{k}+( \overline{\nabla}_{k}g^{ik})\Gamma_{ji}^{j}+g^{ij}(\Gamma_{kl}^{k}\Gamma_{ij}^ {l}-\Gamma_{ji}^{k}\Gamma_{ik}^{l}) \tag{276}\] \[= 2-\overline{\nabla}_{r}g^{\varphi\varphi}\Gamma_{\varphi\varphi }^{r}-\overline{\nabla}_{\theta}g^{\varphi\varphi}\Gamma_{\varphi\varphi}^{ \theta}-2\overline{\nabla}_{\varphi}g^{r\varphi}\Gamma_{r\varphi}^{\varphi}-2 \overline{\nabla}_{\varphi}g^{\theta\varphi}\Gamma_{\theta\varphi}^{\varphi}\] (277) \[+\overline{\nabla}_{k}g^{rk}\Gamma_{\varphi r}^{\varphi}+ \overline{\nabla}_{k}g^{\theta k}\Gamma_{\varphi\theta}^{\varphi}\] (278) \[+g^{\varphi\varphi}\Gamma_{\varphi r}^{\varphi}\overline{F}_{ \varphi\varphi}^{r-}+\overline{g}^{\varphi\varphi}\overline{\Gamma}_{\varphi \theta}^{\varphi}\Gamma_{\theta\varphi}^{\theta}\] (279) \[-g^{\varphi\varphi}\Gamma_{\varphi\varphi}^{r}\Gamma_{\varphi }^{r}-\overline{g}^{\varphi\varphi}\overline{\Gamma}_{\varphi\varphi}^{ \theta}-g^{rr}\Gamma_{r\varphi}^{\varphi}\Gamma_{r\varphi}^{\varphi}-g^{ \varphi\varphi}\overline{\Gamma}_{\varphi r}^{\varphi}\Gamma_{\varphi\varphi}^{r-}\] (280) \[-g^{\theta\theta}\Gamma_{\theta\varphi}^{\varphi}\Gamma_{\theta \varphi}^{\varphi}-g^{\varphi\varphi}\Gamma_{\varphi\theta}^{\varphi}\Gamma_{ \varphi\varphi}^{\theta}\] \[= 2-\left(\partial_{r}(g^{\varphi\varphi})+2\overline{\Gamma}_{r \varphi}^{\varphi}g^{\varphi\varphi}\right)\Gamma_{\varphi\varphi}^{r}-\left( \partial_{\theta}(g^{\varphi\varphi})+2\overline{\Gamma}_{\theta\varphi}^{ \varphi}g^{\varphi\varphi}\right)\Gamma_{\varphi\varphi}^{\theta}\] \[-2\left(\partial_{\varphi}(g^{r\varphi})+\overline{\Gamma}_{ \varphi\varphi}^{r}g^{\varphi\varphi}+\overline{\Gamma}_{\varphi r}^{\varphi}g ^{rr}\right)\Gamma_{r\varphi}^{\varphi} \tag{275}\] (283) \[-2\left(\partial_{\varphi}(g^{\theta\varphi})+\overline{\Gamma}_{ \varphi\varphi}^{\theta}g^{\varphi\varphi}+\overline{\Gamma}_{\varphi\theta}^{ \varphi}g^{\theta\theta}\right)\Gamma_{\theta\varphi}^{\varphi}\] (284) \[+\left(\partial_{r}(g^{rr})+\overline{\Gamma}_{rr}^{r}g^{rr}+ \overline{\Gamma}_{rr}^{r}g^{rr}\right)\Gamma_{\varphi r}^{\varphi}\] (285) \[+\left(\partial_{\theta}(g^{r\theta})+\overline{\Gamma}_{\theta \theta}^{r}g^{\theta\theta}+\overline{\Gamma}_{\theta r}^{\theta}g^{rr}\right) \Gamma_{\varphi r}^{\varphi}\] (286) \[+\left(\partial_{\varphi}(g^{\varphi\varphi})+\overline{\Gamma}_{ \varphi\varphi}^{\varphi}g^{\varphi\varphi}+\overline{\Gamma}_{\varphi r}^{ \varphi}g^{rr}\right)\Gamma_{\varphi r}^{\varphi}\] (287) \[+\left(\partial_{r}(g^{\theta r})+\overline{\Gamma}_{rr}^{\theta }g^{rr}+\overline{\Gamma}_{r\theta}^{r}g^{\theta\theta}\right)\Gamma_{ \varphi\theta}^{\varphi}\] (288) \[+\left(\partial_{\theta}(g^{\theta\theta})+\overline{\Gamma}_{ \theta\theta}^{\theta}g^{\theta\theta}+\overline{\Gamma}_{\theta\theta}^{ \theta}g^{\theta\theta}\right)\Gamma_{\varphi\theta}^{\varphi}\] (289) \[+\left(\partial_{\varphi}(g^{\theta\varphi})+\overline{\Gamma}_{ \varphi\varphi}^{\theta}g^{\varphi\varphi}+\overline{\Gamma}_{\varphi\theta}^ {\varphi}g^{\theta\theta}\right)\Gamma_{\varphi\theta}^{\varphi}\] (290) \[-g^{\varphi\varphi}\Gamma_{\varphi\varphi}^{r}\Gamma_{\tau \varphi}^{\varphi}-g^{\varphi\varphi}\Gamma_{\varphi\theta}^{\varphi}\Gamma_{ \varphi\varphi}^{\theta}-g^{rr}\Gamma_{r\varphi}^{\varphi}\Gamma_{r\varphi}^{ \varphi}-g^{\theta\theta}\Gamma_{\varphi\theta}^{\varphi}\Gamma_{\varphi \theta}^{\varphi}\] (291) \[= 
2-(-2)\frac{\partial_{r}f_{\infty}}{(f_{\infty})^{3}}\left(-f_{ \infty}\partial_{r}f_{\infty}\right)-(-2)\frac{\partial_{\theta}f_{\infty}}{( f_{\infty})^{3}}\left(-\frac{1}{\sin^{2}r}f_{\infty}\partial_{\theta}f_{ \infty}\right)\] (292) \[+\left(-\frac{\cos r}{\sin r}+\frac{\cos r}{\sin r}\right) \Gamma_{\varphi r}^{\varphi}-\frac{1}{(f_{\infty})^{2}}(-f_{\infty}\partial_{ r}f_{\infty})\left(\frac{\partial_{r}f_{\infty}}{f_{\infty}}\right)\] (293) \[-\frac{1}{(f_{\infty})^{2}}\left(-\frac{1}{\sin^{2}r}f_{\infty} \partial_{\theta}f_{\infty}\right)\left(\frac{\partial_{\theta}f_{\infty}}{f_ {\infty}}\right)\] (294) \[-\left(\frac{\partial_{r}f_{\infty}}{f_{\infty}}\right)^{2}- \frac{1}{\sin^{2}r}\left(\frac{\partial_{\theta}f_{\infty}}{f_{\infty}}\right) ^{2}\] (295) \[= 2-2\frac{1}{(f_{\infty})^{2}}|\nabla f_{\infty}|^{2}.\] (296) We immediately obtain our second claim by applying Lemma A.8. **Lemma 5.17**.: _For \(g\) being our limit metric tensor \(g_{\infty}\) and a smooth nonnegative test function \(u\), the integrals in (248) and (249) are given by_ \[FirstInt_{g_{\infty}} = \int_{\mathbb{S}^{2}\times\mathbb{S}^{1}}\left(-V\cdot\overline {\nabla}\left(u\frac{d\mu_{\infty}}{d\mu_{0}}\right)\right)\,d\mu_{0} \tag{298}\] \[= \int_{\mathbb{S}^{2}}\left(2\langle\nabla f_{\infty},\nabla\bar{u }\rangle+2\frac{\bar{u}}{f_{\infty}}|\nabla f_{\infty}|^{2}\right)d\mathrm{vol }_{g_{\mathbb{S}^{2}}}, \tag{297}\] _and_ \[SecondInt_{g_{\infty}} = \int_{\mathbb{S}^{2}\times\mathbb{S}^{1}}\left(Fu\frac{d\mu_{ \infty}}{d\mu_{0}}\right)d\mu_{0} \tag{299}\] \[= \int_{\mathbb{S}^{2}}\left(2\bar{u}f_{\infty}-2\frac{\bar{u}}{f_{ \infty}}|\nabla f_{\infty}|^{2}\right)d\mathrm{vol}_{g_{\mathbb{S}^{2}}}, \tag{300}\] _where_ \[\bar{u}(r,\theta)=\int_{0}^{2\pi}u(r,\theta,\varphi)d\varphi, \tag{301}\] \(\nabla f_{\infty}\) _and \(\nabla\bar{u}\) are (weak) gradients of functions \(f_{\infty}\) and \(\bar{u}\) on standard sphere \((\mathbb{S}^{2},g_{\mathbb{S}^{2}})\) respectively, and \(\langle\cdot,\cdot\rangle\) is the Riemannian metric on \((\mathbb{S}^{2},g_{\mathbb{S}^{2}})\)._ Proof.: By integrating the formulas in Lemma 5.15 and Lemma 5.16, one can easily obtain the integrals in (298) and (300). **Remark 5.18**.: As explained in Remark 3.6, \(f_{\infty}\in W^{1,p}\) for any \(1\leq p<2\), which is obtained in in Proposition 3.5, is the best regularity for \(f_{\infty}\) in general, and we cannot expect \(f_{\infty}\) is in \(W^{1,2}_{loc}(\mathbb{S}^{2})\). So the integral \(\int_{\mathbb{S}^{2}}\frac{\bar{u}}{f_{\infty}}|\nabla f_{\infty}|^{2}d \mathrm{vol}_{g_{\mathbb{S}^{2}}}\) appearing in both (298) and (300) may be divergent (c.f. Lemma 4.16 in [19]). But if we sum the integrants in (298) and (300) firstly and then integrate, then this possible divergent integrant terms cancel out and we obtain a finite integral as in the following lemma. **Lemma 5.19**.: _For the limit metric \(g_{\infty}=g_{\mathbb{S}^{2}}+f_{\infty}^{2}g_{\mathbb{S}^{1}}\), the scalar curvature distribution \(\mathrm{Scalar}_{g_{\infty}}\) defined in Definition 5.7 can be expressed, for every test function \(u\in C^{\infty}(\mathbb{S}^{2}\times\mathbb{S}^{1})\), as the integral_ \[\langle\mathrm{Scalar}_{g_{\infty}},u\rangle=\int_{\mathbb{S}^{2}}\left(2 \langle\nabla f_{\infty},\nabla\bar{u}\rangle+2f_{\infty}\bar{u}\right)d \mathrm{vol}_{g_{\mathbb{S}^{2}}}, \tag{302}\] _and this is finite for any test function \(u\in C^{\infty}(\mathbb{S}^{2}\times\mathbb{S}^{1})\). 
Here \(\bar{u}\) is defined as in (350), \(\nabla f_{\infty}\) and \(\nabla\bar{u}\) are (weak) gradients of functions \(f_{\infty}\) and \(\bar{u}\) on standard sphere \((\mathbb{S}^{2},g_{\mathbb{S}^{2}})\) respectively, and \(\langle\cdot,\cdot\rangle\) is the Riemannian metric on \((\mathbb{S}^{2},g_{\mathbb{S}^{2}})\)._ Proof.: The expression in (302) immediately follows from the expressions in (298) and (300) and Definition 5.7. The finiteness of the integral in (302) follows from that \(f_{\infty}\in W^{1,p}(\mathbb{S}^{2})\) for \(1\leq p<2\) as proved in Proposition 3.5. We now apply these lemmas to prove Theorem 5.11: Proof.: By the expression (11) of the scalar curvature of \(\mathbb{S}^{2}\times_{f_{i}}\mathbb{S}^{1}\), we have that for any test function \(u\in C^{\infty}(\mathbb{S}^{2}\times\mathbb{S}^{1})\), \[\int_{\mathbb{S}^{2}\times\mathbb{S}^{1}}\mathrm{Scalar}_{g_{j}} \,ud\mathrm{vol}_{g_{j}} = \int_{\mathbb{S}^{2}}\left(\int_{0}^{2\pi}\left(2f_{j}u-2\Delta f _{j}u\right)d\varphi\right)d\mathrm{vol}_{g_{\mathbb{S}^{2}}} \tag{304}\] \[= \int_{\mathbb{S}^{2}}\left(2f_{j}\bar{u}-2\Delta f_{j}\bar{u} \right)d\mathrm{vol}_{g_{\mathbb{S}^{2}}} \tag{303}\] \[= \int_{\mathbb{S}^{2}}\left(2f_{j}\bar{u}+2\langle\nabla f_{j},\nabla \bar{u}\rangle\right)d\mathrm{vol}_{g_{\mathbb{S}^{2}}}, \tag{305}\] where \(\bar{u}(r,\theta)=\int_{0}^{2\pi}u(r,\theta,\varphi)d\varphi\). Then, by using the nonnegative scalar curvature condition \(\mathrm{Scalar}_{g_{j}}\geq 0\), Proposition 3.5 and Lemma 5.19, possibly after passing to a subsequence, we obtain for any nonnegative test function \(0\leq u\in C^{\infty}(\mathbb{S}^{2}\times\mathbb{S}^{1})\), \[0 \leq \int_{\mathbb{S}^{2}\times\mathbb{S}^{1}}\mathrm{Scalar}_{g_{j}} \,ud\mathrm{vol}_{g_{j}} \tag{307}\] \[= \int_{\mathbb{S}^{2}}\left(2f_{j}\bar{u}+2\langle\nabla f_{j}, \nabla\bar{u}\rangle\right)d\mathrm{vol}_{g_{\mathbb{S}^{2}}}\] (308) \[\to \int_{\mathbb{S}^{2}}\left(2f_{\infty}\bar{u}+2\langle\nabla f_{ \infty},\nabla\bar{u}\rangle\right)d\mathrm{vol}_{g_{\mathbb{S}^{2}}}\] (309) \[= \langle\mathrm{Scalar}_{g_{\infty}},u\rangle. \tag{306}\] Thus, \(\langle\mathrm{Scalar}_{g_{\infty}},u\rangle\geq 0\) for all nonnegative test function \(u\in C^{\infty}(\mathbb{S}^{2}\times\mathbb{S}^{1})\). By setting \(u\equiv 1\) in equations (306)-(309), we obtain the convergence of distributional total scalar curvature. ## Appendix A \(W^{1,2}\) convergence in \(\mathbb{S}^{1}\times_{h}\mathbb{S}^{2}\) case In this appendix, we will derive \(W^{1,2}\) convergence in the case of warped product spheres over circle with nonnegative scalar curvature, and show that the limit metric has nonnegative distributional scalar curvature in the sense of Lee-LeFloch. Specifically, we will prove the following two theorems. **Theorem A.1**.: _Let \(\{\mathbb{S}^{1}\times_{h_{j}}\mathbb{S}^{2}\}_{j=1}^{\infty}\) be a family of warped Riemannian manifolds with metric tensors as in (8) satisfying_ \[\mathrm{Scalar}_{j}\geq 0,\quad\mathrm{Diam}(\mathbb{S}^{1}\times_{h_{j}} \mathbb{S}^{2})\leq D, \tag{310}\] _and_ \[\mathrm{MinA}(\mathbb{S}^{1}\times_{h_{j}}\mathbb{S}^{2})\geq A>0 \tag{311}\] _for all \(j\in\mathbb{N}\), where \(\mathrm{Scalar}_{j}\) is the scalar curvature of \(\mathbb{S}^{1}\times_{h_{j}}\mathbb{S}^{2}\). 
Then there is a subsequence of warping functions \(h_{j}\) that converges in \(W^{1,2}(\mathbb{S}^{1})\) to a Lipschitz function \(h_{\infty}\in W^{1,2}(\mathbb{S}^{1})\), which has Lipschitz constant 1 and satisfies_ \[\sqrt{\frac{A}{4\pi}}\leq h_{\infty}\leq\frac{D}{\pi}+2\pi,\quad\text{on }\ \mathbb{S}^{1}. \tag{312}\] _Moreover, let \(g_{\infty}:=g_{\mathbb{S}^{1}}+h_{\infty}^{2}g_{\mathbb{S}^{2}}\), then \(g_{\infty}\) is a Lipschitz continuous Riemannian metric tensor on \(\mathbb{S}^{1}\times\mathbb{S}^{2}\), and a subsequence of \(\{g_{j}=g_{\mathbb{S}^{1}}+h_{j}^{2}g_{\mathbb{S}^{2}}\}_{j=1}^{\infty}\) converges in \(W^{1,2}(\mathbb{S}^{1}\times\mathbb{S}^{2},g_{0})\) to \(g_{\infty}\)._ Here, as before, we still use \(g_{0}=g_{\mathbb{S}^{1}}+g_{\mathbb{S}^{2}}\) as a background metric. Then we can compute the scalar curvature distribution of Lee-LeFloch and have the following property. **Theorem A.2**.: _The limit metric \(g_{\infty}\) obtained in Theorem A.1 has nonnegative distributional scalar curvature in the sense of Lee-LeFloch as recalled in Definition 5.7._ The study of this case is similar as the case of rotationally symmetric metrics on sphere, which was studied by authors with Jiewon Park in [15]. But there are some difference between these two cases. For example, in the rotationally symmetric metrics on sphere, in general MinA condition may not be able to prevent collapsing happening near two poles [Lemma 4.3 in [15]], however, in the case of \(\mathbb{S}^{1}\times_{h_{j}}\mathbb{S}^{2}\), MinA condition can provide a positive uniform lower bound for \(h_{j}\) [Lemma A.6] and hence prevent collapsing happening. The key ingredient is a uniform gradient estimate obtained by using nonnegative scalar curvature condition [Lemma A.4]. Moreover, for the minimal value of warping function \(h_{j}\), we obtain a uniform upper bound from uniform upper bounded diameter condition [Lemma A.3] and a uniform lower bound from MinA condition [Lemma A.6]. Then we combine these estimates to prove Theorem A.1 at the end of Subsection A.1. Finally, in Subsection A.2, we will prove Theorem A.2. ### Convergence of a subsequence **Lemma A.3**.: _Let \(\{\mathbb{S}^{1}\times_{h_{j}}\mathbb{S}^{2}\}_{j=1}^{\infty}\) be a family of warped product Riemannian manifolds with metric tensors as in (8), having uniformly upper bounded diameters, i.e. \(\operatorname{Diam}(\mathbb{S}^{1}\times_{h_{j}}\mathbb{S}^{2})\leq D\), then we have \(\min_{\mathbb{S}^{1}}\{h_{j}\}\leq\frac{D}{\pi}\)._ Proof.: Let \(s_{0}\in\mathbb{S}^{1}\) be the minimum point of the function \(h_{j}\). Then clearly the distance between antipodal points on the sphere \(\{s_{0}\}\times\mathbb{S}^{2}\subset M_{j}\) is \(\pi\cdot\min_{\mathbb{S}^{1}}\{h_{j}\}\). So we have \(\pi\cdot\min_{\mathbb{S}^{1}}\{h_{j}\}\leq\operatorname{Diam}(M_{j})\leq D\), and the claim follows. **Lemma A.4**.: _Let \(\{\mathbb{S}^{1}\times_{h_{j}}\mathbb{S}^{2}\}_{j=1}^{\infty}\) be a family of warped product Riemannian manifolds with metric tensors as in (8). The scalar curvature of the warped product metric \(g_{j}=g_{\mathbb{S}^{1}}+h_{j}^{2}g_{\mathbb{S}^{2}}\) is given by_ \[\operatorname{Scalar}_{j}=-4\frac{\Delta h_{j}}{h_{j}}+2\frac{1-|\nabla h_{j }|^{2}}{h_{j}^{2}}. 
\tag{313}\] _Here the Laplace is the trace of the Hessian._ _Moreover, if \(\operatorname{Scalar}_{j}\geq 0\), then we have \(|\nabla h_{j}|\leq 1\) on \(\mathbb{S}^{1}\)._ Proof.: First, by using the formula of Ricci curvature for warped product metrics as in 9.106 in [3], one can easily obtain that the scalar curvature \(\operatorname{Scalar}_{j}\) of \(\mathbb{S}^{1}\times_{h_{j}}\mathbb{S}^{2}\) is given as in (313). Now we prove the second claim by contradiction. Assume for some \(j\), \(|\nabla h_{j}|>1\) at some point, let's say \(p\in\mathbb{S}^{1}\). Take a unit vector field \(X\) on \(\mathbb{S}^{1}\) such that \(X\) is in the same direction as \(\nabla h_{j}\) at the point \(p\). Let \(q\) be the first point such that \(|\nabla h|(q)=1\) while moving from the point \(p\) on \(\mathbb{S}^{1}\) in the opposite direction of the unit vector field \(X\). Then let \(\gamma\) be the integral curve of the vector field \(X\) with the initial point \(\gamma(0)=q\). Let \(t_{1}>0\) such that \(\gamma(t_{1})=p\). Set \(\tilde{h}_{j}(t)=h_{j}\circ\gamma(t)\). Then (at least) for \(t\in[0,t_{1}]\), \[\tilde{h}^{\prime}_{j}(t)=\langle\nabla h_{j},\gamma^{\prime}(t)\rangle= \langle\nabla h_{j},X\rangle\circ\gamma(t)=|\nabla h_{j}|\circ\gamma(t), \tag{314}\] and \[\tilde{h}^{\prime\prime}_{j}(t)=(\Delta h_{j})\circ\gamma(t). \tag{315}\] By the Mean Value Theorem, there exists \(t_{2}\in(0,t_{1})\) such that \[\tilde{h}^{\prime\prime}_{j}(t_{2})=\frac{\tilde{h}^{\prime}_{j}(t_{1})- \tilde{h}^{\prime}_{j}(0)}{t_{1}}>0, \tag{316}\] since \(\tilde{h}^{\prime}_{j}(t_{1})=|\nabla h_{j}|(p)>1\) and \(\tilde{h}^{\prime}_{j}(0)=|\nabla h_{j}|(q)=1\). On the other hand, because \(\operatorname{Scalar}_{j}\geq 0\), by using the scalar curvature (313), one has \[-4\frac{\tilde{h}^{\prime\prime}_{j}(t_{2})}{\tilde{h}_{j}(t_{2})}+2\frac{1-( \tilde{h}_{j}(t_{2}))^{2}}{(\tilde{h}_{j}(t_{2}))^{2}}\geq 0 \tag{317}\] So \[\tilde{h}^{\prime\prime}_{j}(t_{2})\leq\frac{1-(\tilde{h}^{\prime}(t_{2}))^{2 }}{2\tilde{h}(t_{2})}<0, \tag{318}\] since \(\tilde{h}^{\prime}_{j}(t_{2})>1\) by the choice of \(q=\gamma(0)\). This produces a contradiction, and so \(|\nabla h_{j}|\leq 1\) on \(\mathbb{S}^{1}\). **Lemma A.5**.: _Let \(\{\mathbb{S}^{1}\times_{h_{j}}\mathbb{S}^{2}\}_{j=1}^{\infty}\) be a family of warped product Riemannian manifolds with metric tensors as in (8). If \(\nabla h_{j}(x_{0})=0\) for some \(x_{0}\in\mathbb{S}^{1}\) then there is a minimal surface \(\{x_{0}\}\times\mathbb{S}^{2}\) in \(\mathbb{S}^{1}\times_{h_{j}}\mathbb{S}^{2}\)._ Proof.: Define \(\Sigma_{x}:=\{x\}\times\mathbb{S}^{2}\). Then for all \(x\in\mathbb{S}^{1}\), \(\Sigma_{x}\) is an embedded submanifold with mean curvature \[H_{j}=\frac{2|\nabla h_{j}|(x)}{h_{j}(x)}. \tag{319}\] **Lemma A.6**.: _Let \(\{\mathbb{S}^{1}\times_{h_{j}}\mathbb{S}^{2}\}_{j=1}^{\infty}\) be a family of warped product Riemannian manifolds with metric tensors as in (8) satisfying \(\mathrm{MinA}(\mathbb{S}^{1}\times_{h_{j}}\mathbb{S}^{2})\geq A>0\). Then we have \(\min_{\mathbb{S}^{1}}\{h_{j}\}\geq\sqrt{\frac{A}{4\pi}}>0\)._ Proof.: By applying Lemma A.5, we have that there exists a minimal surface \(\Sigma_{x_{0}}=x_{0}\times\mathbb{S}^{2}\) on \(\mathbb{S}^{1}\times_{h_{j}}\mathbb{S}^{2}\) at the minimal value point \(x_{0}\) of \(h_{j}\). The area of \(\Sigma_{x_{0}}\) is given by \[\mathrm{Area}(\Sigma_{0})=4\pi h_{j}^{2}(x_{0}). \tag{320}\] Thus by the MinA condition, \(4\pi h_{j}^{2}(x_{0})\geq A\), and the conclusion follows. 
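For completeness, we sketch the standard computation behind the mean curvature formula (319) used in Lemma A.5 and Lemma A.6 above; the sign of \(H_{j}\) depends on the choice of unit normal, and only its absolute value matters here. Write \(g_{j}=d\varphi^{2}+h_{j}(\varphi)^{2}g_{\mathbb{S}^{2}}\) and take the unit normal \(\nu=\partial_{\varphi}\) along the slice \(\Sigma_{x}=\{x\}\times\mathbb{S}^{2}\). For any vector field \(X\) tangent to the \(\mathbb{S}^{2}\) factor, the warped product structure gives \[\nabla_{X}\partial_{\varphi}=\frac{h_{j}^{\prime}}{h_{j}}X,\] so the shape operator of \(\Sigma_{x}\) with respect to \(\nu\) is \(\frac{h_{j}^{\prime}}{h_{j}}\,\mathrm{Id}\) on the two-dimensional tangent space, and hence \[|H_{j}|=\left|\mathrm{tr}\left(\frac{h_{j}^{\prime}}{h_{j}}\,\mathrm{Id}\right)\right|=\frac{2|\nabla h_{j}|(x)}{h_{j}(x)}.\] In particular, \(\Sigma_{x}\) is minimal precisely when \(\nabla h_{j}(x)=0\), which yields Lemma A.5.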
Now we will use above lemmas to prove Theorem A.1: Proof.: We complete the proof in the following three steps. **Step 1. Uniform convergence of warping functions.** By applying Lemma A.3 and Lemma A.4 we immediately obtain the uniform upper bound \[\max_{\mathbb{S}^{1}}\{h_{j}\}\leq\frac{D}{\pi}+2\pi,\quad\forall i\in N. \tag{321}\] By combining this uniform upper bound with the uniform lower bound obtained in Lemma A.6, we have that the warping functions \(h_{j}\) are uniformly bounded, i.e. \[\sqrt{\frac{A}{4\pi}}\leq h_{j}\leq\frac{D}{\pi}+2\pi\quad\text{on}\ \ \mathbb{S}^{1},\quad\forall j\in\mathbb{N}. \tag{322}\] Moreover, Lemma A.4 implies function \(h_{j}\) are equicontinuous. Thus by applying Arzela-Ascoli theorem we obtain that \(h_{j}\) are uniformly convergent a continuous function \(f_{\infty}\) satisfying \[\sqrt{\frac{A}{4\pi}}\leq h_{\infty}\leq\frac{D}{\pi}+2\pi,\quad\text{on}\ \ \mathbb{S}^{1}. \tag{323}\] Meanwhile, the uniform gradient estimate obtained in Lemma A.4 also implies that the limit function \(h_{\infty}\) is Lipschitz with Lipschitz constant \(1\). Because a Lipschitz function is \(W^{1,\infty}\), we actually have \(h_{\infty}\in W^{1,\infty}(\mathbb{S}^{1})\). **Step 2. \(W^{1,2}\) convergence of warping functions.** We will estimate the bounded variation norm \(\|\nabla h_{j}\|_{BV(\mathbb{S}^{1})}\) of warping functions. First note that \[0=\int_{\mathbb{S}^{1}}\Delta h_{j}=\int_{\{\Delta h_{j}\geq 0\}}\Delta h_{j}+ \int_{\{\Delta h_{j}<0\}}\Delta h_{j}. \tag{324}\] Thus, \[-\int_{\{\Delta h_{j}<0\}}\Delta h_{j}=\int_{\{\Delta h_{j}\geq 0\}}\Delta h _{j}, \tag{325}\] furthermore, \[\|\nabla h_{j}\|_{BV(\mathbb{S}^{1})} = \int_{\mathbb{S}^{1}}|\nabla h_{j}|+\int_{\mathbb{S}^{1}}|\Delta h _{j}| \tag{327}\] \[= \int_{\mathbb{S}^{1}}|\nabla h_{j}|+\int_{\lfloor\Delta h_{j} \geq 0\rfloor}\Delta h_{j}-\int_{\{\Delta h_{j}<0\}}\Delta h_{j}\] (328) \[= \int_{\mathbb{S}^{1}}|\nabla h_{j}|+2\int_{\{\Delta h_{j}\geq 0 \}}\Delta h_{j}. \tag{326}\] Then by the expression of the scalar curvature in Lemma A.4, the non-negative scalar curvature condition implies \[\Delta h_{j}\leq\frac{1-|\nabla h_{j}|^{2}}{2h_{j}}\leq\frac{1}{2h_{j}}\leq \sqrt{\frac{\pi}{A}},\quad\forall j\in\mathbb{N}. \tag{329}\] The last inequality here follows from Lemma A.6. Lemma A.4 also tells us that \(|\nabla h_{j}|\leq 1\) on \(\mathbb{S}^{1}\) for all \(j\in\mathbb{N}\). Consequently, we have \[\|\nabla h_{j}\|_{BV(\mathbb{S}^{1})} = \int_{\mathbb{S}^{1}}|\nabla h_{j}|+2\int_{\{\Delta h_{j}\geq 0 \}}\Delta h_{j} \tag{331}\] \[\leq 2\pi+2\int_{\Delta h_{j}\geq 0}\sqrt{\frac{\pi}{A}}\] (332) \[\leq 2\pi\left(1+2\sqrt{\frac{\pi}{A}}\right),\quad\forall j\in \mathbb{N}. \tag{330}\] As a result, by Theorem 5.5 in [5] we have that a subsequence, which is still denoted by \(\nabla h_{j}\), converges to some \(\phi\in BV(\mathbb{S}^{1})\) in \(L^{1}(\mathbb{S}^{1})\) norm, and it is easy to see that \(\phi=\nabla h_{\infty}\) in the weak sense. Moreover, since \(h_{\infty}\in W^{1,\infty}(\mathbb{S}^{1})\) and \(\sup\limits_{j}\|\nabla h_{j}\|_{L^{\infty}(\mathbb{S}^{1})}<\infty\), we have \(\nabla h_{j}\to\nabla h_{\infty}\) in \(L^{2}(\mathbb{S}^{1})\) norm. Indeed, note that by the Holder inequality, \[\int_{\mathbb{S}^{1}}|\nabla h_{j}-\nabla h_{\infty}|^{2}\leq\|\nabla h_{j}- \nabla h_{\infty}\|_{L^{1}(\mathbb{S}^{1})}\|\nabla h_{j}-\nabla h_{\infty} \|_{L^{\infty}(\mathbb{S}^{1})}. \tag{333}\] As a result, \(h_{j}\to h_{\infty}\) in \(W^{1,2}(\mathbb{S}^{1})\). **Step 3. 
Note that \[g_{j}-g_{\infty}=(h_{j}^{2}-h_{\infty}^{2})g_{\mathbb{S}^{2}}, \tag{334}\] and \[\overline{\nabla}(g_{j}-g_{\infty})=2(h_{j}\overline{\nabla}h_{j}-h_{\infty}\overline{\nabla}h_{\infty})\otimes g_{\mathbb{S}^{2}}. \tag{335}\] Therefore, by applying the uniform bound \(\sup\limits_{j}\|\nabla h_{j}\|_{L^{\infty}(\mathbb{S}^{1})}<\infty\) and the \(W^{1,2}\) convergence of \(h_{j}\) to \(h_{\infty}\), we obtain that \(g_{j}=g_{\mathbb{S}^{1}}+h_{j}^{2}g_{\mathbb{S}^{2}}\) converges to \(g_{\infty}\) in \(W^{1,2}(\mathbb{S}^{1}\times\mathbb{S}^{2},g_{0})\). ### Nonnegative distributional scalar curvature of the limit metric In this subsection, we compute the distributional scalar curvature of the limit metric tensor \(g_{\infty}\) obtained in Theorem A.1 with the background metric \(g_{0}\) in the sense of Lee-LeFloch, and prove Theorem A.2. Throughout this subsection, \(g_{\infty}\) always denotes the limit metric obtained in Theorem A.1. By the definition of \(\Gamma^{k}_{ij}\) in Definition 5.7 and the Christoffel symbols in Lemma 5.12, one can obtain the following lemma: **Lemma A.7**.: _For the limit metric, \(g_{\infty}\), with the background metric, \(g_{0}\), the Christoffel symbols defined by Lee-LeFloch as in (244), in the coordinates \(\{\varphi,r,\theta\}\), all vanish except_ \[\Gamma^{\varphi}_{rr}=-h_{\infty}h^{\prime}_{\infty},\quad\Gamma^{\varphi}_{\theta\theta}=-h_{\infty}h^{\prime}_{\infty}\sin^{2}r, \tag{336}\] \[\Gamma^{r}_{\varphi r}=\Gamma^{r}_{r\varphi}=\frac{h^{\prime}_{\infty}}{h_{\infty}}, \tag{337}\] _and_ \[\Gamma^{\theta}_{\varphi\theta}=\Gamma^{\theta}_{\theta\varphi}=\frac{h^{\prime}_{\infty}}{h_{\infty}}. \tag{338}\] We also note the following: **Lemma A.8**.: _The volume forms are_ \[d\mu_{0}=\,d\varphi\,\wedge dr\wedge\sin(r)\,d\theta, \tag{339}\] _and_ \[d\mu_{\infty}=d\varphi\wedge h_{\infty}^{2}dr\wedge\sin(r)\,d\theta, \tag{340}\] _which are both defined everywhere away from \(r=0\) and \(r=\pi\). In particular,_ \[\frac{d\mu_{\infty}}{d\mu_{0}}=h_{\infty}^{2}(\varphi) \tag{341}\] _is in \(W^{1,p}(\mathbb{S}^{2}\times\mathbb{S}^{1},g_{0})\) for all \(p\geq 1\)._ Proof.: The first claim holds away from \(r=0\) and \(r=\pi\) by the definition of the volume form, and the second claim holds almost everywhere on \((\mathbb{S}^{2}\times\mathbb{S}^{1},g_{0})\). So \(d\mu_{\infty}=h_{\infty}^{2}\,d\mu_{0}\) almost everywhere, which gives us the third claim. The rest follows from Proposition 3.5. Now we are ready to compute the vector field \(V\) and the function \(F\) defined by Lee-LeFloch as in (243) and (245). **Lemma A.9**.: _For the limit metric \(g_{\infty}\) with the background metric \(g_{0}\), the vector field \(V\) defined in (243), in the local frame \(\{\partial_{\varphi},\partial_{r},\partial_{\theta}\}\), is given by_ \[V=\left(-4\frac{h^{\prime}_{\infty}}{h_{\infty}},0,0\right)=-4\frac{h^{\prime}_{\infty}}{h_{\infty}}\frac{\partial}{\partial\varphi}. \tag{342}\] _Furthermore,_ \[-V\cdot\overline{\nabla}\left(u\frac{d\mu_{\infty}}{d\mu_{0}}\right)=4\frac{h^{\prime}_{\infty}}{h_{\infty}}\partial_{\varphi}(uh_{\infty}^{2}). \tag{343}\] **Lemma A.10**.: _For the limit metric \(g_{\infty}\) with the background metric \(g_{0}\), the function \(F\) defined in (245) is given by_ \[F=\frac{2}{h_{\infty}^{2}}-6\left(\frac{h^{\prime}_{\infty}}{h_{\infty}}\right)^{2}. \tag{344}\] _Furthermore,_ \[Fu\frac{d\mu_{\infty}}{d\mu_{0}}=2u-6u(h^{\prime}_{\infty})^{2}. \tag{345}\]
**Lemma A.11**.: _For \(g\) being our limit metric tensor \(g_{\infty}\) and a smooth nonnegative test function \(u\), the integrals in (248) and (249) are given by_ \[FirstInt_{g_{\infty}} = \int_{\mathbb{S}^{2}\times\mathbb{S}^{1}}\left(-V\cdot\overline{\nabla}\left(u\frac{d\mu_{\infty}}{d\mu_{0}}\right)\right)\,d\mu_{0} \tag{346}\] \[= \int_{\mathbb{S}^{1}}\left(8(h^{\prime}_{\infty})^{2}\bar{u}+4h^{\prime}_{\infty}h_{\infty}\bar{u}^{\prime}\right)d\varphi, \tag{347}\] _and_ \[SecondInt_{g_{\infty}} = \int_{\mathbb{S}^{2}\times\mathbb{S}^{1}}\left(Fu\frac{d\mu_{\infty}}{d\mu_{0}}\right)\,d\mu_{0} \tag{348}\] \[= \int_{\mathbb{S}^{1}}\left(2\bar{u}-6(h^{\prime}_{\infty})^{2}\bar{u}\right)d\varphi, \tag{349}\] _where_ \[\bar{u}(\varphi)=\int_{0}^{\pi}\int_{0}^{2\pi}u(r,\theta,\varphi)\sin(r)\,d\theta\,dr. \tag{350}\] Proof.: By integrating the formulas in Lemma A.9 and Lemma A.10 against \(d\mu_{0}\), one can easily obtain the integrals in (347) and (349). **Remark A.12**.: Here the \(W^{1,2}\) regularity of \(h_{\infty}\) implies that the integrals in (347) and (349) are both finite (c.f. Remarks 5.10 and 5.18). **Lemma A.13**.: _For the limit metric \(g_{\infty}=g_{\mathbb{S}^{1}}+h_{\infty}^{2}g_{\mathbb{S}^{2}}\), the scalar curvature distribution \(\operatorname{Scalar}_{g_{\infty}}\) defined in Definition 5.7 can be expressed, for every test function \(u\in C^{\infty}(\mathbb{S}^{2}\times\mathbb{S}^{1})\), as the integral_ \[\langle\operatorname{Scalar}_{g_{\infty}},u\rangle=\int_{\mathbb{S}^{1}}\left(2\bar{u}+2(h_{\infty}^{\prime})^{2}\bar{u}+4h_{\infty}^{\prime}h_{\infty}\bar{u}^{\prime}\right)d\varphi, \tag{351}\] _and this is finite for any test function \(u\in C^{\infty}(\mathbb{S}^{2}\times\mathbb{S}^{1})\). Here \(\bar{u}\) is defined as in (350)._ Proof.: The expression in (351) immediately follows from the expressions in (347) and (349) and Definition 5.7. The finiteness of the integral in (351) follows from the fact that \(h_{\infty}\in W^{1,2}(\mathbb{S}^{1})\). We now apply these lemmas to prove Theorem A.2: Proof.: By the expression (313) of the scalar curvature of \(\mathbb{S}^{1}\times_{h_{j}}\mathbb{S}^{2}\), we have that for any test function \(u\in C^{\infty}(\mathbb{S}^{2}\times\mathbb{S}^{1})\), \[\int_{\mathbb{S}^{1}\times\mathbb{S}^{2}}\operatorname{Scalar}_{g_{j}}u\,d\operatorname{vol}_{g_{j}}\] \[= \int_{\mathbb{S}^{1}}\left(\int_{\mathbb{S}^{2}}\left(-4(\Delta h_{j})h_{j}u+2u-2|\nabla h_{j}|^{2}u\right)d\operatorname{vol}_{g_{\mathbb{S}^{2}}}\right)d\varphi\] \[= \int_{\mathbb{S}^{1}}\left(2\bar{u}+2(h_{j}^{\prime})^{2}\bar{u}+4h_{j}^{\prime}h_{j}\bar{u}^{\prime}\right)d\varphi, \tag{352}\] where the last equality uses integration by parts in \(\varphi\) for the first term. Then, by using the nonnegative scalar curvature condition \(\operatorname{Scalar}_{g_{j}}\geq 0\) and the convergence properties of \(h_{j}\) in Theorem A.1, possibly after passing to a subsequence, we obtain for any nonnegative test function \(0\leq u\in C^{\infty}(\mathbb{S}^{2}\times\mathbb{S}^{1})\), \[0 \leq \int_{\mathbb{S}^{2}\times\mathbb{S}^{1}}\operatorname{Scalar}_{g_{j}}u\,d\operatorname{vol}_{g_{j}} \tag{356}\] \[= \int_{\mathbb{S}^{1}}\left(2\bar{u}+2(h_{j}^{\prime})^{2}\bar{u}+4h_{j}^{\prime}h_{j}\bar{u}^{\prime}\right)d\varphi \tag{357}\] \[\to \int_{\mathbb{S}^{1}}\left(2\bar{u}+2(h_{\infty}^{\prime})^{2}\bar{u}+4h_{\infty}^{\prime}h_{\infty}\bar{u}^{\prime}\right)d\varphi \tag{358}\] \[= \langle\operatorname{Scalar}_{g_{\infty}},u\rangle. \tag{359}\] Thus, \(\langle\operatorname{Scalar}_{g_{\infty}},u\rangle\geq 0\) for all nonnegative test functions \(u\in C^{\infty}(\mathbb{S}^{2}\times\mathbb{S}^{1})\).
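The pairing (351) is simply the smooth expression (352) rewritten after one integration by parts in \(\varphi\). A minimal numerical sketch of this identity, for an illustrative smooth warping function and fiber-averaged test function on \(\mathbb{S}^{1}\) (the specific choices below are arbitrary and are not taken from the paper), is:

```python
import numpy as np

# Uniform periodic grid on S^1 = [0, 2*pi); the endpoint is excluded so that a
# plain Riemann sum is accurate for smooth periodic integrands
n = 4096
phi = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
dphi = phi[1] - phi[0]

# Illustrative smooth warping function h and fiber-averaged test function u
h, hp, hpp = 2.0 + 0.3 * np.sin(phi), 0.3 * np.cos(phi), -0.3 * np.sin(phi)
u, up = 1.0 + 0.5 * np.cos(2.0 * phi), -1.0 * np.sin(2.0 * phi)

# Left-hand side: integral of (Scalar * h^2) * u over S^1,
# with Scalar = -4 h''/h + 2 (1 - (h')^2)/h^2 as in (313)
lhs = np.sum((-4.0 * hpp * h + 2.0 - 2.0 * hp**2) * u) * dphi

# Right-hand side: the integrated-by-parts form appearing in (351)/(352)
rhs = np.sum(2.0 * u + 2.0 * hp**2 * u + 4.0 * hp * h * up) * dphi

print(lhs, rhs)   # the two values agree up to discretization error
```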
2310.11682
Substrate interaction mediated control of phase separation in FIB milled Ag-Cu thin films
Nanofabrication is an integral part of realization of advanced functional devices ranging from optical displays to memory devices. Focused ion beam (FIB) milling is one of the widely used nanofabrication methods. Conventionally, FIB milling has been carried out for patterning single-phase stable thin films. However, the influence of FIB milling on phase separation of metastable alloy films during subsequent treatments has not been reported. Here, we show how FIB milling of Ag-Cu thin films influences the separation process and microstructure formation during post-milling annealing. Phase-separated microstructure of the film consists of fine, randomly distributed Ag-rich and Cu-rich domains, whereas adjacent to milled apertures (cylindrical holes), we observe two distinctly coarser rings. A combination of imaging and analysis techniques reveals Cu-rich islands dispersed in Ag-rich domains in the first ring next to the aperture, while the second ring constitutes mostly of Ag-rich grains. Copper silicide is observed to form in and around apertures through reaction with the Si-substrate. This substrate interaction, in addition to known variables like composition, temperature, and capillarity, appears to be a key element in drastically changing the local microstructure around apertures. This current study introduces new avenues to locally modulate the composition and microstructure through an appropriate choice of the film-substrate system. Such an ability can be exploited further to tune device functionalities with possible applications in plasmonics, catalysis, microelectronics and magnetics.
Vivek C. Peddiraju, Pravallika Bandaru, Shourya Dutta-Gupta, Subhradeep Chatterjee
2023-10-18T03:12:21Z
http://arxiv.org/abs/2310.11682v1
**Substrate interaction mediated control of phase separation in FIB milled Ag-Cu thin films** ## Abstract Nanofabrication is an integral part of realization of advanced functional devices ranging from optical displays to memory devices. Focused ion beam (FIB) milling is one of the widely used nanofabrication methods. Conventionally, FIB milling has been carried out for patterning single-phase stable thin films. However, the influence of FIB milling on phase separation of metastable alloy films during subsequent treatments has not been reported. Here, we show how FIB milling of Ag-Cu thin films influences the separation process and microstructure formation during post-milling annealing. Phase-separated microstructure of the film consists of fine, randomly distributed Ag-rich and Cu-rich domains, whereas adjacent to milled apertures (cylindrical holes), we observe two distinctly coarser rings. A combination of imaging and analysis techniques reveals Cu-rich islands dispersed in Ag-rich domains in the first ring next to the aperture, while the second ring constitutes mostly of Ag-rich grains. Copper silicide is observed to form in and around apertures through reaction with the Si-substrate. This substrate interaction, in addition to known variables like composition, temperature, and capillarity, appears to be a key element in drastically changing the local microstructure around apertures. This current study introduces new avenues to locally modulate the composition and microstructure through an appropriate choice of the film-substrate system. Such an ability can be exploited further to tune device functionalities with possible applications in plasmonics, catalysis, microelectronics and magnetics. ## I. Introduction Nanostructuring of _single-phase_ thin films using masks or focused ion beam (FIB) milling is often employed to fabricate devices with enhanced or novel properties[1, 2, 3, 4, 5, 6, 7, 8]. For phase-separated films, however, controlling a pattern during milling can be challenging due to the difference in sputtering rates of the individual components [9]. As an alternative, nanostructures can be fabricated in the metastable single-phase films of immiscible systems, which can then be subjected to an annealing heat treatment to induce phase separation. Therefore, it is of great importance, both at practical and fundamental levels, to investigate how phase separation is affected by the pre-existing nanostructuring present in the film. Phase-separation phenomena in Ag-Cu bare thin films (that is, without imprinted nanostructuring) have been studied extensively [10, 11, 12, 13, 14]. Control of the separation process can potentially be exploited to tune the optical response of these films [9, 15]. Alloy films produced by physical vapor deposition usually exist in a metastable single-phase, solid solution state [9, 10, 11], although initiation of phase separation process during deposition has also been reported in some cases [16, 17, 18, 19]. When as-deposited films are subjected to a heat treatment, Ag- and Cu-rich domains form via phase separation of the undecomposed or partly decomposed film by nucleation and growth or spinodal decomposition [11, 14, 15, 20]. Surface energies of the components (Ag and Cu in this case), interfacial energy, and lattice mismatch between them as well as with the substrate are known to influence the decomposition process [21, 22, 23, 24, 25, 26, 27, 28, 29, 30]. 
Here we report unusual phase separation patterns observed around cylindrical holes (termed 'apertures') fabricated by FIB milling on sputter-deposited Ag-Cu films. Further experiments and analysis demonstrate substrate-film interaction to be responsible for the striking shift in the decomposition process in the film. ## II. Materials and Methods _A. Thin film deposition_: Ag-Cu thin films are deposited on a substrate by magnetron co-sputtering with pure Ag and Cu targets (99.99% purity) under an Ar (99.99 % pure) atmosphere. Single crystal p-type Si-wafers of \(\sim\)380 \(\upmu\)m thickness with a (100) orientation and 20 nm thick 100 \(\mu\)m \(\times\) 100 \(\mu\)m amorphous SiN\({}_{\text{x}}\) (ASN) TEM windows were used as substrates. Deposition parameters (70 W RF and 30 W DC power for Ag and Cu, respectively; initial chamber pressure: 9\(\times\)10\({}^{-7}\) mbar and pressure during deposition: 4.4 \(\times\) 10\({}^{-3}\) mbar; Ar flow rate: 5 SCCM; deposition time: 60 s) are optimized to deposit approximately equimolar (\(\sim\)53 \(\pm\) 1 at.% Ag), \(\sim\)38 nm thick films. _B. FIB milling and post-treatment_: Apertures of different diameters on as-deposited films are fabricated by FIB milling. Further experimental details of the milling are provided as Supplementary Information (SI); Table S1 lists the nominal and measured aperture sizes. Multiple replicas are made for each aperture diameter to ascertain the reproducibility of the results. Films with apertures are annealed at 400 \({}^{\circ}\)C for 30 minutes under vacuum to allow for phase separation. _C. Characterization_: Composition of the films is measured by energy dispersive X-ray spectroscopy (EDS; make & model: EDAX Octane Elect Plus) in a multi-beam (electron plus FIB) scanning electron microscope (SEM; make & model: JEOL JIB 4700F) operated at 20 kV. The same SEM (at 15 and 20 kV) is also used for imaging; unless stated otherwise, the composition-sensitive backscattered electron (BSE) mode is used. Atomic force microscopy (AFM; make: Park Systems) of the film cross section is used to measure the film thickness, which is cross-verified by SEM imaging. Further microstructural analysis is done in a 200 kV scanning transmission electron microscope (S/TEM; make & model: JEOL JEM F-200) equipped with a high angle annular dark field (HAADF) detector. Unless explicitly mentioned otherwise, all SEM images are from films on Si, and TEM/STEM images and electron diffraction patterns are from those deposited on ASN windows. SEM and TEM images are analysed further to reveal local phase and compositional details. Steps in image processing and representative examples are provided in SI. Characterization experiments are carried out after each of the individual steps, _viz._, deposition, FIB milling and annealing. ## III. Results and Discussion The experimental workflow is illustrated by the schematics in Fig. 1(a). The first step involves co-deposition of Ag and Cu, which is the same for both substrates (left schematic). This is followed by FIB milling to fabricate apertures: for the Si substrate, these extend slightly into the substrate (middle schematic), whereas through-holes are made in films on ASN windows. SE and BSE images of an FIB cut section of an aperture for the film on Si are provided in Fig. S1. Fig. 1(b) shows a BSE image of an as-deposited film with an aperture of \(\sim\)400 nm diameter.
A TEM diffraction ring pattern obtained from a similar film, along with the corresponding rotationally averaged intensity profile as a function of radial distance, is shown in Fig. 1(b). The intensity profile reveals the presence of two distinct peaks (111 and 200) that overlap to form the broad, brightest ring in the pattern. These peaks are consistent with a single-phase metastable solid solution of Ag and Cu. The lattice parameter of this as-deposited FCC phase is 3.91 Å. This lies between the lattice parameters of pure Ag and Cu, indicating an extended solid solubility of Ag and Cu. The uniform contrast in the BSE image, along with this diffraction evidence, confirms the lack of phase separation at this scale of observation in the as-deposited film. Additional evidence of compositional homogeneity in the as-deposited film is presented in the STEM image and EDS maps of Fig. S1(d). Upon annealing, films undergo phase separation, as evidenced by domains of two distinct mean gray levels in the HAADF-STEM image of Fig. 1(c). The corresponding STEM EDS elemental map reveals the bright and dark domains to be Ag- and Cu-rich, respectively. Additionally, the electron diffraction pattern of this region clearly shows the occurrence of distinct rings that belong to the two different FCC phases (Ag-rich and Cu-rich). Lattice parameters of these two FCC phases are 4.07 Å and 3.62 Å, which are close to the lattice parameters of pure Ag and Cu, respectively. Thus, all the evidence confirms that phase separation has taken place in the metastable as-deposited film during annealing. Fig. 1: (a) Schematic of the workflow. Left: Deposition of equimolar Ag-Cu alloy films using DC magnetron sputtering. Middle: Fabrication of apertures on as-deposited films using FIB milling. Right: Vacuum annealing of films with nanostructures inside the deposition chamber. (b) Left: Representative SEM-BSE image of an aperture milled on as-deposited film (Si substrate). Right: Electron diffraction (ED) ring pattern of the as-deposited film (ASN substrate) along with the corresponding rotationally averaged intensity profile. (c) HAADF-STEM image (top left) of the annealed film (ASN substrate) along with a corresponding STEM-EDS elemental map (bottom left) (Cu: red, Ag: green). The annotated ED pattern on the right shows rings from Ag-rich and Cu-rich phases. The BSE image of Fig. 2(a) shows the post-annealing microstructure of a region containing a 400 nm aperture. Here, the bright blocky particle at the centre obscures the milled aperture; a secondary electron image shown as an inset reveals the actual aperture being covered by this particle. Strikingly, the microstructure in the vicinity of the aperture is very different from that far away from it (see Fig. 1(c)). Adjacent to the aperture, we observe two distinct and approximately annular regions which are marked by dashed lines and indicated as A\({}_{1}\) and A\({}_{2}\). A magnified view of these zones is presented in Fig. 2(b). Zone A\({}_{1}\) consists of a mixture of Ag- and Cu-rich domains whereas A\({}_{2}\) contains mostly Ag-rich grains. Fig. 2(c) shows the EDS spectra obtained from different microstructural features that are present in Fig. 2(a). The Si peak at 1.3 keV is not included in the range as the strong substrate contribution obscures other peaks. The spectrum from the bright particle in the center shows a very strong Cu-K signal in comparison to the Cu-K signal from other microstructural features, and careful quantitative analysis estimates its stoichiometry to be close to Cu\({}_{3}\)Si.
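As an aside for readers reproducing this kind of analysis, a rotationally averaged intensity profile such as the one in Fig. 1(b) can be obtained by binning the pixels of the ring pattern by their distance from the pattern center. The following numpy sketch is illustrative only; the array, center, and bin width are placeholders rather than the actual parameters used in this work.

```python
import numpy as np

def radial_profile(pattern, center, bin_width=1.0):
    """Rotationally average a 2D pattern about `center` = (cx, cy) in pixels."""
    ny, nx = pattern.shape
    y, x = np.indices((ny, nx))
    r = np.hypot(x - center[0], y - center[1])       # radial distance of each pixel
    bins = (r / bin_width).astype(int)                # integer bin index per pixel
    sums = np.bincount(bins.ravel(), weights=pattern.ravel())
    counts = np.bincount(bins.ravel())
    radius = (np.arange(len(sums)) + 0.5) * bin_width
    return radius, sums / np.maximum(counts, 1)       # mean intensity per radial bin

# Usage on a synthetic pattern (a single Gaussian ring at r = 120 pixels)
yy, xx = np.indices((512, 512))
rr = np.hypot(xx - 256, yy - 256)
synthetic = np.exp(-((rr - 120.0) / 4.0) ** 2)
radius, intensity = radial_profile(synthetic, center=(256, 256))
print(radius[np.argmax(intensity)])   # peak recovered near r = 120 pixels
```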
The Cu-enrichment in and around the aperture is also evident from the EDS composition maps presented in Fig. 2(d). These results suggest the formation of a copper silicide phase in the central region. Additional evidence of formation of Cu\({}_{3}\)Si is presented through the cross-sectional image in Fig. S2(a). It shows the particle to extend into the Si-substrate with a sharp V-shaped interface that is characteristic of copper silicide particles forming on Si-(100) substrate[31, 32, 33, 34]. Although this particle is enriched in Cu (lower atomic number), the relatively bright BSE image contrast originates from its surface topography (projecting out of the plane), as confirmed by AFM measurements in Fig S2(b) and Fig. S2(c). Formation of the distinct microstructural zones A\({}_{1}\) and A\({}_{2}\) around the aperture needs to be discussed further. We confirmed (Fig. S3) that they form regardless of the aperture size and their spatial extent is constant across the aperture sizes. The dark islands in A\({}_{1}\) are Cu-rich (\(\sim\)90 at. % Cu) while the bright interconnected network is rich in Ag (\(\sim\)70 at. % Ag). Adjacent to A\({}_{1}\), the circular banded region A\({}_{2}\) appears with a relatively bright contrast compared to rest of the film. It is mostly Ag-rich (\(\sim\)90 at. % Ag), although a few darker grains (possibly Cu-rich) scattered among the numerous Ag-rich band could also be observed in Fig. 2(b). Due to the limited spatial resolution of the SEM-EDS, it is not possible to capture such fine-scale composition modulations. The microstructure within the two annular bands A\({}_{1}\) and A\({}_{2}\) is called as 'halo structure'. We note a gradual transition from the region A\({}_{1}\) to region A\({}_{2}\), but Figure 2: Phases and microstructure of the annealed film with an aperture (Si substrate). (a) Low magnification BSE image showing formation of halo structure (red dashed circle) around the aperture (blue circle). The white dashed circle demarcates A\({}_{1}\) and A\({}_{2}\) zones within the halo. Secondary electron (SE) image in the inset shows a magnified view of copper silicide (bright particle at center). (b) Magnified view of region marked by yellow box in (a). (c) Stack of SEM-EDS spectra obtained from different microstructural features shown in (a). (d) SEM-EDS elemental maps highlight the overall redistribution of Ag and Cu in the halo region. (e) Semi-quantitative information about Ag concentration obtained through fraction of bright pixels in different microstructural regions. (f) Bar chart showing chemical potentials of copper in different phases, viz., metastable solid solution (SS(ms)), phase separated mixture (SS(ps)), and Cu\({}_{I\beta}\)Si\({}_{6}\) (nominally Cu\({}_{3}\)Si). Arrow denotes the direction of transport of Cu atoms from SS(ms) towards Cu\({}_{3}\)Si. transition from A\({}_{2}\) to the bulk microstructure is sharp with a discernible boundary between them. To understand the redistribution of Ag and Cu within the halo structure, we carry out image analysis (procedure outlined in Supplementary Information) and represent the Ag-enrichment of each ring using the fraction of bright pixels in Fig. 2(e). This clearly establishes the depletion of Cu from regions surrounding the aperture. As stated earlier, we observe the formation of a copper silicide phase inside the milled aperture after annealing. Hong _et al._ had reported [35] the formation of Cu\({}_{3}\)Si by interface reaction between an Ag-Cu thin film with Si substrate. 
Therefore, presence of this compound phase must be considered while trying to understand the phase separation behavior of the undecomposed Ag-Cu solid solution. To understand the thermodynamics of the influence of substrate interaction on phase separation in the metastable film, chemical potentials of Cu (\(\mu_{\text{Cu}}\)) in the metastable solution (SS(ms)), phase separated Cu-rich solution (SS(ps)) and Cu\({}_{19}\)Si\({}_{6}\) phase (nominally Cu\({}_{3}\)Si) need to be compared. We compute these potentials using the computational thermodynamics software Thermo-Calc(r) with the SSOL6 database. The bar plot in Fig. 2(f) shows that \(\mu_{\text{Cu}}^{\text{Cu}_{19}\text{Si}_{6}}\)\(<\)\(\mu_{\text{Cu}}^{\text{SS(ps)}}\)\(<\)\(\mu_{\text{Cu}}^{\text{SS(ms)}}\). Therefore, transformation during the annealing is initiated by a reaction between the exposed Si substrate inside the aperture and the undecomposed film to form Cu\({}_{3}\)Si. This depletes the aperture-adjacent regions of Cu and sets up a flux of Cu atoms from the bulk film towards the aperture and modulates the phase separation process in the film taking place by spinodal decomposition. Fig. 2(e) provides evidence of the increase in the Ag-content in the annular regions A\({}_{1}\) and A\({}_{2}\) due to the depletion of Cu from these regions. Surprisingly, however, we find the Cu-depletion to be greater in A\({}_{2}\) than A\({}_{1}\). Interaction between compound formation inside the aperture and spinodal decomposition within the Ag-Cu film likely results in such unusual microstructural and compositional patterns. Two alternative growth scenarios are presented in Fig. 3 to explain their origin. Fig. 3(a) depicts a case where the tendency to form Cu\({}_{3}\)Si in the aperture creates a gradient of Cu concentration towards the aperture and a corresponding gradient of Ag in the opposite direction. Thus, as Cu\({}_{3}\)Si starts to form in the aperture, it simultaneously creates alternate Cu- and Ag-rich rings due to these radial composition gradients. Eventually, phase separation in the bulk of the film too initiates and it creates finer compositional domains. Formation of these rings around the aperture would be facilitated by different surface energies of Ag and Cu [23, 36]. Although this turn of events is plausible, we do not observe a uniformly Cu-rich ring around the aperture. Instead, the alternative scenario shown in Fig. 3(b) appears more likely. Here too there is an initial flux of Cu atoms is created by the formation of Cu\({}_{3}\)Si in the aperture. However, it nucleates and grows from certain locations (and does not cover the entire exposed film-aperture interface), thus breaking the radial symmetry of the flux of Cu atoms. This leads to both radial and lateral atom transport of Cu and Ag around the aperture, creating Cu-rich domains dispersed among Ag-rich ones (zone A\({}_{1}\)). However, due to the loss of Cu atoms from the film to the aperture (as is required for the formation of Cu\({}_{3}\)Si), there still remains a net radial inward flow of Cu atoms (and corresponding radial outward flow of Ag), Figure 3: Schematic depicting two alternative scenarios of microstructural evolution (progression of time indicated by arrows); see text for details. (a) A core-shell structure around the aperture resulting from an entirely radial species transport. (b) A halo structure as observed in the experiments results when the radial symmetry is broken by discrete silicide nucleation events at the aperture-film interface. 
Arrows inside the schematics indicate the flux of copper atoms. creating an Ag-rich annular region A\({}_{2}\) following the mixed Ag-Cu domains in A\({}_{1}\). Eventually, kinetics of spinodal decomposition in the bulk film becomes appreciable and it creates much finer compositional domains. Thus, the microstructural pattern adjacent to the aperture gets drastically modified due to the interaction of the chemical reaction forming Cu\({}_{3}\)Si and creating the gradient in Cu. The effect of substrate interaction (or lack thereof) is demonstrated in Fig. 4(a) for films deposited on ASN window. We can observe a slight Ag-enrichment of a narrow rim surrounding the aperture. This is supported by image analysis results in Fig. 4(b) (represented as before by the fraction of bright pixel in the annular region) and the STEM-EDS maps of Fig. 4(c). However, formation of Cu\({}_{3}\)Si and distinct annular regions surrounding the aperture are conspicuously absent for ASN. Similar observations are made for other aperture sizes too (Fig. S4). For Si substrate, Si atoms are readily available for the growth of Cu\({}_{3}\)Si, whereas for the ASN substrate, the Si-N bonds must first be broken to form silicides. Studies show[37, 38, 39] that neither Ag nor Cu atoms react with Si\({}_{3}\)N\({}_{4}\) and from compound phases at elevated temperatures. The slight enrichment of Ag around the aperture that is observed for the ASN window can be attributed to its lower surface energy compared to Cu which aids the segregation of Ag[40, 23, 41]. This provides strong evidence that the halo structure for the Si substrate is associated with the growth of Cu\({}_{3}\)Si and therefore, emphasizes the crucial role played by the substrate on the phase separation pattern around apertures. Figure 4: Phases and microstructure of the annealed film with an aperture (ASN substrate). (a) HAADF STEM image showing the absence of significant microstructural changes and silicides around the aperture (unlike the case of film deposited on Si substrate). (b, c) A slight Ag-enrichment around the aperture (indicated by white dashed circles in (a)) is confirmed by the bar chart of bright pixel fraction in (b) and STEM EDS maps in (c); green and red colors in the latter represent Ag and Cu concentrations, respectively. Results presented above demonstrate that it is possible to locally tune the multi-phase microstructures in alloy films via substrate-film reaction which in turn could give rise to enhanced properties (e.g., different plasmonic response than bulk as-deposited or phase separated film). In addition to Ag-Cu alloy thin films, the proposed method for controlling the phase separation via substrate chemical reaction can be readily extended to other phase separating and ordering alloys systems like Ag-Co, Ag-Fe, Au-Fe, Co-Cu, Co-Pt and Fe-Cu. These would enable an additional handle for controlling the response of functional devices that would be relevant for magnetic [42, 43], magneto-optical [5, 44, 45, 46, 47, 48] and catalytic [49] applications. Moreover, by introducing an additional reactive layer on the substrate (which can be inert), it is also possible to regulate the extent of such chemical reactions. In such a case, the reactive film thickness would control the availability of the reacting species and thereby provide an additional means for controlling the microstructure. ## IV. 
Conclusions In summary, our results show that reaction with exposed substrate produced by FIB milled apertures influence phase separation and microstructure of Ag-Cu thin films. During the annealing treatment the growth of Cu\({}_{3}\)Si takes place inside milled apertures. Adjacent to the aperture, an annular region (A\({}_{1}\)) is formed where the Cu-islands dispersed in an Ag-rich matrix. This annular region is in turn surrounded by a mostly Ag-rich band (A\({}_{2}\)). The A\({}_{1}\) and A\({}_{2}\) regions together is termed as the 'halo structure'. Outside this halo, the bulk film microstructure comprises of much finer Ag-rich and Cu-rich domains that are interspersed with each other. These characteristic patterns are absent for films deposited on the ASN window where only a very thin Ag-rich annular region is surrounding the apertures is observed. The evidence presented highlights how substrate interaction with spinodal decomposition in the film can strongly modulate the local microstructure around FIB milled apertures. Therefore, results of the present study offer new insights to tailor local microstructure for enhancing material properties and device performance. ## Supplementary materials See supplementary materials for details about experimental methods and techniques viz., thin film deposition, FIB milling, characterization, image analysis methodology and supporting microscopic images. ## Acknowledgements The authors would like to thank Dr. Saswata Bhattacharyya and Dr. Sai Rama Krishna Malladi for fruitful discussions. S.D.-G. would like to acknowledge the funding from SERB (Grant No. SERB/ECR/2018/002628). ## Author declarations ### Conflict of interest All authors declare that they have no conflicts to disclose. ### Author contributions **Vivek C. Peddiraju:** Data curation (lead), Formal analysis (equal), Investigation (lead), Methodology (equal), Software (supporting), Validation (lead), Visualization (equal), Writing-Original Draft Preparation (lead) **Pravallika Bandaru:** Investigation (supporting), Writing-Review & Editing (supporting) **Shourya Dutta-Gupta:** Conceptualization (lead), Funding Acquisition (lead), Investigation (supporting), Methodology (equal), Project Administration (equal), Resources (equal), Supervision (equal), Visualization (equal), Writing-Review & Editing (equal) **Subhradeep Chatterjee:** Conceptualization (lead), Formal analysis (equal), Methodology (equal), Project Administration (equal), Resources (equal), Software (lead), Supervision (equal), Visualization (equal), Writing-Review & Editing (equal) The data that support the findings of this study are available from the corresponding author upon reasonable request.
2304.02007
The Role of Mass and Environment on Satellite distributions around Milky Way analogs in the Romulus25 simulation
We study satellite counts and quenched fractions for satellites of Milky Way analogs in Romulus25, a large-volume cosmological hydrodynamic simulation. Depending on the definition of a Milky Way analog, we have between 66 and 97 Milky Way analogs in Romulus25, a 25 Mpc per-side uniform volume simulation. We use these analogs to quantify the effect of environment and host properties on satellite populations. We find that the number of satellites hosted by a Milky Way analog increases predominantly with host stellar mass, while environment, as measured by the distance to a Milky Way-mass or larger halo, may have a notable impact in high isolation. Similarly, we find that the satellite quenched fraction for our analogs also increases with host stellar mass, and potentially in higher-density environments. These results are robust for analogs within 3 Mpc of another Milky Way-mass or larger halo, the environmental parameter space where the bulk of our sample resides. We place these results in the context of observations through comparisons to the Exploration of Local VolumE Satellites and Satellites Around Galactic Analogs surveys. Our results are robust to changes in Milky Way analog selection criteria, including those that mimic observations. Finally, as our samples naturally include Milky Way-Andromeda pairs, we examine quenched fractions in pairs vs isolated systems. We find potential evidence, though not conclusive, that pairs, defined as being within 1 Mpc of another Milky Way-mass or larger halo, may have higher satellite quenched fractions.
Jordan Van Nest, Ferah Munshi, Charlotte Christensen, Alyson M. Brooks, Michael Tremmel, Thomas R. Quinn
2023-04-04T17:57:02Z
http://arxiv.org/abs/2304.02007v2
The Role of Mass and Environment on Satellite distributions around Milky Way analogs in the Romulus25 simulation ###### Abstract We study satellite counts and quenched fractions for satellites of Milky Way analogs in Romulus25, a large-volume cosmological hydrodynamic simulation. Depending on the definition of a Milky Way analog, we have between 66 and 97 Milky Way analogs in Romulus25, a 25 Mpc-per-side uniform volume simulation. We use these analogs to quantify the effect of environment and host properties on satellite populations. We find that the number of satellites hosted by a Milky Way analog increases with host stellar mass, while there is no trend with environment, as measured by distance to a Milky Way-mass or larger halo. Similarly, we find that the satellite quenched fraction for our analogs also increases with host stellar mass, with no significant impact from environment. We place these results in the context of observations through comparisons to the ELVES and SAGA surveys. Our results are robust to changes in Milky Way analog selection criteria, including those that mimic observations. Finally, as our samples naturally include Milky Way/Andromeda pairs, we examine quenched fractions in pairs vs isolated systems. We find potential evidence, though not conclusive, that pairs may have higher satellite quenched fractions. galaxies:evolution - galaxies:quenching - galaxies:dwarf Jordan Van Nest, Ferah Munshi, Charlotte Christensen, Alyson M. Brooks, Michael Tremmel, Thomas R. Quinn ## 1 Introduction The satellites of the Milky Way and its neighbors in the Local Group, thanks to their proximity, have often served as our basis of understanding satellite and dwarf galaxy formation and evolution. In the past decade there has been an explosion in our understanding of satellites around our own Milky Way (Mateo, 1998; Koposov et al., 2008; Simon, 2019; Drlica-Wagner et al., 2020, and references within) and Andromeda (Ibata et al., 2014; Martin et al., 2016; Ibata et al., 2014; McConnachie et al., 2018, and references within). Further, in the age of ultra-faint galaxy detection, the low surface brightness end of the Milky Way's satellite distribution continues to grow (e.g. Drlica-Wagner et al., 2015; Koposov et al., 2015; Simon, 2019). As we continue to discover fainter objects nearby, the question of the Milky Way's uniqueness becomes an important one. Applying what we learn locally to the universe at large would not be appropriate if the Local Group could be considered 'atypical'. To test for any potential discrepancy, surveys such as the "Satellites Around Galactic Analogs" (SAGA; Geha et al., 2017; Mao et al., 2021) and "Exploration of Local Volume Satellites" (ELVES; Carlsten et al., 2020, 2021, 2022; Mao et al., 2021) study the satellite distributions of galaxies similar to our own, placing the Milky Way in a broader, cosmological context. The SAGA survey is an ongoing effort to compile spectroscopically complete satellite luminosity functions of 100 Milky Way analogs with distances between 20-40 Mpc, providing vastly improved statistics for the bright end of these satellite distributions (down to M\({}_{R}\)=-12.3). In complement to the SAGA survey's probing of distant Milky Way-like systems, the ELVES survey seeks to fully map the satellite distributions of the hosts within the Local Volume (\(<12\) Mpc) down to M\({}_{V}\)=-9.
Working in tandem, SAGA and ELVES will provide a better understanding of both what a "typical" Milky Way-like halo will look like and what influences an environment like the Local Volume can impart. The SAGA survey has found that the luminosity function of the Milky Way is consistent with their observations of other systems, but that the host-to-host scatter in number of satellites is large (Mao et al., 2021). SAGA also finds that the total number of satellites in a system correlates with the host's \(K\)-band luminosity. Similar to SAGA, the ELVES survey finds that satellite abundance correlates with host mass and that the Milky Way is typical for its mass. However, Carlsten et al. (2021) find that the observed luminosity functions of local hosts are typically "flatter" than predicted by the cosmological model; the stellar to halo mass relation tends to under-predict bright satellites and over-predict faint ones, a result found also by Geha et al. (2017). These results highlight the power of a larger sample of galaxies and their satellites to provide context for understanding satellite dwarf galaxies. One of the most interesting discrepancies to be highlighted so far is that the quenched fraction of Local Group (Milky Way and Andromeda) satellites is not in agreement with SAGA's results; the SAGA sample exhibits lower quenched fractions than those found in the Local Group. On the other hand, the ELVES survey finds higher quenched fractions amongst the Local Volume than in the SAGA sample, though still not as high as the Local Group. Although Mao et al. (2021) carefully attempt to quantify incompleteness in the SAGA survey, it remains an open question whether SAGA may be missing faint, red or low surface brightness satellites which would be predominantly quenched (Carlsten et al., 2022; Font et al., 2022), or whether the Local Group is a true outlier in terms of quenched satellite fraction. In general, various simulations of Milky Way-like galaxies tend to find good agreement in their resulting quenched satellite fractions, lying somewhere between the Local Group and Local Volume fractions (Akins et al., 2021; Engler et al., 2021; Karunakaran et al., 2021; Samuel et al., 2022). These simulations generally find that galaxies that infall into a host Milky Way with stellar masses above M\({}_{*}\sim 10^{8}\) M\({}_{\odot}\) are better able to retain their gas and continue star forming for extended periods. On the other hand, galaxies with stellar masses below \(10^{8}\) M\({}_{\odot}\) instead tend to experience ram pressure stripping that strips gas and quenches their star formation, and the quenching time scales can often be quite short (\(<2\) Gyr) (Wetzel et al., 2015; Simpson et al., 2018; Simons et al., 2020; Akins et al., 2021). These results lead to high predicted quenched satellite fractions as luminosity decreases. On the theoretical front, many analyses use zoom-in simulations of a handful of Milky Way analogs (e.g., Akins et al., 2021; Samuel et al., 2022), though Font et al. (2022) use the artemis suite of 24 cosmological Milky Way-mass zooms to interpret the ELVES and SAGA results. Font et al. (2022) find that applying a surface brightness limit to the artemis satellites can bring the quenched fractions and radial distributions into line with SAGA results, suggesting that SAGA is missing faint surface brightness galaxies. Fainter surface brightnesses correlate with more quenching at a fixed luminosity in artemis, and thus bias the SAGA results if true. 
On the other hand, Engler et al. (2022) found that a surface brightness cut could not bring the TNG50 satellite quenched fractions fully into agreement with SAGA, though it did bring the simulation and observational results more into line. Engler et al. (2022) were able to use TNG50, a 50 Mpc-on-a-side uniform cosmological volume, to study a larger sample of Milky Way analogs and look for statistical trends. In this work, we use Romulus25, a 25 Mpc-on-a-side uniform cosmological volume with comparable resolution to TNG50, to study similar trends. We particularly focus on the questions of how host mass and large-scale environment impact both satellite counts and quenched fractions for our simulated Milky Way analogs. The paper is outlined as follows. We begin in Section 2 by describing the Romulus25 simulation. In Section 3 we outline our various methods for identifying Milky Way analogs, as well as their satellites. In Section 4 we present our primary results, focusing on the general size of the satellite populations and their quenched fractions. We then discuss and summarize our results in Sections 5 & 6. ## 2 Simulation For this work, we use the Romulus25 simulation (Tremmel et al., 2017). Romulus25 was run with ChaNGa(Menon et al., 2015) which includes standard physics modules previously used in GASOLINE (Wadsley et al., 2004, 2008, 2017) such as a cosmic UV background (Haardt and Madau, 2012) including self-shielding (Pontzen et al., 2008), star formation, 'blastwave' supernova (SN) feedback (Stinson et al., 2006), and low temperature metal cooling (Bromm et al., 2001). ChaNGa implements an updated Smooth Particle Hydrodynamics (SPH) routine that uses a geometric mean density in the SPH force expression, allowing for the accurate simulation of shearing flows with Kelvin-Helmholtz instabilities (Wadsley et al., 2017). Finally, a time-dependent artificial viscosity and an on-the-fly time-step adjustment (Saitoh and Makino, 2009) system allow for more realistic treatment of weak and strong shocks (Wadsley et al., 2017). Romulus25 assumes a \(\Lambda\)CDM model with cosmological parameter values following results from Planck (\(\Omega_{0}=0.3086\), \(\Lambda=0.6914\), h= 0.6777, \(\sigma_{8}=0.8288\); Planck Collaboration et al., 2016). The simulation has a Plummer equivalent force softening of 250 pc (a spline softening of 350 pc is used, which converges to a Newtonian force at 700 pc). Unlike many similar cosmological runs, the dark matter particles were _oversampled_ relative to gas particles, such that the simulation was run with initially 3.375 times more dark matter particles than gas. This increased dark matter resolution allows for the ability to track the dynamics of supermassive black holes within galaxies (Tremmel et al., 2015). The result is a dark matter particle mass of \(3.39\times 10^{5}\)M\({}_{\odot}\) and gas particle mass of \(2.12\times 10^{5}\)M\({}_{\odot}\). These relatively low dark matter particle masses decrease numerical effects resulting from two-body relaxation and energy equipartition, which occur when particles have significantly different masses, both of which can affect the structure of simulated galaxies (e.g., Ludlow et al., 2019). Romulus25 has been shown to reproduce important galaxy and supermassive black hole scaling relations (Tremmel et al., 2017; Ricarte et al., 2019; Sharma et al., 2022, 2020). ### Star formation and gas cooling Gas cooling at low temperatures is regulated by metal abundance as in Guedes et al. 
(2011), as well as SPH hydrodynamics that includes both thermal and metal diffusion as described in Shen et al. (2010) and Governato et al. (2015) (thermal and metal diffusion coefficients set to 0.3, see Tremmel et al. (2017, 2019) for an in-depth discussion). Star formation and associated feedback from supernovae are crucial processes that require sub-grid models in cosmological simulations like Romulus25. Following Stinson et al. (2006), star formation (SF) is regulated with parameters that encode SF efficiency in dense gas, couple SN energy to the ISM, and specify the physical conditions required for SF. These parameters were calibrated using several dozen zoom-in simulations of dwarf to Milky Way mass galaxies (Tremmel et al., 2017) and are as follows: 1. The normalization of the SF efficiency, \(\rm c_{SF}=0.15\), and formation timescale, \(\Delta t=10^{6}\) yr, are both used to calculate the probability \(p\) of creating a star particle from a gas particle that has a dynamical time \(t_{dyn}\) \[p=\frac{m_{\rm gas}}{m_{\rm star}}(1-e^{-c_{\rm SF}\Delta t/t_{dyn}}).\] (1) 2. The fraction of SN energy coupled to the ISM, \(\rm\epsilon_{SN}=0.75\). 3. The minimum density, \(\rm n_{\star}=0.2\) cm\({}^{-3}\), and maximum temperature, \(\rm T_{\star}=10^{4}\) K, thresholds beyond which cold gas is allowed to form stars. Star particles form with a mass of \(6\times 10^{4}\) M\({}_{\odot}\), or 30% the initial gas particle mass. Romulus25 assumes a Kroupa IMF (Kroupa, 2001) with associated metal yields and SN rates. Feedback from SN uses the 'blastwave' implementation (Stinson et al., 2006), with thermal energy injection and a cooling shutoff period approximating the 'blastwave' phase of SN ejecta when cooling is inefficient. ### Halo Identification Amiga Halo Finder (AHF; Knollmann and Knebe, 2009) was applied to Romulus25 to identify dark matter halos, sub-halos, and the baryonic content within. AHF uses a spherical top-hat collapse technique (Bryan and Norman, 1998) to calculate each halo's virial radius (\(\rm R_{vir}\)) and mass (\(\rm M_{vir}\)). Halos are considered resolved if their virial mass is at least \(3\times 10^{9}\) M\({}_{\odot}\) at \(z=0\). This corresponds to dark matter particle count of \(\sim 10^{4}\), and a stellar mass of at least \(10^{7}\) M\({}_{\odot}\) (star particle count of \(\sim 150\)). Stellar masses were calculated using photometric colors following Munshi et al. (2013) as a better comparison to values inferred from typical observational techniques, and all magnitudes use the Vega zero-point. There is no concrete definition of what constitutes a Milky Way analog; observational surveys like SAGA and ELVES make sample cuts using \(K\)-band magnitudes as proxies for stellar mass, while simulations have access to more exact values for halo properties such as stellar mass and virial radius. In this work, we select samples of Milky Way analogs according to three different criteria sets in order to test if the selection criteria can influence the resultant satellite distribution. Our samples are defined as follows: * **A general M\({}_{\rm vir}\) restriction**: Any halo where \(10^{11.5}<\)M\({}_{\rm vir}\)/M\({}_{\odot}<10^{12.5}\). * **A general M\({}_{*}\) restriction**: Any galaxy where \(10^{10}<\)M\({}_{*}\)/M\({}_{\odot}<10^{11}\). This corresponds to the host stellar mass range outlined in Section 2.1.2 of SAGA II (Mao et al., 2021). 
A stellar mass of \(10^{10}\) M\({}_{\odot}\) also corresponds to the lower limit of the ELVES survey (Carlsten et al., 2022). * **An M\({}_{k}\) + Environmental restriction**: Any galaxy where \(-24.6<\)M\({}_{K}<-23\). Additionally, no neighbor within 300 kpc can have M\({}_{K,{\rm neighbor}}<\) M\({}_{K,{\rm MW}}-1.6\). This corresponds to the \(K\)-band magnitude cut and environmental restrictions from SAGA II (Mao et al., 2021). We also explore two different way to identify a satellite galaxy. First, we consider galaxies within the host's virial radius down to a stellar mass of \(10^{7}\) M\({}_{\odot}\), the resolution limit for Romulus25. This corresponds to an magnitude limit of M\({}_{R}\approx-12.6\). We note that the magnitude limit for the SAGA survey is M\({}_{R}=-12.3\) (though they have 4 satellites below this limit, see Mao et al. (2021)), so our samples do not probe the lowest mass regions of the SAGA or ELVES sample spaces. In addition, we perform a selection where satellites are \begin{table} \begin{tabular}{c|c|c|c|c} \hline \hline (1) Analog Criteria & (2) Analog Radius & (3) N\({}_{MW}\) & (4) N\({}_{Sats}\) & (5) max(N\({}_{Sat}\)) \\ \hline M\({}_{\rm vir}\) & & 67 & 138 & 8 \\ M\({}_{*}\) & R\({}_{vir}\) & 97 & 210 & 13 \\ M\({}_{K}\)+Env. & & 77 & 148 & 13 \\ \hline M\({}_{\rm vir}\) & & 66 & 125 & 6 \\ M\({}_{*}\) & 300 kpc & 90 & 171 & 7 \\ M\({}_{K}\)+Env. (SAGA II) & & 77 & 137 & 6 \\ \hline \end{tabular} Note. –(1) The criteria for identifying Milky Way analogs; (2) the virial radius of the Milky Way analog for the purpose of identifying satellites; (3) the total number of Milky Way analogs; (4) the total number of satellites with M\({}_{*}>10^{7}\) M\({}_{\odot}\); (5) the largest number of satellites hosted by a single Milky Way analog. \end{table} Table 1: A summary of our samples of Milky Way analogs and satellites Figure 1: Virial and stellar masses plotted against \(K\)-band magnitudes for three of our Milky Way analog samples. The dotted lines denote the mass and magnitude cuts used in our samples. The samples diverge at the different boundaries, and even within the boundaries there are analogs that exist in some definitions but not others. identified by being within 300 kpc of a Milky Way analog, rather than the analog's virial radius, as a more direct comparison to the SAGA and ELVES surveys. We note, however, that these surveys use 2D projected distances while in this work we use true 3D distances. In the event that a satellite is hosted by multiple analogs, it is ascribed to the most massive host. Any satellites that fall into the criteria of a Milky Way analog are not included in the satellite distribution. As a final step, any analogs that host a "satellite" more massive than themselves are removed from consideration. This cut is responsible for the slight variation in the number of Milky Way analogs under the same criteria when switching between R\({}_{\rm vir}\) and 300 kpc satellite identification. Our sample of Milky Way analogs and satellites are summarized for each criteria set in Table 1. Figure 1 shows the three Milky Way analog samples that we focus on in this work: M\({}_{\rm vir}\) and M\({}_{\star}\) with R\({}_{\rm vir}\) and M\({}_{K}\) with 300 kpc. While the samples largely overlap, we find that none of them are simple subsets of the others. As they approach the boundaries of the selection cuts, the samples diverge from one another. 
For example, the stellar mass sample probes virial masses below the virial mass cut, and vice versa. This is the result of natural scatter within the stellar-halo mass relation, which was shown in Tremmel et al. (2017) to match observations (Moster et al., 2013; Kravtsov et al., 2018). Within the overlapping regions of the criteria, there are galaxies considered Milky Way analogs in some samples but not others. This occurs as a result of the environmental criteria in the SAGA sample, which could remove analogs that are still within the \(K\)-band magnitude limits. In Figure 2, we compare the normalized distributions of hosts and satellites from our largest sample, M\({}_{\star}\) with simulated R\({}_{\rm vir}\) (in order to encapsulate the full magnitude range of our samples), to data from SAGA II and ELVES (Mao et al., 2021; Carlsten et al., 2022). We note that the ELVES satellites are weighted according to their likelihood estimates (\(P_{\rm sat}\) in Table 9 of Carlsten et al. (2022)), so each satellite adds its likelihood as a count rather than 1. In panel (a), we see that our hosts' span in \(K\)-magnitude space matches well with the ELVES sample, while the SAGA II sample (by definition) resides in \(-24.6<\)M\({}_{K}<-23\). The peaks of the host distributions are in good agreement as well, though we note our peak is at a slightly dimmer magnitude than the observational data. In panel (b), we see that our satellite distribution is in very good agreement with the SAGA II data, though we have an interesting lack of satellites at M\({}_{K}\approx-17\). The ELVES data probes much dimmer satellites (due to the difference in observational limits), but when only considering satellites brighter than M\({}_{K}=-12\), the ELVES sample is still more concentrated at low mass satellites when compared to SAGA II and Romulus25. This is consistent with ELVES finding steeper luminosity functions (fewer high mass satellites and more low mass) in their sample when compared to SAGA, and might also contribute to the different quenched fractions found by the two surveys (see Section 4.2 for discussion). ## 4 Results Figure 3 shows the \(V\)-band satellite luminosity function for our sample of Milky Way analogs alongside data Figure 2: Normalized histograms of (a) hosts in \(K\)-band magnitude and (b) satellites in V-band magnitude for our M\({}_{\star}\) with simulated R\({}_{\rm vir}\) sample. We make direct comparisons to SAGA II (Mao et al., 2021) and ELVES (Carlsten et al., 2022) data, with the Milky Way and M31 values taken from the latter. The ELVES satellites are weighted according to their likelihood measurements. For a fair comparison, the satellite distributions in (b) are all normalized to the samples’ number of satellites brighter than M\({}_{V}\)=-14, the approximate completeness limit for Romulus25. from the Milky Way and several Milky Way-like systems. The outer grey region outlines the space occupied by our M\({}_{*}\) within R\({}_{\rm vir}\) sample, while the black line and inner dark grey region indicate the mean and standard deviation. The Milky Way and M31 data are taken from Geha et al. (2017). The NGC4258 and NGC4631 data were taken from Carlsten et al. (2021), the M94 data from Smercina et al. (2018), and the M101 data from Bennet et al. (2019). We find that our sample of Milky Way analogs is in good agreement with these observations. We note that the space occupied by our sample remains largely unaffected when changing Milky Way analog criteria. 
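To make the construction behind Figure 3 concrete, the sketch below shows one way to build a cumulative satellite luminosity function, together with its host-to-host mean and scatter, from a satellite catalog. It is illustrative only: the catalog entries, field names, and magnitude grid are placeholders, not the Romulus25 analysis code.

```python
import numpy as np

# Placeholder catalog: one entry per satellite, keyed by its host and V-band magnitude
satellites = [
    {"host_id": 1, "M_V": -15.2}, {"host_id": 1, "M_V": -12.9},
    {"host_id": 2, "M_V": -18.1}, {"host_id": 2, "M_V": -14.4},
    {"host_id": 2, "M_V": -13.0}, {"host_id": 3, "M_V": -16.5},
]
host_ids = sorted({s["host_id"] for s in satellites})

# Cumulative luminosity function N(<M_V): number of satellites brighter than
# each magnitude threshold, evaluated per host on a common grid
mag_grid = np.arange(-20.0, -11.0, 0.5)
lf = np.array([[sum(s["host_id"] == h and s["M_V"] <= m for s in satellites)
                for m in mag_grid] for h in host_ids])

mean_lf = lf.mean(axis=0)   # sample mean (the solid curve in a plot like Fig. 3)
std_lf = lf.std(axis=0)     # host-to-host scatter (the shaded band)
print(dict(zip(mag_grid.tolist(), mean_lf.tolist())))
```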
### Host Effects on Satellite Accumulation In order to study how the physical properties of our Milky Way analogs affect their satellite populations, we separated our sample according to their mass and environment. Figure 4 shows the average number of satellites hosted by the Milky Way analogs where the analogs are binned according to their stellar mass and minimum distance to a Milky Way-sized or larger halo, hereafter D\({}_{MW+}\). In calculating D\({}_{MW+}\), we consider the closest galaxy outside the system of the analog (i.e., not a satellite) that exceeds the minimum criteria of Milky Way analog under the given criteria. The text in each bin details **N**: the number of analogs in that bin, and \(\mathbf{\sigma}\): the standard deviation of the number of satellites hosted by analogs in that bin. A plot is shown for both our M\({}_{\rm vir}\) with simulated R\({}_{\rm vir}\) (left) and SAGA II comparison (right) samples. In all of our samples, the number of hosted satellites appears to increase with host mass, and potentially with decreasing D\({}_{MW+}\). However, this latter trend cannot be verified by-eye as the box size of Romulus25 yields a lack of data in the upper regions of this plot (i.e., highly isolated hosts), so the apparent trend is not statistically significant. While these macroscopic trends are present across all of our simulated samples, there are some notable differences in the distributions. We see that while the M\({}_{\rm vir}\) definition includes analogs at a lower stellar mass, the number of analogs below M\({}_{*}=10^{10}\) M\({}_{\odot}\) is much larger in the SAGA II sample. Additionally, in the higher mass bins there is fluctuation in both the number of analogs and hosted satellites due to the changing of the satellite selection radius from R\({}_{\rm vir}\) to 300 kpc. In an effort to quantify the "by-eye" trends seen in Figure 4, we looked at the specific frequency of the number of satellites hosted by our Milky Way analogs, \(S_{N}\), normalized to their mass and environment. We use the following specific frequency equations adapted from Harris and van den Bergh (1981): \[S_{N,{\rm env}} =N_{\rm sat}\times 10^{0.4(D-1.5)} \tag{2}\] \[S_{N,{\rm mass}} =N_{\rm sat}\times 10^{0.4(M-10.3)} \tag{3}\] Here, \(N_{\rm sat}\) is the number of satellites hosted by the Milky Way analog, \(D\) is D\({}_{MW+}\) in Mpc, and \(M\) is Log(M\({}_{*}\)/M\({}_{\odot}\)). The normalization values of 1.5 Mpc and 10.3 were chosen to be roughly the averages of the M\({}_{*}\) with simulated R\({}_{\rm vir}\) sample. Figure 5 shows the specific frequencies normalized to environment and mass for our M\({}_{\rm vir}\) and M\({}_{*}\) with simulated R\({}_{\rm vir}\) sample, as well as our SAGA II comparison sample. In looking at the trend with environment, 5(a), we see some interesting behavior. The \(S_{N}\) values increase somewhat linearly until D\({}_{MW+}\approx 3.5\) Mpc, where future points go either to zero or extreme outliers. This would suggest that N\({}_{\rm sat}\) increases as hosts become more isolated, but we note that a majority of our hosts (\(\sim\)60-70%) have D\({}_{MW+}<\)2 Mpc, so beyond this distance our samples get increasingly small, resulting in the large error bars and stochasticity of the higher D\({}_{MW+}\) points. Thus, we see no definitive trend of satellite accumulation with environment, though one might become present with a larger sample of more isolated hosts. 
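For concreteness, the specific frequencies of Equations (2) and (3), and their binned means with standard errors (\(\sigma/\sqrt{N}\), as quoted for Figure 5), can be evaluated with a few lines of numpy. The host values and bin edges below are placeholders rather than the actual Romulus25 measurements.

```python
import numpy as np

# Per-host quantities (illustrative values): satellite counts, distance to the
# nearest Milky Way-mass or larger halo in Mpc, and log10 of the host stellar mass
n_sat = np.array([2, 4, 1, 6, 3, 5])
d_mw = np.array([0.8, 1.4, 2.2, 0.6, 3.1, 1.0])          # D_MW+ [Mpc]
logm = np.array([10.1, 10.4, 9.9, 10.7, 10.2, 10.5])     # log10(M*/Msun)

# Specific frequencies, Eqs. (2) and (3), normalized at 1.5 Mpc and 10.3
s_env = n_sat * 10.0 ** (0.4 * (d_mw - 1.5))
s_mass = n_sat * 10.0 ** (0.4 * (logm - 10.3))

# Bin S_N,mass in host stellar mass and report the mean and standard error per bin
edges = np.array([9.8, 10.2, 10.6, 11.0])
for lo, hi in zip(edges[:-1], edges[1:]):
    sel = (logm >= lo) & (logm < hi)
    if sel.any():
        mean = s_mass[sel].mean()
        err = s_mass[sel].std() / np.sqrt(sel.sum())   # standard error, sigma/sqrt(N)
        print(f"{lo:.1f}-{hi:.1f}: S_N = {mean:.2f} +/- {err:.2f} (N={sel.sum()})")
```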
In looking at 5(b) however, the \(S_{N}\) values consistently increase with the stellar masses of the Milky Way analogs. These results, which are present in all of our Milky Way analog Figure 3: The \(V\)-band satellite luminosity function for our Milky Way analog sample under the M\({}_{*}\) with simulated R\({}_{\rm vir}\) criteria. Black line and dark grey region represent the mean and single standard deviation of our sample, while the outer light grey region encompasses our entire sample. We compare to the Milky Way and M31 (Geha et al., 2017), M94 (Smercina et al., 2018), M101 (Bennet et al., 2019), NGC4258 and NGC4631 (Carlsten et al., 2021). The dotted vertical line marks the approximate completeness limit for Romulus25. Our sample is in good agreement with these observations. samples, indicate that stellar mass exerts a large influence on satellite accumulation. The SAGA and ELVES surveys both observe this trend of satellite abundance increasing with host mass, though the trends they find are slightly weaker than ours (see Section 5.1 for discussion). Further, a study of seven nearby Milky Way-like systems with the Hyper Suprime-Cam on the Subaru telescope observes this trend as well (Nashimoto et al., 2022). The trend of satellite abundance with host mass was also found by Font et al. (2021) using the ARTEMIS suite of zoom-in simulations (Font et al., 2020), and by Engler et al. (2021) using the TNG50 simulation. ### Host Effects on Satellite Quenching In addition to studying the number of satellites hosted by our analogs, we also analyzed the quenched fraction of the satellites. When studying quenched fraction (\(f_{\rm q}\)), we only consider satellites with a stellar mass of at least \(10^{8}\) M\({}_{\odot}\), as Romulus25 may be subject to numerical over-quenching below this mass (Wright et al., 2021). A galaxy is considered quenched if its instantaneous specific star formation rate (sSFR) is below \(10^{-11}\) yr\({}^{-1}\). The instantaneous sSFR is calculated using the expected SFR from gas particles meeting the temperature and density thresholds for star formation given in Section 2.1. In Figure 6, we show our quenched fractions as a function of host \(K\)-band magnitude for the M\({}_{K}\)+Env. with 300 kpc satellite selection (our SAGA II comparison sample), and compare our results to data from the SAGA and ELVES surveys (Mao et al., 2021; Carlsten et al., 2022). For a direct Figure 4: The average number of satellites hosted by Milky Way analogs as a function of stellar mass and environment (distance to a Milky Way-sized or larger halo). The text in each box indicates the number of Milky Way analogs in that parameter space, as well as the standard deviation amongst the number of satellites. The left plot shows the M\({}_{\rm vir}\) with simulated R\({}_{\rm vir}\) sample, while the right plot shows the sample most analogous to the SAGA Survey. In both cases, the number of satellites appears to increase as stellar mass increases. Figure 5: The specific frequencies of the number of satellites hosted by Milky Way analogs normalized to their (a) distance to a Milky Way-sized or larger halo, D\({}_{MW+}\) and (b) stellar mass. The plots show the results for the M\({}_{\rm vir}\) (black) and M\({}_{*}\) (red) with simulated R\({}_{\rm vir}\) and SAGA II (blue) samples. Error bars represent the standard error within each bin (\(\sigma/\sqrt{N}\)). With the exception of some large outliers, the \(S_{N,\rm env}\) values do not show statistically significant trends. 
However, the \(S_{N,\rm mass}\) values show a clear positive trend. comparison, we only consider SAGA and ELVES satellites with stellar masses above \(10^{8}\) M\({}_{\odot}\). We note, however, that the SAGA and ELVES surveys' methods of determining quenching are different than ours; SAGA considers a satellite quenched if it lacks strong H\(\alpha\) emission (EW[H\(\alpha\)]\(<2\)A) and ELVES considers a satellite quenched if it exhibits an early-type morphology, i.e., not exhibiting clear star-forming structures such as blue clumps or dust lanes (this correlates with color as well, see Carlsten et al. (2021) for an in-depth discussion). Our sSFR quenched definition was shown (see Sharma et al., 2022) to yield a good match to galaxies identified observationally as quenched using EW[H\(\alpha\)]\(<2\)A and \(D_{\rm n}4000>0.6+0.1\log_{10}M_{*}\) (as in Geha et al., 2012). While all three samples show quenched fractions increasing with host brightness, our simulated sample exhibit slightly larger quenched fractions than the observational surveys, with the exception of the lowest-mass bin where the difference becomes significant (see Section 5.2 for discussion). The SAGA and ELVES data are in very good agreement up to the brightest magnitude bin, where the sample sizes are only one host for ELVES (M31) and 2 hosts for SAGA (NGC5792 & NGC7541). This agreement within the high mass satellite subset is interesting, as the SAGA and ELVES quenched fractions are quite different when considering their full samples. Carlsten et al. (2022) find that the quenched fractions of the Local Volume are significantly higher than the SAGA sample (their figures 11 & 12), particularly in the low mass satellite regime. In Figure 2(a), we see that the ELVES survey contains a much larger number of faint satellites when compared to SAGA, but also that ELVES hosts (along with those of Romulus25) probe fainter magnitudes as well. In studying the ARTEMIS simulations, Font et al. (2022) found that SAGA detection methods may be preferentially selecting star forming or recently quenched satellites near their completeness limit, missing a notable population of quenched dwarfs. This detection bias could explain the difference between SAGA and ELVES low mass satellites, both the abundance and quenched fraction. Following Font et al. (2022), in Figure 2(b) we apply an additional cut to our SAGA II comparison sample by requiring satellites to have \(\mu_{\rm eff,r}<25\) mag arcsec\({}^{-2}\). As in the ARTEMIS simulations, we find that this cut lowers the resultant quenched fractions, and brings our results (particularly the middle bins) into excellent agreement with SAGA and ELVES. To quantify the trend of quenched fraction with mass seen in Figure 6, and to search for a trend with environment, we again used the specific frequency equations 2 and 3 with N\({}_{\rm sat}\) replaced by \(f_{\rm q}\). Figure 7(a,b) shows our quenched fraction specific frequencies for the M\({}_{\rm vir}\) and M\({}_{*}\) with simulated R\({}_{\rm vir}\) samples, as well as our SAGA II comparison sample. We find that, as with the number of hosted satellites, the environmental trend in 7(a) becomes quite stochastic beyond D\({}_{MW+}\)\(\approx 2\) Mpc. How Figure 6: (a) Quenched fraction plotted against \(K\)-band magnitude for the M\({}_{K}+\)Env. sample from Romulus25 compared to SAGA II (Mao et al., 2021) and ELVES (Carlsten et al., 2022) data. 
(b) shows the same sample from Romulus25 with addition criteria of requiring satellites to have \(\mu_{\rm eff,r}<25\) mag arcsec\({}^{-2}\). As a direct comparison, the SAGA and ELVES data plotted here only contains satellites with stellar masses above \(10^{8}\) M\({}_{\odot}\). Error bars represent the standard error within each bin. Approximate stellar mass values are taken from a linear fit between M\({}_{K}\) and Log(M\({}_{*}\)/M\({}_{\odot}\)) for our Milky Way analogs. The Milky Way and M31 values are taken from ELVES, and also only consider satellites with stellar masses above \(10^{8}\) M\({}_{\odot}\). While all three samples show the quenched fractions increasing with host brightness, our Romulus25 sample exhibit slightly larger quenched fractions (particularly in the faintest bin) unless low surface brightness galaxies are removed. The SAGA and ELVES data are also in good agreement up to the brightest magnitude bin where the sample sizes are small. ever, in the region of \(\mathrm{D}_{MW+}<2\) Mpc (where the majority of our samples reside) the trend is fairly flat within errors, indicating no clear trend of satellite quenched fraction with environment. When looking at 7(b), we do see a trend of \(S_{N}\) with host mass, indicating that larger hosts are expected to yield higher quenched fractions, though we note this trend is not as strong as the one seen in Figure 5(b). Our lack of environmental trend is in agreement with Samuel et al. (2022) who, using the FIRE-2 simulations, find no significant difference in the quenched fraction of isolated and paired hosts. Further, our results agree with Engler et al. (2022) who, using the TNG50 run from the IllustrisTNG simulations, found that massive hosts exhibit systematically larger satellite quenched fractions, and that there is no difference between isolated and paired analogs when considering satellites within 300 kpc of their host (see Section 5.3 for discussion). Interestingly, when applying the satellite surface brightness criteria in Figure 7(c,d), we see that our trend of quenched fraction with host mass is erased. As the high-mass end of Figures 7 (b) and (d) are strongly affected by this surface brightness cut, it seems that the preferentially quenched satellites below this threshold are more common in higher mass hosts, which is consistent with Figure 7(b) implying a larger number of quenched galaxies in this regime. To investigate further, we looked at the quenched fraction values of our individual systems plotted against \(\mathrm{D}_{MW+}\). Figure 8 shows these values where pairs are identified as having \(\mathrm{D}_{MW+}<1\) Mpc. The figure shows data for our \(\mathrm{M}_{*}\) and \(\mathrm{M}_{K}\)+Env. samples (both \(\mathrm{R}_{\mathrm{vir}}\) and 300 kpc), where vertical lines identify analogs present in both samples with differing quenched fractions, and the colors of each point represent the number of satellites in Figure 7: The specific frequencies of the quenched fraction of satellites hosted by Milky Way analogs normalized to their (a) \(\mathrm{D}_{MW+}\) and (b) stellar mass. The plots show the results for the \(\mathrm{M}_{\mathrm{vir}}\) (black) and \(\mathrm{M}_{*}\) (red) with simulated \(\mathrm{R}_{\mathrm{vir}}\) and SAGA II (blue) samples. Subplots (c) and (d) require satellites to have \(\mu_{\mathrm{eff},r}<25\) mag arcsec\({}^{-2}\). Error bars represent the standard error within each bin. 
As with \(\mathrm{N}_{\mathrm{sat}}\), there is a strong trend with Milky Way analog mass and effectively no trend with environment. However, applying the surface brightness cut to our satellites removes the trend with mass. each system. We find that the average quenched fraction (denoted by the red diamonds, arbitrarily placed near 1 Mpc) is higher amongst pairs than isolated analogs, though the magnitude of this difference is not ubiquitous across our samples (see Section 5.3 for discussion). We also find that in the switch from 300 kpc to R\({}_{\rm vir}\) when identifying satellites, hosts typically have either the same or lower quenched fractions and a higher satellite count. This indicates that restricting satellites to within 300 kpc for this host range is more likely to exclude satellites, and that the satellites beyond 300 kpc are predominantly star forming; though this is still only when considering satellites with M\({}_{*}>10^{8}\) M\({}_{\odot}\). These results are consistent with those found in the TNG50 simulation (Engler et al., 2022). If we apply the surface brightness cut to satellites advocated by Font et al. (2022), we find that while the resultant averaged quenched fractions are lower, the difference between the isolated group and pairs remains relatively constant. ## 5 Discussion In Section 3, we discussed the various methods by which we identified Milky Way analogs and satellites. While shifting between these definitions has no effect on our conclusions, there are subtle impacts worth noting. ### Satellites within R\({}_{\rm vir}\) vs. 300 kpc In Figures 4 & 5 we showed host stellar mass to be the driving factor in satellite accumulation, but this trend is less prominent when using our SAGA II comparison sample. This appears to be the result of using 300 kpc to identify satellites, not the selection on \(K\)-band magnitude, as our M\({}_{K}\)+Env. with R\({}_{\rm vir}\) sample actually exhibits the strongest trend. In fact, identifying satellites via a 300 kpc selection rather than R\({}_{\rm vir}\) reduces the strength of the mass trend in all criteria (though the trend is still prominent). The weakening of the trends is the result of analogs in the high mass regime (where the trends manifest), which have virial radii larger than 300 kpc and exclude satellites in this shift to 300 kpc. This shift is in agreement with the ARTEMIS simulations (Font et al., 2021), in which satellite abundance trends strongly with host mass, but the trend is weakened when SAGA observation selection criteria are applied (M\({}_{r,\rm sat}<-12,~{}\mu_{\rm eff,r}<25\) mag arcsec\({}^{-2}\), and within 300 kpc of the host). When considering quenched fractions, our choice of satellite selection radius also seems to have a noticeable effect on our M\({}_{K}\)+Env. sample. Figure 8(b) shows that all but one host whose satellite count changed between selection radii had fewer satellites and higher quenched fractions in the 300 kpc sample, resulting in the mean quenched fractions of this sample being notably higher. Thus, within the context of satellites with M\({}_{*}>10^{8}\) Figure 8: Quenched fraction plotted against the distance to closest Milky Way halo or larger for the M\({}_{*}\) with simulated R\({}_{\rm vir}\) and SAGA II criteria. 
For points connected by lines, the upper-hemisphere points represent analogs from the sample with a Milky Way radius of 300 kpc (the SAGA II comparison sample), while lower-hemisphere points represent the matched analogs from the sample using the simulated virial radius (if there is no line, the values are identical regardless of host radius definition, or the given host is not present in both analog samples). The colors of each point represent the number of hosted satellites with stellar mass greater than \(10^{8}\) M\({}_{\odot}\). The samples are separated into “Pairs” and “Isolated” by whether the closest Milky Way or larger halo is within 1 Mpc, and the means of each sample are denoted by the red diamonds. Note that the placement of the red diamonds along the x-axis is arbitrary, and do not represent the mean distance of a satellite system from a host. For reference, the Milky Way - M31 distance is plotted with a vertical dashed line. The pairs exhibit a higher mean quenched fraction, and changing the satellite selection radius from 300 kpc to R\({}_{\rm vir}\) typically takes the quenched fraction to an equivalent or lower value. M\({}_{\odot}\), it seems applying a satellite cut of 300 kpc to the \(K\)-band magnitude analog selection is _primarily_ removing star-forming satellites from massive hosts, and biasing the global quenched fraction high. Since the 300 kpc selection results in a more centrally located satellite population, it is likely that these satellites had an earlier infall time and underwent more ram pressure stripping when compared to satellites near or beyond 300 kpc from the host. This effect is present in Figure 7 as well, wherein the SAGA II comparison sample exhibits the strongest trend of quenched fraction with host mass. These results are consistent with those in the TNG50 simulation (Engler et al., 2022), another large-volume, uniform-resolution simulation with comparable resolution to Romulus25. ### Quenched Fraction Discrepancy The shift from R\({}_{\rm vir}\) to 300 kpc, however, does not explain why our quenched fractions are higher than that of SAGA and ELVES (Figure 6); our SAGA II comparison sample uses 300 kpc as a selection radius, and our results indicate that if SAGA and ELVES had access to the virial radii of their hosts, their quenched fractions would be _lower_. Donnari et al. (2021) find that the adopted definition of quenching and using two-dimensional projected distances can both notably affect the resultant quenched fractions.Notably, the quenched fractions of Romulus25 are in better agreement with the observations when satellites with \(\mu_{\rm eff,r}<25\) mag arcsec\({}^{-2}\) are removed, in agreement with Font et al. (2022). The exception is the faintest bin, where a key factor may be the resolution of Romulus25. The lower resolution of the volume is unable to resolve a multiphase interstellar medium, i.e., there is no extremely dense gas (Tremmel et al., 2019, 2020, and references within). Thus, all of the gas is "puffy" and overly susceptible to ram pressure stripping and quenching. Dickey et al. (2021) found that large scale cosmological simulations over-quench isolated galaxies below M\({}_{*}=10^{9}\) M\({}_{\odot}\) when compared to SDSS. The authors attribute this to over-efficient feedback, which is typically tuned to recreate quenched fractions found in the Local Volume. In looking at Romulus25, Sharma et al. 
(2022), also found that isolated dwarfs exhibit a higher quiescent fraction when compared to observations, but that this can be entirely attributed to the presence of massive black holes and their feedback. Although we are not studying isolated dwarfs in this work, it is still likely that these feedback properties are influencing our results. We note, however, that there are only 6 satellites in our SAGA II comparison sample with black holes and M\({}_{*}>10^{8}\) M\({}_{\odot}\) so this does not notably affect our results. ### Isolated vs. Paired Hosts In Figure 7(a) we found that environment has no impact on an analog's quenched fraction, but Figure 8 suggests that pairs potentially exhibit higher quenched fractions than isolated analogs. There are some caveats preventing us from making a more robust statement about the environment's effect on quenched fraction. First, we are only considering satellites with M\({}_{*}>10^{8}\) M\({}_{\odot}\). Within our SAGA II sample, this is only \(\sim\)56% of our total satellite population and they are hosted by \(\sim\)72% of our Milky Way analogs with a non-zero satellite count (or \(\sim\)49% of all Milky Way analogs), so a large section of our population is being removed. Secondly, our simulation box size prevents us from having a large sample of highly-isolated analogs; only \(\sim 14\%\) of our SAGA II sample analogs have D\({}_{MW}>3\) Mpc. Finally, by ignoring low mass satellites, we are looking at the quenched fractions of several systems with few satellites (only 1 or 2 satellites). Around 43% of high-mass satellite-hosting analogs in our SAGA II sample contain only one satellite above our resolution limit, so their quenched fractions can only occupy the extremes of 0 and 1, and in Figure 8 they are being averaged with systems that have as many as 8 high-mass satellites. However, if we weight these systems according to host stellar mass (as a rough proxy of N\({}_{sat}\)), the average quenched fractions of pairs is still higher than that of isolated analogs, but only within the M\({}_{\rm vir}\) definition. These combined effects yield a sample that is lacking low-mass satellites (and thus the analogs' full satellite distributions) as well as highly-isolated hosts, making it difficult for us to extrapolate our results to the universe at large. Recently, Engler et al. (2022) (TNG50) and Samuel et al. (2022) (FIRE-2) found no difference in the satellite quenched fractions of paired and isolated hosts in their simulations. Further, Garrison-Kimmel et al. (2019) (FIRE-2) found that satellites of isolated Milky Way mass galaxies have nearly identical star formation histories to satellites of Milky Way analogs in Local Group-like pairs. However, these results were only considering satellites within 300 kpc of the host. In looking further out to 300-1000 kpc, Engler et al. (2022) find that paired, Local Group-like hosts exhibit significantly larger quenched fractions than their isolated counterparts. ## 6 Conclusions Using the Romulus25 simulation, we have created various samples of Milky Way analogs along with their satellite distributions. We explored the role of host mass and environment on satellite numbers and quenched fractions. Our results can be summarized as follows: * When testing various criteria for defining a Milky Way analog, from more theoretically-motivated (M\({}_{\rm vir}\)) to more observationally-motivated (M\({}_{\star}\) and SAGA-like) we find that the resultant samples do not fully overlap. 
Within the overlapping regions, galaxies may also be defined as analogs in one sample but not another due to environmental criteria (see Table 1 and Figure 1). * The number of satellites hosted by a Milky Way analog increases with host stellar mass, while environment has no statistically-significant impact (see Figures 4 & 5). This result is consistent across all of our samples. * The quenched fraction (for satellites with M\({}_{*}>10^{8}\) M\({}_{\odot}\)) of our analogs increases with host mass (see Figures 6(a) & 7(b)), but applying a surface brightness cut to satellites can erase this trend (see Figure 7(d)). * Being in a pair may yield higher satellite quenched fractions, but it is hard to draw statistically robust results given the small volume of Romulus25 and the fact that we can only study satellites down to M\({}_{*}=10^{8}\) M\({}_{\odot}\) to avoid numerical over-quenching. (see Figure 8). We find that the distributions of both the Milky Way and M31 are well explained by our sample, with M31 being at the highly populated edge of our sample space. This is in agreement with the SAGA and ELVES surveys, where ELVES found the Local Volume to be slightly more populated and exhibiting a steeper luminosity function when compared to the full SAGA sample. Additionally, we are in agreement with ELVES in finding that the stellar mass of a Milky Way analog seems to be the dominant factor in both the number of hosted satellites and the number of quenched satellites. Interestingly, in our study of quenching, we find that the SAGA and ELVES results are in good agreement for satellites with M\({}_{*}>10^{8}\) M\({}_{\odot}\), suggesting that their discrepancy in quenched fraction comes from lower mass satellites, which we are unable to probe here due to numerical effects that artificially quench simulated galaxies. However, our results support the notion put forward in Font et al. (2022) that SAGA is missing a large population of low surface brightness satellites near its detection limit that are preferentially quenched. JDV is supported by the Homer L. Dodge Fellowship from the University of Oklahoma. Long before the University of Oklahoma was established, the land on which the University now resides was the traditional home of the "Hasinais" Caddo Nation and Kirikiris Wichita & Affiliated Tribes; more information can be found here. CC was supported by the NSF under CAREER grant AST-1848107. AMB was partially supported by NSF grant AST-1813871. MT was supported by an NSF Astronomy and Astrophysics Postdoctoral Fellowship under award AST-2001810. The Romulus simulations are part of the Blue Waters sustained-petascale computing project, which is supported by the National Science Foundation (awards OCI-0725070 and ACI-1238993) and the state of Illinois. Blue Waters is a joint effort of the University of Illinois at Urbana-Champaign and its National Center for Supercomputing Applications. This work is also part 12 of a Petascale Computing Resource Allocations allocation support by the National Science Foundation (award number OAC-1613674). This work also used the Extreme Science and Engineering Discovery Environment (XSEDE), which is supported by National Science Foundation grant number ACI-1548562. ## Data Availability The data for this work was generated from a proprietary branch of the ChaNGa N-Body+SPH code (Menon et al., 2015). The public repository for ChaNGa is available on github [https://github.com/N-BodyShop/changa](https://github.com/N-BodyShop/changa)). 
Analysis was conducted using the publicly available softwares pynbody (Pontzen et al., 2013, [https://github.com/pynbody/pynbody](https://github.com/pynbody/pynbody)) and TANGOS (Pontzen & Tremmel, 2018, [https://github.com/pynbody/tangos](https://github.com/pynbody/tangos)). These results were generated from the Romulus25 cosmological simulation. The raw output from this simulation can be accessed upon request from Michael Tremmel ([email protected]), along with the TANGOS database files that were generated from these outputs and directly used for this analysis.
2308.15876
Refined renormalization group improvement for thermally resummed effective potential
We newly develop a renormalization group (RG) improvement for thermally resummed effective potentials. In this method, $\beta$-functions are consistently defined in resummed perturbation theories, so that order-by-order RG invariance is not spoiled after thermal resummation. With this improvement, scale dependences of phase transition quantities such as a critical temperature, which are known to be notoriously large at the one-loop order, are greatly reduced compared to calculations with the conventional $\overline{\text{MS}}$ scheme. By taking advantage of the RG invariance, we also devise a resummation method that can incorporate potentially harmful large logarithmic terms and temperature-dependent power corrections in a generic form. We point out that a resummed one-loop effective potential refined by the method can give results that agree with those obtained by resummed two-loop effective potentials within errors.
Koichi Funakubo, Eibun Senaha
2023-08-30T09:00:03Z
http://arxiv.org/abs/2308.15876v2
# Refined renormalization group improvement for thermally resummed effective potential

###### Abstract

We newly develop a renormalization group (RG) improvement for thermally resummed effective potentials. In this method, \(\beta\)-functions are consistently defined in resummed perturbation theories, so that order-by-order RG invariance is not spoiled after thermal resummation. With this improvement, scale dependences of phase transition quantities such as a critical temperature, which are known to be notoriously large at the one-loop order, are greatly reduced compared to calculations with the conventional \(\overline{\rm MS}\) scheme. By taking advantage of the RG invariance, we also devise a resummation method that can incorporate potentially harmful large logarithmic terms and temperature-dependent power corrections in a generic form. We point out that a resummed one-loop effective potential refined by the method can give results that agree with those obtained by resummed two-loop effective potentials within errors.

## I Introduction

Investigating phase transitions in the early Universe is expected to shed light on new physics searches in particle physics and cosmology. Much attention has been drawn to gravitational wave generation from first-order phase transitions, which could provide useful information on high-energy physics that cannot be obtained by terrestrial experiments. Furthermore, if the electroweak phase transition (EWPT) is first order, a cosmic baryon asymmetry can be explained by the electroweak baryogenesis (EWBG) mechanism [1]. While nonperturbative approaches such as lattice calculations would be robust, perturbative treatments are still useful for probing the vast parameter space of new physics models because of their lower computational cost. One of the vexing problems at finite temperature is infrared divergences originating from the zero Matsubara frequency mode, which could spoil the validity of perturbative expansions even for small coupling constants at high temperature [2; 3]. It is standard practice to reorganize the perturbative expansion so as to incorporate the dominant temperature corrections into the unperturbed part, which is referred to as _thermal resummation_ [4; 5; 6]. One-loop effective potentials with the resummation schemes of Refs. [4; 5; 6] have been mostly employed in studies of EWPT (for other approaches, see, e.g., Refs. [7; 8]). In perturbative analyses of EWPT, a renormalization scheme dependence inevitably enters the calculations, and its magnitude indicates the impact of higher-order terms that are missing in the calculations. If the dependence is too large to make quantitative studies reliable, a renormalization group equation (RGE) can be used to improve the calculations [9; 10; 11; 12]. This can be done by replacing the parameters appearing in the effective potential with the corresponding running parameters derived from \(\beta\)-functions which are perturbatively defined at some fixed order. One should note that the derivation of the \(\beta\)-functions follows from the scale independence of bare parameters together with a specific renormalization scheme such as the \(\overline{\text{MS}}\) scheme [13; 14]. Once the effective potential is made scale independent at some order, one can incorporate a series of higher-order terms utilizing its scale invariance. As demonstrated in Refs. [10; 11] at zero temperature, an \(\ell\)-loop effective potential with \((\ell+1)\)-loop \(\beta\)-functions can resum up to \(\ell\)th-to-leading logarithmic terms. 
At nonzero temperature, however, such an RG improvement of the effective potential would not be straightforward due to the aforementioned thermal resummation. Indeed, unlike the zero-temperature case, the order-by-order RG invariance is lost, and higher-order terms are required to recover the RG invariance up to a certain order in the coupling constants. For example, the RG invariance of the resummed one-loop effective potential requires part of the two-loop effective potential. Explicit calculations using a high-temperature expansion can be found in Ref. [6] (for a recent study, see Ref. [7]). Another difference from the zero-temperature case is that, in addition to the potentially large logarithmic terms, temperature-dependent power corrections could also be sizable at higher temperature, as described above. Thus, the commonly used log-resummation scheme is not always appropriate. In light of this situation, the main issues to be clarified are as follows:

* How do we construct an order-by-order RG invariant effective potential at finite temperature?
* How do we incorporate both logarithmic terms and temperature-dependent corrections in a general manner?

In our recent letter paper [15], we proposed a novel RG improvement method for the resummed effective potentials to answer the above questions. In our method, \(\beta\)-functions are defined in the resummed perturbation theory instead of using those in the \(\overline{\text{MS}}\) scheme, and as a result, the RG invariance is maintained order by order after the thermal resummation. In addition to this, the resummation by RG is generalized to include the whole loop functions that contain both logarithmic terms and thermal corrections. Owing to this general form, the method reduces to the log-resummation scheme in the zero-temperature limit and to the hard thermal loop resummation in the high-temperature limit. Due to the length limitation of the letter [15], only the main results are shown there and some details are omitted. In this paper, we fill the gap in Ref. [15] by giving all the details, including lengthy but useful expressions, and adding more numerical examples for further clarification of our method. One of the main findings is that the resummed one-loop effective potential in our scheme has much less scale dependence than that in the \(\overline{\text{MS}}\) scheme thanks to the order-by-order RG invariance, though an exceptional region can, in principle, be found due to an accidental cancellation between RG-noninvariant terms and truncation errors in the \(\overline{\text{MS}}\) scheme. If one takes two-loop corrections into account, both schemes perform equally well and are better than the one-loop result in our scheme. This is because the dominant RG-noninvariant terms in the \(\overline{\text{MS}}\) scheme are cancelled by the two-loop corrections. As a by-product of the RG invariance in our scheme, a series of higher-order terms can be incorporated into the resummed effective potentials. In the case of a single-field theory such as the \(\phi^{4}\) theory, we can show that the resummed one-loop effective potential in our method correctly reproduces the dominant two-loop corrections. Even in a two-scalar field theory, our numerical studies show that \(v_{C}/T_{C}\) obtained by the resummed one-loop effective potential with our two-loop \(\beta\)-functions falls within the two-loop order scale uncertainties, where \(T_{C}\) denotes a critical temperature and \(v_{C}\) is a vacuum expectation value (VEV) at \(T_{C}\). 
Therefore, our RG-improved effective potential would be particularly useful when the complete two-loop effective potential is not available. The paper is organized as follows. In Sec. II, \(\beta\)-functions of masses and couplings as well as \(\gamma\)-functions of fields are generally derived by employing the dimensional regularization. In Sec. III, as the first application, we demonstrate the RG invariance of the effective potentials up to the two-loop order in the \(\phi^{4}\) theory and make a comparison between the \(\overline{\text{MS}}\) and our schemes analytically and numerically. We also present how to incorporate higher-order terms based on the RG invariance at some fixed order. An application of our method to the \(\phi^{4}\) theory with an additional real scalar field is conducted in Sec. IV. The numerical results of first-order phase transitions are presented in this section. Sec. V is devoted for the conclusion. Some detailed expressions are given in Appendices. ## II \(\beta\)-functions in the resummed theory Let us collectively denote arbitrary fields and couplings as \(\phi_{i}(x)\) and \(g_{k}\) and boson and fermion masses as \(m_{a}^{2}\), and \(M_{\alpha}\), respectively, and a vacuum energy is denoted as \(\Omega\). We use dimensional regularization in which the spacetime dimension is analytically continued to the \(d=4-\epsilon\) dimension [16]. In this case, the mass dimensions of the bare couplings \(g_{Bk}\) become \(\sigma_{k}\epsilon\), where \(\sigma_{k}=1\) for scalar quartic couplings and \(\sigma_{k}=1/2\) for gauge and Yukawa couplings, respectively, while that of the bare vacuum energy \(\Omega_{B}\) is \(d\). Before discussing our scheme, we begin by deriving \(\beta\)-functions in mass-independent regularization schemes such as MS and \(\overline{\text{MS}}\)[13; 14]. The bare parameters are decomposed into the renormalized parts and \(\epsilon\) poles: \[g_{Bk}\mu^{-\sigma_{k}\epsilon} =g_{k}+\sum_{n=1}^{\infty}\frac{a_{k}^{(n)}(g)}{\epsilon^{n}}, \tag{1}\] \[m_{Ba}^{2} =\left(\delta_{ab}+\sum_{n=1}^{\infty}\frac{b_{ab}^{(n)}(g)}{ \epsilon^{n}}\right)m_{b}^{2},\] (2) \[M_{B\alpha} =\left(\delta_{ab}+\sum_{n=1}^{\infty}\frac{B_{ab}^{(n)}(g)}{ \epsilon^{n}}\right)M_{\beta},\] (3) \[Z_{ij} =\delta_{ij}+\sum_{n=1}^{\infty}\frac{c_{ij}^{(n)}(g)}{\epsilon^ {n}},\] (4) \[\Omega_{B}\mu^{\epsilon} =\Omega+\sum_{n=1}^{\infty}\frac{\omega_{n}(g)}{\epsilon^{n}}. \tag{5}\] From those expressions, one can find the \(\beta\)-functions of each parameter as \[\beta_{k}= \lim_{\epsilon\to 0}\mu\frac{dg_{k}}{d\mu}=-\sigma_{k}a_{k}^{(1)}+ \sum_{\ell}a_{k,\ell}^{(1)}\sigma_{\ell}g_{\ell}, \tag{6}\] \[m_{a}^{2}\beta_{m_{a}^{2}}= \lim_{\epsilon\to 0}\mu\frac{dm_{a}^{2}}{d\mu}=\sum_{k,b}b_{ab,k}^{(1)} \sigma_{k}g_{k}m_{b}^{2},\] (7) \[M_{\alpha}\beta_{M_{\alpha}}= \lim_{\epsilon\to 0}\mu\frac{dM_{\alpha}}{d\mu}=\sum_{k,\beta}B_{ \alpha\beta,k}^{(1)}\sigma_{k}g_{k}M_{\beta},\] (8) \[\gamma_{ij}= \lim_{\epsilon\to 0}\mu\frac{Z_{ij}}{d\mu}=-\frac{1}{2}\sum_{k}c_{ ij,k}^{(1)}\sigma_{k}g_{k},\] (9) \[\beta_{\Omega}= \lim_{\epsilon\to 0}\mu\frac{d\Omega}{d\mu}=\omega_{1}, \tag{10}\] where \(a_{k,l}^{(1)}=da_{k}^{(1)}/dg_{\ell}\), \(b_{ab,k}^{(1)}=db_{ab}^{(1)}/dg_{k}\), \(B_{\alpha\beta,k}^{(1)}=db_{\alpha\beta}^{(1)}/dg_{k}\), and \(c_{ij,k}^{(1)}=dc_{ij}^{(1)}/dg_{k}\). For illustrative purpose, we focus exclusively on scalar theories throughout this paper. 
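As a quick illustration of Eq. (6), the short symbolic check below feeds in the familiar one-loop \(1/\epsilon\) residue of a single scalar quartic coupling (the same residue appears as \(\delta^{(1)}\lambda\) in the \(\phi^{4}\) example of the next section) and recovers the standard one-loop \(\beta\)-function; this is a minimal sketch rather than part of the derivation.

```python
import sympy as sp

lam = sp.symbols('lambda_', positive=True)

a1 = 3*lam**2 / (16*sp.pi**2)    # one-loop 1/epsilon residue a_lambda^(1) of the quartic coupling
sigma = 1                        # sigma_k = 1 for a scalar quartic coupling

# Eq. (6) for a single coupling: beta = -sigma * a^(1) + (d a^(1)/d lambda) * sigma * lambda
beta_lam = -sigma*a1 + sp.diff(a1, lam) * sigma * lam
print(sp.simplify(beta_lam))     # -> 3*lambda_**2/(16*pi**2), the usual one-loop result
```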
Following the work of Parwani [4], we reorganize the Lagrangian as \[\mathcal{L}_{B}=\mathcal{L}_{R}+\mathcal{L}_{\text{CT}}=\left[\mathcal{L}_{R} -\frac{1}{2}\Sigma_{a}(T)\phi_{a}^{2}\right]+\left[\mathcal{L}_{\text{CT}}+ \frac{1}{2}\Sigma_{a}(T)\phi_{a}^{2}\right], \tag{11}\] where \(\Sigma_{a}(T)\) are dominant thermal corrections to the masses of the scalar fields \(\phi_{a}\). \(\Sigma_{a}(T)\) is supposed to be obtained by gap equations or other methods in advance. At the leading order, one would have \(\Sigma_{a}(T)=\mathcal{O}(g_{i}T^{2})\), where \(g_{i}\) are scalar quartic couplings. Though this reorganization does not change the bare Lagrangian, \(\Sigma_{a}(T)\) appearing in the first square brackets are regarded as the zeroth order in this new perturbation theory while those in the second ones are the part of the counterterm (CT) which are one-order higher in this perturbative expansion (called _thermal counterterm_ hereafter). In our method, the bare mass parameters of the scalar fields in resummed perturbation theory are defined as \[m_{Ba}^{2}=\left(\delta_{ab}+\sum_{n=1}^{\infty}\frac{b_{ab}^{(n)}(g)}{\epsilon^{ n}}\right)m_{b}^{2}+\sum_{n=1}^{\infty}\frac{\tilde{b}_{ab}^{(n)}(g)}{ \epsilon^{n}}\Sigma_{b}(T), \tag{12}\] where the last terms correspond to temperature-dependent divergences. Such terms must be absent in all-order calculations since the divergence structure of the theory must not be altered by the thermal resummation. At a fixed order in the resummed perturbation theory, however, one would encounter the temperature-dependent divergences, as seen in the actual effective potential calculations shown in the next section. Even though the _new_ divergences are expected to be cancelled by higher-order terms, the order-by-order renormalizability would be generally unclear. On the other hand, if CTs are defined in the form of Eq. (12) at each order in the resummed perturbation theory, the renormalization would be more apparent. This is the strategy we adopt here. The rearrangement of the perturbative expansion seems to mess up the order-by-order RG invariance. While the scaling of \(\Sigma_{a}(T)\) may be nontrivial,1 it should be scale independent for full-order calculations, and thus the scaling of the resummed effective potential would not be altered. As far as \(\mathcal{L}_{B}\) remains unchanged and the infrared sensitive region is ameliorated, the couplings at some fixed scale \(g_{i}(\mu_{\rm fixed})\) such as \(g_{i}(T)\) could also be the choices for \(\Sigma_{a}(T)\). In our work, we adopt such \(\Sigma_{a}(T)\) and call \(d\Sigma_{a}(T)/d\mu=0\)_consistency condition_. With this condition, we prove the order-by-order RG invariance of the resummed effective potentials up to the two-loop level. Following the same step as in the \(\overline{\rm MS}\) scheme but with the consistency condition, it follows that Footnote 1: Since \(\Sigma_{a}(T)\) is preset by the gap equation, etc, its scaling should be determined in that framework, not by the perturbation theory considered here. \[m_{a}^{2}\beta_{m_{a}^{2}}=\sum_{k,b}\big{(}b_{ab,k}^{(1)}m_{b}^{2}+\tilde{b}_ {ab,k}^{(1)}\Sigma_{b}\big{)}\sigma_{k}g_{k}. \tag{13}\] The thermal resummation also generates temperature-dependent divergences in the vacuum energy. However, the relation \(\beta_{\Omega}=\omega_{1}\) is not altered once the consistency condition is imposed. Furthermore, \(\beta\)-functions of dimensionless couplings remain the same as those in the \(\overline{\rm MS}\) scheme. 
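To see how Eq. (13) operates in the resummed theory, the following sketch uses the one-loop pole residues of the \(\phi^{4}\) mass parameter, anticipating the explicit counterterms worked out in the next section: the temperature-dependent residue multiplying \(\Sigma\) feeds directly into the modified mass \(\beta\)-function. This is a consistency check under those assumed residues, not an independent derivation.

```python
import sympy as sp

lam, nu2, Sigma = sp.symbols('lambda_ nu2 Sigma', positive=True)

# 1/eps residues read off from nu_B^2 = nu^2 (1 + lam/(16 pi^2 eps)) - Sigma * lam/(16 pi^2 eps)
b1       =  lam / (16*sp.pi**2)   # b^(1): coefficient multiplying nu^2
b1_tilde = -lam / (16*sp.pi**2)   # btilde^(1): temperature-dependent coefficient multiplying Sigma
sigma_k  = 1                      # quartic coupling

# Eq. (13): nu^2 beta_{nu^2} = (db^(1)/dlam * nu^2 + dbtilde^(1)/dlam * Sigma) * sigma_k * lam
nu2_beta = (sp.diff(b1, lam)*nu2 + sp.diff(b1_tilde, lam)*Sigma) * sigma_k * lam
print(sp.factor(nu2_beta))        # -> lambda*(nu2 - Sigma)/(16 pi^2), the Sigma-modified running
```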
## III \(\phi^{4}\) theory We first consider the \(\phi^{4}\) theory to explain our scheme and show the order-by-order RG invariance up to the two-loop levels. The bare Lagrangian is given by \[\mathcal{L}_{B}=\frac{1}{2}\partial_{\mu}\Phi_{B}\partial^{\mu}\Phi_{B}-V_{B}( \Phi_{B}),\quad V_{B}(\Phi_{B})=\Omega_{B}-\frac{\nu_{B}^{2}}{2}\Phi^{2}+\frac{ \lambda_{B}}{4!}\Phi_{B}^{4}. \tag{14}\] As shown below, the vacuum energy \(\Omega\) is also needed to show the RG invariance of the effective potentials. We decompose \(\mathcal{L}_{B}\) into the renormalized Lagrangian (\(\mathcal{L}_{R}\)) and CT (\(\mathcal{L}_{\rm CT}\)), and subtract and add a dominant thermal correction \(\Sigma(T)\) in each part. The explicit form of the resummed Lagrangian is given in Appendix A.1. We derive the effective potentials up to the two-loop level in this resummed perturbation theory. Let us denote the classical background field as \(\varphi\). The tree-level effective potential is \[V_{0}(\varphi)=\Omega+\frac{1}{2}\left(-\nu^{2}+\Sigma(T)\right)\varphi^{2}+ \frac{\lambda\mu^{\epsilon}}{4!}\varphi^{4}, \tag{15}\] where \(\Sigma(T)\) must be regarded as the zeroth-order term. The field-dependent mass is given by \[M^{2}=\frac{\partial^{2}V_{0}}{\partial\varphi^{2}}=m^{2}+\Sigma(T), \tag{16}\] with \(m^{2}=-\nu^{2}+\lambda\mu^{\epsilon}\varphi^{2}/2\). Consequently, the resummed one-loop effective potential takes the form \[\mu^{\epsilon}V_{1}(\varphi)=\frac{M^{4}}{4(16\pi^{2})}\left(-\frac{2}{ \epsilon}+\ln\frac{M^{2}}{\bar{\mu}^{2}}-\frac{3}{2}+\mathcal{O}(\epsilon) \right), \tag{17}\] where \(\bar{\mu}=\sqrt{4\pi e^{-\gamma_{E}}}\mu\simeq 2.66\mu\) with \(\gamma_{E}\) being the Euler constant. As mentioned in Sec. II, the temperature-dependent divergence appears in the fixed-order calculation. In our renormalization scheme, the whole divergences in Eq. (17) are removed by CTs defined in Eqs. (12)-(13), leading to \[\delta^{(1)}\Omega=\frac{1}{\epsilon}\frac{(\nu^{2}-\Sigma)^{2}}{32\pi^{2}}, \quad\delta^{(1)}\nu^{2}=\frac{1}{\epsilon}\frac{\lambda(\nu^{2}-\Sigma)}{16 \pi^{2}},\quad\delta^{(1)}\lambda=\frac{1}{\epsilon}\frac{3\lambda^{2}}{16\pi ^{2}}. \tag{18}\] Therefore, CTs of the dimensionful parameters are modified by the thermal resummation. With those CTs, the bare mass parameters \(\Omega_{B}\) and \(\nu_{B}\) are expressed as \[\Omega_{B}\mu^{\epsilon} =\Omega+\delta^{(1)}\Omega=\Omega+\frac{1}{\epsilon}\frac{(\nu^{ 2}-\Sigma)^{2}}{32\pi^{2}}, \tag{19}\] \[\nu_{B}^{2} =Z_{\Phi}^{-1}(\nu^{2}+\delta^{(1)}\nu^{2})=\nu^{2}\left(1+\frac{ 1}{\epsilon}\frac{\lambda}{16\pi^{2}}\right)-\Sigma\left(\frac{1}{\epsilon} \frac{\lambda}{16\pi^{2}}\right), \tag{20}\] where \(Z_{\Phi}=1\) at the one-loop level. From our \(\beta\)-function formulas (10) and (13), it follows that (for the derivation, see Appendix A.1) \[\beta_{\Omega}^{(1)} = \frac{(\nu^{2}-\Sigma)^{2}}{32\pi^{2}}, \tag{21}\] \[\nu^{2}\beta_{\nu^{2}}^{(1)} = \frac{\lambda(\nu^{2}-\Sigma)}{16\pi^{2}}. \tag{22}\] In the limit of \(\Sigma=0\), our \(\beta\)-functions are reduced to those in the \(\overline{\rm MS}\) scheme. Therefore, the difference between the two schemes could be sizable when \(\Sigma\) is comparable to \(\nu^{2}\). If one uses CTs in the \(\overline{\rm MS}\) scheme, the temperature-dependent divergences would remain at this order. As pointed out in Ref. [17] (see also Ref. [18]), higher-order loop corrections are needed to cancel such divergences.2 Footnote 2: One could consider a resummetion method shown in Ref. 
[19; 20], in which the bare Lagrangian is decomposed into \[\mathcal{L}_{B}=\mathcal{L}_{R}+\mathcal{L}_{\rm CT} = \left[\frac{1}{2}(\partial_{\mu}\Phi)^{2}+\frac{1}{2}M^{2}\Phi^{2} -\frac{\lambda}{4!}\Phi^{4}+\frac{1}{2}\Sigma\Phi^{2}\right] \tag{23}\] \[+\left[\frac{A}{2}(\partial_{\mu}\Phi)^{2}+\frac{B}{2}(M^{2}- \Sigma)\Phi^{2}-C\frac{\lambda}{4!}\Phi^{4}+D(M^{2}-\Sigma)^{2}\right],\] where \(M^{2}=\nu^{2}-\Sigma\) and \(A\), \(B\), \(C\), and \(D\) are CTs in the \(\overline{\rm MS}\) scheme at zero temperature. Orders (denoted as \(\delta\)) of \(M^{2}\) and \(\Sigma\) in the resummed perturbation theory are regarded as \(M^{2}=\mathcal{O}(\delta^{0})\) and \(\Sigma=\mathcal{O}(\delta)\). With this order counting, the one-loop CTs for the mass and vacuum energy are reduced to \(\frac{B}{2}M^{2}\Phi^{2}+DM^{4}\), which are essentially the same as our CTs, and the order-by-order renormalization works. where the high-temperature expansion (HTE) is used in the second line, and \(\ln\alpha_{B}=2\ln 4\pi-2\gamma_{E}\simeq 3.91\). The last term in \(V_{1}(\varphi)\) comes from the thermal CT which avoids the double counting of \(\Sigma(T)\varphi^{2}/2\). Now we move on to the two-loop analysis. As is the one-loop case, all the divergences appearing in the two-loop effective potential are removed by CTs defined in Eqs. (78)-(79). Correspondingly, the \(\beta\)-functions of the theory parameters in our scheme are found to be \[\gamma_{\Phi}^{(2)} =\frac{\lambda^{2}}{12(16\pi^{2})^{2}}, \tag{30}\] \[\beta_{\Omega}^{(2)} =\frac{(\nu^{2}-\Sigma)\Sigma}{16\pi^{2}},\] (31) \[\nu^{2}\beta_{\nu^{2}}^{(2)} =\frac{\lambda^{2}(-\nu^{2}+\Sigma)}{(16\pi^{2})^{2}}+\frac{ \lambda\Sigma}{16\pi^{2}}+2\nu^{2}\gamma_{\Phi}^{(2)}\] \[=\frac{\lambda^{2}}{(16\pi^{2})^{2}}\left(-\frac{5\nu^{2}}{6}+ \Sigma\right)+\frac{\lambda\Sigma}{16\pi^{2}},\] (32) \[\beta_{\lambda}^{(2)} =-\frac{6\lambda^{3}}{(16\pi^{2})^{2}}+4\lambda\gamma_{\Phi}^{( 2)}=\frac{1}{(16\pi^{2})^{2}}\left(-\frac{17\lambda^{3}}{3}\right). \tag{33}\] Similarly to the one-loop order, only \(\beta\)-functions of the dimentionful parameters are modified by the thermal resummation. One can see that there exists \(\lambda\Sigma/(16\pi^{2}\nu^{2})\) in \(\beta_{\nu^{2}}^{(2)}\) which is the same as the temperature-dependent term in Eq. (22) but the opposite sign. At first sight, they appear to be canceled out in \(\beta_{\nu^{2}}=\beta_{\nu^{2}}^{(1)}+\beta_{\nu^{2}}^{(2)}\). As shown the RG invariance of the effective potential using HTE, however, one has to regard \(\lambda\Sigma/(16\pi^{2}\nu^{2})\) in \(\beta_{\nu^{2}}^{(2)}\) as one-order higher correction than that in \(\beta_{\nu^{2}}^{(1)}\), implying that \(\lambda\) appearing in the former is one-order lower than that in the latter. We also note that \(\beta_{\Omega}^{(2)}\) is nonzero due to the thermal correction, which is another difference from the \(\overline{\rm MS}\) scheme. After removing all the divergences by CTs, the two-loop corrections to the resummed effective potential are cast into the form \[V_{2}(\varphi)=\frac{\lambda}{8}\bar{I}^{2}(M)-\frac{\lambda^{2}\varphi^{2}}{ 12}\tilde{H}(M)-\frac{1}{2}\Sigma(T)\bar{I}(M), \tag{34}\] where the loop functions \(\bar{I}(M)\) and \(\tilde{H}(M)\) are defined in Eqs. (73) and (76), respectively. The last term in Eq. (34) corresponds to the thermal CT at this order, and by which the double counting of \(\Sigma(T)\) corrections and linear-like terms in \(\varphi\) such as \(\mathcal{O}((M^{2})^{1/2}T^{3})\) are avoided [6]. 
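The loop functions entering Eq. (29) and Eq. (34) are evaluated with high-temperature expansions of the type quoted above. As a purely numerical illustration of how well such expansions work, the sketch below compares the textbook bosonic thermal function \(J_{B}(y^{2})=\int_{0}^{\infty}dx\,x^{2}\ln\bigl(1-e^{-\sqrt{x^{2}+y^{2}}}\bigr)\) with its leading high-temperature terms; note that the normalization and scheme constants of the \(I_{B}\), \(\bar{I}\), and \(\tilde{H}\) functions defined in the appendix may differ from this textbook convention, so the snippet is meant only as a qualitative guide.

```python
import numpy as np
from scipy.integrate import quad

def J_B(y2):
    """Textbook bosonic thermal function: int_0^inf dx x^2 ln(1 - exp(-sqrt(x^2 + y2)))."""
    integrand = lambda x: x**2 * np.log(1.0 - np.exp(-np.sqrt(x**2 + y2)))
    val, _ = quad(integrand, 0.0, 40.0, limit=200)
    return val

def J_B_HTE(y2):
    """Leading high-temperature terms; the y^4 log(y^2) piece and higher orders are dropped."""
    y = np.sqrt(y2)
    return -np.pi**4/45.0 + np.pi**2*y2/12.0 - np.pi*y**3/6.0

for y in (0.2, 0.5, 1.0):
    print(f"y = {y:.1f}:  J_B = {J_B(y**2):+.4f},  HTE = {J_B_HTE(y**2):+.4f}")
```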
### RG invariance of the thermally resummed effective potential

Now that we have obtained the renormalized effective potentials and \(\beta\)-functions in our scheme at one- and two-loop orders, we show their RG invariance one by one. The effective potential satisfies [9; 10; 11; 12] \[0=\mu\frac{dV_{\text{eff}}}{d\mu}=\left[\mu\frac{\partial}{\partial\mu}+\nu^{2}\beta_{\nu^{2}}\frac{\partial}{\partial\nu^{2}}+\beta_{\lambda}\frac{\partial}{\partial\lambda}-\gamma_{\Phi}\varphi\frac{\partial}{\partial\varphi}+\beta_{\Omega}\frac{\partial}{\partial\Omega}\right]V_{\text{eff}}\equiv\mathcal{D}V_{\text{eff}}. \tag{35}\] We first show the RG invariance of the resummed one-loop effective potential. Applying the derivative operator \(\mathcal{D}\) to the potential (25), one gets \[\mathcal{D}V_{0}|_{\text{one-loop}} =\beta_{\Omega}^{(1)}-\frac{\nu^{2}}{2}\beta_{\nu^{2}}^{(1)}\varphi^{2}+\frac{1}{4!}\beta_{\lambda}^{(1)}\varphi^{4}=\frac{M^{4}}{32\pi^{2}}, \tag{36}\] \[\mathcal{D}V_{1}|_{\text{one-loop}} =\mu\frac{\partial V_{1}}{\partial\mu}=-\frac{M^{4}}{32\pi^{2}}, \tag{37}\] where the consistency condition \(\mathcal{D}\Sigma=0\) is used. Thus, one obtains \(\mathcal{D}(V_{0}+V_{1})|_{\text{one-loop}}=0\). We note that this invariance is due to the modified \(\beta\)-functions. In other words, the \(\overline{\text{MS}}\) \(\beta\)-functions cannot maintain the RG invariance at this order. Let us consider the errors of both schemes. In our scheme, we have the truncation error, which starts at the two-loop order, \(\mathcal{O}(1/(16\pi^{2})^{2})\). In the \(\overline{\text{MS}}\) scheme, on the other hand, an additional error comes from the RG-noninvariant terms, which are found to be \[\mathcal{D}(V_{0}+V_{1})_{\text{one-loop}}^{\overline{\text{MS}}} =\frac{-(2m^{2}+\Sigma)\Sigma}{32\pi^{2}}+\mathcal{O}\left(\frac{1}{(16\pi^{2})^{2}}\right)\] \[\to\frac{-\lambda\varphi^{2}\Sigma}{32\pi^{2}}+\mathcal{O}\left(\frac{1}{(16\pi^{2})^{2}}\right), \tag{38}\] where \(\varphi\)-independent terms are dropped after the right arrow assuming \(\Sigma=\lambda T^{2}/24\). Therefore, in the \(\overline{\text{MS}}\) scheme, there could be a cancellation between the two different errors depending on model parameters. We will exemplify such a case below. However, we emphasize that such a reduced scale dependence is merely accidental and has no theoretical justification. It would also be instructive to see the above RG invariance by use of HTE. In this demonstration, we focus exclusively on the \(\varphi\)-dependent terms and omit the vacuum energy \(\Omega\). 
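Before turning to this high-temperature demonstration, the cancellation in Eqs. (36)-(37) can also be checked symbolically. The sketch below uses the zero-temperature part of \(V_{1}\) read off from Eq. (17) after the \(1/\epsilon\) poles are removed, together with the one-loop \(\beta\)-functions (21)-(22) of our scheme (with \(\gamma_{\Phi}^{(1)}=0\) and \(\mathcal{D}\Sigma=0\)); the thermal piece of \(V_{1}\) carries no explicit \(\bar{\mu}\) dependence and is therefore irrelevant at this order.

```python
import sympy as sp

phi, lam, nu2, Sig, mub, Om = sp.symbols('varphi lambda_ nu2 Sigma mubar Omega', positive=True)

M2 = -nu2 + Sig + lam*phi**2/2                                       # Eq. (16)
V0 = Om + sp.Rational(1, 2)*(-nu2 + Sig)*phi**2 + lam*phi**4/24      # Eq. (15)
V1 = M2**2/(64*sp.pi**2) * (sp.log(M2/mub**2) - sp.Rational(3, 2))   # finite part of Eq. (17)

# One-loop beta-functions of the resummed scheme, Eqs. (21)-(22); gamma_Phi^(1) = 0
beta_Omega   = (nu2 - Sig)**2 / (32*sp.pi**2)
nu2_beta_nu2 = lam*(nu2 - Sig) / (16*sp.pi**2)
beta_lam     = 3*lam**2 / (16*sp.pi**2)

DV0 = beta_Omega*sp.diff(V0, Om) + nu2_beta_nu2*sp.diff(V0, nu2) + beta_lam*sp.diff(V0, lam)
DV1 = mub*sp.diff(V1, mub)   # only the explicit scale dependence counts at this order

print(sp.simplify(sp.expand(DV0 + DV1)))   # -> 0, i.e. Eq. (36) cancels Eq. (37)
```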
Let us denote a running parameter as \(\bar{\cal X}(t)\) with \(t=\ln(\bar{\mu}/\bar{\mu}_{0})\), where \(\bar{\mu}\) is an arbitrary scale and \(\bar{\mu}_{0}\) is its initial value. \(\bar{\cal X}(t)\) can be expanded as \[\bar{\cal X}(t) = \bar{\cal X}(0)+\frac{d\bar{\cal X}(t)}{dt}\bigg{|}_{t=0}t+\frac{ 1}{2}\frac{d^{2}\bar{\cal X}(t)}{dt^{2}}\bigg{|}_{t=0}t^{2}+\cdots \tag{40}\] \[= \bar{\cal X}(0)+\left(\beta_{\cal X}^{(1)}+\beta_{\cal X}^{(2)} \right)\big{|}_{t=0}t+\frac{1}{2}\frac{d\beta_{\cal X}^{(1)}(t)}{dt}\bigg{|}_{ t=0}t^{2}+\cdots.\] Using this expansion, \(\bar{\nu}^{2}(t)\) and \(\bar{\lambda}(t)\) to \({\cal O}(t)\) are, respectively, given by \[-\bar{\nu}^{2}(t)+\Sigma \simeq (-\nu_{0}^{2}+\Sigma)\left(1+\frac{\lambda_{0}}{16\pi^{2}}t\right), \tag{41}\] \[\bar{\lambda}(t) \simeq \lambda_{0}\left(1+\frac{3\lambda_{0}}{16\pi^{2}}t\right), \tag{42}\] where \(\nu_{0}^{2}=\bar{\nu}^{2}(t=0)\), and \(\lambda_{0}=\bar{\lambda}(t=0)\). As noted in Sec. II, \(\Sigma\) is given by the parameters at \(t=0\). Using those expressions, \(V_{\rm eff}^{\rm HTE}(\varphi)\) is rewritten as \[V_{\rm eff}^{\rm HTE}(\varphi)\simeq\frac{1}{2}\biggl{[}-\bar{ \nu}^{2}(T)+\Sigma+\frac{\lambda(-\nu^{2}+\Sigma)c_{B}}{16\pi^{2}}\biggr{]} \varphi^{2}-\frac{T(M^{2})^{3/2}}{12\pi}+\frac{1}{4!}\left[\bar{\lambda}(T)+ \frac{3\lambda^{2}c_{B}}{16\pi^{2}}\right]\varphi^{4},\] where \(\bar{\nu}^{2}(T)\) and \(\bar{\lambda}(T)\) are the running parameters evaluated at \(T\) evolved from the scale \(\bar{\mu}\). Therefore, \(V_{\rm eff}^{\rm HTE}(\varphi)\) is manifestly RG invariant, where the explicit scale dependences of \(\bar{\mu}\) are absorbed into the running parameters. This is not the case if one uses the \(\beta\)-functions in the \(\overline{\rm MS}\) scheme. Suppose that \(\Sigma=\lambda T^{2}/24\), Eq. (39) is cast into the form \[V_{\rm eff}^{\rm HTE}(\varphi)\simeq\frac{1}{2}\biggl{[} - \bar{\nu}^{2}(T)|_{\overline{\rm MS}}+\frac{\lambda(-\nu^{2}+ \Sigma)c_{B}}{16\pi^{2}}+\frac{\lambda T^{2}}{24}\left(1+\frac{\lambda}{32\pi ^{2}}\ln\frac{T^{2}}{\bar{\mu}^{2}}\right)\bigg{]}\varphi^{2} \tag{44}\] \[-\frac{T(M^{2})^{3/2}}{12\pi}+\frac{1}{4!}\left[\bar{\lambda}(T) +\frac{3\lambda^{2}c_{B}}{16\pi^{2}}\right]\varphi^{4},\] where \(\bar{\nu}^{2}(T)|_{\overline{\rm MS}}=\bar{\nu}^{2}(T)|_{\Sigma=0}\). Note that the explicit \(\bar{\mu}\)-dependence appearing in the \(\lambda T^{2}/24\) term of the first line cannot be absorbed into \(\bar{\lambda}\) since the coefficient of \(\lambda\ln(T^{2}/\bar{\mu}^{2})/32\pi^{2}\) is different from the right one in Eq. (42), reflecting the RG noninvariance in the \(\overline{\text{MS}}\) scheme. Actually, this RG-noninvariant term is also inferred from Eq. (38). As shown below, the RG noninvariant term would become the RG-invariant form if one adds two-loop corrections [6]. Now we discuss the RG invariance at the two-loop level. 
Applying the derivative operator \(\mathcal{D}\) to the resummed effective potentials (26), (27), and (34), respectively, each contribution at the two-loop level is calculated as \[\mathcal{D}V_{0}|_{\text{two-loop}} =\beta_{\Omega}^{(2)}-\frac{\nu^{2}}{2}\beta_{\nu^{2}}^{(2)}\varphi ^{2}+\frac{1}{4!}\beta_{\lambda}^{(2)}\varphi^{4}+(\nu^{2}-\Sigma)\gamma_{ \Phi}^{(2)}\varphi^{2}-\frac{1}{3!}\gamma_{\Phi}^{(2)}\varphi^{4}\] \[=-\frac{\lambda^{2}M^{2}}{2(16\pi^{2})^{2}}\varphi^{2}-\frac{ \Sigma M^{2}}{16\pi^{2}}-\gamma_{\Phi}^{(2)}\Sigma\varphi^{2}, \tag{45}\] \[\mathcal{D}V_{1}|_{\text{two-loop}} =\left[\nu^{2}\beta_{\nu^{2}}^{(1)}\frac{\partial}{\partial\nu^{ 2}}+\beta_{\lambda}^{(1)}\frac{\partial}{\partial\lambda}-\gamma_{\Phi}^{(2)} \varphi\frac{\partial}{\partial\varphi}\right]V_{1}\] \[=\frac{\lambda(M^{2}+\lambda\varphi^{2})}{2(16\pi^{2})}\bar{I}(M )+\gamma_{\Phi}^{(2)}\Sigma\varphi^{2},\] (46) \[\mathcal{D}V_{2}|_{\text{two-loop}} =\mu\frac{\partial V_{2}}{\partial\mu}=\frac{\lambda^{2}M^{2} \varphi^{2}}{2(16\pi^{2})^{2}}-\frac{\lambda(M^{2}+\lambda\varphi^{2})}{2(16 \pi^{2})}\bar{I}(M)+\frac{\Sigma M^{2}}{16\pi^{2}}. \tag{47}\] Summing up, one verifies that \(\mathcal{D}(V_{0}+V_{1}+V_{2})|_{\text{two-loop}}=0\). We here emphasize again that the order-by-order RG invariance holds by virtue of the \(\beta\)-functions in our scheme. As we have done in the one-loop analysis, it is enlightening to discuss the RG invariance in terms of the high-temperature expanded effective potential. Before doing so, we obtain the expression of \(\nu^{2}\) up to \(\mathcal{O}(t^{2})\). From the \(t\)-expansion formula (40), it follows that \[\bar{\nu}^{2}\simeq\nu_{0}^{2}+\frac{\lambda_{0}(\nu_{0}^{2}-\Sigma)}{16\pi^{ 2}}t+\frac{2\lambda_{0}^{2}(\nu_{0}^{2}-\Sigma)}{(16\pi^{2})^{2}}t^{2}+\frac{ \lambda_{0}^{2}(-\nu_{0}^{2}+\Sigma)}{(16\pi^{2})^{2}}t+\frac{\lambda_{0} \Sigma}{16\pi^{2}}t+2\nu_{0}^{2}\gamma_{\Phi}^{(2)}t. \tag{48}\] Note that \(-\lambda_{0}\Sigma t/16\pi^{2}\) in the second term is cancelled by \(+\lambda_{0}\Sigma t/16\pi^{2}\) in the fifth term, which originates from \(\beta_{\nu^{2}}^{(2)}\). The result would be different if one cancels the whole \(\lambda\Sigma/(16\pi^{2}\nu^{2})\) terms in \(\beta_{\nu^{2}}=\beta_{\nu^{2}}^{(1)}+\beta_{\nu^{2}}^{(2)}\) from the beginning. By the cancellation of the \(\lambda_{0}\Sigma t/16\pi^{2}\) terms, the \(\mathcal{O}(1/(16\pi^{2}))\) term coincides with the corresponding term in the \(\overline{\text{MS}}\) scheme. However, the \(\mathcal{O}(1/(16\pi^{2})^{2})\) terms are still different from those in the \(\overline{\text{MS}}\) scheme due to the presence of \(\Sigma\). From this demonstration, one could infer that the difference between the two schemes would get smaller at the two-loop level as long as the two-loop corrections are moderate. We will quantify this statement in our numerical analysis. As for the quartic coupling \(\lambda\) and the scalar field \(\varphi\), their running parameters up to \(\mathcal{O}(t^{2})\) are found to be \[\bar{\lambda} \simeq\lambda_{0}+\frac{3\lambda_{0}^{2}}{16\pi^{2}}t+\frac{9 \lambda_{0}^{3}}{(16\pi^{2})^{2}}t^{2}-\frac{6\lambda_{0}^{3}}{(16\pi^{2})^{2}}t +4\lambda_{0}\gamma_{\Phi}^{(2)}t, \tag{49}\] \[\bar{\varphi} =\exp\left[-\int_{0}^{t}dt^{\prime}\ \gamma_{\Phi}(t^{\prime}) \right]\varphi_{0}\simeq\big{(}1-\gamma_{\Phi}^{(2)}t\big{)}\varphi_{0}, \tag{50}\] where \(\varphi_{0}=\bar{\varphi}(t=0)\). 
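The structure of Eq. (49) can be made explicit with a short symbolic check: the exact solution of the one-loop running of \(\bar{\lambda}\), quoted later as Eq. (55), reproduces upon Taylor expansion the first and third terms of Eq. (49), i.e. the leading-logarithmic tower, while the remaining \(\mathcal{O}(t/(16\pi^{2})^{2})\) pieces originate from the two-loop \(\beta\)-function. A minimal sketch:

```python
import sympy as sp

t, lam0 = sp.symbols('t lambda0', positive=True)

# Eq. (55): exact solution of the one-loop running d lambda/dt = 3 lambda^2 / (16 pi^2)
lam_run = lam0 / (1 - 3*lam0*t/(16*sp.pi**2))

# It indeed satisfies the one-loop RG equation ...
print(sp.simplify(sp.diff(lam_run, t) - 3*lam_run**2/(16*sp.pi**2)))   # -> 0

# ... and its Taylor expansion reproduces the leading-log tower of Eq. (49)
print(sp.series(lam_run, t, 0, 3))
# lambda0 + 3*lambda0**2*t/(16*pi**2) + 9*lambda0**3*t**2/(256*pi**4) + O(t**3)
```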
The resummed two-loop effective potential in the high-temperature limit is \[V_{\rm eff}^{\rm HTE}(\varphi)=V_{0}(\varphi)+V_{1}^{\rm HTE}( \varphi)+V_{2}^{\rm HTE}(\varphi)\] \[\simeq\frac{1}{2}\Bigg{[}-\bigg{\{}\nu^{2}\left(1+\frac{\lambda} {32\pi^{2}}\ln\frac{T^{2}}{\bar{\mu}^{2}}\right)-\frac{\lambda^{2}(-\nu^{2}+ \Sigma)}{2(16\pi^{2})^{2}}\ln^{2}\frac{T^{2}}{\bar{\mu}^{2}}+\frac{\lambda^{2} (-\nu^{2}+\Sigma)}{2(16\pi^{2})^{2}}\ln\frac{T^{2}}{\bar{\mu}^{2}}\bigg{\}}\] \[\qquad+\frac{\lambda T^{2}}{24}\left(1+\frac{3\lambda}{32\pi^{2} }\ln\frac{T^{2}}{\bar{\mu}^{2}}\right)+\frac{\lambda^{2}T^{2}}{24(16\pi^{2})} \left(2\ln\frac{M^{2}}{T^{2}}+1+c_{H}\right)\] \[\qquad+\bigg{\{}\frac{\lambda(-\nu^{2}+\Sigma)}{16\pi^{2}}+\frac {2\lambda^{2}(-\nu^{2}+\Sigma)}{(16\pi^{2})^{2}}\ln\frac{T^{2}}{\bar{\mu}^{2} }\bigg{\}}c_{B}\Bigg{]}\varphi^{2}\] \[-\frac{T}{12\pi}\Bigg{[}(M^{2})^{3/2}+\frac{3}{4(16\pi^{2})} \Big{\{}\lambda(M^{2})^{3/2}+\lambda^{2}(M^{2})^{1/2}\varphi^{2}\Big{\}}\ln \frac{T^{2}}{\bar{\mu}^{2}}\Bigg{]}\] \[+\frac{1}{4!}\Bigg{[}\bigg{\{}\lambda+\frac{3\lambda^{2}}{32\pi^{ 2}}\ln\frac{T^{2}}{\bar{\mu}^{2}}+\frac{9\lambda^{3}}{4(16\pi^{2})}\ln^{2} \frac{T^{2}}{\bar{\mu}^{2}}-\frac{3\lambda^{3}}{(16\pi^{2})^{2}}\ln\frac{T^{2} }{\bar{\mu}^{2}}\bigg{\}}+\frac{3\lambda^{2}}{16\pi^{2}}\left(1+\frac{3\lambda }{16\pi^{2}}\ln\frac{T^{2}}{\bar{\mu}^{2}}\right)c_{B}\Bigg{]}\varphi^{4}, \tag{51}\] where the terms without explicit \(\bar{\mu}\) dependences are only retained up to \(\mathcal{O}(1/(16\pi^{2}))\). One can see that the numerical coefficient of \(\lambda\ln(T^{2}/\bar{\mu}^{2})/32\pi^{2}\) in the parenthesis multiplied by the factor \(\lambda T^{2}/24\) in the second line becomes 3 owing to the addition of the two-loop correction, and as a result, this term obeys the one-loop RG equation (42) [6]. We also note that all the explicit \(\bar{\mu}\) dependences in Eq. (51) are absorbed into the running parameters given in Eqs. (48), (49), and (50), resulting in \[V_{\rm eff}^{\rm HTE}(\varphi)\simeq\frac{1}{2}\bigg{[}-\bar{\nu} ^{2}(T)+\frac{\bar{\lambda}(T)T^{2}}{24}+\frac{\lambda^{2}T^{2}}{24(16\pi^{2}) }\left(2\ln\frac{M^{2}}{T^{2}}+1+c_{H}\right)+\frac{\bar{\lambda}(T)(-\bar{ \nu}^{2}(T)+\Sigma)c_{B}}{16\pi^{2}}\bigg{]}\bar{\varphi}^{2}\] \[\qquad-\frac{T\big{(}\bar{M}^{2}(T)\big{)}^{3/2}}{12\pi}+\frac{1} {4!}\left[\bar{\lambda}(T)+\frac{3\bar{\lambda}^{2}(T)c_{B}}{16\pi^{2}} \right]\bar{\varphi}^{4}, \tag{52}\] which is manifestly RG invariant. This \(V_{\rm eff}^{\rm HTE}\) is common in the \(\overline{\rm MS}\) and our schemes. In the \(\overline{\rm MS}\) scheme, however, the explicit \(\bar{\mu}\) dependences would remain in the \(\mathcal{O}(\lambda^{2}\Sigma/(16\pi^{2})^{2})\) terms, and higher-order terms would be necessary to restore the RG invariance. Now we present numerical results on the \(\bar{\mu}\) dependences of the RG-improved effective potentials up to the two-loop level. For practical calculations, we rewrite it as \[\bar{V}_{\rm eff}(\bar{\varphi};t)=\bar{V}_{0}(\bar{\varphi};t)+\bar{V}_{1}(\bar{ \varphi};t)+\bar{V}_{2}(\bar{\varphi};t), \tag{53}\] where \(t=\ln(\bar{\mu}/\bar{\mu}_{0})\) with \(\bar{\mu}_{0}\) representing an initial scale. Hereafter, the barred quantities \(\bar{\Omega}\), \(\bar{\nu}^{2}\), \(\bar{\lambda}\), and \(\bar{\varphi}\) are defined as the running parameters which are functions of \(t\). 
For example, the running parameters obtained by the one-loop \(\beta\) functions are, respectively, given by \[\bar{\varphi} = \varphi\exp\left(-\int_{0}^{t}dt^{\prime}\gamma_{\Phi}^{(1)}(t^{\prime})\right)=\varphi, \tag{54}\] \[\bar{\lambda} = \frac{\lambda}{1-\frac{3\lambda}{16\pi^{2}}t}, \tag{55}\] \[\bar{\nu}^{2}-\Sigma = \frac{\nu^{2}-\Sigma}{\left[1-\frac{3\lambda}{16\pi^{2}}t\right]^{1/3}}, \tag{56}\] \[\bar{\Omega} = \Omega+\frac{(\nu^{2}-\Sigma)^{2}}{2\lambda}\left[1-\left(1-\frac{3\lambda}{16\pi^{2}}t\right)^{1/3}\right], \tag{57}\] where the unbarred parameters are defined at \(t=0\).

Figure 1: Resummed one- and two-loop effective potentials with RG improvement in the \(\overline{\rm MS}\) scheme (left) and our scheme (right) at \(T=250\). The reference point of the RG running is \(\bar{\mu}_{0}=90\), where we take \(v=50\) and \(m_{\phi}=90\) as the inputs, which gives \(\lambda\simeq 10\). Note that \(\Sigma(T)=\lambda T^{2}/24\). All the dimensionful parameters are given in units of an arbitrary mass scale.

In our numerical study, we choose a parameter set for which \(\Sigma(T)=\lambda(\bar{\mu}_{0})T^{2}/24\) is enhanced, so as to make the difference between the \(\overline{\rm MS}\) and our schemes larger. One of the examples is shown in Fig. 1, where the resummed effective potentials with the RG improvement in the \(\overline{\rm MS}\) scheme (left) and our scheme (right) are plotted at \(T=250\). The reference point of the RG running is set to \(\bar{\mu}_{0}=90\), and we take \(v=50\) and \(m_{\phi}=90\) as the inputs, which corresponds to \(\lambda\simeq 10\). All the dimensionful parameters are given in units of an arbitrary mass scale. '1-loop' denotes \(\bar{V}_{\rm eff}(\bar{\varphi};t)=\bar{V}_{0}(\bar{\varphi};t)+\bar{V}_{1}(\bar{\varphi};t)\) with the one-loop \(\beta\)-functions in the cases of \(\bar{\mu}=T\) (blue, dotted) and \(\bar{\mu}=5T\) (blue, dashed), while '2-loop' represents \(\bar{V}_{\rm eff}(\bar{\varphi};t)=\bar{V}_{0}(\bar{\varphi};t)+\bar{V}_{1}(\bar{\varphi};t)+\bar{V}_{2}(\bar{\varphi};t)\) with the two-loop \(\beta\)-functions in the cases of \(\bar{\mu}=T\) (red, solid) and \(\bar{\mu}=5T\) (red, dot-dashed). One can see that the \(\bar{\mu}\) dependence of \(\bar{V}_{\rm eff}\) at the one-loop level in our scheme is generally smaller than that in the \(\overline{\rm MS}\) scheme. This is due to the modified \(\beta\)-functions in our scheme. At the two-loop level, on the other hand, no significant differences between the two schemes are observed, and the \(\bar{\mu}\) dependences of \(\bar{V}_{\rm eff}\) are even smaller than in the one-loop case in our scheme. As mentioned below Eq. (51), the RG invariance in the \(\overline{\rm MS}\) scheme is restored up to \({\cal O}(\lambda^{2}T^{2})\) in the high-temperature limit, which explains our numerical results well. The \(\bar{\mu}\) dependence of the effective potential at the one-loop order obtained in our scheme is smaller than that obtained in the \(\overline{\rm MS}\) scheme, in the sense that the latter has a larger error in \({\cal D}(V_{0}+V_{1})|_{\rm one\mbox{-}loop}\), as shown in Eqs. (36)-(38). On the other hand, the opposite seems to hold for the small-\(\varphi\) region in Fig. 1. To see it easily, we magnify that region and display it in Fig. 2. This seeming contradiction is caused by an accidental cancellation between the one-loop level error and the genuine two-loop level one for the chosen parameters. In this numerical analysis, \(\bar{\mu}=(1-5)T\) is considered to see the \(\bar{\mu}\) dependence of \(\bar{V}_{\text{eff}}(\varphi;t)\).
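For later numerical use, the one-loop running in Eqs. (54)-(57) can be coded directly. The following is a minimal sketch (the function and variable names are ours); it also guards against the Landau pole of Eq. (55) discussed below.

```python
import numpy as np

SIXTEEN_PI2 = 16.0 * np.pi**2

def run_one_loop(nu2, lam, Omega, Sigma, t):
    """Running parameters of Eqs. (54)-(57) in the resummed scheme.
    Inputs are the values at t = 0; Sigma = lambda*T^2/24 is held fixed."""
    D = 1.0 - 3.0 * lam * t / SIXTEEN_PI2   # vanishes at the Landau pole t_LP = 16 pi^2 / (3 lambda)
    if D <= 0.0:
        raise ValueError("t is at or beyond the Landau pole")
    lam_bar   = lam / D                                                            # Eq. (55)
    nu2_bar   = Sigma + (nu2 - Sigma) / D**(1.0 / 3.0)                             # Eq. (56)
    Omega_bar = Omega + (nu2 - Sigma)**2 / (2.0 * lam) * (1.0 - D**(1.0 / 3.0))    # Eq. (57)
    return nu2_bar, lam_bar, Omega_bar      # the field does not run at one loop, Eq. (54)

# illustrative values only (not the parameter set used in the figures)
print(run_one_loop(nu2=4.0e3, lam=10.0, Omega=0.0, Sigma=2.6e4, t=0.5))
```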
The next question is which value of \(\bar{\mu}\) is preferable. At the one-loop order, for example, it would be useful if there exists a \(\bar{\mu}\) that gives results similar to those at the two-loop order. The answer to this question is of practical importance when the two-loop effective potential is not at hand. For this purpose, we refine the one-loop order \(\bar{V}_{\text{eff}}(\varphi;t)\) by judiciously choosing \(t\) in the next subsection.

### Incorporation of higher-order terms

Following the same spirit as the RG improvement proposed in Refs. [10; 11], we incorporate higher-order terms by utilizing the RG invariance of the effective potential at a given order. Here, we focus exclusively on the case of \(\bar{V}_{\text{eff}}(\bar{\varphi};t)=\bar{V}_{0}(\bar{\varphi};t)+\bar{V}_{1}(\bar{\varphi};t)\). If \(\bar{V}_{\text{eff}}(\bar{\varphi},t)\) were exactly \(t\) independent, one could choose any \(t\) as long as it stays below the Landau pole discussed below, and a \(\varphi\)-dependent choice \(t(\varphi)\) would then make it possible to incorporate a series of dominant higher-order terms.3

Footnote 3: When \(t\) is \(\varphi\) dependent, the running vacuum energy \(\bar{\Omega}\) also becomes \(\varphi\) dependent, so that one cannot simply subtract it from the effective potential.

On its trajectory in \(t\)-\(\varphi\) space, \(\bar{V}_{\text{eff}}(\bar{\varphi},t)\) is always flat in the \(t\) direction because of the \(t\) invariance. As stated above Eq. (38), however, the \(t\) invariance of \(\bar{V}_{\text{eff}}(\bar{\varphi},t)\) is violated by the two-loop corrections. It is thus preferable to choose \(t\) such that the truncation error is minimized. With this consideration, we determine \(t\) by the condition \[\frac{d\bar{V}_{\text{eff}}(\bar{\varphi};t)}{dt}=\frac{\partial\bar{V}_{\text{eff}}(\bar{\varphi};t)}{\partial t}=0+\frac{1}{2}\frac{\partial\bar{M}^{2}}{\partial t}\bar{I}(\bar{M})=0, \tag{58}\] with \[\frac{\partial\bar{M}^{2}}{\partial t}=\frac{\bar{\lambda}(\bar{M}^{2}+\bar{\lambda}\varphi^{2})}{16\pi^{2}}, \tag{59}\] where the one-loop \(\beta\)-functions are used. From Eq. (58), it follows that \[t(\varphi)=\frac{8\pi^{2}}{\bar{M}^{2}}\bar{I}(\bar{M})_{t=0}=\frac{1}{2}\left[\left(\ln\frac{\bar{M}^{2}}{\bar{\mu}_{0}^{2}}-1\right)+\frac{16T^{2}}{\bar{M}^{2}}I^{\prime}_{B}(\bar{A}^{2})\right]. \tag{60}\] On the trajectory given by this \(t(\varphi)\), \(\bar{V}_{\text{eff}}(\bar{\varphi},t)\) would still be locally flat in the \(t\) direction, implying that \(t(\varphi)\) in Eq. (60) yields the minimal violation of the \(t\) invariance of \(\bar{V}_{\text{eff}}(\bar{\varphi},t)\) among all choices of \(t(\varphi)\). In addition to this approximate \(t\) invariance, this \(t(\varphi)\) copes with two potentially harmful corrections, namely large logarithmic corrections and temperature-dependent power corrections, in a general way. At zero temperature, Eq. (60) reduces to \(t(\varphi)=\ln(\bar{m}^{2}/e\bar{\mu}_{0}^{2})/2\), which is connected to the well-known log-resummation scheme \(t=\ln(\bar{m}^{2}/\bar{\mu}_{0}^{2})/2\) [10; 11] by changing our initial scale \(\mu_{0}\) to \(\mu_{0}/\sqrt{e}\). At high temperature, on the other hand, Eq. (60) incorporates temperature-dependent power corrections arising from \[\bar{I}^{\rm HTE}(\bar{M})_{t=0}\simeq\frac{T^{2}}{12}-\frac{(\bar{M}^{2})^{1/2}T}{4\pi}+\frac{\bar{M}^{2}}{16\pi^{2}}\ln\frac{\alpha_{B}T^{2}}{\bar{\mu}_{0}^{2}}+\cdots. \tag{61}\] Therefore, \(t(\varphi)\) given in Eq.
(60) seems to be the best choice for the thermally resummed one-loop effective potential. One thing that needs to be noted here is that the truncation error in Eq. (58) is estimated under the assumption that the one-loop \(\beta\) functions are used for the running parameters. Instead of this assumption, we could consider the two-loop order running parameters. In this case, Eq. (58) is modified to \[\frac{d\bar{V}_{\rm eff}(\bar{\varphi};t)}{dt}=0+\frac{1}{2}\frac{\partial\bar {M}^{2}}{\partial t}\bar{I}(\bar{M})-\frac{\bar{\lambda}^{2}\bar{M}^{2}\bar{ \varphi}^{2}}{2(16\pi^{2})^{2}}-\frac{\bar{M}^{2}\Sigma}{16\pi^{2}}=0, \tag{62}\] where \[\frac{\partial\bar{M}^{2}}{\partial t}=\frac{\bar{\lambda}(\bar{M}^{2}+\bar{ \lambda}\bar{\varphi}^{2}-\Sigma)}{16\pi^{2}}-\frac{\bar{\lambda}^{2}[5(\bar{M }^{2}+3\bar{\lambda}\bar{\varphi}^{2})+\Sigma]}{6(16\pi^{2})^{2}}. \tag{63}\] Therefore, the condition of \(\bar{I}(\bar{M})=0\) cannot eliminate the whole truncation error when using the two-loop \(\beta\) functions. Although we could, in principle, determine \(t(\varphi)\) by the condition (62), we still adopt \(t(\varphi)\) in Eq. (60) throughout our study due to the benefit described above, i.e., the link to the ordinary log-resummation at zero temperature and \(\mathcal{O}(T^{2})\) mass resummation at high temperature. Now we scrutinize if the resummed one-loop effective potential \(\bar{V}_{\rm eff}(\bar{\varphi};t)=\bar{V}_{0}(\bar{\varphi};t)+\bar{V}_{1}( \bar{\varphi};t)\) with the \(t\)-\(\varphi\) relation (60) correctly reproduce the fixed-order two-loop effective potential. For this purpose, \(\bar{V}_{\rm eff}(\varphi;t)\) is expanded in powers of \(t\), \[\bar{V}_{\rm eff}(\bar{\varphi};t)=\bar{V}_{\rm eff}(\varphi;0)+\frac{\partial \bar{V}_{\rm eff}(\bar{\varphi};t)}{\partial t}\bigg{|}_{t=0}t+\frac{1}{2} \frac{\partial^{2}\bar{V}_{\rm eff}(\bar{\varphi};t)}{\partial t^{2}}\bigg{|}_ {t=0}t^{2}+\cdots. \tag{64}\] Let us consider the following two cases \[\bar{V}_{\rm eff}^{(1)}(\bar{\varphi};t(\varphi))\equiv\bar{V}_{0 }(\bar{\varphi};t(\varphi))+\bar{V}_{1}(\bar{\varphi};t(\varphi))\text{ with the one-loop $\beta$ functions}, \tag{65}\] \[\bar{V}_{\rm eff}^{(2)}(\bar{\varphi};t(\varphi))\equiv\bar{V}_{0 }(\bar{\varphi};t(\varphi))+\bar{V}_{1}(\bar{\varphi};t(\varphi))\text{ with the two-loop $\beta$ functions}. \tag{66}\] Expanding \(\bar{V}_{\rm eff}^{(1)}(\varphi;t(\varphi))\) as the \(t\) series, one can find \[\bar{V}_{\rm eff}^{(1)}(\bar{\varphi};t(\varphi))=\bar{V}_{\rm eff}^{(1)}(\varphi ;0)+\frac{\lambda(M^{2}+\lambda\varphi^{2})}{8M^{2}}\bar{I}^{2}(M)_{t=0}. \tag{67}\] The \(\bar{I}^{2}(M)\) terms are exactly the same as those in \(V_{2}(\varphi)\) shown in Eq. (34). Similarly, the \(t\) series of \(\bar{V}_{\rm eff}^{(2)}(\varphi;t)\) becomes \[\bar{V}_{\rm eff}^{(2)}(\bar{\varphi};t(\varphi)) =\bar{V}_{\rm eff}^{(2)}(\varphi;0)+\frac{\lambda(M^{2}+\lambda \varphi^{2}-\Sigma)}{8M^{2}}\left(1+\frac{\Sigma}{M^{2}}\right)\bar{I}^{2}(M)_ {t=0}\] \[\quad-\frac{1}{2}\left(\frac{\lambda^{2}\varphi^{2}}{32\pi^{2}}+ \Sigma\right)\bar{I}(M)_{t=0}. \tag{68}\] In this case, \(\bar{V}_{\rm eff}^{(2)}(\bar{\varphi};t(\varphi))\) contains not only \(\mathcal{O}(\bar{I}^{2}(M))\) but the \(\mathcal{O}(\bar{I}(M))\) terms appearing in \(V_{2}(\varphi)\). One should note that \(\Sigma\) terms in \(\mathcal{O}(\bar{I}^{2}(M))\), which are not present in \(V_{2}(\varphi)\), are the consequence of the use of the two-loop \(\beta\) functions in \(\bar{V}_{\rm eff}(\bar{\varphi};t(\varphi))\). 
From the viewpoint of its RG invariance, such terms can be regarded as higher-order terms and can therefore be dropped, as we have done in the proof of the RG invariance given in subsection III.1. In this sense, \(\bar{V}_{\rm eff}^{(2)}(\bar{\varphi};t(\varphi))\) correctly resums terms up to \(\mathcal{O}(\bar{I}(M))\). This is parallel to the leading and next-to-leading logarithmic resummations in the scheme \(t(\varphi)=\ln(\bar{m}^{2}/\bar{\mu}_{0}^{2})/2\) at zero temperature [10; 11].

Before closing this subsection, we discuss the upper limit of \(t\). As seen from Eq. (55), \(t(\varphi)\) could hit the Landau pole \(t_{\rm LP}=16\pi^{2}/3\lambda\simeq 52.6/\lambda\), at which \(\bar{\lambda}\) diverges. From the condition \(t(\varphi)<t_{\rm LP}\), it follows that \[\frac{\bar{I}(\bar{M})_{t=0}}{\bar{M}^{2}}<\frac{2}{3\lambda}. \tag{69}\] When the \(\lambda\times\)logarithmic terms are large and/or the temperature is significantly high, this condition is not satisfied. In fact, although the parameter set adopted in Fig. 1 clearly illustrates the differences between the \(\overline{\rm MS}\) and our schemes, \(\lambda\simeq 10\) and \(T=250\) are too large to satisfy the condition (69). In addition, since our interest is the case of a first-order phase transition, as required for gravitational wave generation and EWBG, we extend the \(\phi^{4}\) theory and apply our \(t(\varphi)\) to it in the next section.

## \(\phi^{4}\) theory with an additional real scalar

One of the simplest extensions of the \(\phi^{4}\) theory is to add another real scalar field. The bare Lagrangian we consider is defined by \[\mathcal{L}_{B} =\sum_{i=1,2}\frac{1}{2}\partial_{\mu}\Phi_{Bi}\partial^{\mu}\Phi_{Bi}-V_{0}(\Phi_{B1},\Phi_{B2}), \tag{70}\] \[V_{0}(\Phi_{B1},\Phi_{B2}) =\Omega_{B}+\frac{\nu_{B1}^{2}}{2}\Phi_{B1}^{2}+\frac{\nu_{B2}^{2}}{2}\Phi_{B2}^{2}+\frac{\lambda_{B1}}{4!}\Phi_{B1}^{4}+\frac{\lambda_{B2}}{4!}\Phi_{B2}^{4}+\frac{\lambda_{B3}}{4}\Phi_{B1}^{2}\Phi_{B2}^{2}, \tag{71}\] where the two \(\mathbb{Z}_{2}\) symmetries \(\Phi_{B1}\rightarrow-\Phi_{B1}\) and \(\Phi_{B2}\rightarrow-\Phi_{B2}\) are imposed to make our analysis simpler. As we have done in the \(\phi^{4}\) theory, we subtract and add the dominant temperature corrections to the masses of \(\Phi_{1}\) and \(\Phi_{2}\) (denoted as \(\Sigma_{1}\) and \(\Sigma_{2}\)) in \(\mathcal{L}_{R}\) and \(\mathcal{L}_{\text{CT}}\), respectively. Their explicit forms are given in Appendix A.2. For the sake of further simplicity, we also assume that only \(\Phi_{1}\) develops a VEV and investigate the thermal phase transition in the \(\Phi_{1}\) direction. We define the classical constant background fields and their fluctuation fields as \(\Phi_{i}(x)=\varphi_{i}+\phi_{i}(x)\), and the VEV of \(\Phi_{1}\) is denoted as \(v\). After removing all the divergences of the resummed one-loop effective potential by the CTs in Eq. (101) and improving it by the RGE (35), one arrives at4

Footnote 4: We suppress the \(\varphi_{2}\) dependence by the assumption that only \(\Phi_{1}\) has a nonzero VEV.
\[\bar{V}_{\text{eff}}(\bar{\varphi}_{1},t)=\bar{V}_{0}(\bar{\varphi}_{1},t)+ \bar{V}_{1}(\bar{\varphi}_{1},t), \tag{72}\] where \[\bar{V}_{0}(\bar{\varphi}_{1},t) =\bar{\Omega}+\frac{1}{2}\left(\bar{\nu}_{1}^{2}+\Sigma_{1}(T) \right)\bar{\varphi}_{1}^{2}+\frac{\bar{\lambda}_{1}}{4!}\bar{\varphi}_{1}^{4}, \tag{73}\] \[\bar{V}_{1}(\bar{\varphi}_{1},t) =\sum_{i=1,2}\frac{\bar{M}_{i}^{4}}{4(16\pi^{2})}\left(\ln\frac{ \bar{M}_{i}^{2}}{e^{2t}\bar{\mu}_{0}^{2}}-\frac{3}{2}\right)+\frac{T^{4}}{2\pi ^{2}}I_{B}(\bar{A}_{i}^{2})-\frac{1}{2}\Sigma_{1}(T)\bar{\varphi}_{1}^{2}, \tag{74}\] with \[\bar{M}_{1}^{2} =\bar{\nu}_{1}^{2}+\Sigma_{1}(T)+\frac{\bar{\lambda}_{1}}{2}\bar {\varphi}_{1}^{2},\quad\bar{M}_{2}^{2}=\bar{\nu}_{2}^{2}+\Sigma_{2}(T)+\frac{ \bar{\lambda}_{3}}{2}\bar{\varphi}_{1}^{2}, \tag{75}\] \[\Sigma_{1}(T) =\frac{T^{2}}{24}(\lambda_{1}+\lambda_{3}),\quad\Sigma_{2}(T)= \frac{T^{2}}{24}(\lambda_{2}+\lambda_{3}). \tag{76}\] Note that \(\Sigma_{i}(T)\) are given by the parameters at \(t=0\) to fulfill the consistency condition as explained in Sec. II, Our next step is to refine \(\bar{V}_{\text{eff}}(\bar{\varphi}_{1},t)\) by incorporating a series of higher-order terms in \(\bar{I}(\bar{M}_{i})\) via a proper \(t\). As in the \(\phi^{4}\) theory, we choose \(t\) for each \(\varphi_{1}\) such that \[\frac{\partial\bar{V}_{\text{eff}}(\bar{\varphi}_{1};t)}{\partial t }=0+\frac{1}{2}\sum_{i}\frac{\partial\bar{M}_{i}^{2}}{\partial t}\bar{I}(\bar{ M}_{i})=0, \tag{77}\] from which, one obtains \[t(\varphi_{1})=\frac{8\pi^{2}\sum_{i}\frac{\partial\bar{M}_{i}^ {2}}{\partial t}\bar{I}(\bar{M}_{i})_{t=0}}{\sum_{i}\bar{M}_{i}^{2}\frac{ \partial\bar{M}_{i}^{2}}{\partial t}}. \tag{78}\] Let us approximate \(\bar{V}_{\text{eff}}(\bar{\varphi}_{1};t)\) in terms of the \(t\)-expansion and compare it with the two-loop correction to the effective potential (40). \(\bar{V}_{\text{eff}}^{(1)}(\bar{\varphi}_{1},t(\varphi_{1}))\) defined in Eq. (65) is found to be \[\bar{V}_{\text{eff}}^{(1)}(\bar{\varphi}_{1};t(\varphi_{1}))= \bar{V}_{\text{eff}}^{(1)}(\varphi_{1};0)+\frac{\big{(}\sum_{i=1,2}\alpha_{i} \bar{I}(M_{i})_{t=0}\big{)}^{2}}{8\sum_{i=1,2}\alpha_{i}M_{i}^{2}}, \tag{79}\] where \(\alpha_{i}=16\pi^{2}(\partial\bar{M}_{i}^{2}/\partial t)|_{t=0}\), i.e., \[\alpha_{1} =\lambda_{1}M_{1}^{2}+\lambda_{3}M_{2}^{2}+(\lambda_{1}^{2}+ \lambda_{3}^{2})\varphi_{1}^{2}, \tag{80}\] \[\alpha_{2} =\lambda_{3}M_{1}^{2}+\lambda_{2}M_{2}^{2}+2\lambda_{3}^{2} \varphi_{1}^{2}. \tag{81}\] One can see that \(\mathcal{O}(\bar{I}^{2}(M_{i}))\) terms in Eq. (79) do not agree with those in the \(V_{2}(\varphi_{1})\) in Eq. (40). This is because that the single parameter \(t\) alone cannot, in principle, incorporate the multiple \(\bar{I}^{2}(M_{i})\) terms simultaneously. Only in a special case, such as \(|\lambda_{1}|\gg|\lambda_{2}|,|\lambda_{3}|\sim 0\), the \(\mathcal{O}(\bar{I}^{2}(M_{i}))\) terms in Eq. (79) would coincide with the corresponding terms of \(V_{2}(\varphi_{1})\). Similarly, it is straightforward to obtain \(\bar{V}_{\text{eff}}^{(2)}(\varphi_{1};t(\varphi_{1}))\) defined in Eq. 
(66) as \[\bar{V}_{\text{eff}}^{(2)}(\bar{\varphi}_{1};t) =\bar{V}_{\text{eff}}^{(2)}(\varphi_{1};0)\] \[\quad+\left[\frac{1}{2}\sum_{i=1,2}\frac{\partial\bar{M}_{i}^{2}}{\partial t}\bigg{|}_{t=0}\bar{I}(M_{i})_{t=0}-\sum_{i=1,2}\frac{M_{i}^{2}\Sigma_{i}}{16\pi^{2}}-\frac{(\lambda_{1}^{2}+\lambda_{3}^{2})M_{1}^{2}+2\lambda_{3}^{2}M_{2}^{2}}{2(16\pi^{2})^{2}}\varphi_{1}^{2}\right]t\] \[\quad+\frac{1}{2}\left[-\sum_{i=1,2}\frac{M_{i}^{2}+\Sigma_{i}}{16\pi^{2}}\frac{\partial\bar{M}_{i}^{2}}{\partial t}\bigg{|}_{t=0}\right]t^{2}+\cdots \tag{82}\] where \[\frac{\partial\bar{M}_{1}^{2}}{\partial t}\bigg{|}_{t=0} \simeq\frac{\alpha_{1}}{16\pi^{2}}-\frac{\lambda_{1}\Sigma_{1}+\lambda_{3}\Sigma_{2}}{16\pi^{2}}, \tag{83}\] \[\frac{\partial\bar{M}_{2}^{2}}{\partial t}\bigg{|}_{t=0} \simeq\frac{\alpha_{2}}{16\pi^{2}}-\frac{\lambda_{3}\Sigma_{1}+\lambda_{2}\Sigma_{2}}{16\pi^{2}}. \tag{84}\] The \(\mathcal{O}(\bar{I}(M_{i}))\) terms do not agree with those in \(V_{2}(\varphi_{1})\) either. Here one may ask whether the linear-like terms \((M_{i}^{2})^{1/2}T^{3}\) in \(\bar{V}_{\text{eff}}^{(2)}(\bar{\varphi}_{1};t)\) are cancelled or not. As shown below, the answer is positive. Recalling that such terms arise from the high-temperature limit of \(\bar{I}^{2}(M_{i})\), i.e., \((T^{2}/12-(M_{i}^{2})^{1/2}T/4\pi+\cdots)^{2}\), we take the first derivative of the \(\mathcal{O}(\bar{I}^{2}(M_{i}))\) terms in \(\bar{V}_{\text{eff}}^{(2)}(\bar{\varphi}_{1};t)\) with respect to \(\bar{I}(M_{j})_{t=0}\), which goes like \[\frac{\partial\bar{V}_{\text{eff}}^{(2)}(\bar{\varphi}_{1};t)\big{|}_{\bar{I}^{2}(M_{i})}}{\partial\bar{I}(M_{j})_{t=0}} =\frac{\alpha_{j}}{4\sum_{i}M_{i}^{2}\alpha_{i}}\bigg{[}\sum_{i}\alpha_{i}\bar{I}(M_{i})_{t=0}-2\sum_{i}M_{i}^{2}\Sigma_{i}\bigg{]}\] \[\simeq\frac{\alpha_{j}}{4\sum_{i}M_{i}^{2}\alpha_{i}}\bigg{[}\frac{T^{2}}{12}\sum_{i}\alpha_{i}-2\sum_{i}M_{i}^{2}\Sigma_{i}\bigg{]}\] \[=\frac{\alpha_{j}T^{2}}{48\sum_{i}M_{i}^{2}\alpha_{i}}\Big{[}(\lambda_{1}^{2}+\lambda_{3}^{2})\varphi_{1}^{2}+2\lambda_{3}^{2}\varphi_{1}^{2}\Big{]}. \tag{85}\] Therefore, the linear-like terms are absent in \(\bar{V}_{\text{eff}}^{(2)}(\bar{\varphi}_{1};t)\). Although the \(\mathcal{O}(\bar{I}^{2}(M_{i}))\) and \(\mathcal{O}(\bar{I}(M_{i}))\) terms in \(\bar{V}_{\text{eff}}^{(1,2)}(\bar{\varphi}_{1},t(\varphi_{1}))\) differ from those in \(V_{2}(\varphi_{1})\) in a strict sense, they may still capture the two-loop order corrections that are absent in the resummed one-loop effective potential \(V_{\text{eff}}(\bar{\varphi}_{1})=V_{0}(\bar{\varphi}_{1})+V_{1}(\bar{\varphi}_{1})\) commonly used in the literature. We will quantify to what extent the results obtained from \(\bar{V}_{\text{eff}}^{(1,2)}(\bar{\varphi}_{1},t(\varphi_{1}))\) are close to those from the resummed two-loop effective potential \(\bar{V}_{\text{eff}}(\bar{\varphi}_{1};t)=\bar{V}_{0}(\bar{\varphi}_{1};t)+\bar{V}_{1}(\bar{\varphi}_{1};t)+\bar{V}_{2}(\bar{\varphi}_{1};t)\).

### Numerical analysis

Here, we present our numerical results. There are 5 independent parameters in this model, \((\nu_{1}^{2},\nu_{2}^{2},\lambda_{1},\lambda_{2},\lambda_{3})\). Some of them can be traded for physical parameters, such as \((v,\nu_{2}^{2},m_{\phi_{1}},\lambda_{2},m_{\phi_{2}})\), using the vacuum and mass conditions.
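At tree level, these conditions can be written down explicitly. The following minimal sketch (the function name is ours) trades the inputs for \(\nu_{1}^{2}\), \(\lambda_{1}\), and \(\lambda_{3}\); note that in the analysis below, \(\nu_{1}^{2}\) and \(\lambda_{1}\) are instead fixed by loop-level tadpole and mass conditions, so the tree-level values serve only as a starting point.

```python
# Tree-level trade of Lagrangian parameters for the physical inputs (v, m_phi1, m_phi2, nu_2^2):
#   nu_1^2 + lambda_1 v^2/6 = 0,  m_phi1^2 = lambda_1 v^2/3,  m_phi2^2 = nu_2^2 + lambda_3 v^2/2.
def tree_level_parameters(v, m_phi1, m_phi2, nu2_sq):
    lam1   = 3.0 * m_phi1**2 / v**2
    nu1_sq = -0.5 * m_phi1**2
    lam3   = 2.0 * (m_phi2**2 - nu2_sq) / v**2
    return nu1_sq, lam1, lam3

print(tree_level_parameters(v=200.0, m_phi1=5.0, m_phi2=125.0, nu2_sq=85.0**2))
```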
We search for a parameter set that gives a first-order phase transition. In particular, we select a case in which the differences between the \(\overline{\text{MS}}\) and our schemes could be sufficiently large. For that purpose, we take a rather large \(\lambda_{2}\) that enhances \(\Sigma_{2}\). Moreover, we consider a case in which an imaginary part of the effective potential does not arise near the critical temperature \(T_{C}\), where the effective potential has two degenerate minima. One of the parameter sets is given by \(v=200.0\), \(m_{\phi_{1}}=5.0\), \(m_{\phi_{2}}=125.0\), \(\nu_{2}^{2}=85.0^{2}\), \(\lambda_{2}=5.0\), where those values are given at the initial scale \(\bar{\mu}_{0}\), which is fixed by the condition \(t(\varphi_{1}=v)=0\). It is found that \(\bar{\mu}_{0}\simeq 75.81\) at both the one- and two-loop levels. From the input parameters, \(\nu_{1}^{2}\) and \(\lambda_{1}\) are determined by the tadpole and mass conditions at a given order, while \(\lambda_{3}\) is determined at tree level. All the dimensionful parameters are given in units of an arbitrary mass scale.

Fig. 3 shows \(v(T)/T\) as a function of the temperature \(T\) in the \(\overline{\rm MS}\) (left) and our (right) schemes, respectively. '1-loop' denotes the results using \(\bar{V}_{\rm eff}(\varphi_{1};t)=\bar{V}_{0}(\varphi_{1};t)+\bar{V}_{1}(\varphi_{1};t)\) with the one-loop \(\beta\)-functions in the cases of \(t=0\) (blue, dotted) and \(t=\ln 5\) (blue, dashed), while '2-loop' represents those using \(\bar{V}_{\rm eff}(\varphi_{1};t)=\bar{V}_{0}(\varphi_{1};t)+\bar{V}_{1}(\varphi_{1};t)+\bar{V}_{2}(\varphi_{1};t)\) with the two-loop \(\beta\)-functions in the cases of \(t=0\) (red, dot-dashed) and \(t=\ln 5\) (red, two-dot-dashed). The intersections of each curve with the horizontal axis correspond to \(T_{C}\). One can see that the \(t\) dependence of \(T_{C}\) at the one-loop order in the \(\overline{\rm MS}\) scheme is about 5 times larger than that in our scheme. Such a large \(t\) dependence in the \(\overline{\rm MS}\) scheme reflects the large RG noninvariance at that order. At the two-loop order, on the other hand, the \(t\) dependences in both schemes are comparably small, and smaller than the one-loop order result in our scheme. The significant improvement in the \(\overline{\rm MS}\) scheme is due to the partial restoration of the RG invariance, as discussed in the \(\phi^{4}\) theory. As explicitly given in Appendix A.2, the effective potential respects the RG invariance up to \({\cal O}(\lambda_{i}^{2}T^{2})\) in the high-temperature limit. For this parameter choice, the residual RG-violating terms are numerically unimportant and thus the \(t\) dependence is dominated by the truncation error, leading to similar results in both schemes. We also overlay \(v(T)/T\) obtained by \(\bar{V}^{(1)}_{\rm eff}(\varphi_{1};t(\varphi_{1}))\) (grey, solid) and \(\bar{V}^{(2)}_{\rm eff}(\varphi_{1};t(\varphi_{1}))\) (black, thick-solid). It is found that, in both schemes, \(v_{C}/T_{C}\) obtained from \(\bar{V}^{(2)}_{\rm eff}(\bar{\varphi}_{1},t(\varphi_{1}))\) lies within the two-loop level scale uncertainties, while that obtained from \(\bar{V}^{(1)}_{\rm eff}(\bar{\varphi}_{1},t(\varphi_{1}))\) does not. This demonstration suggests that terms up to \({\cal O}(\bar{I}(\bar{M}))\) are necessary to obtain results closer to those at the two-loop order. \(T_{C}\) and \(v_{C}/T_{C}\) in each case are summarized in Table 1. As a reference, we also consider the cases of \(\lambda_{2}=1,3\), which give a smaller \(\Sigma_{2}\) than the \(\lambda_{2}=5\) case, to see to what extent the two schemes can differ.
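To make the procedure behind Fig. 3 and Table 1 concrete, a schematic numerical sketch is given below. It constructs the resummed one-loop potential in the \(\varphi_{1}\) direction at fixed \(t=0\) (cf. Eqs. (73)-(76)) and scans the temperature for the degenerate-minimum condition that defines \(T_{C}\). The tree-level parameter trade from the previous sketch is used in place of the loop-level tadpole and mass conditions, and the standard normalization of the bosonic thermal integral is assumed for \(I_{B}\); the function names are ours, and the output is not expected to reproduce Table 1 exactly.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize_scalar

# inputs quoted in the text (arbitrary mass units); tree-level trade as in the previous sketch
v, m1, m2, nu2_sq, lam2 = 200.0, 5.0, 125.0, 85.0**2, 5.0
mu0 = 75.81                              # reference scale quoted in the text
lam1   = 3.0 * m1**2 / v**2
nu1_sq = -0.5 * m1**2
lam3   = 2.0 * (m2**2 - nu2_sq) / v**2

def JB(a2):
    """Bosonic thermal integral (standard normalization, assumed for I_B)."""
    f = lambda x: x**2 * np.log(1.0 - np.exp(-np.sqrt(x**2 + a2)))
    return quad(f, 0.0, np.inf)[0]

def Veff(phi, T):
    """Resummed one-loop potential in the phi_1 direction at fixed t = 0."""
    S1 = T**2 * (lam1 + lam3) / 24.0
    S2 = T**2 * (lam2 + lam3) / 24.0
    M1sq = nu1_sq + S1 + 0.5 * lam1 * phi**2
    M2sq = nu2_sq + S2 + 0.5 * lam3 * phi**2
    tree = 0.5 * (nu1_sq + S1) * phi**2 + lam1 * phi**4 / 24.0
    cw = sum(Msq**2 / (64.0 * np.pi**2) * (np.log(Msq / mu0**2) - 1.5) for Msq in (M1sq, M2sq))
    th = T**4 / (2.0 * np.pi**2) * (JB(M1sq / T**2) + JB(M2sq / T**2))
    return tree + cw + th - 0.5 * S1 * phi**2   # last term removes the added Sigma_1

# scan T: T_C lies where V(phi_min) - V(0) changes sign (degenerate minima)
for T in np.linspace(40.0, 60.0, 21):
    res = minimize_scalar(lambda p: Veff(p, T), bounds=(1.0, 400.0), method="bounded")
    dV = Veff(res.x, T) - Veff(0.0, T)
    print(f"T = {T:5.1f}   phi_min = {res.x:7.2f}   V(min)-V(0) = {dV:+.3e}")
```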
In Fig. 4, \(v/T\) is shown as a function of \(T\), with the upper plots corresponding to the \(\lambda_{2}=3\) case and the lower ones to the \(\lambda_{2}=1\) case. The general conclusions from those plots are the same as for Fig. 3, but the differences between the two schemes in the one-loop order results get smaller as \(\lambda_{2}\) becomes smaller. \(T_{C}\) and \(v_{C}/T_{C}\) in all the cases are listed in Table 1.

\begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline & \multicolumn{4}{c|}{\(\overline{\rm MS}\) scheme} & \multicolumn{4}{c|}{Our scheme} \\ \hline & \multicolumn{8}{c|}{\(T_{C}\)} \\ \hline & 1-loop & 2-loop & \(\bar{V}^{(1)}_{\rm eff}\) & \(\bar{V}^{(2)}_{\rm eff}\) & 1-loop & 2-loop & \(\bar{V}^{(1)}_{\rm eff}\) & \(\bar{V}^{(2)}_{\rm eff}\) \\ \hline \(\lambda_{2}=5\) & \(48.6-53.6\) & \(47.6-48.1\) & \(48.5\) & \(48.6\) & \(48.6-49.6\) & \(47.6-48.1\) & \(48.2\) & \(48.6\) \\ \(\lambda_{2}=3\) & \(48.5-51.2\) & \(48.1-48.3\) & \(48.5\) & \(48.6\) & \(48.5-48.8\) & \(48.1-48.3\) & \(48.2\) & \(48.6\) \\ \(\lambda_{2}=1\) & \(48.4-49.0\) & \(48.4\) & \(48.1\) & \(48.4\) & \(48.0-48.4\) & \(48.4\) & \(48.1\) & \(48.4\) \\ \hline \hline & \multicolumn{8}{c|}{\(v_{C}/T_{C}\)} \\ \hline & 1-loop & 2-loop & \(\bar{V}^{(1)}_{\rm eff}\) & \(\bar{V}^{(2)}_{\rm eff}\) & 1-loop & 2-loop & \(\bar{V}^{(1)}_{\rm eff}\) & \(\bar{V}^{(2)}_{\rm eff}\) \\ \hline \(\lambda_{2}=5\) & 1.9 & \(2.1-2.3\) & 2.1 & 2.2 & \(1.6-1.9\) & \(2.2-2.3\) & 1.9 & 2.2 \\ \(\lambda_{2}=3\) & \(2.0-2.1\) & \(2.1-2.3\) & 2.1 & 2.1 & \(1.8-2.0\) & \(2.1-2.3\) & 2.0 & 2.1 \\ \(\lambda_{2}=1\) & 2.2 & \(2.1-2.3\) & 2.3 & 2.3 & \(2.1-2.2\) & \(2.1-2.3\) & 2.2 & 2.3 \\ \hline \end{tabular} \end{table} Table 1: The values of \(T_{C}\) and \(v_{C}/T_{C}\) in the cases of \(\lambda_{2}=1,3,5\) for the \(\overline{\rm MS}\) and our renormalization schemes. Here '1-loop' and '2-loop' denote the values obtained by the \(t\)-dependent effective potential at the one- and two-loop orders, respectively, for \(t\) in the range \(0<t<\ln 5\).

## Conclusion and discussions

We have presented our RG improvement for the thermally resummed effective potentials in detail. In our method, the \(\beta\)-functions are defined in the resummed theory and thus the order-by-order RG invariance of the effective potential holds consistently, in stark contrast to the case of the \(\overline{\text{MS}}\) scheme. As a simple example, we applied our method to the \(\phi^{4}\) theory and made a comparison with the \(\overline{\text{MS}}\) scheme both analytically and numerically. At the one-loop order, our scheme generally gives a smaller scale dependence than the \(\overline{\text{MS}}\) scheme does. At the two-loop order, however, the differences between the two schemes are not pronounced, since the scale invariance is restored up to \(\mathcal{O}(\lambda^{2}T^{2})\) in the \(\overline{\text{MS}}\) scheme. Our numerical study also exemplifies a case in which the scale dependence in the \(\overline{\text{MS}}\) scheme becomes smaller than that in our scheme due to an accidental cancellation between the RG-noninvariant terms and the truncation errors. This demonstration illustrates the need to exercise caution when interpreting the scale dependence. We also proposed a refinement of the resummed one-loop effective potential in which the one-loop function \(\bar{I}(\bar{M})\) as a whole is resummed by fully utilizing the RG invariance.
Because of its general form, the potentially dangerous large logarithmic terms and power corrections of temperature are simultaneously tamed. Moreover, this method is less sensitive to the truncation errors among any other choices. We also discussed the first-order phase transition in the \(\phi^{4}\) theory augmented by another real scalar field. We showed that the scale dependence of \(T_{C}\) obtained by the resummed one-loop effective potential is much smaller than that in the \(\overline{\text{MS}}\) scheme owing to the modified \(\beta\)-functions. At the two-loop order, however, the both schemes are equally good as in the \(\phi^{4}\) theory. Our numerical study shows that the resummed one-loop effective potential with the two-loop \(\beta\)-functions (\(\bar{V}_{\text{eff}}^{(2)}\)) can yield the same \(v_{C}/T_{C}\) as those in the two-loop order calculations within their scale uncertainties, implying that the dominant two-loop order contributions are incorporated into \(\bar{V}_{\text{eff}}^{(2)}\) to a good approximation. This suggests that \(\bar{V}_{\text{eff}}^{(2)}\) could be practically useful when the full two-loop effective potentials are not at hand. In Ref. [21], we show the renormalizability of resummed two-loop effective potentials without resorting to HTE in abelian gauge theories. It would be interesting to clarify whether our method also leads to the same conclusion obtained here. We leave this to future research [22]. ## Appendix A Counterterms and \(\beta\)-functions in the resummed theories ### \(\phi^{4}\) theory We divide the bare Lagrangian (14) into the renormalized part and counterterms: \[\mathcal{L}_{B}=\mathcal{L}_{R}+\mathcal{L}_{\text{CT}}, \tag{10}\] where \[\mathcal{L}_{R} =\frac{1}{2}\partial_{\mu}\Phi\partial^{\mu}\Phi-\Omega+\frac{ \nu^{2}}{2}\Phi^{2}-\frac{\lambda\mu^{\epsilon}}{4!}\Phi^{4}, \tag{11}\] \[\mathcal{L}_{\text{CT}} =\frac{1}{2}(Z_{\Phi}-1)\partial_{\mu}\Phi\partial^{\mu}\Phi- \delta\Omega+\frac{\delta\nu^{2}}{2}\Phi^{2}-\frac{\delta\lambda\mu^{\epsilon} }{4!}\Phi^{4}. \tag{12}\] The relationships between the bare and renormalized parameters are, respectively, given by \[\Phi_{B}=Z_{\Phi}^{1/2}\Phi,\quad\nu_{B}^{2}=Z_{\Phi}^{-1}(\nu^{2}+\delta\nu^ {2}),\quad\lambda_{B}\mu^{-\epsilon}=Z_{\Phi}^{-2}(\lambda+\delta\lambda), \quad\Omega_{B}\mu^{\epsilon}=\Omega+\delta\Omega. \tag{13}\] \(\mathcal{L}_{R}\) and \(\mathcal{L}_{\text{CT}}\) in the resummed \(\phi^{4}\) theory are modified as \[\mathcal{L}_{R} =\frac{1}{2}\partial_{\mu}\Phi\partial^{\mu}\Phi-\Omega+\frac{ \nu^{2}-\Sigma(T)}{2}\Phi^{2}-\frac{\lambda\mu^{\epsilon}}{4!}\Phi^{4}, \tag{14}\] \[\mathcal{L}_{\text{CT}} =\frac{1}{2}(Z_{\Phi}-1)\partial_{\mu}\Phi\partial^{\mu}\Phi- \delta\Omega+\frac{\delta\nu^{2}+\Sigma(T)}{2}\Phi^{2}-\frac{\delta\lambda\mu ^{\epsilon}}{4!}\Phi^{4}. \tag{15}\] Note that the relations in Eq. (13) remain intact. When the spontaneous symmetry breaking occurs, the scalar field is shifted as \(\Phi(x)=\varphi+\phi(x)\). As in the ordinary perturbation theory, CTs are perturbatively expanded as \[\delta\Omega =\delta^{(1)}\Omega+\delta^{(2)}\Omega+\cdots, \tag{16}\] \[\delta\nu^{2} =\delta^{(1)}\nu^{2}+\delta^{(2)}\nu^{2}+\cdots,\] (17) \[\delta\lambda =\delta^{(1)}\lambda+\delta^{(2)}\lambda+\cdots,\] (18) \[Z_{\Phi} =1+z_{\Phi}^{(1)}+z_{\Phi}^{(2)}+\cdots, \tag{19}\] and determined order-by-order in the resummed perturbation theory. At the one-loop level, CTs are given in Eq. 
(18) and at the two-loop level, one can find \[\delta^{(2)}\Omega =\frac{\lambda(\nu^{2}-\Sigma)^{2}}{2(16\pi^{2})^{2}}\frac{1}{ \epsilon^{2}}+\frac{(\nu^{2}-\Sigma)\Sigma}{16\pi^{2}}\frac{1}{\epsilon}, \tag{11}\] \[\delta^{(2)}\nu^{2} =\frac{\lambda^{2}(\nu^{2}-\Sigma)}{(16\pi^{2})^{2}}\left(\frac{2 }{\epsilon^{2}}-\frac{1}{2\epsilon}\right)+\frac{\lambda\Sigma}{16\pi^{2}} \frac{1}{\epsilon},\] (12) \[\delta^{(2)}\lambda =\frac{3\lambda^{2}}{(16\pi^{2})^{2}}\left(\frac{3}{\epsilon^{2} }-\frac{1}{\epsilon}\right),\] (13) \[z_{\Phi}^{(2)} =-\frac{\lambda^{2}}{12(16\pi^{2})^{2}}\frac{1}{\epsilon}. \tag{14}\] It would be instructive to show the derivation of \(\beta_{\nu^{2}}^{(1)}\) and \(\beta_{\Omega}^{(1)}\) in more detail. The bare mass \(\nu_{B}^{2}\) is expressed as \[\nu_{B}^{2}=\nu^{2}\left(1+\sum_{n=1}^{\infty}\frac{b_{n}(\lambda)}{\epsilon^ {n}}\right)+\Sigma(T)\sum_{n=1}^{\infty}\frac{\tilde{b}_{n}(\lambda)}{\epsilon ^{n}}. \tag{15}\] Applying \(d/dt=\mu d/d\mu\) in both sides, one gets \[0=\nu^{2}\beta_{\nu^{2}}^{(\epsilon)}\left(1+\sum_{n=1}^{\infty}\frac{b_{n}( \lambda)}{\epsilon^{n}}\right)+(\nu^{2}+\Sigma(T))\sum_{n=1}^{\infty}\frac{ \beta_{\lambda}^{(\epsilon)}}{\epsilon^{n}}\frac{db_{n}(\lambda)}{d\lambda}+ \frac{d\Sigma(T)}{d\lambda}\beta_{\lambda}^{(\epsilon)}\sum_{n=1}^{\infty} \frac{\tilde{b}_{n}(\lambda)}{\epsilon^{n}}, \tag{16}\] where \(\beta_{\lambda}^{(\epsilon)}=d\lambda/dt=\sum_{n=0}^{\infty}x_{n}\epsilon^{n}\). Since \(x_{n}=0\) for \(n\geq 2\), \(\beta_{\lambda}^{(\epsilon)}=x_{0}+x_{1}\epsilon=x_{0}-\lambda\epsilon\), which leads to \[\nu^{2}\beta_{\nu^{2}}=\lambda\nu^{2}\frac{db_{1}(\lambda)}{d\lambda}+\lambda \Sigma(T)\frac{d\tilde{b}_{1}(\lambda)}{d\lambda}+\lambda\tilde{b}_{1}\frac{ d\Sigma(T)}{d\lambda}. \tag{17}\] At the one-loop level, one obtains \(b_{1}(\lambda)=-\tilde{b}_{1}(\lambda)=\lambda/16\pi^{2}\) from Eq. (17). One finally arrives at \[\nu^{2}\beta_{\nu^{2}}^{(1)}=\frac{\lambda(\nu^{2}-\Sigma(T))}{16\pi^{2}}- \frac{\lambda^{2}}{16\pi^{2}}\frac{d\Sigma(T)}{d\lambda}. \tag{18}\] If we adopt the resummation method in which \(d\Sigma(T)/dt\neq 0\), the last term should be kept. However, such a term would not preserve the RG invariance at the one-loop order. In our resummation method with the consistency condition, on the other hand, \(\beta_{\nu^{2}}^{(1)}\) is reduced to \[\nu^{2}\beta_{\nu^{2}}^{(1)}=\frac{\lambda(\nu^{2}-\Sigma(T))}{16\pi^{2}}. \tag{19}\] Now we move on to derive \(\beta_{\Omega}^{(1)}\). The bare vacuum energy is expressed as \[\Omega_{B}\mu^{\epsilon}=\Omega+\sum_{n=1}^{\infty}\frac{\omega_{n}(\lambda) }{\epsilon^{n}}. \tag{20}\] where the \(\lambda\) dependence of \(\omega_{n}(\lambda)\) arise from \(\Sigma(T)\). Taking the \(t\)-derivative of both sides, one finds \[\epsilon\Omega_{B}\mu^{\epsilon}=\epsilon\left[\Omega+\sum_{n=1}^{ \infty}\frac{\omega_{n}(\lambda)}{\epsilon^{n}}\right]=\beta_{\Omega}^{( \epsilon)}+\sum_{n=1}^{\infty}\frac{1}{\epsilon^{n}}\mu\frac{d\omega_{n}( \lambda)}{d\mu}, \tag{101}\] where \(\beta_{\Omega}^{(\epsilon)}=\mu d\Omega/d\mu\). With \(\beta_{\Omega}^{(\epsilon)}=\sum_{n=0}^{\infty}d_{n}\epsilon^{n}\) and \(\beta_{\lambda}^{(\epsilon)}=x_{0}-\lambda\epsilon\) and taking \(\epsilon\to 0\), \[\beta_{\Omega}=\lim_{\epsilon\to 0}\beta_{\Omega}^{(\epsilon)}=d_{0}= \omega_{1}+\lambda\frac{d\omega_{1}(\lambda)}{d\lambda}, \tag{102}\] where the second term is induced by the running of \(\Sigma(T)\). 
Thus, such a term should be discarded if the consistency condition applies, and we are left with \[\beta_{\Omega}^{(1)}=\frac{(\nu^{2}-\Sigma)^{2}}{32\pi^{2}}. \tag{103}\] ### \(\phi^{4}\) theory with additional scalar Following the same procedure in the \(\phi^{4}\) theory, the renormalized Lagrangian and CTs after the thermal resummation are, respectively, given by \[\mathcal{L}_{R} =\sum_{i=1,2}\frac{1}{2}\partial_{\mu}\Phi_{i}\partial^{\mu}\Phi_ {i}-V_{0}(\Phi_{1},\Phi_{2}), \tag{104}\] \[\mathcal{L}_{\rm CT} =\frac{1}{2}\sum_{i}(Z_{\Phi_{i}}-1)\partial_{\mu}\Phi_{i}\partial ^{\mu}\Phi_{i}-\delta V_{0}(\Phi_{1},\Phi_{2}), \tag{105}\] where \[V_{0}(\Phi_{1},\Phi_{2}) =\Omega+\frac{\nu_{1}^{2}+\Sigma_{1}(T)}{2}\Phi_{1}^{2}+\frac{\nu _{2}^{2}+\Sigma_{2}(T)}{2}\Phi_{2}^{2}+\frac{\lambda_{1}}{4!}\Phi_{1}^{4}+ \frac{\lambda_{2}}{4!}\Phi_{2}^{4}+\frac{\lambda_{3}}{4}\Phi_{1}^{2}\Phi_{2}^ {2}, \tag{106}\] \[\delta V_{0}(\Phi_{1},\Phi_{2}) =\delta\Omega\mu^{-\epsilon}+\frac{\delta\nu_{1}^{2}-\Sigma_{1}(T )}{2}\Phi_{1}^{2}+\frac{\delta\nu_{2}^{2}-\Sigma_{2}(T)}{2}\Phi_{2}^{2}+\frac{ \delta\lambda_{1}\mu^{\epsilon}}{4!}\Phi_{1}^{4}+\frac{\delta\lambda_{2}\mu^{ \epsilon}}{4!}\Phi_{2}^{4}+\frac{\delta\lambda_{3}\mu^{\epsilon}}{4}\Phi_{1}^{ 2}\Phi_{2}^{2}. \tag{107}\] The relationships between the bare and renormalized parameters are \[\Phi_{iB} =Z_{\Phi_{i}}^{1/2}\Phi_{i},\quad\nu_{i}^{2}=Z_{\Phi_{i}}^{-1}(\nu _{i}^{2}+\delta\nu_{i}^{2}),\quad\lambda_{iB}\mu^{-\epsilon}=Z_{\Phi_{i}}^{-2} (\lambda_{i}+\delta\lambda_{i}),\quad i=1,2 \tag{108}\] \[\lambda_{3B}\mu^{-\epsilon} =Z_{\Phi_{1}}^{-1}Z_{\Phi_{2}}^{-1}(\lambda_{3}+\delta\lambda_{3} ),\quad\Omega_{B}\mu^{\epsilon}=\Omega+\delta\Omega. \tag{109}\] As in Eqs. (107)-(108), CTs are determined order by order in the resummed perturbation theory. The one-loop order CTs are, respectively, given by \[\delta^{(1)}\Omega =\frac{(\nu_{1}^{2}+\Sigma_{1})^{2}+(\nu_{2}^{2}+\Sigma_{2})^{2}}{2 (16\pi^{2})}\frac{1}{\epsilon}, \tag{109}\] \[\delta^{(1)}\nu_{1}^{2} =\frac{\lambda_{1}(\nu_{1}^{2}+\Sigma_{1})+\lambda_{3}(\nu_{2}^{2 }+\Sigma_{2})}{16\pi^{2}}\frac{1}{\epsilon},\] (110) \[\delta^{(1)}\lambda_{1} =\frac{3(\lambda_{1}^{2}+\lambda_{3}^{2})}{16\pi^{2}}\frac{1}{ \epsilon},\] (111) \[z_{\Phi_{1}}^{(1)} =0. \tag{112}\] while the two-loop order CTs are \[\delta^{(2)}\Omega =\frac{\lambda_{1}(\nu_{1}^{2}+\Sigma_{1})^{2}+\lambda_{2}(\nu_{ 2}^{2}+\Sigma_{2})^{2}+2\lambda_{3}(\nu_{1}^{2}+\Sigma_{1})(\nu_{2}^{2}+ \Sigma_{2})}{2(16\pi^{2})^{2}}\frac{1}{\epsilon^{2}}\] \[\qquad-\frac{\Sigma_{1}(\nu_{1}^{2}+\Sigma_{1})+\Sigma_{2}(\nu_{ 2}^{2}+\Sigma_{2})}{16\pi^{2}}\frac{1}{\epsilon}, \tag{113}\] \[\delta^{(2)}\nu_{1}^{2} =\frac{2(\lambda_{1}^{2}+\lambda_{3}^{2})(\nu_{1}^{2}+\Sigma_{1}) +\lambda_{3}(\lambda_{1}+\lambda_{2}+2\lambda_{3})(\nu_{2}^{2}+\Sigma_{2})}{( 16\pi^{2})^{2}}\frac{1}{\epsilon^{2}}\] \[\qquad-\bigg{[}\frac{(\lambda_{1}^{2}+\lambda_{3}^{2})(\nu_{1}^{ 2}+\Sigma_{1})+2\lambda_{3}^{2}(\nu_{2}^{2}+\Sigma_{2})}{2(16\pi^{2})^{2}}+ \frac{\lambda_{1}\Sigma_{1}+\lambda_{3}\Sigma_{2}}{16\pi^{2}}\bigg{]}\,\frac{1 }{\epsilon},\] (114) \[\delta^{(2)}\lambda_{1} =\frac{3(3\lambda_{1}^{3}+4\lambda_{1}\lambda_{3}^{2}+\lambda_{2} \lambda_{3}^{2}+4\lambda_{3}^{3})}{(16\pi^{2})^{2}}\frac{1}{\epsilon^{2}}- \frac{3[\lambda_{1}(\lambda_{1}^{2}+\lambda_{3}^{2})+2\lambda_{3}^{3}]}{(16 \pi^{2})^{2}}\frac{1}{\epsilon},\] (115) \[z_{\Phi_{1}}^{(2)} =-\frac{\lambda_{1}^{2}+3\lambda_{3}^{2}}{12(16\pi^{2})^{2}}\frac {1}{\epsilon}. 
\tag{116}\] The classical constant background fields and their fluctuation fields are denoted as \(\Phi_{i}(x)=\varphi_{i}+\phi_{i}(x)\). After the renormalization in our scheme, the resummed effective potential up to the two-loop level is \[V_{0}(\varphi_{1}) =\Omega+\frac{1}{2}\left(\nu_{1}^{2}+\Sigma_{1}(T)\right)\varphi_ {1}^{2}+\frac{\lambda_{1}}{4!}\varphi_{1}^{4}, \tag{117}\] \[V_{1}(\varphi_{1}) =\sum_{i=1,2}\frac{M_{i}^{4}}{4(16\pi^{2})}\left(\ln\frac{M_{i}^ {2}}{\bar{\mu}^{2}}-\frac{3}{2}\right)+\frac{T^{4}}{2\pi^{2}}I_{B}(A_{i}^{2})- \frac{1}{2}\Sigma_{1}(T)\varphi_{1}^{2},\] (118) \[V_{2}(\varphi_{1}) =-\frac{\varphi_{1}^{2}}{4}\left[\frac{\lambda_{1}^{2}}{3}\tilde {H}(M_{1})+\lambda_{3}^{2}\tilde{H}(M_{1},M_{2},M_{2})\right]\] \[\quad+\frac{1}{8}\Big{[}\lambda_{1}\vec{I}^{2}(M_{1})+\lambda_{2} \vec{I}^{2}(M_{2})+2\lambda_{3}\bar{I}(M_{1})\bar{I}(M_{2})\Big{]}-\frac{1}{2 }\Big{[}\Sigma_{1}\bar{I}(M_{1})+\Sigma_{2}\bar{I}(M_{2})\Big{]}, \tag{119}\] where \(\tilde{H}(M_{1})=\tilde{H}(M_{1},M_{1},M_{1})\) defined in Eq. (109), \(A_{i}=M_{i}/T\) and \[M_{1}^{2} =\nu_{1}^{2}+\Sigma_{1}(T)+\frac{\lambda_{1}}{2}\varphi_{1}^{2}, \quad M_{2}^{2}=\nu_{2}^{2}+\Sigma_{2}(T)+\frac{\lambda_{3}}{2}\varphi_{1}^{2}, \tag{120}\] \[\Sigma_{1}(T) =\frac{T^{2}}{24}(\lambda_{1}+\lambda_{3}),\quad\Sigma_{2}(T)= \frac{T^{2}}{24}(\lambda_{2}+\lambda_{3}). \tag{121}\] As is the \(\phi^{4}\) theory case, one can verify the order-by-order RG invariance of the above effective potential in terms of the \(\beta\)-functions in our scheme. One-loop \(\beta\)-functions are given by \[\gamma^{(1)}_{\Phi_{1}} =0, \tag{103}\] \[\beta^{(1)}_{\Omega} =\frac{1}{32\pi^{2}}\Big{[}(\nu_{1}^{2}+\Sigma_{1})^{2}+(\nu_{2}^{ 2}+\Sigma_{2})^{2}\Big{]},\] (104) \[\nu_{1}^{2}\beta^{(1)}_{\nu_{1}^{2}} =\frac{1}{16\pi^{2}}\Big{[}\lambda_{1}(\nu_{1}^{2}+\Sigma_{1})+ \lambda_{3}(\nu_{2}^{2}+\Sigma_{2})\Big{]},\] (105) \[\nu_{2}^{2}\beta^{(1)}_{\nu_{2}^{2}} =\frac{1}{16\pi^{2}}\Big{[}\lambda_{3}(\nu_{1}^{2}+\Sigma_{1})+ \lambda_{2}(\nu_{2}^{2}+\Sigma_{2})\Big{]},\] (106) \[\beta^{(1)}_{\lambda_{1}} =\frac{3}{16\pi^{2}}(\lambda_{1}^{2}+\lambda_{3}^{2}),\] (107) \[\beta^{(1)}_{\lambda_{3}} =\frac{\lambda_{3}(\lambda_{1}+\lambda_{2}+4\lambda_{3})}{16\pi^ {2}}, \tag{108}\] and two-loop \(\beta\)-functions we need are \[\beta^{(2)}_{\Omega} =-\frac{1}{16\pi^{2}}\Big{[}(\nu_{1}^{2}+\Sigma_{1})\Sigma_{1}+( \nu_{2}^{2}+\Sigma_{2})\Sigma_{2}\Big{]}, \tag{109}\] \[\nu_{1}^{2}\beta^{(2)}_{\nu_{1}^{2}} =-\frac{1}{(16\pi^{2})^{2}}\Big{[}(\lambda_{1}^{2}+\lambda_{3}^{ 2})(\nu_{1}^{2}+\Sigma_{1})+2\lambda_{3}^{2}(\nu_{2}^{2}+\Sigma_{2})\Big{]}- \frac{\lambda_{1}\Sigma_{1}+\lambda_{3}\Sigma_{2}}{16\pi^{2}}+2\nu_{1}^{2} \gamma^{(2)}_{\Phi_{1}},\] (110) \[\nu_{2}^{2}\beta^{(1)}_{\nu_{2}^{2}} =\frac{1}{16\pi^{2}}\Big{[}\lambda_{3}(\nu_{1}^{2}+\Sigma_{1})+ \lambda_{2}(\nu_{2}^{2}+\Sigma_{2})\Big{]},\] (111) \[\beta^{(2)}_{\lambda_{1}} =\frac{-6}{(16\pi^{2})^{2}}\Big{[}\lambda_{1}(\lambda_{1}^{2}+ \lambda_{3}^{2})+2\lambda_{3}^{3}\Big{]}+4\lambda_{1}\gamma^{(2)}_{\Phi_{1}},\] (112) \[\gamma^{(2)}_{\Phi_{1}} =\frac{\lambda_{1}^{2}+3\lambda_{3}^{2}}{12(16\pi^{2})^{2}}. 
\tag{113}\] As is the previous special case, one can find \[\mathcal{D}V_{0}|_{\text{one-loop}} =\beta^{(1)}_{\Omega}+\frac{\nu_{1}^{2}}{2}\beta^{(1)}_{\nu_{1}^{ 2}}\varphi_{1}^{2}+\frac{1}{4!}\beta^{(1)}_{\lambda_{1}}\varphi_{1}^{4}=\frac {M_{1}^{4}+M_{2}^{4}}{2(16\pi^{2})}, \tag{114}\] \[\mathcal{D}V_{1}|_{\text{one-loop}} =\mu\frac{\partial V_{1}}{\partial\mu}=-\frac{M_{1}^{4}+M_{2}^{4} }{2(16\pi^{2})}, \tag{115}\] which verifies that \(\mathcal{D}(V_{0}+V_{1})|_{\text{one-loop}}=0\). Now we consider \(V_{\rm eff}^{\rm HTE}(\varphi_{1})\). \[V_{\rm eff}^{\rm HTE}(\varphi_{1}) =V_{0}(\varphi_{1})+V_{1}^{\rm HTE}(\varphi_{1})\] \[\simeq\frac{1}{2}\bigg{[}\left\{\nu_{1}^{2}+\frac{\lambda_{1}(\nu_ {1}^{2}+\Sigma_{1})+\lambda_{3}(\nu_{2}^{2}+\Sigma_{2})}{32\pi^{2}}\ln\frac{T^ {2}}{\bar{\mu}^{2}}\right\}+\frac{(\lambda_{1}+\lambda_{3})T^{2}}{24}\] \[\qquad+\frac{1}{16\pi^{2}}\Big{\{}\lambda_{1}(\nu_{1}^{2}+\Sigma_ {1})+\lambda_{3}(\nu_{2}^{2}+\Sigma_{2})\Big{\}}c_{B}\bigg{]}\varphi_{1}^{2}- \frac{T\big{(}(M_{1}^{2})^{3/2}+(M_{2}^{2})^{3/2}\big{)}}{12\pi}\] \[\quad+\frac{1}{4!}\left[\left(\lambda_{1}+\frac{3(\lambda_{1}^{2} +\lambda_{3}^{2})}{32\pi^{2}}\ln\frac{T^{2}}{\bar{\mu}^{2}}\right)+\frac{3( \lambda_{1}^{2}+\lambda_{3}^{2})c_{B}}{16\pi^{2}}\right]\varphi_{1}^{4}+\cdots,\] \[=\frac{1}{2}\bigg{[}\bar{\nu}_{1}^{2}(T)+\frac{(\lambda_{1}+ \lambda_{3})T^{2}}{24}+\frac{1}{16\pi^{2}}\Big{\{}\lambda_{1}(\nu_{1}^{2}+ \Sigma_{1})+\lambda_{3}(\nu_{2}^{2}+\Sigma_{2})\Big{\}}c_{B}\bigg{]}\varphi_{1 }^{2}\] \[\quad-\frac{T\big{(}(M_{1}^{2})^{3/2}+(M_{2}^{2})^{3/2}\big{)}}{12 \pi}+\frac{1}{4!}\left[\bar{\lambda}_{1}(T)+\frac{3(\lambda_{1}^{2}+\lambda_{3 }^{2})c_{B}}{16\pi^{2}}\right]\varphi_{1}^{4}+\cdots,\] (A56) where \(\bar{\nu}_{1}^{2}\) and \(\bar{\lambda}_{1}\) are the running parameters in our scheme. To see difference between the \(\overline{\rm MS}\) and our schemes, we rewrite \(V_{\rm eff}^{\rm HTE}(\varphi_{1})\) by taking \(\Sigma_{1}=(\lambda_{1}+\lambda_{3})T^{2}/24\) and \(\Sigma_{2}=(\lambda_{2}+\lambda_{3})T^{2}/24\), resulting in \[V_{\rm eff}^{\rm HTE}(\varphi_{1}) =\frac{1}{2}\bigg{[}\bar{\nu}_{1}^{2}(T)|_{\overline{\rm MS}}+ \frac{T^{2}}{24}\left\{\lambda_{1}\left(1+\frac{\lambda_{1}+\lambda_{3}}{32\pi ^{2}}\ln\frac{T^{2}}{\bar{\mu}^{2}}\right)+\lambda_{3}\left(1+\frac{\lambda_{ 2}+\lambda_{3}}{32\pi^{2}}\ln\frac{T^{2}}{\bar{\mu}^{2}}\right)\right\}\] \[\qquad+\frac{1}{16\pi^{2}}\Big{\{}\lambda_{1}(\nu_{1}^{2}+\Sigma_ {1})+\lambda_{3}(\nu_{2}^{2}+\Sigma_{2})\Big{\}}c_{B}\bigg{]}\varphi_{1}^{2}\] \[\quad-\frac{T\big{(}(M_{1}^{2})^{3/2}+(M_{2}^{2})^{3/2}\big{)}}{12 \pi}+\frac{1}{4!}\left[\bar{\lambda}_{1}(T)+\frac{3(\lambda_{1}^{2}+\lambda_{ 3}^{2})c_{B}}{16\pi^{2}}\right]\varphi_{1}^{4}+\cdots,\] (A57) where \(\bar{\nu}_{1}^{2}(T)|_{\overline{\rm MS}}=\bar{\nu}_{1}^{2}(T)|_{\Sigma_{1}= \Sigma_{2}=0}\). The \({\cal O}(T^{2})\) term in the first line break the RG invariance. After including terms arising from the sunset diagrams, they would become the RG invariant form, as shown below. 
Taking \(\mathcal{D}\) derivatives of \(V_{\text{eff}}(\varphi_{1})\) at the two-loop level, one finds \[\mathcal{D}V_{0}|_{\text{two-loop}} =\beta_{\Omega}^{(2)}+\frac{\nu_{1}^{2}}{2}\beta_{\nu_{1}^{2}}^{(2 )}\varphi_{1}^{2}+\frac{1}{4!}\beta_{\lambda_{1}}^{(2)}\varphi_{1}^{4}-(\nu_{1 }^{2}+\Sigma_{1})\gamma_{\Phi_{1}}^{(2)}\varphi_{1}^{2}-\frac{1}{3!}\gamma_{ \Phi_{1}}^{(2)}\varphi_{1}^{4}\] \[=-\frac{M_{1}^{2}\Sigma_{1}+M_{2}^{2}\Sigma_{2}}{16\pi^{2}}-\frac {(\lambda_{1}^{2}+\lambda_{3}^{2})M_{1}^{2}+2\lambda_{3}^{2}M_{2}^{2}}{2(16 \pi^{2})^{2}}\varphi_{1}^{2}-\Sigma_{1}\gamma_{\Phi_{1}}^{(2)}\varphi_{1}^{2}, \tag{100}\] \[\mathcal{D}V_{1}|_{\text{two-loop}} =\frac{\bar{I}(M_{1})}{2(16\pi^{2})}\Big{[}\lambda_{1}M_{1}^{2}+ \lambda_{3}M_{2}^{2}+(\lambda_{1}^{2}+\lambda_{3}^{2})\varphi_{1}^{2}\Big{]}\] \[\quad+\frac{\bar{I}(M_{2})}{2(16\pi^{2})}\Big{[}\lambda_{3}M_{1}^ {2}+\lambda_{2}M_{2}^{2}+2\lambda_{3}\varphi_{1}^{2}\Big{]}+\Sigma_{1}\gamma_ {\Phi_{1}}^{(2)}\varphi_{1}^{2},\] (101) \[\mathcal{D}V_{2}|_{\text{two-loop}} =-\frac{\bar{I}(M_{1})}{2(16\pi^{2})}\Big{[}\lambda_{1}M_{1}^{2}+ \lambda_{3}M_{2}^{2}+(\lambda_{1}^{2}+\lambda_{3}^{2})\varphi_{1}^{2}\Big{]}\] \[\quad-\frac{\bar{I}(M_{2})}{2(16\pi^{2})}\Big{[}\lambda_{3}M_{1}^ {2}+\lambda_{2}M_{2}^{2}+2\lambda_{3}\varphi_{1}^{2}\Big{]}+\frac{(\lambda_{1 }^{2}+\lambda_{3}^{2})M_{1}^{2}+2\lambda_{3}^{2}M_{2}^{2}}{2(16\pi^{2})^{2}} \varphi_{1}^{2}\] \[\quad+\frac{M_{1}^{2}\Sigma_{1}+M_{2}^{2}\Sigma_{2}}{16\pi^{2}}. \tag{102}\] Summing up, one gets \(\mathcal{D}(V_{0}+V_{1}+V_{2})|_{\text{two-loop}}=0\). Let us look into what the \(\bar{\mu}\)-dependent terms look like using HTE. \[V_{\rm eff}^{\rm HTE}(\varphi_{1})=V_{0}(\varphi_{1})+V_{1}^{\rm HTE }(\varphi_{1})+V_{2}^{\rm HTE}(\varphi_{1})\] \[=\frac{1}{2}\Bigg{[}\bigg{\{}\nu_{1}^{2}+\frac{\lambda_{1}\nu_{1}^ {2}+\lambda_{3}\nu_{2}^{2}}{32\pi^{2}}\ln\frac{T^{2}}{\bar{\mu}^{2}}+\frac{2( \lambda_{1}^{2}+\lambda_{3}^{2})(\nu_{1}^{2}+\Sigma_{1})+\lambda_{3}(\lambda_ {1}+\lambda_{2}+2\lambda_{3})(\nu_{2}^{2}+\Sigma_{2})}{4(16\pi^{2})^{2}}\ln^{2 }\frac{T^{2}}{\bar{\mu}^{2}}\] \[\qquad\qquad-\frac{(\lambda_{1}^{2}+\lambda_{3}^{2})(\nu_{1}^{2} +\Sigma_{1})+2\lambda_{3}^{2}(\nu_{2}^{2}+\Sigma_{2})}{2(16\pi^{2})^{2}}\ln \frac{T^{2}}{\bar{\mu}^{2}}\bigg{\}}+\frac{(\lambda_{1}^{2}+\lambda_{2}\lambda _{3})T^{2}}{8(16\pi^{2})}\] \[\quad+\frac{T^{2}}{24}\left\{\bigg{(}\lambda_{1}+\frac{3\lambda_ {1}^{2}+3\lambda_{3}^{2}}{32\pi^{2}}\ln\frac{T^{2}}{\bar{\mu}^{2}}\bigg{)}+ \bigg{(}\lambda_{3}+\frac{\lambda_{3}(\lambda_{1}+\lambda_{2}+4\lambda_{3})}{3 2\pi^{2}}\ln\frac{T^{2}}{\bar{\mu}^{2}}\bigg{)}\right\}\] \[\quad+\bigg{\{}\frac{\lambda_{1}(\nu_{1}^{2}+\Sigma_{1})+\lambda _{3}(\nu_{2}^{2}+\Sigma_{2})}{16\pi^{2}}\] \[\qquad\qquad+\frac{2(\lambda_{1}^{2}+\lambda_{3}^{2})(\nu_{1}^{2} +\Sigma_{1})+\lambda_{3}(\lambda_{1}+\lambda_{2}+2\lambda_{3})(\nu_{2}^{2}+ \Sigma_{2})}{(16\pi^{2})^{2}}\ln\frac{T^{2}}{\bar{\mu}^{2}}\bigg{\}}c_{B}\] \[\quad+\frac{(\lambda_{1}^{2}+\lambda_{3}^{2})(\nu_{1}^{2}+\Sigma _{1})+\lambda_{3}(\lambda_{1}+\lambda_{3})(\nu_{2}^{2}+\Sigma_{2})}{(16\pi^{2 })^{2}}c_{B}^{2}\Bigg{]}\varphi_{1}^{2}\] \[\quad-\frac{T}{12\pi}\bigg{[}(M_{1}^{2})^{3/2}+\frac{3}{4(16\pi^{ 2})}\Big{\{}\lambda_{1}(M_{1}^{2})^{3/2}+\lambda_{3}M_{2}^{2}(M_{1}^{2})^{1/2} +(\lambda_{1}^{2}+\lambda_{3}^{2})(M_{1}^{2})^{1/2}\varphi_{1}^{2}\Big{\}}\ln \frac{T^{2}}{\bar{\mu}^{2}}\] \[\qquad\qquad+(M_{2}^{2})^{3/2}+\frac{3}{4(16\pi^{2})}\Big{\{} 
\lambda_{2}(M_{2}^{2})^{3/2}+\lambda_{3}M_{1}^{2}(M_{2}^{2})^{1/2}+2\lambda_{3 }^{2}(M_{2}^{2})^{1/2}\varphi_{1}^{2}\Big{\}}\ln\frac{T^{2}}{\bar{\mu}^{2}}\] \[\qquad\qquad+\frac{3}{2(16\pi^{2})}\Big{\{}\lambda_{1}(M_{1}^{2}) ^{3/2}+\lambda_{2}(M_{2}^{2})^{3/2}+\lambda_{3}\big{(}M_{1}^{2}(M_{2}^{2})^{1/ 2}+M_{2}^{2}(M_{1}^{2})^{1/2}\big{)}\Big{\}}c_{B}\bigg{]}\] \[\quad+\frac{1}{4!}\Bigg{[}\bigg{\{}\lambda_{1}+\frac{3(\lambda_{1 }^{2}+\lambda_{3}^{2})}{32\pi^{2}}\ln\frac{T^{2}}{\bar{\mu}^{2}}+\frac{3(3 \lambda_{1}^{3}+4\lambda_{1}\lambda_{3}^{2}+\lambda_{2}\lambda_{3}^{2}+4 \lambda_{3}^{3})}{4(16\pi^{2})}\ln^{2}\frac{T^{2}}{\bar{\mu}^{2}}\] \[\qquad\qquad-\frac{3\left\{\lambda_{1}(\lambda_{1}^{2}+\lambda_{3 }^{2})+2\lambda_{3}^{3}\right\}}{(16\pi^{2})^{2}}\ln\frac{T^{2}}{\bar{\mu}^{2}} \bigg{\}}\] \[\quad+\frac{3}{16\pi^{2}}\left\{\lambda_{1}^{2}+\lambda_{3}^{2}+ \frac{3\lambda_{1}^{3}+4\lambda_{1}\lambda_{3}^{2}+\lambda_{2}\lambda_{3}^{2} +4\lambda_{3}^{3})}{16\pi^{2}}\ln\frac{T^{2}}{\bar{\mu}^{2}}\right\}c_{B}\] \[\quad+\frac{3(\lambda_{1}^{3}+\lambda_{2}\lambda_{3}^{2}+2\lambda_ {1}\lambda_{3}^{2})}{(16\pi^{2})^{2}}c_{B}^{2}\Bigg{]}\varphi_{1}^{4}+\frac{ \lambda_{3}T^{2}}{4(16\pi^{2})}(M_{1}^{2})^{1/2}(M_{2}^{2})^{1/2}+\cdots\] \[=\frac{1}{2}\bigg{[}\bar{\nu_{1}}^{2}(T)+\frac{T^{2}}{24}\left( \bar{\lambda}_{1}(T)+\bar{\lambda}_{3}(T)\right)+\frac{1}{16\pi^{2}}\Big{\{} \bar{\lambda}_{1}(T)(\bar{\nu}_{1}^{2}(T)+\Sigma_{1})+\bar{\lambda}_{3}(T)( \bar{\nu}_{2}^{2}(T)+\Sigma_{2})\Big{\}}c_{B}\] \[\qquad\qquad+\frac{(\lambda_{1}^{2}+\lambda_{2}\lambda_{3})T^{2 }}{8(16\pi^{2})}+\frac{(\lambda_{1}^{2}+\lambda_{3}^{2})(\nu_{1}^{2}+\Sigma_{1}) +\lambda_{3}(\lambda_{1}+\lambda_{3})(\nu_{2}^{2}+\Sigma_{2})}{(16\pi^{2})^{2} }c_{B}^{2}\Bigg{]}\varphi_{1}^{2}\] \[\quad-\frac{T}{12\pi}\bigg{[}\Big{(}\bar{M}_{1}^{2}(T)\Big{)}^{3/2 }+\big{(}\bar{M}_{2}^{2}(T)\big{)}^{3/2}\] \[\qquad\qquad+\frac{3}{2(16\pi^{2})}\Big{\{}\lambda_{1}(M_{1}^{2}) ^{3/2}+\lambda_{2}(M_{2}^{2})^{3/2}+\lambda_{3}\big{(}M_{1}^{2}(M_{2}^{2})^{1/ 2}+M_{2}^{2}(M_{1}^{2})^{1/2}\big{)}\Big{\}}c_{B}\bigg{]}\] \[\quad+\frac{1}{4!}\left[\bar{\lambda}_{1}(T)+\frac{3(\bar{ \lambda}_{1}^{2}(T)+\bar{\lambda}_{3}^{2}(T))c_{B}}{16\pi^{2}}+\frac{3( \lambda_{1}^{3}+\lambda_{2}\lambda_{3}^{2}+2\lambda_{1}\lambda_{3}^{2})}{(16 \pi^{2})^{2}}c_{B}^{2}\right]\varphi_{1}^{4}\] \[\quad+\frac{\lambda_{3}T^{2}}{4(16\pi^{2})}(M_{1}^{2})^{1/2}(M_{2 }^{2})^{1/2}+\cdots\.\qquad\qquad 32\] (A61) Note that all the \(\bar{\mu}\) dependences are absorbed into the running parameters and the RG invariance is manifest. ## Appendix B Loop functions Let us define the sum-integral symbol as \[\not{\sum}_{k}\equiv\mu^{\epsilon}T\sum_{n=-\infty}^{\infty}\int\frac{d^{d-1} \mathbf{k}}{(2\pi)^{d-1}}, \tag{101}\] where \(n\) denote integers and \(d=4-\epsilon\). A thermal function for the one-loop bubble diagram is defined as \[I(m)=\not{\sum}_{k}\frac{1}{k^{2}+m^{2}}=-\frac{m^{2}}{16\pi^{2}}\frac{2}{ \epsilon}+\bar{I}(m)+\epsilon i_{\epsilon}(m)+\mathcal{O}(\epsilon^{2}), \tag{102}\] with \[\bar{I}(m)=\frac{m^{2}}{16\pi^{2}}\left(\ln\frac{m^{2}}{\bar{\mu}^{2}}-1 \right)+\frac{T^{2}}{\pi^{2}}I^{\prime}_{B}\left(\frac{m^{2}}{T^{2}}\right) \equiv\bar{I}_{0}(m)+\frac{T^{2}}{\pi^{2}}I^{\prime}_{B}\left(\frac{m^{2}}{T^{ 2}}\right), \tag{103}\] where \(k^{2}=\omega_{n}^{2}+\mathbf{k}^{2}\) with \(\omega_{n}=2n\pi T\).5 The explicit form of \(i_{\epsilon}(m)\), which is needed when one goes beyond the one-loop level, is Footnote 5: We focus exclusively on the bosonic case. 
\[i_{\epsilon}(m)=-\frac{m^{2}}{64\pi^{2}}\bigg{[}\bigg{(}\ln \frac{m^{2}}{\bar{\mu}^{2}}-1\bigg{)}^{2}+1+\frac{\pi^{2}}{6}\bigg{]}-\frac{T^ {2}}{2\pi^{2}}\bigg{[}\bigg{(}\ln\frac{T^{2}}{\bar{\mu}^{2}}+\ln 4-2\bigg{)}I^{ \prime}_{B}(a^{2})+j(a^{2})\bigg{]}, \tag{104}\] where \[j(a^{2})=\int_{0}^{\infty}dx\frac{x^{2}\ln x}{\sqrt{x^{2}+a^{2} }}\frac{1}{e^{\sqrt{x^{2}+a^{2}}}-1}. \tag{105}\] The contributions from the function \(i_{\epsilon}(m)\) are cancelled among the diagrams and do not appear in the renormalized effective potential. The sunset-type diagram composed of all the scalar fields is defined as \[H(m_{1},m_{2},m_{3})=\not{\sum}_{k}\not{\sum}_{q}\frac{1}{(k^{2}+m_{1}^{2})(q ^{2}+m_{2}^{2})[(k+q)^{2}+m_{3}^{2}]}, \tag{106}\] where \(k^{2}=\omega_{n}^{2}+\mathbf{k}^{2}\) and \(q^{2}=\omega_{m}^{2}+\mathbf{q}^{2}\) with \(\omega_{n}^{2}=2n\pi T\) and \(\omega_{m}^{2}=2m\pi T\). We parametrize \(H(m_{1},m_{2},m_{3})\) in terms of the divergent and finite parts as \[H(m_{1},m_{2},m_{3})=H^{\rm div}(m_{1},m_{2},m_{3})+\tilde{H}(m_{1},m_{2},m_{ 3})+\frac{1}{8\pi^{2}}\sum_{j=1}^{3}i_{\epsilon}(m_{j}) \tag{107}\] where \[H^{\rm div}(m_{1},m_{2},m_{3})=-\frac{1}{(16\pi^{2})^{2}}\left(\frac{ 2}{\epsilon^{2}}+\frac{1}{\epsilon}\right)(m_{1}^{2}+m_{2}^{2}+m_{3}^{2})+ \frac{1}{16\pi^{2}}\frac{2}{\epsilon}\Big{(}\bar{I}(m_{1})+\bar{I}(m_{2})+\bar {I}(m_{3})\Big{)}. \tag{100}\] The divergences in the first line are removed by the local counterterms. As discussed in Sec. II, only single \(\epsilon\) pole contributes to the \(\beta\)-functions. On the other hand, the divergences proportional to \(\bar{I}(m)\) are cancelled among the diagrams. The finite part is given by \[\tilde{H}(m_{1},m_{2},m_{3})\] \[=\frac{1}{16\pi^{2}}\big{(}\bar{I}_{0}(m_{1})+\bar{I}_{0}(m_{2}) +\bar{I}_{0}(m_{3})\big{)}-\frac{1}{(4\pi)^{4}}(m_{1}^{2}+m_{2}^{2}+m_{3}^{2})\] \[\quad-\frac{1}{2}\Big{\{}\frac{m_{1}^{2}+m_{2}^{2}-m_{3}^{2}}{m_ {1}^{2}m_{2}^{2}}\bar{I}_{0}(m_{1})\bar{I}_{0}(m_{2})+\frac{m_{2}^{2}+m_{3}^{ 2}-m_{1}^{2}}{m_{2}^{2}m_{3}^{2}}\bar{I}_{0}(m_{2})\bar{I}_{0}(m_{3})\] \[\qquad\qquad+\frac{m_{3}^{2}+m_{1}^{2}-m_{2}^{2}}{m_{3}^{2}m_{1} ^{2}}\bar{I}_{0}(m_{3})\bar{I}_{0}(m_{1})\Big{\}}+\frac{1}{(4\pi)^{4}}R\, \not{\Phi}(m_{1},m_{2},m_{3})\] \[-\frac{T^{2}}{(2\pi)^{4}}\Big{[}\varphi(m_{1},m_{2},m_{3})I_{B}^ {\prime}(a_{1}^{2})+\varphi(m_{2},m_{3},m_{1})I_{B}^{\prime}(a_{2}^{2})+ \varphi(m_{3},m_{1},m_{2})I_{B}^{\prime}(a_{3}^{2})\Big{]}\] \[+\frac{T^{2}}{4(2\pi)^{4}}\Big{[}K_{--}(a_{1},a_{2},a_{3})+K_{--} (a_{2},a_{3},a_{1})+K_{--}(a_{3},a_{1},a_{2})\Big{]}. \tag{101}\] where \(R^{2}=(m_{1}^{2}+m_{2}^{2}-m_{3}^{2})^{2}-4m_{1}^{2}m_{2}^{2}\) and \[\not{\Phi}(m_{1},m_{2},m_{3}) =\text{Li}_{2}\left(\frac{m_{1}^{2}+m_{2}^{2}-m_{3}^{2}-R}{2m_{1}^ {2}}\right)+\text{Li}_{2}\left(\frac{m_{1}^{2}-m_{2}^{2}+m_{3}^{2}-R}{2m_{1}^ {2}}\right)+\frac{1}{2}\ln\frac{m_{2}^{2}}{m_{1}^{2}}\ln\frac{m_{3}^{2}}{m_{1} ^{2}}\] \[\quad-\ln\left(\frac{m_{1}^{2}+m_{2}^{2}-m_{3}^{2}-R}{2m_{1}^{2} }\right)\ln\left(\frac{m_{1}^{2}-m_{2}^{2}+m_{3}^{2}-R}{2m_{1}^{2}}\right)- \frac{\pi^{2}}{6}. \tag{102}\] Note that the dilogarithmic function \(\text{Li}_{2}(z)\) has an imaginary part if \(z>1\), i.e., if \(m_{1}^{2}-m_{2}^{2}+m_{3}^{2}+R<0\) or \(m_{1}^{2}+m_{2}^{2}-m_{3}^{2}+R<0\), \(\text{Li}_{2}\) in the first line has the imaginary part. However, the log term in the second line also has the imaginary part that cancels the imaginary part of the former. 
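For reference, the bosonic thermal functions entering the expressions above can be evaluated numerically. The sketch below assumes the standard normalization \(I_{B}(a^{2})=\int_{0}^{\infty}dx\,x^{2}\ln\big(1-e^{-\sqrt{x^{2}+a^{2}}}\big)\), which is consistent with the high-temperature limits quoted in the main text (in particular \(T^{2}I^{\prime}_{B}(0)/\pi^{2}=T^{2}/12\)); the function names are ours.

```python
import numpy as np
from scipy.integrate import quad

def IB_prime(a2):
    """dI_B/da^2 with the assumed normalization I_B(a^2) = int_0^inf dx x^2 ln(1 - e^{-sqrt(x^2+a^2)})."""
    f = lambda x: x**2 / (2.0 * np.sqrt(x**2 + a2) * (np.exp(np.sqrt(x**2 + a2)) - 1.0))
    return quad(f, 0.0, np.inf)[0]

def j(a2):
    """The function j(a^2) defined in the text above."""
    f = lambda x: x**2 * np.log(x) / (np.sqrt(x**2 + a2) * (np.exp(np.sqrt(x**2 + a2)) - 1.0))
    return quad(f, 0.0, np.inf)[0]

# high-temperature check: I_B'(0) = pi^2/12
print(IB_prime(0.0), np.pi**2 / 12.0)
```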
For the numerical calculation of \(\not{\Phi}(m_{1},m_{2},m_{3})\), to evaluate the real part, we use \[\text{Re}\big{[}\text{Li}_{2}(z)\big{]}=\frac{\pi^{2}}{6}-\int_{1} ^{z}dt\ \frac{\ln|1-t|}{t}. \tag{103}\] Furthermore, for \(R^{2}<0\), \(R\) and \(\not{\Phi}(m_{1},m_{2},m_{3})\) have the imaginary parts. They are cancelled to each other and \(R\not{\Phi}(m_{1},m_{2},m_{3})\) is reduced to \[R\,\not{\Phi}(m_{1},m_{2},m_{3})=|R|\left[2\int_{0}^{1}\frac{dt}{ t}\ \tan^{-1}\left(\frac{m_{2}t\sin\eta}{m_{1}-m_{2}t\cos\eta}\right)+\theta\ln\frac{m _{2}^{2}}{m_{1}^{2}}\right], \tag{104}\] where \[\eta=\arctan\left(\frac{|R|}{m_{1}^{2}+m_{2}^{2}-m_{3}^{2}}\right),\quad\theta= \arctan\left(\frac{-|R|}{m_{1}^{2}-m_{2}^{2}+m_{3}^{2}}\right), \tag{111}\] and \(\varphi(m_{1},m_{2},m_{3})\) is defined as \[\varphi(m_{1},m_{2},m_{3})=\int_{0}^{1}dx\ \ln\left(\frac{-x(1-x)m_{1}^{2}+(1-x )m_{2}^{2}+xm_{3}^{2}}{\bar{\mu}^{2}}\right), \tag{112}\] and \[K_{--}(a_{1},a_{2},a_{3})=\int_{0}^{\infty}dx\ \frac{xn_{-}(x;a_{1})}{\sqrt{x^{2} +a_{1}^{2}}}\int_{0}^{\infty}dy\ \frac{yn_{-}(y;a_{2})}{\sqrt{y^{2}+a_{2}^{2}}}\ln\left|\frac{ \tilde{Y}_{+}(x,y;a_{1},a_{2},a_{3})}{\tilde{Y}_{-}(x,y;a_{1},a_{2},a_{3})}\right| \tag{113}\] with \[n_{-}(x;a) =\frac{1}{e^{\sqrt{x^{2}+a^{2}}}-1}, \tag{114}\] \[\tilde{Y}_{\pm}(x,y;a_{1},a_{2},a_{3}) =\Big{[}(a_{1}^{2}+a_{2}^{2}-a_{3}^{2})^{2}-4a_{1}^{2}a_{2}^{2}-4 \big{\{}a_{2}^{2}x^{2}\pm(a_{1}^{2}+a_{2}^{2}-a_{3}^{2})xy+a_{1}^{2}y^{2}\big{\}} \Big{]}^{2}. \tag{115}\] For \(m_{1}=m_{2}=m_{3}\), \(\tilde{H}(m,m,m)\) is reduced to \[\tilde{H}(m)\equiv\tilde{H}(m,m,m)=3\bigg{[}-\frac{\bar{I}^{2}(m )}{2m^{2}}+\frac{\bar{I}(m)}{16\pi^{2}}-\frac{m^{2}}{(16\pi^{2})^{2}}\left(1+ \frac{2}{3}f_{2}\right)\\ -\frac{1}{2m^{2}}\frac{T^{2}}{\pi^{2}}\big{(}I_{B}^{\prime}(a^{2 })\big{)}^{2}-\frac{T^{2}}{16\sqrt{3}\pi^{3}}I_{B}^{\prime}(a^{2})+\frac{4T^{ 2}}{(16\pi^{2})^{2}}K(a)\bigg{]}, \tag{116}\] where \(K(a)\equiv K_{--}(a,a,a)\) and we have used \[\varphi(m,m,m) =\ln\frac{m^{2}}{\bar{\mu}^{2}}-2+\frac{\pi}{\sqrt{3}}, \tag{117}\] \[\varPhi(m,m,m) =-\frac{\pi^{2}}{18}+2\mathrm{Li}_{2}\left(\frac{1-\sqrt{3}i}{2} \right), \tag{118}\] and \(f_{2}=-\frac{\sqrt{3}}{2}i\varPhi(m,m,m)\simeq-1.76\). In our numerical analysis, we use an approximation [6] \[K_{--}(a_{1},a_{2},a_{3})=K\left(\frac{a_{1}+a_{2}+a_{3}}{3}\right). \tag{119}\] ## Appendix C Tadpole and mass conditions for RG-improved one-loop effective potentials Some parameters in the Lagrangian can be expressed in terms of VEV and the scalar masses using tadpole and mass conditions. In the cases of the RG-improved effective potentials with our \(t(\varphi)\), their relations are more involved than those in fixed-order calculations. In this Appendix, we explicitly give the first and second derivatives of the RG-improved one-loop effective potentials with respect to the background fields. Although we do not use such a potential in our numerical analysis in the \(\phi^{4}\) theory, we still present all the formulas to know how they differ from the fixed-order expressions. ### \(\phi^{4}\) theory At zero temperature, the \(t\)-\(\varphi\) relation (60) is reduced to \(t(\varphi)=\ln(\bar{m}^{2}/e\bar{\mu}_{0}^{2})/2\). 
With this, the one-loop effective potential is cast into the form \[\bar{V}_{\text{eff}}(\varphi;t(\varphi))=\bar{V}_{0}(\varphi;t( \varphi))+\bar{V}_{1}(\varphi;t(\varphi))=\bar{\Omega}-\frac{\bar{\nu}^{2}}{2} \varphi^{2}+\frac{\bar{\lambda}}{4!}\varphi^{4}-\frac{\bar{m}^{4}}{8(16\pi^{2} )}, \tag{101}\] where \(\bar{m}^{2}=-\bar{\nu}^{2}+\bar{\lambda}\varphi^{2}/2\). The first derivative of \(\bar{V}_{\text{eff}}(\varphi;t(\varphi))\) with respect to \(\varphi\) is \[\frac{d\bar{V}_{\text{eff}}(\varphi;t(\varphi))}{d\varphi} =\frac{\partial\bar{V}_{\text{eff}}(\varphi;t(\varphi))}{\partial \varphi}+\frac{dt(\varphi)}{d\varphi}\frac{\partial\bar{V}_{\text{eff}}( \varphi;t)}{\partial t}\bigg{|}_{t=t(\varphi)}\] \[=\varphi\left[-\bar{\nu}^{2}+\frac{\bar{\lambda}}{6}\varphi^{2}- \frac{\bar{\lambda}\bar{m}^{2}}{4(16\pi^{2})}\right]+\frac{dt(\varphi)}{d \varphi}\cdot\frac{\bar{m}^{2}(2\bar{m}^{2}-\mathcal{N})}{4(16\pi^{2})}\] \[=\varphi\left[-\bar{\nu}^{2}+\frac{\bar{\lambda}}{6}\varphi^{2} \right], \tag{102}\] where \(\mathcal{N}=\bar{\lambda}\left(\bar{m}^{2}+\bar{\lambda}\varphi^{2}\right)/16\pi ^{2}\) and \[\frac{dt(\varphi)}{d\varphi}=\frac{\bar{\lambda}\varphi}{2\bar{m}^{2}- \mathcal{N}}. \tag{103}\] Since we determine \(\bar{\mu}_{0}\) by the condition \(t(\varphi=v)=0\), i.e., \(\bar{\mu}_{0}^{2}=(-\nu^{2}+\lambda v^{2}/2)/e\), it is easy to solve the tadpole condition \((d\bar{V}_{\text{eff}}/d\varphi)|_{\varphi=v}=0\), which gives \[\nu^{2}=\frac{\lambda}{6}v^{2}. \tag{104}\] The second derivative of \(\bar{V}_{\text{eff}}(\varphi;t(\varphi))\) is found to be \[\frac{d^{2}\bar{V}_{\text{eff}}(\varphi;t(\varphi))}{d\varphi^{2}}=-\bar{\nu} ^{2}+\frac{\bar{\lambda}}{2}\varphi^{2}+\frac{\bar{\lambda}^{2}\varphi^{2}}{2( 16\pi^{2})}\frac{1}{1-\mathcal{N}/2\bar{m}^{2}}. \tag{105}\] Thus, the mass in the vacuum (denoted as \(m_{\phi}\)) is obtained by \[m_{\phi}^{2}=\frac{d^{2}\bar{V}_{\text{eff}}(\varphi;t(\varphi)) }{d\varphi^{2}}\bigg{|}_{\varphi=v} =\frac{\lambda}{3}v^{2}\frac{1-\lambda/32\pi^{2}}{1-\lambda/8\pi^{ 2}}\] \[=\frac{\lambda}{3}v^{2}\left[1+\frac{3\lambda}{2(16\pi^{2})}+ \frac{3\lambda^{2}}{(16\pi^{2})^{2}}+\cdots\right]. \tag{106}\] One should note that \(m_{\phi}\) would agree with a one-loop fixed-order result if the higher-order terms are dropped, as it should. ### \(\phi^{4}\) theory with an additional real scalar We obtain the first and second derivatives of \(\bar{V}_{\text{eff}}(\varphi_{1};t(\varphi_{1}))\) in Eq. (72) with the \(t\)-\(\varphi_{1}\) relation (78) at zero temperature. The first derivative of \(\bar{V}_{\text{eff}}\) with respect to \(\varphi_{1}\) is \[\frac{d\bar{V}_{\text{eff}}(\bar{\varphi}_{1};t(\varphi_{1}))}{d\varphi_{1}}= \varphi_{1}\left[\bar{\nu}_{1}^{2}+\frac{\bar{\lambda}_{1}}{6}\varphi_{1}^{2}+ \frac{1}{2}\left(\bar{\lambda}_{1}\bar{I}_{0}(\bar{m}_{1})+\bar{\lambda}_{3} \bar{I}_{0}(\bar{m}_{2})\right)\right]=0, \tag{100}\] where \[\bar{I}_{0}(\bar{m})=\frac{\bar{m}^{2}}{16\pi^{2}}\left(\ln\frac{\bar{m}^{2}}{ e^{2t}\bar{\mu}_{0}^{2}}-1\right). 
\tag{101}\] As in the \(\phi^{4}\) theory, we determine \(\bar{\mu}_{0}\) by the condition \(\frac{\partial\bar{V}_{\text{eff}}(\bar{\varphi}_{1};t)}{\partial t}|_{t=0}=0\), i.e., \(t(\varphi_{1}=v)=0\), from which it follows that \[\ln\bar{\mu}_{0}^{2}=\frac{\sum_{i=1,2}\frac{\partial\bar{m}_{i}^{2}}{ \partial t}|_{t=0}m_{i}^{2}(\ln m_{i}^{2}-1)}{\sum_{i=1,2}\frac{\partial\bar{m} _{i}^{2}}{\partial t}|_{t=0}m_{i}^{2}}, \tag{102}\] where \[m_{1}^{2} =\nu_{1}^{2}+\frac{\lambda_{1}}{2}v^{2},\quad m_{2}^{2}=\nu_{2}^ {2}+\frac{\lambda_{3}}{2}v^{2}, \tag{103}\] \[\frac{\partial\bar{m}_{1}^{2}}{\partial t}\Bigg{|}_{t=0} =\frac{1}{16\pi^{2}}\left[\lambda_{1}\nu_{1}^{2}+\lambda_{3}\nu_{2 }^{2}+\frac{3}{2}(\lambda_{1}^{2}+\lambda_{3}^{2})v^{2}\right],\] (104) \[\frac{\partial\bar{m}_{2}^{2}}{\partial t}\Bigg{|}_{t=0} =\frac{1}{16\pi^{2}}\left[\lambda_{3}\nu_{1}^{2}+\lambda_{2}\nu_{2 }^{2}+\frac{1}{2}\lambda_{3}(\lambda_{1}+\lambda_{2}+4\lambda_{3})v^{2}\right]. \tag{105}\] With this \(\bar{\mu}_{0}\), the tadpole condition is simplified to \[\frac{d\bar{V}_{\text{eff}}(\bar{\varphi}_{1};t(\varphi_{1}))}{d\varphi_{1}} \Bigg{|}_{\varphi_{1}=v}=v\left[\nu_{1}^{2}+\frac{\lambda_{1}}{6}v^{2}+\frac{1 }{2}\left(\lambda_{1}\bar{I}_{0}(m_{1})+\lambda_{3}\bar{I}_{0}(m_{2})\right) \right]=0, \tag{106}\] which determines \(\nu_{1}^{2}\) as \[\nu_{1}^{2}=-\left[\frac{\lambda_{1}}{6}v^{2}+\frac{\lambda_{1}m_{1}^{2}}{32 \pi^{2}}\left(\ln\frac{m_{1}^{2}}{\bar{\mu}_{0}^{2}}-1\right)+\frac{\lambda_{3 }m_{2}^{2}}{32\pi^{2}}\left(\ln\frac{m_{2}^{2}}{\bar{\mu}_{0}^{2}}-1\right) \right]. \tag{107}\] This coincides with the one-loop fixed-order result but \(\bar{\mu}_{0}\) is given by Eq. (102). The second derivative is cast into the form \[m_{\phi_{1}}^{2} =\frac{d^{2}\bar{V}_{\text{eff}}(\varphi_{1};t(\varphi_{1}))}{d \varphi_{1}^{2}}\Bigg{|}_{\varphi_{1}=v}\] \[=m_{1}^{2}+\frac{1}{2}\Big{[}\lambda_{1}\bar{I}_{0}(m_{1})+ \lambda_{3}\bar{I}_{0}(m_{2})+\left(\lambda_{1}^{2}\bar{I}_{0}^{\prime}(m_{1})+ \lambda_{3}^{2}\bar{I}_{0}(m_{2})\right)v^{2}\Big{]}\] \[\quad+\frac{dt(\varphi_{1})}{d\varphi_{1}}\Bigg{|}_{\varphi_{1}=v }\frac{1}{2}\sum_{i=1,2}\left[\frac{\partial^{2}\bar{m}_{i}^{2}}{\partial\varphi _{1}\partial t}\bar{I}_{0}(m_{i})+\frac{\partial\bar{m}_{i}^{2}}{\partial \varphi_{1}}\frac{\partial\bar{m}_{i}^{2}}{\partial t}\bar{I}_{0}^{\prime}(m_{ i})\right]_{t=0}, \tag{108}\] where \[\left.\frac{dt(\varphi_{1})}{d\varphi_{1}}\right|_{\varphi_{1}=v}=\frac{\sum_{i=1,2 }\left[\frac{\partial^{2}\bar{m}_{i}^{2}}{\partial\varphi_{1}\partial t}\bar{I}_ {0}(m_{i})+\frac{\partial\bar{m}_{i}^{2}}{\partial\varphi_{1}}\frac{\partial \bar{m}_{i}^{2}}{\partial t}\bar{I}_{0}^{\prime}(m_{i})\right]_{t=0}}{\sum_{i= 1,2}\left[\frac{m_{i}^{2}}{8\pi^{2}}\frac{\partial\bar{m}_{i}^{2}}{\partial t} -\frac{\partial^{2}\bar{m}_{i}^{2}}{\partial t^{2}}\bar{I}_{0}(m_{i})-\left( \frac{\partial^{2}\bar{m}_{i}^{2}}{\partial t^{2}}\right)^{2}\bar{I}_{0}^{ \prime}(m_{i})\right]_{t=0}}, \tag{101}\] and \[\frac{\partial^{2}\bar{m}_{1}^{2}}{\partial\varphi_{1}\partial t} =\frac{3(\bar{\lambda}_{1}^{2}+\bar{\lambda}_{3}^{2})\varphi_{1} }{16\pi^{2}}, \tag{102}\] \[\frac{\partial^{2}\bar{m}_{2}^{2}}{\partial\varphi_{1}\partial t} =\frac{\bar{\lambda}_{3}(\bar{\lambda}_{1}+\bar{\lambda}_{2}+4 \bar{\lambda}_{3})\varphi_{1}}{16\pi^{2}},\] (103) \[\frac{\partial^{2}\bar{m}_{1}^{2}}{\partial t^{2}} =\frac{1}{16\pi^{2}}\left[\beta_{\lambda_{1}}^{(1)}(\bar{m}_{1}^ {2}+2\bar{\lambda}_{1}\varphi_{1}^{2})+\beta_{\lambda_{3}}^{(1)}(\bar{m}_{2}^ 
{2}+2\bar{\lambda}_{3}\varphi_{1}^{2})+\bar{\lambda}_{1}\frac{\partial\bar{m} _{1}^{2}}{\partial t}+\bar{\lambda}_{3}\frac{\partial\bar{m}_{2}^{2}}{\partial t }\right]\] \[=\frac{1}{(16\pi^{2})^{2}}\Big{[}4(\bar{\lambda}_{1}^{2}+\bar{ \lambda}_{3}^{2})\bar{m}_{1}^{2}+2\bar{\lambda}_{3}(\bar{\lambda}_{1}+\bar{ \lambda}_{2}+2\bar{\lambda}_{3})\bar{m}_{2}^{2}\] \[\qquad\qquad+\Big{\{}7\bar{\lambda}_{1}(\bar{\lambda}_{1}^{2}+ \bar{\lambda}_{3}^{2})+2\bar{\lambda}_{3}^{2}(\bar{\lambda}_{1}+\bar{\lambda }_{2}+5\bar{\lambda}_{3})\Big{\}}\varphi_{1}^{2}\Big{]},\] (104) \[\frac{\partial^{2}\bar{m}_{2}^{2}}{\partial t^{2}} =\frac{1}{16\pi^{2}}\left[\beta_{\lambda_{3}}^{(1)}(\bar{m}_{1}^ {2}+4\bar{\lambda}_{3}\varphi_{1}^{2})+\beta_{\lambda_{2}}^{(1)}\bar{m}_{2}^{2} +\bar{\lambda}_{3}\frac{\partial\bar{m}_{1}^{2}}{\partial t}+\bar{\lambda}_{2 }\frac{\partial\bar{m}_{2}^{2}}{\partial t}\right]\] \[=\frac{1}{(16\pi^{2})^{2}}\Big{[}2\bar{\lambda}_{3}(\bar{\lambda }_{1}+\bar{\lambda}_{2}+2\bar{\lambda}_{3})\bar{m}_{1}^{2}+4(\bar{\lambda}_{2 }^{2}+\bar{\lambda}_{3}^{2})\bar{m}_{2}^{2}\] \[\qquad\qquad+\bar{\lambda}_{3}(\bar{\lambda}_{1}^{2}+4\bar{ \lambda}_{1}\bar{\lambda}_{3}+6\bar{\lambda}_{2}\bar{\lambda}_{3}+17\bar{ \lambda}_{3}^{2})\varphi_{1}^{2}\Big{]},\] (105) \[\bar{I}_{0}^{\prime}(m) =\frac{d\bar{I}_{0}(m)}{dm^{2}}=\frac{1}{16\pi^{2}}\ln\frac{m^{2}} {e^{2t}\bar{\mu}_{0}^{2}}. \tag{106}\]
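As a numerical illustration of the zero-temperature relations of the \(\phi^{4}\) theory above, the resummed mass formula of Eq. (106) and its series expansion, together with the tadpole condition of Eq. (104), can be checked directly. The Python sketch below uses arbitrary illustrative values of \(\lambda\) and \(v\) (these numbers are not taken from the text).

```python
import numpy as np

def mphi2_exact(lam, v):
    """m_phi^2 = (lambda/3) v^2 (1 - lambda/32pi^2)/(1 - lambda/8pi^2), cf. Eq. (106)."""
    a = lam / (16.0 * np.pi**2)
    return lam / 3.0 * v**2 * (1.0 - 0.5 * a) / (1.0 - 2.0 * a)

def mphi2_series(lam, v):
    """Leading terms of the expansion: (lambda/3) v^2 [1 + 3 lam/(2*16pi^2) + 3 lam^2/(16pi^2)^2 + ...]."""
    a = lam / (16.0 * np.pi**2)
    return lam / 3.0 * v**2 * (1.0 + 1.5 * a + 3.0 * a**2)

if __name__ == "__main__":
    lam, v = 1.0, 246.0            # illustrative values only
    print(mphi2_exact(lam, v), mphi2_series(lam, v))
    print("tadpole: nu^2 =", lam * v**2 / 6.0)   # Eq. (104)
```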
2301.08492
WASP-39b: exo-Saturn with patchy cloud composition, moderate metallicity, and underdepleted S/O
WASP-39b is one of the first extrasolar giant gas planets that has been observed within the JWST ERS program. Fundamental properties that may enable the link to exoplanet formation differ amongst retrieval methods, for example metallicity and mineral ratios. In this work, the formation of clouds in the atmosphere of WASP-39b is explored to investigate how inhomogeneous cloud properties (particle sizes, material composition, opacity) may be for this intermediately warm gaseous exoplanet. WASP-39b's atmosphere has a comparable day-night temperature median with sufficiently low temperatures that clouds may form globally. The presence of clouds on WASP-39b can explain observations without resorting to a high (> 100x solar) metallicity atmosphere for a reduced vertical mixing efficiency. The assessment of mineral ratios shows an under-depletion of S/O due to condensation compared to C/O, Mg/O, Si/O, Fe/O ratios. Vertical patchiness due to heterogeneous cloud composition challenges simple cloud models. An equal mixture of silicates and metal oxides is expected to characterise the cloud top. Further, optical properties of Fe and Mg silicates in the mid-infrared differ significantly which will impact the interpretation of JWST observations. We conclude that WASP-39b's atmosphere contains clouds and the underdepletion of S/O by atmospheric condensation processes suggest the use of sulphur gas species as a possible link to primordial element abundances. Over-simplified cloud models do not capture the complex nature of mixed-condensate clouds in exoplanet atmospheres. The clouds in the observable upper atmosphere of WASP-39b are a mixture of different silicates and metal oxides. The use of constant particles sizes and/or one-material cloud particles alone to interpret spectra may not be sufficient to capture the full complexity available through JWST observations.
Ludmila Carone, David A. Lewis, Dominic Samra, Aaron D. Schneider, Christiane Helling
2023-01-20T09:44:13Z
http://arxiv.org/abs/2301.08492v1
# WASP-39b: exo-Saturn with patchy cloud composition, moderate metallicity, and underdepleted S/O ###### Abstract Context:WASP-39b is one of the first extrasolar giant gas planet that have been observed within the JWST ERS program. Data interpretation by different retrieval approaches diverge. Fundamental properties that may enable the link to exoplanet formation differ amongst methods, for example metallicity and mineral ratios. The retrieval of these values impact the results for individual element abundances as well as the presence or absence of chemical tracer species. This challenge is eminent for all JWST targets. Aims:The formation of clouds in the atmosphere of WASP-39b is explored to investigate how inhomogeneous cloud properties (particle sizes, material composition, opacity) may be for this intermediately warm gaseous exoplanet. Methods:1D profiles extracted from the 3D GCM expeRT/MITgcm results are used as input for a kinetic. non-equilibrium cloud model. Resulting cloud particle sizes, number densities and material volume fractions are the input for opacity calculations. Results:WASP-39b's atmosphere has a comparable day-night temperature median with sufficiently low temperatures that clouds may form globally. The presence of clouds on WASP-39b can explain observations without resorting to a high (\(>100\times\) solar) metallicity atmosphere for a reduced vertical mixing efficiency. The assessment of mineral ratios shows an under-depletion of S/O due to condensation compared to C/O, Mg/O, Si/O, Fe/O ratios. Vertical patchiness due to heterogeneous cloud composition challenges simple cloud models. An equal mixture of silicates and metal oxides is expected to characterise the cloud top. Further, optical properties of Fe and Mg silicates in the mid-infrared differ significantly which will impact the interpretation of JWST observations. Conclusions:WASP-39b's atmosphere contains clouds and the underdepletion of S/O by atmospheric condensation processes suggest the use of sulphur gas species as a possible link to primordial element abundances. Over-simplified cloud models do not capture the complex nature of mixed-condensate clouds in exoplanet atmospheres. The clouds in the observable upper atmosphere of WASP-39b are a mixture of different silicates and metal oxides. The use of constant particles sizes and/or one-material cloud particles alone to interpret spectra may not be sufficient to capture the full complexity available through JWST observations. Conclusions: ## 1 Introduction WASP-39b was one of the first extrasolar planets for which the James Webb Space Telescope (JWST) observations were released to the community. Similar to WASP-96b, it was observed in transmission using the NIRSpec instrument as part of the Early Release Science Programme (ERS) (The JWST Transiting Exoplanet Community Early Release Science Team et al., 2022). WASP-39b has a mass of \(M_{p}=0.28\) M\({}_{\rm Jup}\), a radius of \(R_{p}=1.27\) R\({}_{\rm Jup}\), an equilibrium temperature of \(T_{\rm eq}\sim 1100\) K, and orbits a G-type star with a period of 4.055 days (Faedi et al., 2011). WASP-96b has a mass of \(0.48\pm 0.03\) M\({}_{\rm Jup}\), a radius of \(1.2\pm 0.06\) R\({}_{\rm Jup}\), an equilibrium temperature \(T_{\rm eq}\sim 1300\) K, and orbits a G-type star with a period 3.4 days (Heller et al., 2014). 
The WASP-39b JWST ERS observation, covering the \(3-5\mu\)m wavelength range, suggest a strong CO\({}_{2}\) absorption feature at \(4.3\mu\)m (The JWST Transiting Exoplanet Community Early Release Science Team et al., 2022). Further observations with JWST instruments have also reported the detection of CO\({}_{2}\)(Rustamkulov et al., 2022; Ahrer et al., 2022). Previous observations with HST/WFC3, HST/STIS, and the VLT, as well as recent observations with several instruments on JWST have ascertained the presence of water, sodium, and potassium in the atmosphere of WASP-39b (Fischer et al., 2016; Nikolov et al., 2016; Wakeford et al., 2018; Rustamkulov et al., 2022; Alderson et al., 2022; Ahrer et al., 2022; Feinstein et al., 2022). Photochemical production has been proposed as the source of the SO\({}_{2}\) detection in the WASP-39b atmosphere (Tsai et al., 2022). The observability of spectral features, in particular the pressure broadened Na and K lines lead Sing et al. (2016) to suggest that the atmosphere of WASP-39b may be relatively cloud free. Fischer et al. (2016) use HST/STIS in combination with the Spitzer/IRAC photometry to suggest a clear-sky WASP-39b. Their best fit to the data suggest a H\({}_{2}\) dominated atmosphere with either a clear atmosphere of \(0.1\times\) to solar metallicity, or a weak haze layer with solar abundances. Wakeford et al. (2018) suggest asymmetric limbs as a possibilities to fit their observations of WASP-39b. Analysis of observations with JWST/NIRCam, JWST/NIRSpec G395H, JWST/NIRSpec PRISM, and JWST/NIRISS, favour a cloudy atmosphere (Rustamkulov et al., 2022; Alderson et al., 2022; Ahrer et al., 2022; Feinstein et al., 2022). Retrieval studies often use assumptions about atmospheric clouds, including clouds being homogeneous in wavelength ('grey' clouds), in composition as well as in particle size (Barstow et al., 2017; Wakeford et al., 2018). These assumptions could be problematic as evidenced by disparate results for WASP-39b and planets of similar mass and temperature (\(T_{\rm eq}\lesssim 1300\) K). Carone et al. (2021) studied the atmosphere of WASP-117b which is similar to WASP-39b in mass and radius. A muted water feature was detected in WASP-117b using HST/WFC3. There was no conclusive (\(>3\sigma\)) detection of Na and K in the high-resolution VLT/ESPRESSO data. It was shown that the retrieval process can lead to bifurcating results since two models would be consistent with the observations. The first, a 1D, isothermal atmosphere model with a uniform cloud deck and in equilibrium chemistry suggests preference for a high atmospheric metallicity [Fe/H] \(=2.58\pm 0.26\) but clear skies in Bayesian analysis. The data are, however, also consistent with a lower metallicity \(\sim 0.37\times\varepsilon_{\rm solar}\) (\(\varepsilon_{\rm solar}\) - solar metallicity), [Fe/H] \(<1.75\) and a cloud deck between \(10^{-2.2}\)... \(10^{-5.1}\) bar. Wakeford et al. (2018) report a cloud-free and very high metallicity of more than \(100\times\varepsilon_{\rm solar}\) for WASP-39b based on a combination of HST/WFC3, VLT/FORS2, HST/STISIS and _Spitzer_ observations. This result is degenerate with C/O ratio and cloud coverage. As was found for WASP-117b, a lower metallicity - more in line with Solar System Saturn of \(\sim 10\times\varepsilon_{\rm solar}\)(Fletcher et al., 2011; Atreya et al., 2016) - can be fitted to the data if a cloud coverage is allowed within the retrieval approach. 
Further, asymmetrically cloudy limbs would mimic a high metallicity atmosphere if 1D retrieval models are assumed (Line & Parmentier, 2016). Statistical trends in exoplanet atmosphere metallicity suggest that exoplanets also follow a similar metallicity-mass trend as known for the gas planets in the Solar System (Chachan et al., 2019; Welbanks et al., 2019): lower mass gas planets are more metal-rich than more massive gas planets. This trend is also in line with planet formation models for planets that have migrated in the proto-planetary disk (Schneider & Bitsch, 2021; Knierim et al., 2022). Thus, for Saturn-mass objects like WASP-39b, a moderately increased atmosphere metallicity (\(10\times\)solar) similar to Saturn in the Solar System can be expected (Thorngren & Fortney, 2019; Guillot et al., 2022). This paper addresses the question of cloud formation in the atmosphere of WASP-39b by applying a microphysical model that self-consistently treats the formation of cloud condensation nuclei, their growth to macroscopic particles from multiple condensing species, element depletion, and the feedback of gravitational settling on these processes. Combining the cloud model with the output of a 3D General Circulation Model (GCM) for the 3D thermodynamic atmosphere structure shows that the cloud properties are vertically heterogeneous; however, they maintain a relatively homogeneous horizontal distribution, in contrast to ultra-hot Jupiters, such as HAT-P-7b or WASP-121b (Helling et al., 2021). The dominant gas phase species after condensation are explored, showing that H\({}_{2}\)S is less strongly affected by cloud formation and the local thermodynamics than CO\({}_{2}\). The impact of atmospheric metallicity on cloud formation is assessed, illustrating that high metallicity leads to an increased cloud mass. The potential for using the S/O ratio as a link to planet formation processes is discussed. The potential of using the complex cloud model to link to the currently available observational data for WASP-39b is explored. For WASP-39b, similar to WASP-96b, a reduced mixing efficiency in the cloud model is required to produce a cloud deck between \(p_{\rm gas}=10^{-2}\) and \(5\times 10^{-3}\) bar as implied by the observations. The potentially misleading effects of over-simplified cloud parameterisations (for example, constant particle sizes or homogeneous cloud particle composition) are demonstrated. Figure 1: The JWST ERS targets WASP-39b (light-blue triangle) and WASP-96b (purple triangle) comfortably share the T\({}_{\rm eq}\), log(g) and T\({}_{\rm eff}\) parameter ranges with the exoplanet subclass of hot Jupiters like HD 189733b. Comparing known exoplanets in the (R\({}_{\rm P}\), M\({}_{\rm P}\))-plane shows both sharing the parameter space with the warm Saturn HATS-6b. The grey symbols indicate the presently known JWST exoplanet targets. ## 2 Approach The cloud structure on the JWST ERS target WASP-39b is examined by adopting a hierarchical approach similar to works on another JWST ERS target WASP-96b (Samra et al. 2022), the canonical hot Jupiters HD 189733b and HD 209458b (Lee et al. 2015; Helling et al. 2016), and the ultra-hot Jupiters WASP-18b (Helling et al. 2019) and HAT-P-7b (Helling et al. 2019; Molaverdikhani et al. 2020). The first modelling step produces a cloud-free 3D GCM representing WASP-39b. These results are used as input for the second modelling step, which is a kinetic cloud formation model consistently combined with equilibrium gas-chemistry calculations.
120 1D (\(T_{\rm gas}\)(z), \(p_{\rm gas}\)(z), \(v_{\rm z}\)(z))-profiles are utilised for WASP-39b similar to our previous works. \(T_{\rm gas}\)(z) is the local gas temperature [K], \(p_{\rm gas}\)(z) is the local gas pressure [bar], and \(v_{\rm z}\)(z) is the local vertical velocity component [cm s\({}^{-1}\)]. Figure 2: WASP-39b 2D slices showing atmosphere and cloud structure terminator maps. **Top Left:** Local atmospheric gas temperature and gas pressure (\(T_{\rm gas}\), \(p_{\rm gas}\)). **Top Right:** Total nucleation rate, \(J_{\star}=\sum_{i}J_{\rm i}\) [cm\({}^{-3}\) s\({}^{-1}\)] (i=TiO\({}_{2}\), SiO, NaCl, KCl). **Bottom left:** Dust-to-gas mass ratio \(\rho_{\rm d}/\rho\). **Bottom right:** Surface averaged mean cloud particle radius \(\langle a\rangle_{A}\) [\(\mu\)m]. This hierarchical approach is limited by not explicitly taking into account the potential effect of horizontal winds on cloud formation. However, processes governing the formation of mineral clouds are mainly determined by local thermodynamic properties which result from the 3D GCM. The temperature structure may change if the cloud particle opacity is fully taken into account in the solution of the radiative transfer. This may change the precise location of the cloud in pressure space but not the principal result of clouds forming in WASP-39b. In the following, the individual modelling steps are described in more detail. 3D atmosphere modelling: The 3D GCM expeRT/MITgcm (Carone et al., 2020; Baeyens et al., 2021) is utilised to model WASP-39b. The code was used by Schneider et al. (2022) to demonstrate that inflation of extrasolar giant gas planets is probably not caused by vertically advected heat. The expeRT/MITgcm builds on the dynamical core of MITgcm (Adcroft et al. 2004) and has been adapted to model tidally locked gas giants. Recent extensions in Schneider et al. (2022) include non-grey radiative transfer coupling. Figure 3: WASP-39b 2D slices showing atmosphere and cloud structure equatorial maps. **Top Left:** Local atmospheric gas temperature and gas pressure (\(\rm T_{gas}\), \(\rm p_{gas}\)). **Top Right:** Total nucleation rate, \(J_{*}=\sum_{i}J_{i}\) [cm\({}^{-3}\) s\({}^{-1}\)] (i=TiO\({}_{2}\), SiO, NaCl, KCl). **Bottom left:** Dust-to-gas mass ratio \(\rho_{\rm d}/\rho\). **Bottom right:** Surface averaged mean cloud particle radius \(\langle a\rangle_{A}\) [\(\rm\mu m\)]. The model parameters used for the GCM, representative of the hot Saturn WASP-39b, are: \(R_{\rm p}=9.07\times 10^{9}\) cm, \(P_{\rm rot}=4.06\) days, \(\log_{10}(g\ {\rm[cm\ s^{-2}]})=2.63\), and the substellar point irradiation temperature \(T_{\rm irr}=1580\) K or an equilibrium temperature assuming full heat distribution and zero albedo of \(T_{\rm eq}=1117.4\) K (Eq. 20 Schneider et al. 2022). The atmosphere of WASP-39b is assumed to have a metallicity of \(10\times e_{\rm solar}\). The model is run for 700 days. Additional details for the 3D GCM setup can be found in Table 1. Kinetic cloud formation: The kinetic cloud formation model (nucleation, growth, evaporation, gravitational settling, element consumption and replenishment) and equilibrium gas-phase calculations are applied following a similar approach as taken in Helling et al. (2022). The undepleted gas phase element abundances are set to \(10\times e_{\rm solar}\) by increasing all element abundances of metals.
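The quoted GCM parameters are mutually consistent: the surface gravity follows from the planetary mass and radius of Sect. 1, and the equilibrium temperature from \(T_{\rm eq}=T_{\rm irr}/\sqrt{2}\) for full heat redistribution and zero albedo. A minimal Python cross-check (physical constants and the 0.28 M\({}_{\rm Jup}\) mass are assumptions taken from Sect. 1; this snippet is purely illustrative and not part of the modelling chain):

```python
import numpy as np

# Physical constants (cgs)
G     = 6.674e-8     # gravitational constant [cm^3 g^-1 s^-2]
M_JUP = 1.898e30     # Jupiter mass [g]

# Parameters quoted in the text
M_p   = 0.28 * M_JUP # WASP-39b mass (Sect. 1)
R_p   = 9.07e9       # GCM planetary radius [cm] (~1.27 R_Jup)
T_irr = 1580.0       # substellar-point irradiation temperature [K]

g = G * M_p / R_p**2
print(f"log10 g = {np.log10(g):.2f}   (GCM value: 2.63)")

# Full heat redistribution, zero albedo: T_eq = T_irr / sqrt(2)
print(f"T_eq ~ {T_irr / np.sqrt(2.0):.0f} K   (quoted: 1117.4 K)")
```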
A constant mean molecular weight1 of \(\mu=2.4\) is used to reflect the higher value expected compared to a solar abundance H\({}_{2}\)/He dominated atmosphere. This constant value is a reasonable assumption given that the thermodynamic structure of the atmosphere does not cause the gas composition to deviate from a H\({}_{2}\)-dominated gas. Footnote 1: This value is derived from the specific gas constant \(R\) used in the GCM (Table 1). In total, 31 ODEs are solved to describe the formation of cloud condensation nuclei (\(J_{\rm s}(z)=\sum_{i}J_{\rm i}\), i=TiO\({}_{2}\), SiO, KCl, NaCl) which grow to macroscopic sized cloud particles comprised of a difference condensate species which change depending on the local atmospheric gas temperature and gas pressure. The 16 condensate species considered are TiO\({}_{2}\)[s], Mg\({}_{2}\)SiO\({}_{4}\)[s], MgSiO\({}_{3}\)[s], MgO[s], SiO[s], SiO\({}_{2}\)[s], Fe[s], FeO[s], FeS[s], Fe\({}_{2}\)O\({}_{3}\)[s], Fe\({}_{2}\)SiO\({}_{4}\)[s], Al\({}_{2}\)O\({}_{3}\)[s], CaTiO\({}_{3}\)[s], CaSiO\({}_{5}\)[s], NaCl[s], KCl[s] which form from 11 elements (Mg, Si, Ti, O, Fe, Al, Ca, S, K, Cl, Na) by 132 surface reactions. The vertical mixing is based on \(v_{\rm s}(z)\) and calculated according to Appendix B.1. in Helling et al. (2022) mimicking a diffusive flux across computational cells. Deriving cloud properties:In Section 3.2, the clouds are quantified in terms of the surface averaged mean particle size \(\langle a\rangle_{A}\) [\(\mu\)m] of the particles that make up the clouds, their material volume fractions \(V_{\rm s}/V_{\rm tot}\), and the dust-to-gas mass ratio, \(\rho_{\rm d}/\rho\) which represents the cloud mass load. The surface averaged mean particle size \(\langle a\rangle_{A}\) is defined as \[\langle a\rangle_{\rm A}=\sqrt[3]{\frac{3}{4\pi}}\frac{L_{3}}{L_{2}}, \tag{1}\] where \(L_{2}\) and \(L_{3}\) are the second and third dust moments (Eq.A.1 in Helling et al. 2020). In Section 3.3, column integrated properties are discussed. As outlined in previous works (Helling et al. 2020; Helling et al. 2021, 2022) the column integrated total nucleation rate is \[\int_{z_{\rm min}}^{z_{\rm max}}J_{\rm s,\ {\rm tot}}(z)dz\ \ {\rm[cm^{-2}\ s^{-1}]}. \tag{2}\] It quantifies the total amount of cloud condensation nuclei that form along the atmosphere column. The mass that makes up this column of cloud condensation nuclei is \[\dot{\Sigma}=\int_{z_{\rm min}}^{z_{\rm max}}\sum_{i}m_{i}J_{\rm s,\ {\rm d}}(z)dz\ {\rm[g\ cm^{-2}\ s^{-1}]}, \tag{3}\] Figure 4: (T\({}_{\rm gas}\), p\({}_{\rm gas}\)) - profiles extracted from the 3D GCM in the non-grey version presented in Schneider et al. (2022). **Left:** The 120 1D profiles extracted from a WASP-39b 3D GCM. The inset highlights the region of the nightside where Rossby vortices form at \(\theta\sim\pm 68^{\circ}\) (see also Fig. 1) **Right:** WASP-39b and WASP-96b day- and nightside median (T\({}_{\rm gas}\), p\({}_{\rm gas}\)) profiles with maximum and minimum temperature envelopes. The dayside and the nightside of WASP-96b are on average slightly hotter than WASP-39b. taking into account the four individual nucleation species (\(i=\)TiO\({}_{2}\), SiO, NaCl, KCl), with the mass of individual cloud condensation nuclei \(m_{i}\) and their respective nucleation rates \(J_{*,i}\). 
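The cloud diagnostics of Eqs. (1)-(3) reduce to simple operations on the dust moments and on the nucleation-rate profiles once these are available on a discrete vertical grid. The Python sketch below is schematic (trapezoidal integration and the toy profile are assumptions made here for illustration; the actual model implementation may differ):

```python
import numpy as np

def mean_particle_radius(L2, L3):
    """Surface-averaged mean particle radius <a>_A = (3 L3 / (4 pi L2))^(1/3), Eq. (1)."""
    return (3.0 * L3 / (4.0 * np.pi * L2)) ** (1.0 / 3.0)

def column_nucleation_rate(z, J_tot):
    """Column-integrated total nucleation rate, Eq. (2)  [cm^-2 s^-1]."""
    return np.trapz(J_tot, z)

def column_nucleation_mass(z, J_by_species, m_ccn):
    """Sigma-dot of Eq. (3): sum_i int m_i J_*,i dz  [g cm^-2 s^-1].
    J_by_species has shape (n_species, n_z); m_ccn holds the CCN masses per species [g]."""
    return sum(np.trapz(m * J, z) for m, J in zip(m_ccn, J_by_species))

if __name__ == "__main__":
    z = np.linspace(0.0, 1.0e8, 400)                        # toy vertical grid [cm]
    J_toy = 1.0e-3 * np.exp(-((z - 5.0e7) / 1.0e7) ** 2)    # toy nucleation rate [cm^-3 s^-1]
    print(column_nucleation_rate(z, J_toy))
```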
The column integrated, number density weighted, surface averaged mean particle size is \[\langle\langle a\rangle_{\rm A}\rangle=\frac{\int_{\zeta_{\rm min}}^{\zeta_{ \rm max}}n_{d}(z)\langle a\rangle_{\rm A}(z)dz}{\int_{\zeta_{\rm min}}^{\zeta_{ \rm max}}n_{d}(z)dz}\quad\mbox{with}\quad n_{\rm d}(z)=\frac{\rho(z)L_{3}(z)}{4 \pi\langle a(z)\rangle_{\rm A}^{3}/3}. \tag{4}\] The number density of cloud particles (that result from the nucleation rate, Eq. 3) is a weighting factor in Eq. 4 such that the average accounts for differing numbers of particles of different sizes through the atmosphere. ## 3 Cloud properties on the hot Saturn WASP-39b The similarity of WASP-39b with WASP-96b in mass and temperature allows to explore the consistency of the cloud model employed here for WASP-39b and in Samra et al. 2022 for WASP-96b. For both planets, ample observational data are available, including JWST data from the ERS programme for WASP-39b, which can be used to further constrain cloud model parameters like vertical mixing. Further, the diverging retrieval results (Sect. 1) of fundamental properties like metallicity for WASP-39b poses a challenge for planet formation and evolution studies, which may be resolved by taking into account cloud formation. Section 3.1 presents the WASP-39b atmosphere thermodynamic structure as a base for the global cloud results in Sect. 3.2 and all following sections. In Sect. 3.2 the global distribution of Figure 5: Microphysical cloud properties of WASP-39 b. **Left column:** Individual 1D profiles which describe the local properties of the cloud. **Right column:** Median dayside and nightside profiles with maximum and minimum planet wide value envelopes. **Top row:** Total nucleation rate \(J_{*,\rm tot}\). **Middle row:** Surface averaged mean cloud particle radius \(\langle a\rangle_{\rm A}\). **Bottom row:** Dust-to-gas mass ratio \(\rho_{\rm d}/\rho\). the cloud properties is presented which indicates the clouds on WASP-39b are rather homogeneous in nature. It, however, becomes clear in the following sections that, for example, morning and evening terminator differences (Sect. 3.3) and changing material composition (Sect. 3.4) that determine the remaining gas-phase abundances (Sect. 3.5) and the cloud opacity (Sect. 4.3) get lost in simplifications. ### The 3D GCM atmosphere structure The 3D atmosphere structures of gas giants like WASP-39b and WASP-96b that share a global temperature of T\({}_{\rm eq}\sim 1100\,\ldots\,1300\)K undergo moderate and relatively smooth daughter temperature changes compared to ultra-hot Jupiters with temperatures T\({}_{\rm eq}\gtrsim 2000\) K, for example, HAT-P-7b (Helling et al. 2019). This is shown in Fig. 4 (left) which displays the 120 1D (\(T_{\rm gas},p_{\rm gas}\))-profiles which were extracted from the 3D GCM for WASP-39b. The maximum temperature difference between the dayside and the nightside is \(\Delta T_{\rm day-night}\sim 500\) K. The flow of hot gas across the dayside results in the evening terminator being 100-200 K hotter than the morning terminator for a given pressure level where \(p_{\rm gas}\leq 10^{-2}\) bar (see Fig. 2, top left). Figure 4 (right) encapsulates the complexity of the 3D (\(T_{\rm gas},p_{\rm gas}\))-profiles in terms of dayside and nightside median profiles to facilitate comparison or application in retrieval approaches. 
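The weighted average of Eq. (4) can likewise be written as a short routine; the sketch below (illustrative only, with \(\langle a\rangle_{\rm A}(z)\) taken from Eq. (1) and all quantities given as arrays on the same vertical grid) shows how the number density \(n_{\rm d}(z)\) enters as the weight.

```python
import numpy as np

def cloud_number_density(rho, L3, a_mean):
    """n_d(z) = rho(z) L3(z) / (4 pi <a>_A(z)^3 / 3), as used in Eq. (4)."""
    return rho * L3 / (4.0 * np.pi * a_mean**3 / 3.0)

def column_weighted_mean_radius(z, rho, L3, a_mean):
    """<<a>_A>: column-integrated, number-density-weighted, surface-averaged mean particle size, Eq. (4)."""
    n_d = cloud_number_density(rho, L3, a_mean)
    return np.trapz(n_d * a_mean, z) / np.trapz(n_d, z)
```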
The maximum error that would occur when only using the median profiles is shown as (red/blue) envelop which represent the maximum deviation from the median amongst all 120 1D profiles. This maximum deviation from the median is determined by a few profiles on the night side that form a vortex cold-trap (Fig. 4, inset). These Rossby vortices appear at latitude \(\theta\approx+68^{\circ}\) as shown in Fig. A.1. This cold-trap is sampled by the profiles (\(\phi=-165.0^{\circ},\theta=68.0^{\circ}\)), (\(\phi=-150.0^{\circ},\theta=68.0^{\circ}\)), (\(\phi=-135.0^{\circ},\theta=68.0^{\circ}\)), (\(\phi=-120.0^{\circ},\theta=45.0^{\circ}\)), (\(\phi=-120.0^{\circ},\theta=68.0^{\circ}\)), and (\(\phi=-105.0^{\circ},\theta=68.0^{\circ}\)) on the nightside at \(p_{\rm gas}\sim 10^{-3.5}\) bar. The median (\(T_{\rm gas},p_{\rm gas}\))-profiles also highlight that WASP-39b and WASP-96b differ only moderately with respect to their atmosphere temperatures. Figure 2 (top left) further demonstrates in 2D terminator slices that the WASP-39b terminator temperature distribution is only very mildly asymmetric within the present 3D GCM modelling domain, again similar to WASP-96b. Since cloud formation is determined by the local thermodynamical conditions in the collision dominated part of any atmosphere, the cloud distribution will be similarly symmetric globally. However, a local cold trap, like the cold Rossby vortices on the night side, may amplify the cloud formation efficiency locally. Such details may be lost when representing the WASP-39b atmosphere profiles in terms of median day and night profiles. It may, however, be reasonable to cast the complexity of physically self-consistent temperature profiles, as used here, in terms of median profiles for better comparison with temperature profiles from retrieval frameworks. ### The global properties of mineral clouds on WASP-39b Figure 2 (top right) demonstrates where cloud formation is triggered by the formation of cloud condensation nuclei. The local thermodynamic conditions (Fig. 2, top left) trigger the formation of cloud particles generally at pressures \(p_{\rm gas}<10^{-1}\) bar on the dayside and \(p_{\rm gas}<10^{-2}\) bar on the nightside. The extension of the global cloud layers reach considerably higher pressures as cloud particles grow and gravitationally settle into the deeper layers where they evaporate. Hence, the largest particles of \(\langle a\rangle_{\rm A}\sim 10^{4}\mu\)m (Fig. 2, lower right) appear at the cloud base of \(p_{\rm gas}\sim 10^{2.5}\) bar. The dust-to-gas ratio (Fig. 2, lower left) demonstrates the rather symmetric cloud mass load of the WASP-39b atmosphere. Figure 5 (left) visualises the detailed cloud property results for the total nucleation rate, \(J_{*}\) (top), the surface averaged mean particle size, \(\langle a\rangle_{\rm A}\) (middle), and the dust-to-gas mass ratio, \(\rho_{\rm d}/\rho\) (bottom), for the 120 1D profiles that represent the WASP-39b atmosphere. Figure 5 (right) presents the median and maximum deviation values for the day (orange) and the night (blue) side. The opacity relevant surface averaged mean particle size, \(\langle a\rangle_{\rm A}\) (middle) appears well represented by the median values and the maxima deviations appear very moderate. The nucleation rate, \(J_{*}\) (top), however, has a few order of magnitudes differences between the median profiles for the dayside and the nightside in the upper atmosphere. 
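The median profiles and their maximum/minimum envelopes shown in Fig. 4 (right) amount to a simple reduction of the 120 extracted profiles. A minimal Python sketch of this construction is given below; the longitude convention (substellar point at \(0^{\circ}\), dayside defined by \(|\phi|<90^{\circ}\)) and the array layout are assumptions made here for illustration.

```python
import numpy as np

def day_night_median_envelopes(lon_deg, T_profiles):
    """Median dayside/nightside temperature profiles with min/max envelopes.

    lon_deg    : (n_profiles,) longitudes in degrees, substellar point assumed at 0 deg
    T_profiles : (n_profiles, n_levels) gas temperatures on a common pressure grid
    Returns a dict with 'day' and 'night' entries, each holding median, lo and hi arrays.
    """
    lon = np.asarray(lon_deg, dtype=float)
    wrapped = ((lon + 180.0) % 360.0) - 180.0     # wrap into (-180, 180]
    dayside = np.abs(wrapped) < 90.0              # |longitude| < 90 deg -> dayside
    result = {}
    for name, mask in (("day", dayside), ("night", ~dayside)):
        T = np.asarray(T_profiles)[mask]
        result[name] = {"median": np.median(T, axis=0),
                        "lo": T.min(axis=0),
                        "hi": T.max(axis=0)}
    return result
```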
Furthermore, the spread in deviation away from the nightside median is much larger on the nightside in the upper atmosphere. This affects the cloud particle number density and therefore translates in considerable differences between the median profiles for \(\rho_{\rm d}/\rho\) (bottom). The peak in the maximum deviation from the nightside median profile for \(J_{*}\), between \(p_{\rm gas}\sim 10^{-4}-10^{-3.5}\) bar, is due to the few atmosphere profiles that constitute a cold trap due to the Rossby vortices. Similarly to the cloud model performed for WASP-96b (Samra et al. 2022), also for WASP-39b the cloud properties appear to be relatively horizontally homogeneous for a given hemisphere, except for the Rossby cold traps. The latter will be on the night side and will thus not be observable with transmission spectroscopy. However, to be able to interpret transmission spectra derived by JWST that probe the planetary limbs, particle sizes and material compositions at the morning and evening terminator have to be investigated in more detail. The local thermodynamic conditions of the morning terminator are similar to that of the night side and thus exhibits night side nucleation and the local thermodynamic conditions of the evening terminator are similar to the day side and exhibit correspondingly different nucleation. ### Column integrated properties to reveal differences at the terminators Column integrated properties (definitions in Sect. 2) provide additional insights at the terminators. The column integrated values are less affected by extreme events like the Rossby vortices which determine the maximum deviations from the median values in Fig. 5. Further, they enable the comparison with results from the ARCiS (Ormel & Min 2019; Min et al. 2020) retrieval framework, which incorporates a self-consistent cloud model, too. The column integrated nucleation rate mass, \(\dot{\Sigma}\), and the column integrated, number density weighted, surface averaged mean particle size, \(\langle\langle a\rangle_{\rm A}\rangle\), are shown in Fig. 6. Figure 6 highlights that differences between the morning and evening terminators of WASP-39b become more apparent in the integrated properties. The integrated nucleation rates are higher on the nightside with a range of \(\dot{\Sigma}\sim 10^{-11.5}\,\ldots\,10^{-13.5}\,\ldots\,10^{-15.5}\) g cm\({}^{-2}\) s\({}^{-1}\) compared to the dayside with a range of \(\dot{\Sigma}\sim 10^{-13.5}\,\ldots\,10^{-15.5}\) g cm\({}^{-2}\) s\({}^{-1}\). Correspondingly, the values of \(\langle\langle a\rangle_{\rm A}\rangle\) are larger on the dayside, ranging from \(\langle\langle a\rangle_{\rm A}\rangle\sim 10^{-1}\,\ldots\,10^{-0.5}\)\(\mu\)m, than on the nightside where \(\langle\langle a\rangle_{\rm A}\rangle\sim 10^{-2}\,\ldots\,10^{-1}\)\(\mu\)m. The evening terminator inherits similar local thermodynamic conditions to the dayside, whereas the morning terminator is similar to the nightside. Hence, nucleation on the evening terminator is less efficient than the morning terminator (\(\hat{\Sigma}_{\rm evening}<\hat{\Sigma}_{\rm morning}\)) resulting in a larger average particle size at the evening terminator compared to the morning terminator. The evening terminator appears more homogeneous in nucleation efficiency which results into a smaller variation in particle sizes across the limb. Both, \(\hat{\Sigma}\) and \(\langle\langle a\rangle_{\rm A}\rangle\) show a moderate variation with latitude. 
These notable differences in column integrated cloud properties are caused by the moderate differences in the local thermodynamic conditions e.g. the (T\({}_{\rm gas}\), p\({}_{\rm gas}\))-profiles. The values presented in Fig. 6 compare well with the values derived by Min et al. (2020) using ARCiS to perform retrieval on observations of WASP-39b (without JWST data). They derived \(\log_{10}\hat{\Sigma}=-12.77^{+3.68}_{-2.93}\) [g cm\({}^{-2}\) s\({}^{-1}\)] from pre-JWST observations, which is consistent with the values that are derived here with self-consistent forward modelling. Similarly, Samra et al. (2022) noted that their column integrated nucleation rate also matched within 1\(\sigma\) ARCiS results for the exo-Saturn WASP-96b. Thus, forward modelling and retrieval can complement each other if retrieved cloud model properties are complex enough. ### Non-homogeneous vertical cloud material composition WASP-39b's limbs have notable differences in particle size at a given pressure level as result of variations in the local atmospheric density structures. The particle sizes do also strongly vary in the vertical direction. Therefore, the change of the vertical thermodynamic structure within the atmosphere of WASP-39b causes the cloud properties to be non-homogeneous in size and number, but also in material composition. The changing atmosphere density affects the collisional rates, but the temperature affects the thermal stability (and to a lesser extent the collisional rates) which then results in a changing composition of the cloud particle within the atmosphere. The detailed material compositions of the cloud particles are presented for the substellar (dayside) and the antistellar (nightside) points, as well as for the morning and the evening terminator in the equatorial plane in Sect. 3.4.1. Triggered by the pre-JWST large values (\(150\times\epsilon_{\rm solar}\)) of inferred metallicities for WASP-39b, Sect. 3.4.2 explores the effect of increasing amounts of heavy elements (heavier than He) on the cloud results. #### 3.4.1 Patchy clouds due to changing thermal stability The composition of the particles that compose the clouds in the atmosphere of WASP-39b varies throughout the atmosphere due to the changing thermal stability of condensing materials in response to changing local thermodynamic conditions. Figure 7 shows the volume fractions, \(V_{s}/V_{\rm tot}\), of the individual material condensates at the substellar, antistellar, and equatorial morning and evening terminator. The composition of the cloud layers is shown grouped into silicates (s=MgSiO\({}_{3}\)[s], Mg\({}_{2}\)SiO\({}_{4}\)[s], Fe\({}_{2}\)SiO\({}_{4}\)[s], CaSiO\({}_{3}\)[s]), metal oxides (s=SiO[s], SiO\({}_{2}\)[s], MgO[s], FeO[s], Fe\({}_{2}\)O\({}_{3}\)[s]), high temperature condensates (s=TiO\({}_{2}\)[s], Fe[s], FeS[s], Al\({}_{2}\)O\({}_{3}\)[s], CaTiO\({}_{3}\)[s]), and salts (s=KCl[s], NaCl[s]) for the morning and evening terminators as 2D slice plots in Fig. 8. In the upper atmosphere (\(p_{\rm gas}\lesssim 10^{-2}\) bar) there is no clear dominant group of material condensates globally, with both silicates and metal oxides representing the bulk composition and the relative fraction of each varying between profiles. At the morning terminator, silicates and metal oxides represent almost equal fractions of the cloud particle composition with \(\sim 50\%\) and \(\sim 40\%\) respectively. 
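The grouping of condensates into silicates, metal oxides, high temperature condensates, and salts used for Figs. 7 and 8 (and in the discussion that follows) is a simple bookkeeping step on the material volume fractions. A minimal Python sketch, assuming the per-material volume fractions \(V_{s}/V_{\rm tot}\) at a given atmospheric level are available as a mapping from material name to fraction:

```python
# Material groups as listed in the text (Sect. 3.4.1)
GROUPS = {
    "silicates":           ["MgSiO3[s]", "Mg2SiO4[s]", "Fe2SiO4[s]", "CaSiO3[s]"],
    "metal oxides":        ["SiO[s]", "SiO2[s]", "MgO[s]", "FeO[s]", "Fe2O3[s]"],
    "high-T condensates":  ["TiO2[s]", "Fe[s]", "FeS[s]", "Al2O3[s]", "CaTiO3[s]"],
    "salts":               ["KCl[s]", "NaCl[s]"],
}

def grouped_volume_fractions(vs_vtot):
    """Sum per-material volume fractions V_s/V_tot (dict: material -> fraction)
    into the four material groups used in Figs. 7 and 8."""
    return {group: sum(vs_vtot.get(m, 0.0) for m in materials)
            for group, materials in GROUPS.items()}
```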
Over the same pressure region for the evening terminator, the silicates dominate slightly over the metal oxides, each comprising \(\sim 60\%\) and \(\sim 30\%\) respectively. The remaining \(\sim 10\%\) of the cloud particle volume is comprised of high temperature condensates. The dominant silicate condensates are by MgSiO\({}_{3}\)[s], Mg\({}_{2}\)SiO\({}_{4}\)[s], and Fe\({}_{2}\)SiO\({}_{4}\)[s]. The dominant metal oxide condensates are SiO[s], SiO\({}_{2}\)[s], and MgO[s]. The mixed silicate and metal oxide cloud layer extends until the metal oxides become thermally unstable at \(p_{\rm gas}\sim 10^{-3}\) bar for the evening terminator and \(p_{\rm gas}\sim 10^{-2}\) bar for the morning terminator. The thermally unstable material evaporates releasing Mg, Si, and O back into the gas phase which permits further silicate condensation. The increase in the silicate fraction is initially due to an increase in the fraction of MgSiO\({}_{3}\)[s] from \(\sim 16-17\%\) to \(\sim 45\%\) of the cloud volume. When MgSiO\({}_{3}\)[s] evaporates, Mg\({}_{2}\)SiO\({}_{4}\)[s] becomes the dominant silicate at \(\sim 61\%\) of the total cloud particle volume. The maximum contribution of silicates is \(\sim 81\%\) occurring by \(p_{\rm gas}\sim 10^{-2}\) bar at the morning terminator and \(p_{\rm gas}\sim 10^{-1}\) bar at the evening terminator. The deepest layers of the atmosphere are dominated by high temperature condensates as all other materials considered here are thermally unstable. The general trends in material composition outlined for the terminators also apply for the substellar and antistellar points. Hence, the clouds on WASP-39b are expected to be patchy in terms of mixed composition in the upper atmosphere, with an extended silicate dominated layer until \(p_{\rm gas}\sim 10^{-2}\) bar. Figure 6: Column integrated cloud properties for WASP-39b. **Top:** Column integrated mass nucleation rate, \(\hat{\Sigma}\). **Bottom:** Column integrated, number density weighted surface averaged mean, particle size, \(\langle\langle a\rangle_{\rm A}\rangle\) #### 3.4.2 Effect of global metallicity on cloud formation Previous works before JWST data were obtained have reported a wide range of different values for the metallicity of WASP-39b: From solar (Nikolov et al., 2016) to moderately super-solar (Pinhas et al., 2019) to very high values of \(151^{+48}_{-46}\times e_{\rm solar}\) metallicity (Wakeford et al., 2018). Notably, the derived atmospheric metallicity changed, depending on cloud model being used for retrieval for the same data. Retrieval with a simple cloud prescription as used by Wakeford et al. (2018) favoured a cloud-free, very high metallicity composition. ARCiS yielded \(\sim 15\times e_{\rm solar}\) metallicity (Min et al., 2020) with a more complex cloud model, which is in accordance with recently obtained JWST data (The JWST Transiting Exoplanet Community Early Release Science Team et al., 2022; Rustamkulov et al., 2022; Alderson et al., 2022; Ahrer et al., 2022; Feinstein et al., 2022). For this work \(10\times e_{\rm solar}\) metallicity was adopted in the nominal model in accordance of newly obtained JWST data, in contrast to previous work on the similarly warm exo-Saturn WASP-96b, for which solar metallicity was assumed (Samra et al., 2022). While Min et al. (2020) constrained metallicity for WASP-39b with a cloud model from the retrieval side, it is worthwhile to also explore the impact of different metallicities with forward modelling using a fully microphysical cloud model. 
The implications of different assumptions of the atmospheric metallicity on the formation of clouds on WASP-39b are explored here. Three different metallicity values are tested for the same equatorial evening terminator (T\({}_{\rm gas}\), p\({}_{\rm gas}\))-profile: \(1\times\), \(10\times\), and \(100\times e_{\rm solar}\) abundances. The mean molecular weight, \(\mu\), varies for a given (T\({}_{\rm gas}\), p\({}_{\rm gas}\))-profile. The values of \(\mu=2.3,2.4\), and \(4.75\) are adopted for the \(1\times\), \(10\times\), and \(100\times e_{\rm solar}\) cases, respectively. A value of \(\mu=2.3\) is expected for a solar H\({}_{2}\)/He dominated atmosphere. The choice of \(\mu=4.75\) is motivated by equilibrium chemistry calculations for the equatorial evening terminator (T\({}_{\rm gas}\), p\({}_{\rm gas}\))-profile using GGChem (Woitke et al., 2018) which show \(\mu=4.73\)... \(4.81\) throughout the atmosphere. The \(10\times e_{\rm solar}\) case with \(\mu=2.4\) is the same as presented in the previous sections. The change in metallicity is applied only in the cloud formation simulation, therefore, any impact on the (T\({}_{\rm gas}\), p\({}_{\rm gas}\))-profile which may arise from a different metallicity is not included here. The differences in the total nucleation rate (\(J_{\rm s,tot}\)), total particle number density (\(n_{d}\)), surface averaged particle size (\(\langle a\rangle_{\rm A}\)), and material composition of cloud particles (\(V_{s}/V_{\rm tot}\)) for each case are presented in Fig. 9. The nucleation efficiency increases slightly with increased metallicity, however, the pressure range over which nucleation occurs does not change significantly. The increased availability of condensable elements results in approximately an order of magnitude increase in the total cloud particle number density for each factor of 10 increase in metallicity. In the upper atmosphere, \(p_{\rm gas}\lesssim 10^{-2}\) bar, there is little difference in \(\langle a\rangle_{\rm A}\). At higher pressures, the slightly increased nucleation rate in the upper atmosphere manifests in a slightly smaller \(\langle a\rangle_{\rm A}\) for \(p_{\rm gas}>10^{-2}\) bar. The general trend in which condensates dominate the cloud particle composition remains consistent between the three cases. In general, the upper atmosphere has a 55%, 40%, 15% mix of silicates, metal oxides, and high temperature condensates, respectively. The most significant difference between the three cases is the extent of the deep atmosphere (\(p_{\rm gas}\gtrsim 10^{-2.5}\) bar) silicate cloud layer. Figure 7: Volume fractions, \(V_{s}/V_{\rm tot}\), of individual cloud material condensates at the substellar, antistellar, and equatorial morning and evening terminators for WASP-39b. The increased global abundance of heavy elements increases the thermal stability of the silicate materials at higher pressures. Consequently, the pressure level at which MgSiO\({}_{3}\)[s] evaporates increases from \(p_{\rm gas}\sim 10^{-1.5}\) bar in the \(1\times e_{\rm solar}\) case to \(p_{\rm gas}\sim 10^{-0.5}\) bar and \(p_{\rm gas}\sim 10^{1.5}\) bar in the \(10\times e_{\rm solar}\) and \(100\times e_{\rm solar}\) cases, respectively. ### Atmospheric gas composition The formation of cloud particles affects the composition of the observed atmospheric gas by depleting those elements that form the respective cloud materials (Mg, Si, Ti, O, Fe, Al, Ca, S, K, Cl, Na). The most abundant elements (O) are the least affected.
Therefore, it is important to explore the composition of the gas phase which results from the formation of clouds. Figure 8: WASP-39b 2D terminator slices showing the bulk material composition of cloud particles. The materials are grouped as in (Helling et al., 2021): **Top Left:** Metal oxides (\(s\) =SiO[s], SiO\({}_{2}\)[s], MgO[s], FeO[s], Fe\({}_{2}\)O\({}_{3}\)[s]), **Top Right:** Silicates (\(s\) =MgSiO\({}_{3}\)[s], Mg\({}_{2}\)SiO\({}_{4}\)[s], Fe\({}_{2}\)SiO\({}_{4}\)[s], CaSiO\({}_{3}\)[s]). **Bottom left** High temperature condensates (\(s\) =TiO\({}_{2}\)[s], Fe[s], FeS[s], Al\({}_{2}\)O\({}_{3}\)[s], CaTiO\({}_{3}\)[s]), **Bottom right** Salts (\(s\) =KCl[s], NaCl[s]). #### 3.5.1 Dominant gas-phase species In Figure 10 (left) the concentrations of the dominant gas species, excluding H\({}_{2}\)/He, for the equatorial morning and evening terminators are shown. The most dominant gas phase species include CO and H\({}_{2}\)O, both with concentrations greater than \(10^{-3}\), as well as H\({}_{2}\)S, CO\({}_{2}\), CH\({}_{4}\), Na, and K. For most of the species shown the concentrations are generally broadly similar between the two profiles and the concentrations of these species generally do not change significantly through the atmospheres, with the exception of CO\({}_{2}\) and CH\({}_{4}\). The CO\({}_{2}\) concentration slightly increases by approximately an order of magnitude between \(p_{\rm gas}\sim 10^{-1}\) bar and \(p_{\rm gas}\sim 10^{-5}\) bar. The major exception, however, is CH\({}_{4}\), with the morning terminator concentration exceeding that of the evening terminator by approximately 3 orders of magnitude in the upper atmosphere (\(p_{\rm gas}\lesssim 10^{-2}\) bar) in equilibrium chemistry. At the cooler gas temperatures, the CO reacts with H\({}_{2}\) to form CH\({}_{4}\) resulting in an increase in the concentration of CH\({}_{4}\), with the O liberated in this reaction serving to increase the H\({}_{2}\)O concentration (Sharp & Burrows 2007). The concentration of CH\({}_{4}\) at the morning terminator in the upper atmosphere of approximately \(n_{\rm CH_{4}}/n_{<H>}\sim 10^{-4}\) is in principle detectable by HST/WFC3 and JWST (e.g. Kreidberg et al. 2018; Carone et al. 2021b). So far, however, methane has not been detected with low resolution spectroscopy for warm exoplanets (\(T_{\rm eq}\lesssim 1200\) K) like WASP-39b. The absence of spectral CH\({}_{4}\) features in these planets suggests that methane abundances may be affected by disequilibrium chemistry (Fortney et al. 2020) via vertical (Moses et al. 2011; Venot et al. 2012) and maybe also by horizontal mixing (Agundez et al. 2014; Baeyens et al. 2021). Both would quench the amount of CH\({}_{4}\) below the observation limit (\(\rm VMR<10^{-5}\)). Hence, it is reasonable to assume the concentration of CH\({}_{4}\) will be globally homogeneous and below the detectability threshold. Thus, CH\({}_{4}\) is not included as an opacity species in Sect. 4.2. Figure 10 (right) compares the concentrations of the dominant gas phase species at the evening terminator between the cloud model and the atmosphere with equilibrium chemistry but without condensation. The concentrations of each species do not differ significantly due to the element depletion associated with the cloud formation. #### 3.5.2 The role of sulphur in clouds Elements such as K, Na, and Cl are not significantly affected by cloud formation because their possible condensate materials are thermally unstable in the atmosphere of WASP-39b. 
In addition, the formation of sulphur-containing materials is considerably less favoured in comparison to the Fe/Mg-containing silicates. Figure 5 in Helling (2019) demonstrates that materials like S[s], MgS[s], and FeS[s] would reach a sweet spot of maximum volume contribution of \(\sim 15\)% when C/O=0.99...1.10 and if the sulphur abundance is enriched above the solar value for a \(T_{\rm eff}=1200\) K exoplanet. Mahapatra et al. (2017) (Table 3) demonstrate that FeS[s] would contribute with \(<1\)% to cloud particles in gas giants and with \(<10\)% in hot rocky planets like 55 Cnc e. If sulphur compounds do not condense, the sulphur needs to remain in the gas phase of the exoplanet atmosphere. Therefore, the S/O ratio would remain near-solar as demonstrated in Fig. 4 in Helling (2019). This negative result for the role of sulphur contributing to the cloud mass in exoplanets (and brown dwarfs) is supported by similar findings for AGB stars. The study of post-AGB stars shows a lack of sulphur depletion (Waelkens et al. 1991; Reyniers & van Winckel 2007) which is interpreted as a lack of sulphur condensation into dust grains in AGB stars (Danilovich et al. 2018). Figure 9: Metallicity effect on cloud properties shown for the same evening terminator equator (T\({}_{\rm gas}\), p\({}_{\rm gas}\))-profile (top left, black line) for WASP-39b. An increased global amount of heavy elements increases the thermal stability of the silicate materials (orange) at higher temperatures where p\({}_{\rm gas}>10^{-2}\) bar. **Top right:** Total nucleation rate, \(J_{\ast,\rm tot}\) [cm\({}^{-3}\) s\({}^{-1}\)], **Bottom left:** Surface averaged mean particle size (black) and cloud particle number density, \(n_{\rm d}\) [cm\({}^{-3}\)] (green), **Bottom right:** Material volume fractions, \(V_{\rm s}/V_{\rm tot}\), for four material groups (high temperature condensates, metal oxides, silicates, salts). AGB stars, which undergo strong condensation events (dust formation), enrich the ISM for the next generation of stars and planets to form. Sulphur is, however, not nucleosynthesized in AGB stars but rather in type II supernovae (Colavitti et al., 2009; Perdigon et al., 2021), and is the 10th most abundant element in the Universe, as well as an important element for life on Earth. Oxygen-rich AGB stars are observed to have SO and SO\({}_{2}\) (Danilovich et al., 2020) but H\({}_{2}\)S is most likely not a parent molecule since it decays rapidly (Danilovich et al., 2016). H\({}_{2}\)S has been found in high-mass loss oxygen-rich stars and is argued to account for a significant fraction of the sulphur abundance in these objects (Danilovich et al., 2017). Gobrecht et al. (2016) also point out the chemical link between CS, CN, SH and H\({}_{2}\)S in carbon-rich AGB stars. Here, SH can combine with O to form SO which further may form SO\({}_{2}\); this would also be relevant for oxygen-rich environments or the upper exoplanet atmospheres where photochemistry enables the formation of CS and CN. However, the SH reservoir may be depleted through the formation of H\({}_{2}\)S such that it indirectly affects the presence of SO and SO\({}_{2}\). The SH/H\({}_{2}\)S chemistry further dictates the formation of CS and CN which then may continue to form HCN (their Eqs. 5 to 16). The research question is therefore which gas-phase constituent holds the sulphur reservoir if sulphur is not depleted by dust formation in AGB stars or in cloud particles in extrasolar planets / brown dwarfs. Recently, Tsai et al.
(2022) suggested that H\({}_{2}\)S is a precursor molecule which gives rise to SO\({}_{2}\) on WASP-39b via gas-phase non-equilibrium in combination with a solar element overabundance of \(10\times e_{\rm solar}\). To gather a first impression of the exoplanet sulphur reservoir, the most abundant sulphur-binding gas species in the WASP-39b atmosphere, H\({}_{2}\)S, is included into the comparison of major gas species in Fig. 10. These are the gas species for which a radiative transfer solution is presented to fit the observational date for WASP-39b in Sect. 4.2. #### 3.5.3 Mineral ratios Si/O, Mg/O, Fe/O, S/O, C/O The Si/O, Mg/O, Fe/O, S/O and C/O element abundance ratios for the equatorial morning and evening terminator points are shown in Fig. 11. The cloud formation process reduces the abundances of Si, Mg, and Fe by several orders of magnitude in comparison to oxygen. The reduction is seen for both the morning and evening terminators as cloud formation occurs at both points, however, the morning terminator abundances are reduced more than the evening terminator. The maximum difference in the element ratios between the two terminator points is approximately 1 order magnitude occurring over a pressure range of \(p_{\rm gas}\leq 10^{-2}-10^{-1}\) bar. No substantial reduction is seen for sulphur which confirms previous results (Mahapatra et al., 2017; Helling, 2019). This indicates that potential reaction partners Fe, Mg, Si are stronger bound by other materials and leads to the conclusion that sulphur gas species may provide means to determine the primordial abundances and hence, to link to planet formation processes. This finding is relevant for all exoplanets. In contrast to the other element ratios, in regions of cloud formation the C/O is increased from the solar C/O = 0.54 as the oxygen is depleted from the gas phase. The maximum value of C/O \(\sim\) 0.75 occurs where \(p_{\rm gas}=10^{-3}\) bar in the upper atmosphere. In Figure 11 (bottom), the equatorial terminator C/O of WASP-39b is compared to that of the reduced mixing efficiency case of WASP-96b from (Samra et al., 2022). The C/O ratio for WASP-96b drops to C/O \(\sim\) 0.31 for both the equatorial morning and evening terminators at \(p_{\rm gas}\sim 10\) bar due to the evaporation of Mg\({}_{2}\)SiO\({}_{4}\)[s] resulting from the increased gas temperature (see Figs. 2 and 3 in Samra et al. (2022)). The reduced mixing efficiency serves to maintain the sub-solar C/O at this pressure level as the oxygen is not efficiently removed from the evaporation edge of the cloud base. Figure 10: Gas phase concentrations (n\({}_{\rm x}\)/n\({}_{\rm ch}\)-) for selected molecules at the WASP-39b equatorial morning and evening terminators in chemical equilibrium. **Left:** Comparing morning and evening terminator. **Right:** Comparing results for depleted (after cloud formation) and undepleted element abundances at the equatorial evening terminator. ## 4 The value of simplistic cloud models for observations Cloud models of varying complexity are used to represent observational data for extrasolar planets. The self-consistent, complex cloud model used in this work yields detailed cloud properties. In atmosphere retrievals, more simplified cloud models in form of a grey cloud prescription is generally used. In this section, both cloud model approaches are applied to understand how the complex cloud model can aid atmosphere retrieval to realise the full potential of JWST observational data. 
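Referring back to the mineral ratios of Sect. 3.5.3 (Fig. 11), the ratios follow directly from the cloud-depleted gas-phase element abundances, and the reported rise of C/O illustrates the underlying bookkeeping. A minimal Python sketch (the assumed dictionary layout and the ~28% oxygen depletion used in the check are illustrative choices made here, not model output):

```python
import numpy as np

def mineral_ratios(eps):
    """Element ratios of Fig. 11 from (possibly cloud-depleted) gas-phase element abundances.
    `eps` maps an element symbol to its number-abundance profile (array over pressure)."""
    oxygen = np.asarray(eps["O"], dtype=float)
    return {f"{el}/O": np.asarray(eps[el], dtype=float) / oxygen
            for el in ("Si", "Mg", "Fe", "S", "C")}

# Illustrative consistency check: starting from solar C/O = 0.54, removing roughly 28%
# of the gas-phase oxygen into condensates raises C/O to about the quoted maximum of ~0.75.
print(round(0.54 / (1.0 - 0.28), 2))
```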
Section 4.1 identifies two distinct wavelength regimes, longer and shorter than \(\sim 4\)\(\mu\)m, based on the disperse, mixed material cloud results from our kinetic model. Section 4.2 presents a synthetic spectrum for WASP-39b for \(\lambda\lesssim 4\)\(\mu\)m where the cloud may be reasonably well represented by a grey opacity, and Sect. 4.3 discusses the effect of a homo-disperse, homo-material cloud on the cloud opacity which is particularly strong for \(\lambda>4\mu\)m. ### Disperse, mixed material cloud opacity Following Helling et al. (2022) and Samra et al. (2022), the atmospheric gas pressure, \(p_{\rm gas}\), is explored where the cloud reaches optical depth \(\tau(\lambda)=1\) as a function of wavelength, \(\lambda\). The optically thick pressure level aligns with the cloud top pressure which many of the simple grey cloud deck approaches use for fitting spectra. Further, the pressure level at which the clouds become optically thick is a result of the complex cloud model. Figure 12 demonstrates how the \(\tau(\lambda)=1\) pressure level changes with wavelength if the cloud opacity is calculated using the full microphysical cloud model for WASP-39b as presented in Sect. 3. The optically thick cloud pressure level, \(p_{\rm gas}(\tau(\lambda)=1)\), is the result of the varying composition, particle size, and number density of cloud particles with pressure. Two distinct wavelength regimes may be identified in Fig. 12: \(\lambda<4\mu\)m with no solid-material spectral features but a slope, and \(\lambda>4\mu\)m where solid-material spectral features occur. For \(\lambda<4\mu\)m, \(p_{\rm gas}(\tau(\lambda)=1)\) varies by \(\sim 1.5\) orders of magnitude in the wavelength bands that are accessible by HST/WFC3, HST/STIS, VLT/FORS2, JWST/NIRSpec, JWST/NIRCam and future missions like PLATO and Ariel. The pressure range \(p_{\rm gas}\sim 10^{-4.5}\ldots 10^{-3}\) bar that is probed at these wavelengths is characterised by cloud particles made of a mix of materials as shown in Table 1. The dominating materials are Mg/Si/O and Fe/Si/O silicates with inclusions from various materials, including further iron compounds like Fe[s]. For \(\lambda>4\mu\)m, the pressure range \(p_{\rm gas}\sim 10^{-3}\)bar is probed. This coincides, for the substellar point and evening terminator profiles, with chemically very active regions of the atmosphere, namely where the iron silicates (for example, Fe\({}_{2}\)SiO\({}_{4}\)[s]), MgO[s] and SiO\({}_{2}\)[s] become thermally unstable and evaporate. Instead, Fe[s], SiO\({}_{2}\)[s], Mg\({}_{2}\)SiO\({}_{4}\)[s] and MgSiO\({}_{3}\)[s] increase their respective volume fractions considerably. This is the reason for the increase in the optical depth of the clouds for only the substellar point (between \(\lambda=4\ldots 8\,\mu\)m). Therefore, assuming cloud particles observed at these wavelengths are made of only silicates is questionable. While the assumption of uniform cloud composition is called into question here, the concept of optical depth \(\tau=1\) is still useful to compare between the complex models, observational data and cloud parameterisation. ### Synthetic transmission spectra of WASP-39b A first exploration of the available observational data for WASP-39b for \(\lambda<5\)\(\mu\)m is undertaken with the current, publicly available data, using a grey cloud model for the evening and morning terminators, respectively, to derive insights by connecting our complex cloud model to observations (Fig. 13). The synthetic spectra shown in Fig. 
13 are computed using _petitRADTRANS_ (Molliere et al., 2019) for the GCM (T\({}_{\rm gas}\), p\({}_{\rm gas}\))-profile for the equatorial morning and evening terminators, and their respective cloud-depleted gas-phase concentrations (see Fig. 10). The line opacities used for the spectra are CO (Rothman et al., 2010), CO\({}_{2}\) (Yurchenko et al., 2020), H\({}_{2}\)O (Rothman et al., 2010), H\({}_{2}\)S (Azzam et al., 2016), Na (Allard et al., 2019), and K (Allard et al., 2016). A simple grey cloud deck is applied to both terminators separately. The synthetic spectra are compared to both pre-JWST observations and all four different pipeline reductions of the JWST observations from The JWST Transiting Exoplanet Community Early Release Science Team et al. (2022). Figure 11: Gas phase element abundance ratios, for equatorial morning (\(\phi=-90.0\), dashed lines) and evening (\(\phi=90.0\), solid lines) terminators. **Top:** The mineral ratios Si/O, Mg/O, Fe/O, S/O, and C/O for WASP-39b. **Bottom:** C/O for both WASP-39b and WASP-96b. Systematic effects that lead to offsets between data sets make it difficult to compare observations from different telescopes. Here, this effect was taken into account by adding 100 - 150 ppm for individual data sets to achieve the best fit with the model. _Spitzer_ data was treated as suggested in The JWST Transiting Exoplanet Community Early Release Science Team et al. (2022). The grey cloud model produces generally a good fit with the data available for \(\lambda=0.3\,\dots\,5\mu\)m, but is unable to reproduce the slope in the optical which is associated with small cloud particles. The qualitative impact of the cloud opacity can be estimated by examining the parameter \(x=2\pi r/\lambda\), where \(r\) is the radius of a spherical cloud particle and \(\lambda\) is the observation wavelength. For \(x\ll 1\), Rayleigh scattering is dominant. Using Fig. 6 as a guide, taking \(r\sim\langle\langle a\rangle_{A}\rangle\sim 10^{-1.5}\)\(\mu\)m (representative of the morning terminator) and an observing wavelength of \(\lambda=0.3\,\mu\)m yields a value of \(x\approx 0.66\), hence, a Rayleigh scattering slope would be expected in the optical wavelength regime based on the complex cloud model (see Fig. 12). Figure 5 shows that the population of cloud particles is expected to be smaller than the column integrated value in the upper atmosphere, and thus supports the expectation of an optical slope. The location of the grey cloud deck enables another comparison to results from the complex cloud model. Grey cloud decks at \(p_{\rm gas}\sim 10^{-2}\) bar and \(p_{\rm gas}\sim 5\times 10^{-3}\) bar for the morning and evening terminators, respectively, appear to reproduce the pressure broadening required for the Na line in Fig. 13. There is good agreement between both the synthetic terminator spectra and the observed CO\({}_{2}\) feature at \(4.3\)\(\mu\)m, and a reasonable agreement with the observed H\({}_{2}\)O features. New work by Alderson et al. (2022), Rustamkulov et al. (2022), Ahrer et al. (2022), and Tsai et al. (2022) derives a cloud deck between \(3\times 10^{-4}\,\dots\,10^{-2}\) bar, qualitatively agreeing with the range of cloud decks derived here. These authors further noted that a varying vertical opacity contribution is required. Horizontal differences in cloud coverage between the limbs are also mentioned as a possibility. As has been noted in Section 3.3, average cloud particle sizes can indeed differ between morning and evening terminators. 
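As a quick numerical check of the size-parameter estimate above, using only the quoted values (\(r\sim 10^{-1.5}\,\mu\)m and \(\lambda=0.3\,\mu\)m):

```python
import numpy as np

# Size parameter x = 2*pi*r/lambda for the quoted morning-terminator values.
r_um = 10**(-1.5)     # representative cloud particle radius [micron]
lam_um = 0.3          # observing wavelength [micron]

x = 2.0 * np.pi * r_um / lam_um
print(f"x = {x:.2f}")  # ~0.66, close to (but not deep within) the Rayleigh regime
```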
In Section 3.5, it was shown that H\({}_{2}\)S may be present in chemical equilibrium in significant quantities for WASP-39b. Thus, H\({}_{2}\)S was included as an additional opacity source, but no clear impact on the simulated spectrum could be found for \(\lambda<5\)\(\mu\)m. However, other molecules for which H\({}_{2}\)S is a precursor may impact observations (Zahnle et al. 2009) as was discussed for AGB stars (see Sect. 3.5.2). The important role of H\({}_{2}\)S and sulphur chemistry in WASP-39b has been recently confirmed by Tsai et al. (2022), who present a photochemical pathway from H\({}_{2}\)S to explain the recent detection of SO\({}_{2}\). The inferred cloud deck pressure level from the complex cloud model can be derived for the Na feature at \(\lambda\sim 0.6\)\(\mu\)m that lies in the optical slope region (Fig. 12). The cloud deck is slightly higher in the atmosphere at \(p_{\rm gas}\sim 10^{-4.4}\) bar for the morning terminator than for the evening terminator at \(p_{\rm gas}\sim 10^{-4.2}\) bar. The width of the sodium feature seen in observations suggests, similar to WASP-96b, that the actual cloud deck must be deeper in the atmosphere than shown in Fig. 12. This also agrees with the synthetic spectra fits, which require a deeper cloud deck, namely \(p_{\rm gas}\sim 10^{-2}\dots 5\times 10^{-3}\) bar, to match observations. To apply the lessons learned from WASP-96b to WASP-39b, it has to be taken into account that for WASP-96b solar metallicity is assumed, whereas for WASP-39b a higher metallicity of 10 times solar is assumed. Samra et al. (2022) illustrate in their Fig. 5 that an enhanced atmosphere metallicity of \(10\times e_{\rm solar}\) raises the cloud deck to higher altitudes by almost half an order of magnitude for their models of WASP-96b. They show that the altitude of the cloud deck is reduced by an order of magnitude when the mixing efficiency is reduced by a factor of 100. For WASP-39b, hence, the location of the grey cloud deck required to fit the evening terminator spectrum is broadly consistent with a required factor of \(\sim 100\times\) reduction in mixing efficiency (compare Fig. 12, bottom). Further, the cloud deck for WASP-39b is slightly higher compared to WASP-96b, which can be entirely explained by its 10 times higher metallicity. Thus, the cloud model is capable of meeting observations of both planets, by adjusting only one factor: vertical mixing. Figure 12: WASP-39b pressure levels, \(p_{\rm gas}(\tau(\lambda)=1)\) [bar], at which the atmosphere becomes optically thick due to cloud opacity. **Top:** for the substellar point (\(\phi=0^{\circ}\), red), anti-stellar point (\(\phi=180^{\circ}\), dark blue) and the equatorial morning (\(\phi=-90^{\circ}\), light blue) and equatorial evening terminator (\(\phi=90^{\circ}\), yellow). **Bottom:** for the equatorial evening terminator compared to the same profile but with the mixing timescale \(100\times\tau_{\rm mix}\). The dashed vertical blue line is the location of Na, coloured bars show wavelength ranges of various instruments. ### The effect of over-simplification It has been shown here that clouds are expected to form in WASP-39b given its temperature of \(T_{\rm eq}\sim 1100\) K. A higher cloud mass is expected in WASP-39b compared to WASP-96b because a higher metallicity is assumed. Thus, retrieval approaches that treat atmosphere metallicity and cloud formation as competing processes, when in reality they are intrinsically related, may lead to unrealistically high metallicity values (Sect. 3.4.2). 
Further, cloud formation in the temperature range of \(T_{\rm eq}\sim 1200\) K always tends to raise the C/O ratio in the remaining gas phase due to the silicate and metal oxides removing oxygen (see e.g. Helling et al. 2022). Here, assuming a solar C/O ratio, the observed C/O ratio in the gas phase can be raised to C/O \(\sim 0.7\) (Fig. 11). Thus, again, treating C/O ratios and cloud formation as unrelated to each other may result in unrealistically low C/O ratios. Retrievals thus tend to produce diverging results with respect to the metallicity and the C/O in exoplanet data interpretation. However, it is challenging to find the optimal degree of simplification for complex processes like cloud formation. Again, \(p_{\rm gas}(\tau(\lambda)=1)\) is used as a tool to demonstrate how assumptions like constant cloud particle sizes and homogeneous cloud particle composition may bias retrieval results. Figure 12 illustrates why the use of over-simplified cloud models is problematic: the cloud particles in the optically thin region (above \(p_{\rm gas}(\tau(\lambda)=1)\)) are highly mixed, with no single condensate species contributing more than \(\sim 20\%\). At different wavelength ranges, different pressure levels and thus different cloud materials are probed. This is readily apparent for the substellar point, which shows an excess in cloud opacity between 5 and 8 micron (Fig. 12). Furthermore, as described in Section 4.2, the cloud top of grey clouds needs to be between \(p_{\rm gas}=10^{-2}\dots 5\times 10^{-3}\) bar in order to fit the Na line pressure broadening. In the case of such a deeper cloud top, a different cloud composition becomes visible to observations: a mixed composition of MgSiO\({}_{3}\)[s]/Mg\({}_{2}\)SiO\({}_{4}\)[s] (\(\sim 45\%\) and \(\sim 30\%\) respectively) at the morning terminator. Similarly, lowering the cloud top will allow different local gas-phase chemistry to be probed. How oversimplified cloud models affect the cloud opacity, particularly for \(\lambda>2\mu\)m, is shown in Fig. 14. The effect of simplifications in terms of constant particle sizes and homogeneous material properties is demonstrated. \begin{table} \begin{tabular}{c l} \hline \hline material & \(V_{s}/V_{\rm tot}\) \\ \hline Mg\({}_{2}\)SiO\({}_{4}\)[s] & \(\sim 19\%\) \\ MgSiO\({}_{3}\)[s] & \(\sim 16\%\) \\ Fe\({}_{2}\)SiO\({}_{4}\)[s] & \(\sim 16\%\) \\ MgO[s] & \(\sim 13\%\) \\ FeS[s] & \(\sim 7\%\) \\ SiO\({}_{2}\)[s] & \(\sim 7\%\) \\ SiO[s] & \(\sim 6\%\) \\ FeO[s] & \(\sim 5\%\) \\ CaSiO\({}_{3}\)[s] & \(\sim 5\%\) \\ Fe[s] & \(\sim 3\%\) \\ Al\({}_{2}\)O\({}_{3}\)[s] & \(\sim 2\%\) \\ Fe\({}_{2}\)O\({}_{3}\)[s] & \(\sim 1\%\) \\ TiO\({}_{2}\)[s] & \(<0.1\%\) (Trace) \\ CaTiO\({}_{3}\)[s] & \(<0.1\%\) (Trace) \\ NaCl[s] & None \\ KCl[s] & None \\ \hline \hline \end{tabular} \end{table} Table 1: Material volume fractions, \(V_{s}/V_{\rm tot}\), for the evening terminator WASP-39b clouds in decreasing order where \(p_{\rm gas}\approx 2\times 10^{-4}\) bar, representing where the cloud region becomes optically thin. Figure 13: Synthetic spectra for the equatorial morning and evening terminators of WASP-39b compared to both pre-JWST and JWST observations for \(\lambda=0.3\,\dots\,\sim 5.0\)\(\mu\)m, computed with _petitRADTRANS_. Opacities are considered using the concentrations of the dominant gas phase species output from the kinetic cloud model (H\({}_{2}\)O, CO\({}_{2}\), CO, H\({}_{2}\)S, Na, K). 
For this simplified model, two particle number densities are used (\(n_{\rm d}=10^{4},\,10^{5}\,\)cm\({}^{-3}\)), and two cloud particle sizes (\(\langle a\rangle_{A}=10^{-2},\,10^{-1}\,\mu\)m). These values are based on the minimum and maximum values of cloud particle properties in the full microphysical model (Fig. 9) in the optically thin pressure range (\(10^{-4.5}\,\ldots\,10^{-3}\,\)bar, Fig 12). Forsterite (Mg\({}_{2}\)SiO\({}_{4}\)[s]) was chosen as it is the largest volume constituent of the cloud particles in the optically thin pressure range (Fig. 7). The spectral fingerprint of the single material Mg\({}_{2}\)SiO\({}_{4}\)[s] becomes very apparent for a small mean particle size of 0.1 \(\mu\)m and a small cloud particle number density of \(10^{4}\)cm\({}^{-3}\), and diminishes with increasing number density of cloud particles (middle, \(10^{5}\)cm\({}^{-3}\)). With \(\langle a\rangle_{A}=10^{-1}\,\mu\)m, clouds are more optically thick than in the full microphysical case explored so far. However, even in such an optically thick case, 'windows' that allow one to observe deeper parts of the atmosphere can appear for different cloud particle compositions. For example, by assuming a full forsterite (Mg\({}_{2}\)SiO\({}_{4}\)[s]) composition, there are optically thinner 'windows' between the 8 and 18 \(\mu\)m silicate features, and also a completely optically thin (down to at least \(10^{2}\,\)bar, not shown) window in the near- and mid-infrared (\(\sim 4\,\ldots\,7\,\mu\)m) for the lower number density case (\(n_{\rm d}=10^{4}\,\)cm\({}^{-3}\)). If instead a pure Fe\({}_{2}\)SiO\({}_{4}\)[s] or pure MgSiO\({}_{3}\)[s] composition is assumed, the infrared spectral features would be very different (Fig. 15) because of the clearly different refractive indices (Fig B.1). In particular, pure Fe\({}_{2}\)SiO\({}_{4}\)[s] cloud particles do not show the optically thin window at \(\sim 4\,\ldots\,7\,\mu\)m, as is the case for Mg\({}_{2}\)SiO\({}_{4}\)[s] and MgSiO\({}_{3}\)[s]. This wavelength regime is especially sensitive to cloud material composition, size, and number density. There is a particular sensitivity between iron and magnesium silicates. In addition, for wavelengths \(>7\,\mu\)m there are substantial differences between the features of all of the materials. As the cloud particles in the observable atmosphere are highly mixed (see Table 1), a simple model with this constant composition is also shown (Fig. 15, bottom). From this, the mixing of infrared features between all the materials involved (including Fe[s]) can be readily seen, and hence the lack of an optically thin window in the full material composition optical depth (Fig. 12). The bottom of Fig. 14 shows that the \(\tau=1\) level of the clouds is also very sensitive to particle size (\(\langle a\rangle_{A}\sim 10^{-2}\,\,\mu\)m). In this instance, this is because of both the changing total cloud mass in this simplified model and Rayleigh scattering, as discussed in Section 4.2. The changing particle size with pressure is what causes the slope in the optically thick pressure level; at different wavelengths, the cloud particles change scattering regime (from Rayleigh to Mie) at different depths in the atmosphere. Taken together, what this simplified approach shows is the biases that are introduced when reducing the complexity of cloud particle size, number density, and material composition. As observational data across a broader wavelength range becomes available, problems may occur with the patchiness of the clouds. 
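For illustration, the \(p_{\rm gas}(\tau(\lambda)=1)\) level of such a homo-disperse, homo-material cloud can be sketched in a few lines: with constant particle size and number density, the cloud optical depth only depends on the (wavelength-dependent) extinction cross-section and the geometric path length. The extinction efficiency and scale height below are placeholders, not the Mie results and atmospheric structure used for Figs. 14 and 15.

```python
import numpy as np

def tau_one_pressure(p_grid, n_d, a_um, q_ext, scale_height_cm=1.0e7):
    """Pressure [bar] at which a mono-disperse cloud of constant number
    density n_d [cm^-3] and particle radius a_um [micron] reaches tau = 1,
    integrating downward with dz = H * dln(p) (isothermal simplification)."""
    a_cm = a_um * 1.0e-4
    sigma_ext = q_ext * np.pi * a_cm**2                 # extinction cross-section [cm^2]
    dtau = n_d * sigma_ext * scale_height_cm * np.diff(np.log(p_grid))
    tau = np.concatenate([[0.0], np.cumsum(dtau)])
    idx = np.searchsorted(tau, 1.0)
    return p_grid[min(idx, len(p_grid) - 1)]

p = np.logspace(-5, 1, 400)   # bar, from the top of the atmosphere downward
# q_ext is a placeholder; in the full model it follows from Mie theory and the
# material refractive index, which is what produces the wavelength dependence.
print(tau_one_pressure(p, n_d=1e4, a_um=0.1, q_ext=0.1))
```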
The sensitivity of near- and mid-infrared observations to the Fe/Mg composition of silicate cloud particles, which is being discussed for substellar atmospheres (e.g. Wakeford & Sing 2015; Luna & Morley 2021; Burningham et al. 2021), is also difficult to assess without taking into account the possibility of a highly-mixed composition. Fits using parameterised cloud opacities for material composition (Kitzmann & Heng 2018; Taylor et al. 2021) will be challenged by the mixed material composition of clouds, which changes with height in the atmosphere. Furthermore, retrievals of the same JWST/NIRSpec G395H (\(3\,\ldots\,5\,\mu\)m) observations of WASP-39b yield differing values of metallicity when specific condensate clouds are modelled in comparison to a grey cloud deck (Alderson et al. 2022). Figure 14: Pressure levels, \(p_{\rm gas}(\tau(\lambda)=1)\) [bar], at which the atmosphere becomes optically thick due to cloud opacity for WASP-39b, assuming constant, mono-disperse, cloud particles made of forsterite (Mg\({}_{2}\)SiO\({}_{4}\)[s]). **Top:**\(\langle a\rangle_{A}=10^{-1}\,\mu\)m, \(n_{\rm d}=10^{4}\,\)cm\({}^{-3}\), **Middle:**\(\langle a\rangle_{A}=10^{-1}\,\mu\)m, \(n_{\rm d}=10^{5}\,\)cm\({}^{-3}\), **Bottom:**\(\langle a\rangle_{A}=10^{-2}\,\mu\)m, \(n_{\rm d}=10^{4}\,\)cm\({}^{-3}\) leaving the cloud optically thin. ## 5 Conclusion WASP-39b, similar to WASP-96b, is cool enough that clouds are expected to form globally. A cloud-free atmosphere with \(>100\times\varepsilon_{\rm solar}\), as implied by previous studies for this planet (Wakeford et al. 2018), is thus not consistent with the high efficiency of cloud formation in this temperature range. A cloudy and \(\sim 10\times\varepsilon_{\rm solar}\) atmosphere provides a better fit, consistent with recent JWST observations. We thus suggest that retrievals should add a cloud model by default. Application of a non-equilibrium cloud formation model further elucidates that simple grey cloud models - as used in atmosphere retrieval - do not capture the mixed composition cloud particles which are expected to form in the WASP-39b atmosphere. The cloud composition will vary throughout the atmosphere in response to the changing local thermodynamic conditions. Inclusion of the varying composition of clouds in exoplanet atmospheres may be required to fully interpret the wealth of observational data that is offered by current and future JWST observations. The following points summarise the findings on the atmosphere structure and cloud composition of WASP-39b, as well as highlighting the care that is needed in interpreting simple cloud models used in retrievals: * Hydrodynamic redistribution of the irradiation heating is efficient enough in such cool objects that day-night temperature differences are not very large; therefore, WASP-39b is expected to be homogeneously covered in clouds. * The terminators each inherit a temperature similar to one of the hemispheres: the morning terminator is similar to the nightside and the evening terminator is similar to the dayside. Hence, trends in cloud properties at the terminators of WASP-39b are divergent between the terminators and similar to the influencing hemisphere. * Cooler temperatures in Rossby vortices at the nightside increase cloud formation compared to other locations at the same pressure level. * The cloud composition in this temperature regime can be very heterogeneous, leading to vertical patchiness in terms of material composition. 
The cloud deck is characterised by an almost equal mixture of silicates and metal oxides, with a small fraction of high temperature condensates. The deeper atmosphere is dominated by an extended silicate cloud layer and a high temperature condensate cloud base. * Increased atmospheric metallicity enhances cloud mass and stabilises the deep extended silicate cloud layer to higher pressures. * Increased metallicity does not qualitatively change the expectation of mixed composition upper cloud layers. * Sulphur may be used to trace planet formation processes more easily than Fe, Mg, Si or even O since it is considerably less affected by condensation processes. * Similar to WASP-96b (Samra et al. 2022), a reduced vertical mixing by approximately two orders of magnitude may be required to explain a cloud deck between \(5\times 10^{-3}\) bar and \(1\times 10^{-2}\) bar in a \(10\times\varepsilon_{\rm solar}\) metallicity WASP-39b atmosphere. * Simplification of cloud microphysical properties can lead to biases in retrieval. These simplifications are: neglecting the pressure dependence of cloud particle size and number density. Further, a highly mixed material composition has a profound impact on cloud optical depth and observations. Understanding these biases will be especially crucial for near- and mid-infrared observations with JWST. Figure 15: Pressure levels, \(p_{\rm gas}(\tau(\lambda)=1)\) [bar], at which the atmosphere becomes optically thick due to cloud opacity for WASP-39b, assuming constant, mono-disperse, cloud particles for various materials, where \(\langle a\rangle_{A}=10^{-1}\,\mu\)m, \(n_{\rm d}=10^{4}\,{\rm cm}^{-3}\) (comparable to the top of Fig. 14). **Top:** For cloud particles made of MgSiO\({}_{3}\)[s]. **Middle:** For cloud particles made of Fe\({}_{2}\)SiO\({}_{4}\)[s]. **Bottom:** Mixed cloud composition based on \(p_{\rm gas}=2\times 10^{-4}\) bar, see Table 1. It is promising that adjusting the vertical mixing alone, by the same factor, already yields good agreement of the complex model with both the WASP-96b and the WASP-39b data. Thus, this work indicates that the microphysical cloud model can be adjusted using the new JWST data to yield better predictions of future observations and to act as a physically motivated background model to guide atmospheric retrievals. ###### Acknowledgements. D.A.L. and D.S. acknowledge financial support from the Austrian Academy of Sciences. Ch.H., L.C. and A.D.S. acknowledge funding from the European Union H2020-MSCA-ITN-2019 under Grant Agreement no. 860470 (CHAMELEON).
2305.19068
Complex Query Answering on Eventuality Knowledge Graph with Implicit Logical Constraints
Querying knowledge graphs (KGs) using deep learning approaches can naturally leverage the reasoning and generalization ability to learn to infer better answers. Traditional neural complex query answering (CQA) approaches mostly work on entity-centric KGs. However, in the real world, we also need to make logical inferences about events, states, and activities (i.e., eventualities or situations) to push learning systems from System I to System II, as proposed by Yoshua Bengio. Querying logically from an EVentuality-centric KG (EVKG) can naturally provide references to such kind of intuitive and logical inference. Thus, in this paper, we propose a new framework to leverage neural methods to answer complex logical queries based on an EVKG, which can satisfy not only traditional first-order logic constraints but also implicit logical constraints over eventualities concerning their occurrences and orders. For instance, if we know that "Food is bad" happens before "PersonX adds soy sauce", then "PersonX adds soy sauce" is unlikely to be the cause of "Food is bad" due to implicit temporal constraint. To facilitate consistent reasoning on EVKGs, we propose Complex Eventuality Query Answering (CEQA), a more rigorous definition of CQA that considers the implicit logical constraints governing the temporal order and occurrence of eventualities. In this manner, we propose to leverage theorem provers for constructing benchmark datasets to ensure the answers satisfy implicit logical constraints. We also propose a Memory-Enhanced Query Encoding (MEQE) approach to significantly improve the performance of state-of-the-art neural query encoders on the CEQA task.
Jiaxin Bai, Xin Liu, Weiqi Wang, Chen Luo, Yangqiu Song
2023-05-30T14:29:24Z
http://arxiv.org/abs/2305.19068v2
# Complex Query Answering on Eventuality Knowledge Graph with Implicit Logical Constraints ###### Abstract Querying incomplete knowledge graphs (KGs) using deep learning approaches can naturally leverage the reasoning and generalization ability to learn to infer better answers. Traditional neural complex query answering (CQA) approaches mostly work on entity-centric KGs. However, in the real world, we also need to make logical inferences about events, states, and activities (i.e., eventualities or situations) to push learning systems from System I to System II, as proposed by Yoshua Bengio. Querying logically from an EVentuality-centric KG (EVKG) can naturally provide references to such kind of intuitive and logical inference. Thus, in this paper, we propose a new framework to leverage neural methods to answer complex logical queries based on an EVKG, which can satisfy not only traditional first-order logic constraints but also implicit logical constraints over eventualities concerning their occurrences and orders. For instance, if we know that "Food is bad" happens before "PersonX adds soy sauce", then "PersonX adds soy sauce" is unlikely to be the cause of "Food is bad" due to implicit temporal constraint. To facilitate consistent reasoning on EVKGs, we propose Complex Eventuality Query Answering (CEQA), a more rigorous definition of CQA that considers the implicit logical constraints governing the temporal order and occurrence of eventualities. In this manner, we propose to leverage theorem provers for constructing benchmark datasets to ensure the answers satisfy implicit logical constraints. We also propose a Memory-Enhanced Query Encoding (MEQE) approach to significantly improve the performance of state-of-the-art neural query encoders on the CEQA task. ## 1 Introduction Querying knowledge graphs (KGs) can support many real applications, such as fact-checking and question-answering. Using deep learning methods to answer logical queries over KGs can naturally leverage the inductive reasoning and generalization ability of learning methods to overcome the sparsity and incompleteness of existing KGs, and thus has attracted much attention recently; this task is usually referred to as Complex Query Answering (CQA) [32; 24; 33]. As the computational complexity of answering complex logical queries increases exponentially with the length of the query [32; 24], brute force matching algorithms are unsuitable for processing complex queries. To overcome these challenges, various techniques, such as query encoding [21] and query decomposition [2], have been proposed. These techniques enable efficient and effective reasoning on incomplete KGs and facilitate the processing of complex queries in a scalable manner. Most of the existing work in this field has primarily focused on entity-centric KGs that only describe entities and their relationships. As Yoshua Bengio described in his view1 of moving from System I to System II [14; 15; 16; 23; 11], we need to equip machine learning systems with logical, sequential reasoning, and many other abilities. Particularly, such a system requires the understanding of how actions (including events and activities/processes) interact with changes in distribution, which can be reflected by states. Here we can summarize events, activities, and states as a linguistic term, eventualities (or situations), according to the linguistics literature [28; 4]. 
As with many other KG querying tasks, querying eventuality-centric knowledge graphs can also support many applications, such as providing references for making logical and rational decisions, intuitive inferences, or eventual planning. This requires the CQA models to perform reasoning at the eventuality level. To provide resources for achieving eventuality-level reasoning, recently constructed KGs, such as ATOMIC [36; 22], Knowlywood [38], and ASER [45; 46], tend to use one or more discourse relations to represent the relationships between eventuality instances. For example, "PersonX went to the store" and "PersonX bought some milk" are two simple eventuality instances, with the latter being a possible consequence of the former. The construction of these EVentuality-centric Knowledge Graphs (EVKGs) thoroughly maps the relationships between eventualities and enables us to reason about eventuality instances and their relationships using logical queries, thereby facilitating a more comprehensive approach to modeling complex relationships than traditional KGs. Footnote 1: [http://www.iro.umontreal.ca/~bengioy/AAAI-9feb2020.pdf](http://www.iro.umontreal.ca/~bengioy/AAAI-9feb2020.pdf) Aside from the importance of querying EVKGs, reasoning on an EVKG also significantly differs from that on an entity-level KG because eventualities involve considering their occurrences and order. In entity-centric KGs, as shown in Figure 1 \(q_{1}\), the vertices represent entities such as "Alzheimer" or "MadCow," and truth values are assigned to the edges between entities to indicate their relationships. For example, the statement \(\texttt{Assoc}(Beta-amyloid,Alzheimer)\) is true. In contrast, during the reasoning process on an EVKG, the eventualities may or may not occur, and determining their occurrence is a crucial part of the reasoning. For instance, given \(\texttt{ChosenAlternative}(PersonX\,go\,home,PersonX\,buy\,umbrella)\) in Figure 1 \(q_{2}\), it implicitly suggests that "PersonX go home" occurs, while "PersonX buy umbrella" does not. Moreover, there are relationships that explicitly or implicitly describe the order of occurrences, such as temporal and causal relations. For example, \(\texttt{Reason}(PersonX\,study\,hard,PersonX\,pass\,exam)\) indicates the causality between "PersonX pass the exam" and "PersonX study hard," which also implies that "PersonX pass the exam" occurs after "PersonX study hard." When multiple edges are present in a given situation, it is essential to ensure that there are no contradictions regarding the occurrence of these eventualities. For example, in Figure 1 \(q_{3}\), \(\texttt{ChosenAlternative}(PersonX\,go\,home,PersonX\,buy\,umbrella)\,\wedge\texttt{Succession}(PersonX\,go\,home,PersonX\,buy\,umbrella)\) is contradictory because the former suggests that PersonX did not buy an umbrella, while the latter implies otherwise. To enable complex reasoning on eventuality knowledge graphs, we formally define the problem of complex eventuality query answering (CEQA). CEQA is a more rigorous definition of CQA on EVKGs that considers not only the explicitly given relational constraints, but also the implicit logical constraints on the occurrence and temporal order of eventualities. The implicit constraints are derived from the relational constraints and can be further divided into two types: _occurrence constraints_ and _temporal constraints_. Incorporating these implicit constraints into complex query answering drastically changes the nature of the reasoning process. 
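To make this notion of contradiction concrete, the occurrence constraints of the example above can be checked mechanically with an off-the-shelf SMT solver, the same z3 prover that is later used for benchmark construction in Section 4.1. The snippet below is a minimal sketch of such a check, not the exact encoding used to build the benchmark.

```python
from z3 import Bools, Solver, And, Not, unsat

# Occurrence variables: eta(e) is True iff eventuality e occurs.
go_home, buy_umbrella = Bools("PersonX_go_home PersonX_buy_umbrella")

s = Solver()
# ChosenAlternative(go home, buy umbrella): go home occurs, buy umbrella does not.
s.add(And(go_home, Not(buy_umbrella)))
# Succession(go home, buy umbrella): both argument eventualities occur.
s.add(And(go_home, buy_umbrella))

print("contradictory" if s.check() == unsat else "consistent")  # -> contradictory
```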
Figure 1: Complex query examples and corresponding interpretations in natural language. \(q_{1}\) is a query on an entity knowledge graph, while \(q_{2}\) and \(q_{3}\) are queries on an eventuality knowledge graph. Unlike conventional CQA, the reasoning process of CEQA is defeasible because when additional knowledge is presented, the original reasoning could be weakened and overturned [17]. For example, as shown in Figure 2, "PersonX adds soy sauce" is a possible answer to the query "What is the reason for food being bad." However, if more knowledge is given, like "Food is bad" is before "PersonX adds soy sauce," then it cannot be the proper reason anymore due to temporal constraints. However, none of the existing methods for CQA can incorporate such additional knowledge to conduct defeasible reasoning in CEQA. To address this problem, we propose the method of memory-enhanced query encoding (MEQE). In the MEQE method, we first separate the logic terms in a query into two categories, computational atomics and informational atomics. Computational atomics, like \(\text{Reason}(Food\ is\ bad,V_{?})\), contain at least one variable in their arguments, and informational atomics, like \(\text{Precedence}(Food\ is\ bad,PersonX\ add\ soy\ sauce)\), do not contain variables. For the computational atomics, following previous work, we construct the corresponding computational graph to recursively compute the query embedding step by step. For the informational atomics, we put them into a key-value memory module. For each of the informational atomics, its head argument is used as the memory key, and its relation type and tail arguments are used as memory values. In the query encoding process, after each operation in the computational graph, a score is computed between the query embedding and the memory keys (head embeddings). These scores are then used to retrieve the corresponding memory values of the relations and tails. Then these memory values are aggregated and added back to the query embedding. By doing this, the query encoder is able to differentiate and leverage the implicit logical constraints that are given by the informational atomics. We evaluate our proposed memory-enhanced query encoding method on the most diverse eventuality knowledge graph, ASER, which involves fourteen types of discourse relations between eventualities. Experiment results show that our proposed MEQE is able to consistently improve the performance of four frequently used neural query encoders on the task of CEQA. Code and data will be released after publishing. ## 2 Problem Definition In this section, we first introduce the definitions of the complex queries on entity-centric and eventuality-centric KGs. Then we introduce the definition of implicit logical constraints and the informational atomics that specifically provide such constraints to the eventuality queries. ### Complex Queries Complex query answering is conducted on a KG \(\mathcal{G}=(\mathcal{V},\mathcal{R})\). Here, \(\mathcal{V}\) is the set of vertices \(v\), and \(\mathcal{R}\) is the set of relations \(r\). The relations are defined in functional forms to describe logical expressions. Each relation \(r\) is defined as a function, and it has two arguments, which represent two entities, \(v\) and \(v^{\prime}\). The value of the function \(r(v,v^{\prime})=1\) if and only if there is a relation between the entities \(v\) and \(v^{\prime}\). In this paper, the queries are defined in conjunctive forms. 
In such a query, there are logical operations such as existential quantifiers \(\exists\) and conjunctions \(\wedge\), and there are anchor eventualities \(V_{a}\in\mathcal{V}\), existential quantified variables \(V_{1},V_{2},...V_{k}\in\mathcal{V}\), and a target variable \(V_{?}\in\mathcal{V}\). The query is written to find the answers \(V_{?}\in\mathcal{V}\), such that there exist \(V_{1},V_{2},...V_{k}\in\mathcal{V}\) satisfying the logical expression: \[q[V_{?}]=V_{?}.\exists V_{1},...,V_{k}:=e_{1}\wedge e_{2}\wedge...\wedge e_{ m}. \tag{1}\] Each \(e_{i}\) is an atomic expression in any of the following forms: \(e_{i}=r(v_{a},V)\), or \(e_{i}=r(V,V^{\prime})\). Here \(v_{a}\) is an anchor eventuality, and \(V,V^{\prime}\in\{V_{1},V_{2},...,V_{k},V_{?}\}\) are distinct variables. Figure 2: Complex eventuality queries with their implicit temporal and occurrence constraints ### Complex Eventuality Queries For complex eventuality queries, they can similarly be written in the form of a conjunctive logical expression as Eq. (1). Differently, each atomic \(e_{i}\) can all be in the form of \(e_{i}=r(v_{i},v_{j})\), where \(v_{i},v_{j}\in V\) are all given eventualities. These atomics, which do not include variables, are called informational atomics, because they only provide implicit constraint information. The relations \(r\) in CEQA are discourse relations, and they exert implicit constraints over the eventualities, and these constraints can be categorized into occurrence constraints and temporal constraints. Suppose the occurrence and temporal constraints derived from the \(i\)-th atomic \(e_{i}\) is represented as \(o_{i}\) and \(t_{i}\). Then complex eventuality query, including its implicit constraints can be written as \[q[V_{?}]=V_{?}.\exists V_{1},...,V_{k}:=(e_{1}\wedge...\wedge e_{m})\wedge(o_ {1}\wedge...\wedge o_{m})\wedge(t_{1}\wedge...\wedge t_{m}). \tag{2}\] The constraints derived from each type of discourse relations are presented in Table 1. Further justifications of the derivation process are given in the Appendix. #### 2.2.1 Occurrence Constraints The occurrence constraints determine whether certain eventuality happens or not. For instance, consider Figure 2 (A), where the logical query means that "instead of buying an umbrella, PersonX goes home. What occurred before PersonX went home?" If we rely solely on relational constraints, as in the conventional definition of CQA, we would only consider the latter part of the query, "What happened before PersonX went home?" Consequently, "PersonX buys an umbrella" could be a solution to this query. However, within the query, there is an information atomic saying, "instead of buying an umbrella, PersonX goes home," which inherently restricts the occurrence of "PersonX buying an umbrella." To formally express such constraint, we use the function \(\eta(V)\). If eventuality \(V\) occurs, then \(\eta(V)=\texttt{True}\), otherwise it is False. As depicted in Figure 2, the occurrence constraint of this query comprises the terms \(\eta(V_{?})\wedge\neg\eta(PersonX\ buy umbrella)\). In this case, \(V_{?}\) cannot be "PersonX buys an umbrella" since it would violate the occurrence constraint. Most discourse relations assume the occurrence of the argument eventualities, for example, Precedence, Conjunction, and Reason. However, there are also relations that do not imply the occurrence of the arguments, such as Condition and Restatement. Moreover, the Exception and ChosenAlternative relations restrict certain eventualities from happening. 
For instance, in the case of ChosenAlternative\((PersonX\ read\ books,PersonX\ play\ games)\), it implies that PersonX reads books (\(\eta(PersonX\ read\ books)\)) and does not play games (\(\neg\eta(PersonX\ play\ games)\)). Another example is Exception\((Room\ is\ empty,PersonX\ stay\ in\ room)\), which implies that the room is not empty and PersonX is present in the room. Furthermore, if PersonX is not in the room, then the room is empty. This can be formally expressed as \(\neg\eta(Room\ is\ empty)\wedge\eta(PersonX\ stay\ in\ room)\wedge(\neg\eta(PersonX\ stay\ in\ room)\rightarrow\eta(Room\ is\ empty))\). For a comprehensive overview of the occurrence constraints, please refer to Table 1. \begin{table} \begin{tabular}{l l l l} \hline \hline \multicolumn{1}{c}{Discourse Relations (\(e_{i}\))} & Semantics & \multicolumn{1}{c}{Implicit Constraints} & \multicolumn{1}{c}{} \\ & & \multicolumn{1}{c}{Occurrence Constraints (\(o_{i}\))} & \multicolumn{1}{c}{Temporal Constraints (\(t_{i}\))} \\ \hline Precedence(\(V_{?}\), \(V_{?}\)) & \(V_{?}\) occurs before \(V_{?}\). & \(\eta(V_{?})\wedge\eta(V_{?})\) & \(\tau(V_{?})\prec\tau(V_{?})\) \\ Succession(\(V_{?}\), \(V_{?}\)) & \(V_{?}\) occurs after \(V_{?}\) happens. & \(\eta(V_{?})\wedge\eta(V_{?})\) & \(\tau(V_{?})\succ\tau(V_{?})\) \\ Synchronous(\(V_{?}\), \(V_{?}\)) & \(V_{?}\) occurs at the same time as \(V_{?}\). & \(\eta(V_{?})\wedge\eta(V_{?})\) & \(\tau(V_{?})=\tau(V_{?})\) \\ \hline Reason(\(V_{?}\), \(V_{?}\)) & \(V_{?}\) occurs because \(V_{?}\). & \(\eta(V_{?})\wedge\eta(V_{?})\wedge\eta(V_{?})\wedge\eta(V_{?})\) & \(\tau(V_{?})=\tau(V_{?})\) \\ Result(\(V_{?}\), \(V_{?}\)) & \(V_{?}\) occurs, as a result \(V_{?}\). & \(\eta(V_{?})\wedge\eta(V_{?})\wedge\eta(V_{?})\rightarrow\eta(V_{?})\) & \(\tau(V_{?})=\tau(V_{?})\) \\ Condition(\(V_{?}\), \(V_{?}\)) & \(\text{if }V_{?}\) occurs, \(V_{?}\). & \(\eta(V_{?})\rightarrow\eta(V_{?})\) & \(\tau(V_{?})\succ\tau(V_{?})\) \\ \hline Concession(\(V_{?}\), \(V_{?}\)) & \(V_{?}\) occurs, although \(V_{?}\). & \(\eta(V_{?})\wedge\eta(V_{?})\) & - \\ Contrast(\(V_{?}\), \(V_{?}\)) & \(V_{?}\) occurs, but \(V_{?}\). & \(\eta(V_{?})\wedge\eta(V_{?})\) & - \\ \hline Conjunction(\(V_{?}\), \(V_{?}\)) & \(V_{?}\) and \(V_{?}\) both occur. & \(\eta(V_{?})\wedge\eta(V_{?})\) & - \\ Instantiation(\(V_{?}\), \(V_{?}\)) & \(V_{?}\) is a more detailed description of \(V_{?}\). & \(\eta(V_{?})\wedge\eta(V_{?})\) & - \\ Restatement(\(V_{?}\), \(V_{?}\)) & \(V_{?}\) restates the semantics of \(V_{?}\). & \(\eta(V_{?})\leftrightarrow\eta(V_{?})\) & - \\ Alternative(\(V_{?}\), \(V_{?}\)) & \(V_{?}\) and \(V_{?}\) are alternative situations. & \(\eta(V_{?})\wedge\eta(V_{?})\) & - \\ ChosenAlternative(\(V_{?}\), \(V_{?}\)) & \(V_{?}\) occurs instead of \(V_{?}\). & \(\eta(V_{?})\wedge\neg\eta(V_{?})\) & - \\ Exception(\(V_{?}\), \(V_{?}\)) & \(V_{?}\), except \(V_{?}\). & \(\neg\eta(V_{?})\wedge\eta(V_{?})\wedge(\neg\eta(V_{?})\rightarrow\eta(V_{?}))\) & - \\ \hline \hline \end{tabular} \end{table} Table 1: The discourse relations and their implicit logical constraints. \(\eta(V)\) is True if and only if \(V\) occurs. \(\tau(V)\) indicates the happening time of \(V\). Meanwhile, the instance-based temporal logic operator \(\prec,\succ\), or \(=\) means \(V_{?}\) is before, after, or at the same time as \(V_{?}\). #### 2.2.2 Temporal Constraints The temporal constraints reflect the order of occurrence of the eventualities. 
As shown in Figure 2 (B), the complex query on the eventuality knowledge graph can be interpreted as "Food is bad before PersonX adds soy sauce. What is the reason for food being bad?" If we only considered the relational constraints, like in the conventional setting of CQA, then "PersonX adds soy sauce" is a possible answer. However, in the definition of CEQA, the answer "PersonX adds soy sauce" is incorrect because "Food is bad" already occurred before PersonX added soy sauce, and something that occurs later cannot be the reason for something that occurred earlier. Formally, we use the temporal logic expressions \(\succ\), \(\prec\), and \(=\) to describe the temporal order between two eventualities [20]. \(\tau(A)\prec\tau(B)\) means \(A\) occurs before \(B\), \(\tau(A)=\tau(B)\) means they happen at the same time, and \(\tau(A)\succ\tau(B)\) means \(A\) occurs after \(B\). For example, in Figure 2 (B), the temporal constraint is represented by \(\tau(Food\ is\ bad)\prec\tau(PersonX\ add\ soy\ sauce)\wedge\tau(Food\ is\ bad)\succ\tau(V_{?})\), which can be interpreted as "Food is bad" is before "PersonX adds soy sauce" and \(V_{?}\) is before "Food is bad." Because of this, \(V_{?}\) cannot be "PersonX adds soy sauce," otherwise there exists a contradiction. The temporal relations \(\texttt{Precedence}(A,B)\), \(\texttt{Succession}(A,B)\), and \(\texttt{Synchronous}(A,B)\) naturally describe temporal constraints. Meanwhile, previous studies also assume that causation implies precedence [35; 9; 47]. By keeping this assumption, the temporal constraints can also be derived from relations like Reason and Result. The descriptions of temporal constraints are given in Table 1. ## 3 Memory-Enhanced Query Encoding In this section, we will first introduce the method of query encoding, and then introduce how to use the memory module to represent the informational atomics to conduct reasoning on EVKGs. ### Computational Graph and Query Encoding Figure 3 shows that there is a computational graph for each query. This computational graph is a directed acyclic graph (DAG) that consists of nodes and edges representing intermediate states and operations, respectively. By recursively encoding the sub-queries following the computational graph, the operations implicitly model the set operations of the intermediate query results. The set operations are defined as follows: (1) _Relational Projection_: Given a set of eventualities \(A\) and a relation \(r\in R\), the relational projection operation returns all eventualities that hold the relation \(r\) with at least one eventuality \(v^{\prime}\in A\). This can be expressed as: \(P_{r}(A)=\{v\in\mathcal{V}\mid\exists v^{\prime}\in A,r(v^{\prime},v)=1\}\); (2) _Intersection_: Given sets of eventualities \(A_{1},\ldots,A_{n}\subseteq\mathcal{V}\), the intersection computes the set of eventualities that belong to all of the sets \(A_{1},\ldots,A_{n}\). This can be expressed as \(\bigcap_{i=1}^{n}A_{i}\). Various query encoding methods have been proposed to recursively encode the computational graph. Here, the query embeddings of these methods can be represented as \(d\)-dimensional vectors. As shown in Figure 4, the computations along the computational graph start with the anchor eventualities, such as "PersonX complains." Suppose the embedding of an anchor \(v\) is denoted as \(e_{v}\in R^{d}\). Then, the initial query embedding is computed as \(q_{0}=e_{v}\). As for the _relational projection_ operation, suppose \(e_{rel}\in R^{d}\) is the embedding vector of the relation \(rel\). The relation projection \(F_{proj}\) is expressed as \[q_{i+1}=F_{proj}(q_{i},e_{rel}), \tag{3}\] Figure 3: An example complex eventuality query with the computational and informational atomics. 
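For concreteness, one common way to parameterise \(F_{proj}\) in Eq. (3) is a small feed-forward network over the concatenated query and relation embeddings. The sketch below assumes this MLP form (the baselines in Section 4 differ in their exact parameterisations); the intersection operator introduced next is handled in the same recursive fashion.

```python
import torch
import torch.nn as nn

class RelationalProjection(nn.Module):
    """One possible parameterisation of F_proj in Eq. (3):
    q_{i+1} = MLP([q_i ; e_rel])."""
    def __init__(self, dim: int = 300):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, dim)
        )

    def forward(self, q_i: torch.Tensor, e_rel: torch.Tensor) -> torch.Tensor:
        return self.mlp(torch.cat([q_i, e_rel], dim=-1))

# Example: start from an anchor embedding (q_0 = e_v) and apply one projection.
dim = 300
e_anchor = torch.randn(1, dim)   # e.g. the anchor eventuality "PersonX complains"
e_rel = torch.randn(1, dim)      # embedding of a discourse relation such as Reason
q_1 = RelationalProjection(dim)(e_anchor, e_rel)
print(q_1.shape)                 # torch.Size([1, 300])
```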
As for the _relational projection_ operation, suppose the \(e_{rel}\in R^{d}\) is the embedding vector of the relation \(rel\). The relation projection \(F_{proj}\) is expressed as \[q_{i+1}=F_{proj}(q_{i},e_{rel}), \tag{3}\] Figure 3: An example complex eventuality query with the computational and informational atomics. Meanwhile, for the _Intersection_ operations, suppose there are \(k\) embeddings of sub-queries, \(q_{i}^{(1)},q_{i}^{(2)},...,q_{i}^{(k)}\), as the input for this operation, then the output can be expressed as: \[q_{i+1}=F_{inter}(q_{i}^{(1)},q_{i}^{(2)},...,q_{i}^{(k)}), \tag{4}\] where the \(F_{inter}\) is a permutation-invariant neural network. ### Memory-Enhanced Query Encoding The computational graph is capable of encoding computational atomics present in the logical expression. However, informational atomics can influence the reasoning outcomes by introducing implicit temporal or occurrence constraints. As depicted in Figure 3, the absence of informational atomics results in four possible inferred answers from the computational graph. Conversely, when informational atomics are included, providing implicit constraints, the number of derived answers reduces to two. Based on this observation, we propose utilizing a memory module to encode the constraint information provided by the informational atomics. Suppose that there are \(M\) informational atomics in the query. We represent their head eventuality embeddings, relation embeddings, and tail eventuality embeddings as \(c_{h}^{(m)},c_{r}^{(m)}\), and \(c_{t}^{(m)}\). For each operation output \(q_{i}\) from the computational graph, we compute its relevance score \(s_{i,m}\) towards each head eventuality \(m\), \[s_{i,m}=<q_{i},c_{h}^{(m)}>. \tag{5}\] Then we use the \(s_{i,m}\) to access the values from the constraint relation and tails, and then aggregate the memory values according to the relevance scores \[v_{i}=\sum_{m=1}^{M}s_{i,m}(c_{r}^{(m)}+c_{t}^{(m)}). \tag{6}\] Finally, as shown in Figure 4, the constraint values are added back to the query embedding after going through a feed-forward layer FFN, and this process is described by \[q_{i}=q_{i}+\texttt{FFN}(v_{i}). \tag{7}\] ### Learning MEQE To train the model, we compute the normalized probability of \(v\) being the correct answer to query \(q\) by applying the softmax function to all similarity scores: \[p(q,v)=\frac{e^{<q_{I},e_{v}>}}{\sum_{v^{\prime}\in V}e^{<q_{I},e_{v^{\prime} }>}}, \tag{8}\] where \(<\cdot,\cdot>\) denotes the dot product of two vectors, when \(q_{I}\) is the query embedding after the last operation. A cross-entropy loss is used to maximize the log probabilities of all correct answer pairs: \[\mathcal{L}=-\frac{1}{N}\sum_{i=1}^{N}\log p(q^{(i)},v^{(i)}), \tag{9}\] where \((q^{(i)},v^{(i)})\) denotes one of the positive query-answer pairs, and \(N\) is the total number of them. Figure 4: The example computational graph and the memory-enhanced query encoding process. ## 4 Experiments To ensure a fair comparison of various methods for the CEQA problem, we generated a dataset by sampling from ASER [46], the largest eventuality knowledge graph, which encompasses fourteen types of discourse relations. The division of edges within each knowledge graph into training, validation, and testing sets was performed in an 8:1:1 ratio, as illustrated in Table 5. 
The training graph \(\mathcal{G}_{train}\), validation graph \(\mathcal{G}_{val}\), and test graph \(\mathcal{G}_{test}\) were constructed using the training edges, training+validation edges, and training+validation+testing edges, respectively, following the established configuration outlined in prior research by [32]. Moreover, we conducted evaluations using different reasoning models, consistent with methodologies in previous studies. ### Query Sampling with Theorem Prover We employ the sampling algorithm proposed by [32]. We utilize the conjunctive query types outlined in [39]. Specifically, for the training dataset, we sample queries that have a maximum of two anchor nodes, while for the validation and test sets, we select queries containing up to three anchor eventualities. Once the query-answer pairs are sampled, we randomly select up to three edges that share common vertices with the reasoning chain of the query-answer pairs. These selected edges are then used as the informational atomics for the corresponding query. Subsequently, we employ the z3 prover [13] to filter the queries. We retain only those queries where the informational atomics incorporate effective implicit constraints, ensuring the presence of meaningful constraints in the data. In detail, for each eventuality present on the reasoning path towards an answer in the complex query, we create a corresponding boolean variable in the z3 prover. We then incorporate the relevant occurrence constraints based on the relations between these eventualities, as outlined in Table 1, and feed them into the z3 prover. If the result returned by the prover is unsat, it indicates a contradiction in the reasoning process. Regarding temporal constraints, we follow a similar approach. We create corresponding floating variables that represent the timestamps of the occurrence of the eventualities. We then establish constraints on the temporal order by utilizing floating operators such as >, =, or < between the variables. Once again, if the prover outputs unsat, it signifies a contradiction with respect to the sequence of events. Queries that have no contradictory answers and queries where all the answers are contradictory are discarded. The remaining queries are then categorized into two types: queries with occurrence constraints and queries with temporal constraints. Table 6 presents the average number of contradictory and non-contradictory answers per query. ### Baselines and Metrics In this section, we introduce several baseline query encoding models that use different neural network architectures to parameterize the operators in the computational graph and recursively encode the query into various embedding structures: (1) GQE [21] uses vectors to encode complex queries; (2) Q2P [5] uses multiple vectors to encode queries; (3) Neural MLP [1] use MLP as the operators; (4) FuzzQE [10] uses fuzzy logic to represent logical operators. To define the evaluation metrics, we use \(q\) to represent a testing query and \(\mathcal{G}_{val}\) and \(\mathcal{G}_{test}\) to represent the validation and testing knowledge graphs, respectively. We use \([q]_{val}\) and \([q]_{test}\) to represent the answers to query \(q\) on the validation graph \(\mathcal{G}_{val}\) and the testing graph \(\mathcal{G}_{test}\), respectively. Eq. (10) shows how to compute the metrics. When the evaluation metric is Hit@K, \(m(r)\) is defined as \(m(r)=\textbf{1}[r\leq K]\), where \(m(r)=1\) if \(r\leq K\), and \(m(r)=0\) otherwise. 
For mean reciprocal ranking (MRR), \(m(r)\) is defined as \(m(r)=\frac{1}{r}\). \begin{table} \begin{tabular}{l c c c c c c c} \hline \hline \multirow{2}{*}{Data Split} & \multirow{2}{*}{\#Types} & \multicolumn{3}{c}{Occurence Constraints} & \multicolumn{3}{c}{Temporal Constraints} \\ & & \#Queries & \#Ans. & \#Contr. Ans. & \#Queries & \#Ans. & \# Contr. Ans. \\ \hline Train & 6 & 124,766 & 5.02 & 1.53 & 35,962 & 5.02 & 1.15 \\ Validation & 15 & 30,272 & 7.68 & 1.75 & 23,905 & 9.17 & 1.44 \\ Test & 15 & 30,243 & 8.40 & 1.81 & 24,226 & 11.40 & 1.50 \\ \hline \hline \end{tabular} \end{table} Table 2: The dataset details for CEQA. #Ans. reports the number of answers that are proved to be not contradictory by theorem provers. #Contr. Ans. reports the number of answers that can be searched from the ground truth KG, but are contradictory due to the occurrence/temporal constraints. \[\texttt{metric}(q)=\frac{\sum_{v\in[q]_{test}/[q]_{val}}m(\texttt{rank}(v))}{|[q] _{test}/[q]_{val}|}. \tag{10}\] During the training process, the testing graph \(\mathcal{G}_{test}\) is unobserved. In the hyper-parameters selection process, we use the same metrics as Eq. (10), but replace the graphs \(\mathcal{G}_{test}/\mathcal{G}_{val}\) with \(\mathcal{G}_{val}/\mathcal{G}_{train}\). ### Details To ensure fair comparisons, we replicate all the models under a unified framework. We use the same number of embedding sizes of three hundred for all models and use grid-search to tune the hyperparameters of the learning rate ranging from \(\{0.002,0.001,0.0005,0.0002,0.0001\}\) and batch size ranging from \(\{128,256,512\}\). All the experiments can be run on NVIDIA RTX3090 GPUs. Experiments are repeated three times, and the averaged results are reported. ### Experiment Results Table 3 presents the results of the main experiment, which compares different query encoding models with and without MEQE. The table includes the performance metrics of Hit@1, Hit@3, and MRR for both occurrence constraints and temporal constraints, along with the average scores across all categories. The experimental results demonstrate that our proposed memory-enhanced query encoding (MEQE) model consistently improves the performance of existing query encoders in complex eventuality query answering. We conduct experiments on four commonly used query encoders, and the MEQE model, leveraging the memory model depicted in Figure 4, outperforms the baselines. The MEQE models differ structurally from the baseline models by incorporating a memory module that contains informational atomics. By reading this memory module, MEQE effectively incorporates implicit constraints from these atomics, leading to improved performance. Additionally, we observed that combining MEQE with the Q2P [5] model yields the best average performance across three metrics: Hit@1, Hit@3, and MRR. Furthermore, on average, MEQE enhances the Hit@1 metric by 17.53% and the Hit@3 metric by 9.53%. The greater improvement in the Hit@1 metric suggests that the model's ability to accurately predict the top-ranked answer has improved more significantly compared to predicting answers within the top three rankings. Moreover, MEQE demonstrates a 13.85% improvement in performance on queries with temporal constraints and an 11.15% improvement on occurrence constraints. This indicates that MEQE is particularly effective in handling temporal constraints compared to occurrence constraints. Table 4 displays the Hit@3 and MRR results of various types of complex queries. 
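For reference, the Hit@K and MRR values reported in Tables 3 and 4 follow Eq. (10); a minimal sketch of the per-query computation, assuming the ranks of the test-only answers (those in \([q]_{test}\) but not in \([q]_{val}\)) are already available:

```python
def query_metric(ranks_of_test_only_answers, metric="mrr", k=3):
    """Eq. (10): average m(rank(v)) over the answers that are only provable
    on the test graph, i.e. the filtered 'hard' answers of a query."""
    if metric == "mrr":
        scores = [1.0 / r for r in ranks_of_test_only_answers]
    else:  # Hit@K
        scores = [1.0 if r <= k else 0.0 for r in ranks_of_test_only_answers]
    return sum(scores) / len(ranks_of_test_only_answers)

# Example: a query whose three test-only answers are ranked 1, 4, and 10.
print(query_metric([1, 4, 10], "mrr"))           # (1 + 0.25 + 0.1) / 3 = 0.45
print(query_metric([1, 4, 10], "hit", k=3))      # 1/3
```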
Table 4 demonstrates the superiority of MEQE over the baseline models across different query types. Furthermore, the table indicates that, on average, MEQE achieves an improvement of 8.1% and 11.6%, respectively. This suggests that MEQE is particularly adept at handling queries with multiple eventualities. ## 5 Related Work Complex query answering is a task in deductive knowledge graph reasoning, where a system or model is required to answer a logical query on an incomplete knowledge graph. Query encoding [21] is a fast and robust method for addressing complex query answering. Various query embedding methods utilize different structures to encode logical KG queries, enabling them to handle different types of logical queries. The GQE method, introduced by Hamilton et al. [21], represents queries as vector representations to answer conjunctive queries. Ren et al. [32] employed hyper-rectangles to encode and answer existential positive first-order (EPFO) queries. Simultaneously, Sun et al. [37] proposed the use of centroid-sketch representations to enhance the faithfulness of the query embedding method for EPFO queries. Both conjunctive queries and EPFO queries are subsets of first-order logic (FOL) queries. The Beta Embedding [31] is the first query embedding method that supports a comprehensive set of operations in FOL by encoding entities and queries into probabilistic Beta distributions. Moreover, Zhang et al. [48] utilized cone embeddings to encode FOL queries. Meanwhile, there are also neural-symbolic methods for query encoding. Xu et al. [42] propose an entangled neural-symbolic method, ENeSy, for query encoding. Wang et al. [40] propose using pre-trained knowledge graph embeddings and one-hop message passing to conduct complex query answering. Additionally, Yang et al. [43] propose using Gamma Embeddings to encode complex logical queries. Finally, Liu et al. [25] propose pre-training on the knowledge graph with kg-transformer and then fine-tuning on the complex query answering task. Recently, Bai et al. [6] propose to use sequence encoders to encode the linearized computational graph of complex queries, and Galkin et al. [18] propose to conduct inductive logical reasoning on KG. 
Meanwhile, Zhu et al. [49] propose GNN-QE to conduct reasoning on KGs with message passing on the observed knowledge graph. Another approach to addressing complex knowledge graph queries is query decomposition [2]. In this research direction, the probabilities of the atomic queries are modeled using link predictors, and an inference-time optimization is then used to find the answers. In addition, an alternative to query encoding and query decomposition is proposed by Wang et al. [40]. They employ message passing on one-hop atomic queries to perform complex query answering. A recent neural search-based method called QTO is introduced by Bai et al. [7], which has shown impressive performance in complex query answering (CQA). Theorem proving is another deductive reasoning task applied to knowledge graphs. Neural theorem proving methods [34; 26; 27] have been proposed to tackle the incompleteness of KGs by using embeddings to conduct inference on missing information.
Table 4: Hit@3 and MRR results of the individual complex query types, grouped by the number of anchor eventualities, for GQE, Q2P, Neural MLP, and FuzzQE, each with and without MEQE.
## 6 Limitation Although our experiments demonstrate that MEQE improves the performance of existing models on the CEQA task, the evaluation is conducted on benchmark datasets constructed with theorem provers from the largest general-domain eventuality graph, ASER [46]. The generalizability of the proposed approach to specific or professional fields may require further investigation and evaluation.
## 7 Conclusion In this paper, we introduced complex eventuality query answering (CEQA) as a more rigorous definition of complex query answering (CQA) for eventuality knowledge graphs (EVKGs). We addressed the issue of implicit logical constraints on the occurrence and temporal order of eventualities, which had not been adequately considered in the existing definition of CQA. To ensure consistent reasoning, we leveraged theorem provers to construct benchmark datasets that enforce implicit logical constraints on the answers. Furthermore, we proposed memory-enhanced query encoding (MEQE) to enhance the performance of state-of-the-art neural query encoders on the CEQA task. Our experiments showed that MEQE significantly improved the performance of existing models on the CEQA task. Overall, our work provides a more comprehensive and effective solution to the complex query-answering problem on eventuality knowledge graphs.
2305.01627
Theoretical tidal evolution constants for stellar models from the pre-main sequence to the white dwarf stage Apsidal motion constants, moment of inertia, and gravitational potential energy
One of the most reliable means of studying the stellar interior is through the apsidal motion in double line eclipsing binary systems since these systems present errors in masses, radii, and effective temperatures of only a few per cent. On the other hand, the theoretical values of the apsidal motion to be compared with the observed values depend on the stellar masses of the components and more strongly on their radii (fifth power).The main objective of this work is to make available grids of evolutionary stellar models that, in addition to the traditional parameters (e.g. age, mass, log g, T$_{\rm eff}$), also contain the necessary parameters for the theoretical study of apsidal motion and tidal evolution. This information is useful for the study of the apsidal motion in eclipsing binaries and their tidal evolution, and can also be used for the same purpose in exoplanetary systems. All models were computed using the MESA package. We consider core overshooting for models with masses $\ge$ 1.2 M$_\odot$. For the amount of core overshooting we adopted a recent relationship for mass $\times$ core overshooting. We adopted for the mixing-length parameter $\alpha_{\rm MLT}$ the value 1.84 (the solar-calibrated value). Mass loss was taken into account in two evolutionary phases. The models were followed from the pre-main sequence phase to the white dwarf (WD) stage.The evolutionary models containing age,luminosity, log g, and Teff, as well as the first three harmonics of the internal stellar structure (k$_2$, k$_3$, and k$_4$), the radius of gyration $\beta$ y, and the dimensionless variable $\alpha$, related to gravitational potential energy, are presented in 69 tables covering three chemical compositions: [Fe/H] = -0.50, 0.00, and 0.50. Additional models with different input physics are available.
A. Claret
2023-05-02T17:38:38Z
http://arxiv.org/abs/2305.01627v1
Theoretical tidal evolution constants for stellar models from the pre-main sequence to the white dwarf stage ###### Abstract Context: Aims:One of the most reliable means of studying the stellar interior is through the apsidal motion in double line eclipsing binary systems since these systems present errors in masses, radii, and effective temperatures of only a few per cent. On the other hand, the theoretical values of the apsidal motion to be compared with the observed values depend on the stellar masses of the components and more strongly on their radii (fifth power). The main objective of this work is to make available grids of evolutionary stellar models that, in addition to the traditional parameters (e.g. age, mass, log g, T\({}_{\rm eff}\)), also contain the necessary parameters for the theoretical study of apsidal motion and tidal evolution. This information is useful for the study of the apsidal motion in eclipsing binaries and their tidal evolution, and can also be used for the same purpose in exoplanetary systems. Methods:All models were computed using the MESA package. We consider core overshooting for models with masses \(\geq\) 1.2 M\({}_{\odot}\). For the amount of core overshooting we adopted a recent relationship for mass \(\times\) core overshooting. We adopted for the mixing-length parameter \(\alpha_{\rm MLF}\) the value 1.84 (the solar-calibrated value). Mass loss was taken into account in two evolutionary phases. The models were followed from the pre-main sequence phase to the white dwarf (WD) stage. Results:The evolutionary models containing age, luminosity, log g, and Teff, as well as the first three harmonics of the internal stellar structure (k\({}_{\rm 2}\), k\({}_{\rm 3}\), and k\({}_{\rm 4}\)), the radius of gyration \(\beta\) y, and the dimensionless variable \(\alpha\), related to gravitational potential energy, are presented in 69 tables covering three chemical compositions: [Fe/H] = -0.50, 0.00, and 0.50. Additional models with different input physics are available. Conclusions: ## 1 Introduction Double line eclipsing binary systems (DLEBS) are the best sources for obtaining absolute stellar parameters with great precision. In addition, because of their proximity, some effects may appear due to the interaction of the two components, such as mutual irradiation, tidal distortion, and mass exchange. DLEBS are very important in astrophysics because the perturbations due to the proximity of the two components act as probes and make it possible to investigate in detail the evolution and in some particular cases their internal structure. In this sense, such perturbations play a very similar role to that of the usual techniques of physics labs in which objects are perturbed applying for example a magnetic and/or electric field and studying its behaviour under the action of the applied perturbations. In the case of DLEBS the presence of the companion changes the gravitational field of both, which affects the equilibrium configuration of both components (effect of tides). This alteration is responsible for the loss of spherical symmetry of the binary components and it depends on the internal structure of the components. The two stars can also be distorted by the effect of the rotation that tends to flatten them on the poles. From the theoretical point of view, it is possible to describe such distortions as a function of the internal structure of both stars. 
The orbit of this pair of stars will not be Keplerian because the orbital elements will be functions of time, in particular the argument of periastron \(\omega\). In general, there are three physical phenomena that can give rise to apsidal motion: the loss of the spherical symmetry by distortions; the presence of a third body; and a relativistic effect, the best known example being the advance of perihelion of the planet Mercury. On the other hand, earlier comparisons between the internal structure constants derived from the observed apsidal motions were reported a long time ago; they indicate that stellar structure is more centrally concentrated in mass than those extracted from the stellar evolutionary models (see e.g. Introduction in Claret&Gimenez 1993). Such discrepancies were partially resolved later by Claret&Gimenez (1993, 2010) considering new times of minima, new opacity tables, and core overshooting. For several years the apsidal motion of DI Her was a serious problem since the comparison between the theoretical calculations and the observed value of \(\dot{\omega}\) differed by almost 500%. Various mechanisms were invoked to explain such a discrepancy, including alternative theories of gravitation. For the confrontation of theory and observational data, Claret (1998) analysed some aspects of the apsidal motion of DI Her. The main conclusion of that paper was that an alternative theory of gravitation was not necessary to explain the observational value of the apsidal motion. Finally the case of DI Her was solved observationally through the Rossiter-McLaughlin effect by Albrecht et al. (2009). Later Claret et al. (2010) using the data obtained by Albrecht et al. (2009), mainly those related to the Rossiter-McLaughlin effect, found a good agreement between the theoretical value of k\({}_{2}\) and its observational counterpart. More recently, Lang, Winn, and Albrecht (2022), using TESS data combined with previous observations, obtained a significant result since the three-dimensional spin directions of the two components of DI Her could be determined. With these data these authors have found good agreement between k\({}_{\rm 2hcho}\), provided by Claret et al. (2021), and its observational counterpart. To the best of our knowledge, the last systematic comparison between theoretical and observed values of apsidal motion rates was carried out by Claret et al. (2021) who used an observational sample of 27 selected DLEBS to compare the theoretical values of k\({}_{2}\) with their observational counterparts. These authors have used minimum times extracted from the light curves provided by the Transiting Exoplanet Survey Satellite (TESS). Very good agreement has been found between the theoretically predicted values and their observational counterparts, including the troublesome case of DI Her. Another very important contribution to the study of apsidal motion came from a group from the Astronomical Institute at Charles University. Zasche&Wolf (2019) investigated 21 eccentric eclipsing binaries (early-type) located in the Small Magellanic Cloud and determined their apsidal motions and analysed their respective light curves. More recently Zasche et al. (2021) present an extensive sample of 162 early-type binary systems showing apsidal motion located in the Large Magellanic Cloud. 
This point is particularly important given that light curves and apsidal motion modelling were carried out for the first time for several systems simultaneously and in an environment with a chemical composition different from solar (for a more recent reference on apsidal motion measurements, see Zasche et al. 2023). The comments in the previous paragraphs refer mainly to DLEBS that are still on the main sequence or close to it. Burdge et al. (2019) studied the orbital decay of compact stars in a hydrogen-poor low-mass white dwarf (WD), FJ053332.05+020911.6. One of the components of this system exhibits ellipsoidal variations due to tidal distortions. The estimated mass for this component (PTF J0533+0209B) is of the order of 0.20 M\({}_{\odot}\). We note that the gravity-darkening effect must also be taken into account for compact stars distorted by tides and/or rotation (Claret 2021). Until then we computed our evolutionary stellar models containing internal structure constants only up to the giant phases. For the case of PTFJ053332.05+020911.6 and other similar systems, where at least one WD has been detected, we decided to extend our grids from the pre-main sequence (PMS) to the WD phase. This is the main objective of the present paper. Stellar evolutionary models: Apsidal motion internal constants, momentum of inertia, and gravitational potential energy The evolutionary tracks were computed using the Modules for Experiments in Stellar Astrophysics package (MESA; see Paston et al. 2011, 2013, 2015; v7385). We introduced a subroutine to compute the apsidal motion constants (k\({}_{2}\), k\({}_{3}\), k\({}_{4}\)), the moment of inertia, and the gravitational potential energy. In this paper we do not consider directly the effects of rotation. The adopted mixing-length parameter \(\alpha_{\rm MLT}\) was 1.84 (the solar-calibrated value; Torres et al. 2015). However, the \(\alpha_{\rm MLT}\) parameter seems to depend on the evolutionary status and/or metallicity, as shown by Magic et al. (2015) using 3D simulations. As commented in Claret (2019), it is not easy to compare these results with those coming from MESA, due to the different input physics, for example the equation of state and opacities. For the opacities we adopted the element mixture given by Asplund et al. (2009). The helium content follows the enrichment law Y = Y\({}_{p}\) + 1.67 Z, where Y\({}_{p}\) is the primordial helium content (Ade et al. 2016). The mass range covers the interval from 0.2 to 8.0 M\({}_{\odot}\) for three chemical compositions: [Fe/H] -0.5, 0.00, and +0.50. The grids for the two extra chemical compositions, [Fe/H] =-0.50 and 0.50, were computed to take into account observational errors in [Fe/H] for systems located in the solar environment. As commented in the Introduction, the evolutionary tracks were computed from the PMS up to the WD stage. We adopted the following scheme for mass loss: for the interval 0.2-1.8 M\({}_{\odot}\) we followed the recipe by Reimers (1977) with \(\eta_{B}\) = 0.1 and for the AGB scheme we adopted the formalism by Blocker (1995) with \(\eta_{B}\) = 10.0. For models more massive than 1.8 M\({}_{\odot}\) we assumed \(\eta_{R}\) = 0.1 and \(\eta_{B}\) = 30.0. The adopted wind switch RGB-AGB was 1.0\(\times\)10\({}^{-4}\). Convective core overshooting was considered for models with stellar mass higher than or equal to 1.2 M\({}_{\odot}\). In this paper we adopt the diffusive approximation, represented by the free parameter f\({}_{ov}\) (Freytag et al. 1996 and Herwig et al. 1997). 
The diffusion coefficient in the overshooting region is given by the expression \(D_{ov}=D_{o}exp\left(\frac{-z_{o}}{H_{v}}\right)\), where D\({}_{o}\) is the diffusion coefficient at the convective boundary, \(z\) is the geometric distance from the edge of the convective zone, H\({}_{v}\) is the velocity scale-height at the convective boundary expressed as H\({}_{v}\) = f\({}_{ov}\) H\({}_{p}\), and the coefficient f\({}_{ov}\) is a free parameter that governs the width of the overshooting layer. It is known that models computed adopting core overshooting are more centrally concentrated in mass than their standard counterparts following Claret&Gimenez (1991). For the amount of core overshooting we adopted the relationship between the stellar mass and f\({}_{ov}\) derived by Claret&Torres (2019), instead of adopting a single value of core overshooting for the entire range of masses, as was done in the past. Figure 1: Hertzprung–Russell diagram for some models from the PMS to cooling WD stage. The masses of the models are (from right to left) 0.7 (black), 1.0 (red), 1.4 (green), 3.0 (blue), and 8.0 (magenta) in solar units. \(\alpha_{\rm MLT}\) = 1.84, [Fe/H] = 0.00. ### Apsidal motion constants: \(k_{2}\), \(k_{3}\), and \(k_{4}\) The theoretical apsidal motion constants k\({}_{2}\), k\({}_{3}\), and k\({}_{4}\) were derived simultaneously by integrating the Radau equation using a fifth-order Runge-Kutta method, with a tolerance level of 10\({}^{-7}\): \[\frac{a{\rm d}\eta_{j}}{{\rm d}a}+\frac{6\rho(a)}{\overline{\rho}(a)}(\eta_{j} +1)+\eta_{j}(\eta_{j}-1)=j(j+1),\ j=2,3,4. \tag{1}\] Here the auxiliary parameter \(\eta_{j}\) is given by \[\eta_{j}\equiv\frac{a}{\epsilon_{j}}\frac{{\rm d}\epsilon_{j}}{{\rm d}a}. \tag{2}\] In Eq. 1, \(a\) denotes the mean radius of the stellar configuration, \(\epsilon_{j}\) is a measure of the deviation from sphericity, \(\rho(a)\) is the mass density at the distance \(a\) from the centre of the configuration, and \(\overline{\rho}(a)\) is the mean mass density within a sphere of radius \(a\). The apsidal motion constant of order \(j\) is given by \[k_{j}=\frac{j+1-\eta_{j}(R)}{2\left(j+\eta_{j}(R)\right)}, \tag{3}\] where \(\eta_{j}(R)\) indicates the values of \(\eta_{j}\) at the surface of the star. We note that these equations were derived in the framework of static tides. For the case of dynamic tides, we need to treat with more elaborated equations because the rate of static tides is derived assuming that the orbital period is larger than the periods of the free oscillation modes. However, dynamic tides can significantly change this scenario due to the effects of the compressibility of the stellar fluid. This is important in systems that are nearly synchronized synchronism. In this case, for higher rotational angular velocities, additional deviations due to resonances appear if the forcing frequencies of the dynamic tides come into the range of the free oscillation modes of the component stars. The role of dynamical tides was evaluated for some DLEBS by Claret&Willems (2002), Willems&Claret (2003), Claret&Gimenez (2010), and more recently in Claret et al. (2021). As mentioned in the Introduction, our stellar evolutionary tracks were computed without taking rotation into account. In order to evaluate the effects of rotation on the apsidal motion constants, a correction on the internal structure constants was proposed by Claret (1999). This correction is given by the equation \[\Delta{\rm log}{\rm k}_{2}\equiv{\rm log}{\rm k}_{2,{\rm standard}}-\lambda. 
\tag{4}\] Here \(\lambda=2V^{2}/(3gR)\), where \(g\) is the surface gravity and \(V\) is the equatorial rotational velocity. Figure 1 shows the Hertzprung-Russell diagram (HR) for some selected models: 0.70, 1.00, 1.40, 3.00, and 8.00 M\({}_{\odot}\). In Figs. 2 and 3 we show the evolution of log k\({}_{2}\) as a function of log g for models with initial masses of 8.00 and 1.00 M\({}_{\odot}\), respectively. The behaviour of the two models is similar when they reach the WD stage (log k\({}_{2}\) is of the order of -1.00). This implies that, using simple models based on polytropes, the equivalent \(n\) would be of the order of 2.1, where \(n\) is the polytropic index. This data confirm the earlier calculations for WD using polytropes as input physics. We recall that for the case of non-relativistic electrons \(n\approx 1.5\) and for the case of relativistic electrons \(n=3.0\) this index would be \(\approx 2.0\) (see Kopal (1959), pag. 35). The parameter k\({}_{2}\) is applied in the studies of apsidal motion of DLEBS and/or exoplanets, and is also useful for computing tidal evolution. For example, the differential equations that govern the tidal evolution depend not only on this parameter, but also on the radius of gyration (see Hut 1980, 1982). We can write the corresponding differential equations as \[\frac{de}{dt}=-\frac{27k_{21}}{t_{F1}}q(q+1)\left(\frac{R_{1}}{A }\right)^{8}\frac{e}{(1-e^{2})^{13/2}}\] \[\left(f_{3}-11/18(1-e^{2})^{3/2}f_{4}\frac{\Omega_{1}}{\omega} \right), \tag{5}\] \[\frac{dA}{dt}=-\frac{6k_{21}}{t_{F1}}q(q+1)\left(\frac{R_{1}}{A} \right)^{8}\frac{A}{(1-e^{2})^{15/2}}\] \[\left(f_{1}-(1-e^{2})^{3/2}f_{2}\frac{\Omega_{1}}{\omega}\right), \tag{6}\] \[\frac{d\Omega_{1}}{dt}=\frac{3k_{21}}{t_{F1}\beta_{1}^{2}}q^{2}\left(\frac{R_{ 1}}{A}\right)^{6}\frac{\omega}{(1-e^{2})^{6}}\left(f_{2}-(1-e^{2})^{3/2}f_{5} \frac{\Omega_{1}}{\omega}\right), \tag{7}\] \[\frac{d\Omega_{2}}{dt}=\frac{3k_{22}}{t_{F2}\beta_{2}^{2}}q^{2}_{2}\left(\frac{R _{2}}{A}\right)^{6}\frac{\omega}{(1-e^{2})^{6}}\left(f_{2}-(1-e^{2})^{3/2}f_{5} \frac{\Omega_{2}}{\omega}\right). \tag{8}\] Figure 3: Same as Figure 2, but for an initial mass of 1.00 M\({}_{\odot}\). Figure 2: Behaviour of logk\({}_{2}\) as a function of log g. [Fe/H]=0.00, initial mass of 8.00 M\({}_{\odot}\). In the above equations \(e\) represents the orbital eccentricity, A is the semi-major axis, M\({}_{i}\) is the mass of component \(i\), \(\Omega_{i}\) is the angular velocity of the component \(i\), \(\omega\) is the mean orbital angular velocity, R\({}_{i}\) is the radius of the component \(i\), q = M\({}_{2}\)/M\({}_{1}\), q\({}_{2}\) = M\({}_{1}\)/M\({}_{2}\), and t\({}_{\rm{fr}}\) is an estimation of the timescale of tidal friction for each component. ### Calculation of gravitational potential energy \(\Omega\) and moment of inertia \(I\) As indicated by Claret (2019) the effects of General Relativity on the calculation of the moment of inertia and gravitational potential energy can be neglected for stars during the PMS, main sequence, and even for WD. However, for consistency with our previous papers on compact stars, here we adopt the relativistic formalism throughout. Therefore, the moment of inertia can be computed using the equations \[J=\frac{8\pi}{3}\int_{0}^{R}\Lambda(r)r^{4}\left[\rho^{\prime}(r)+P(r)/c^{2} \right]dr,\] \[I\approx\frac{J}{\left(1+\frac{2GM}{R^{2}c^{2}}\right)}\equiv(\beta R)^{2}M, \tag{9}\] where \(\beta\) is the radius of gyration. 
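As a concrete illustration of how the internal structure constants of Eqs. (1)-(3) are obtained in practice, the sketch below (not the paper's MESA-based pipeline) integrates the Lane-Emden equation for a polytrope together with the Radau equation and evaluates k2, k3, and k4 at the surface; the polytropic index and the integration tolerances are assumptions made for the example.

```python
from scipy.integrate import solve_ivp

def apsidal_constants(n=3.0):
    """Lane-Emden polytrope of index n plus the Radau equation (Eq. 1) -> k2, k3, k4 (Eq. 3)."""
    js = (2, 3, 4)

    def rhs(xi, y):
        theta, dtheta, e2, e3, e4 = y
        # rho(a)/rho_bar(a) for a polytrope, using m(xi) = -4 pi xi^2 theta'
        rho_ratio = -max(theta, 0.0)**n * xi / (3.0 * dtheta)
        detas = [(j*(j + 1) - 6.0*rho_ratio*(eta + 1.0) - eta*(eta - 1.0)) / xi
                 for j, eta in zip(js, (e2, e3, e4))]
        return [dtheta, -max(theta, 0.0)**n - 2.0*dtheta/xi, *detas]

    def surface(xi, y):                      # stop when theta reaches zero (the stellar surface)
        return y[0]
    surface.terminal, surface.direction = True, -1

    xi0 = 1e-6                               # start from the series expansion near the centre
    y0 = [1.0 - xi0**2/6.0, -xi0/3.0, 0.0, 1.0, 2.0]   # eta_j(0) = j - 2
    sol = solve_ivp(rhs, (xi0, 20.0), y0, events=surface, rtol=1e-9, atol=1e-12)
    etas = sol.y[2:, -1]
    return {j: (j + 1 - eta) / (2.0*(j + eta)) for j, eta in zip(js, etas)}

print(apsidal_constants(3.0))                # k2 comes out near 0.014 for an n = 3 polytrope
```

The same Radau integration applies to a realistic evolutionary model once ρ(a)/ρ̄(a) is taken from the model's density profile instead of the polytropic one.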
The gravitational energy of a spherically symmetric star can be written as \[\Omega=-4\pi\int_{0}^{R}r^{2}\rho^{\prime}(r)\left[\Lambda^{1/2}(r)-1\right]dr \equiv-a\frac{GM^{2}}{R}. \tag{10}\] In the above equation \(P(r)\) is the pressure, \(\rho^{\prime}(r)\) the energy density, and the function \(\Lambda(r)\) is given by \(\left[1-\frac{2GM(r)}{rc^{2}}\right]^{-1}\). The parameter \(\alpha\) is a dimensionless number that measures the relative mass concentration. In the case of less elaborated stellar models (e.g. polytropes), we have \(\alpha=3/(5-n)\), where \(n\) is the polytropic index. Equations 9 and 10 were integrated simultaneously adopting the same numerical scheme and tolerance level as in Eq. 1. Some interesting properties of the moment of inertia and the gravitational potential energy: The \(\Gamma\) function The factors \(\alpha\) and \(\beta\) are connected through the function \(\Gamma\) introduced by Claret (2012) and improved by Claret&Hempel (2013), which is defined as \[\Gamma(mass,EOS)=\frac{[\alpha\beta]}{\Lambda(R)^{0.8}}, \tag{11}\] where EOS is the equation of state. One of the most striking properties of this function is that the final products of stellar evolution (white dwarfs, neutron-quark hybrids, and proto-neutron stars at the onset of formation of black holes) recover the value calculated for the PMS stage (i.e. \(\Gamma(\)mass, EOS\()\)) \(\approx 0.40\). We note for the last four mentioned systems that the effects of General Relativity are strong. This invariance was also extended to models of gaseous planets with masses between 0.1 and 50.0 M\({}_{\rm{Jupiter}}\), following from the gravitational contraction up to an age of \(\approx 20\) Myr. As a consequence of this invariance a macroscopic stability criterion for neutron, hybrid, and quark star models was established. More detailed information on this function, the'memory effect', and the stability criterion can be found in Claret (2012), Claret &Hempel (2013), and Claret (2014). As examples of the behaviour of \(\Gamma(\)mass, EOS\()\), Figures 4 and 5 show the invariance of such a function for the PMS-WD stages for two different models, 7.00 and 0.40 M\({}_{\odot}\), respectively, adopting the solar composition. In both figures \(\Gamma(\)mass, EOS\()\) increases by about three orders of magnitude with respect to its value at PMS (this increase is not shown fully in Figs. 4 and 5 due to the chosen scale). On the other hand, it is clear from both figures that there is a connection between the values of \(\Gamma(\)mass, EOS\()\) and the total thermal power from PP, CNO, and triple-\(\alpha\): the larger the thermonuclear contributions, the larger the value of \(\Gamma(\)mass, EOS\()\). During the PMS phase, when the chemical composition is homogeneous, \(\Gamma(\)mass, EOS\()\) \(\approx 0.40\) and \(\epsilon_{\rm{N}}\) \(\approx 0.0\). However, in the WD phase, although the initial chemical composition has been altered by thermonuclear reactions, the value 0.40 is recov Figure 4: Time evolution of the function \(\Gamma(\)mass, EOS\()\) for a model with initial mass of 7.00 M\({}_{\odot}\) evolving from the PMS to the WD stage, \(\alpha_{\rm{MLT}}\)=1.84, f\({}_{ov}\) = 0.016, [Fe/H]=0.00. The red line represents \(\Gamma(\)mass, EOS\()\), while the black line indicates the total thermal power from PP and CNO (excluding neutrinos) and the blue line indicates the total thermal power from triple-\(\alpha\) (also excluding neutrinos). The nuclear power \(\epsilon_{N}\) is in logarithmic scale. 
Figure 5: Same as Figure 4, but for a model with initial mass of 0.40 M\({}_{\odot}\) and f\({}_{ov}\) = 0.000. ered, given that these reactions cease. In summary, the property of \(\Gamma\)(mass, EOS) presents the same value (\(\approx\) 0.40) in the initial and final stages of stellar evolution. We confirmed this behaviour for all the models of our grids. This behaviour is also valid for gaseous planets, neutron-quark-hybrid stars, and proto-neutron stars at the onset of formation of black holes. ## 3 Final remarks and table organization We computed three evolutionary grids covering three metallicities: [Fe/H]=-0.50, 0.00, and +0.50 from PMS to the WD stage. The covered mass range was 0.20-8.00 M\({}_{\odot}\). For such models, in addition to the characteristic parameters (age, luminosity, log g, effective temperatures), the internal structure constants (k\({}_{2}\), k\({}_{3}\), k\({}_{4}\)), the moment of inertia, and the gravitational potential energy have also been computed. The resulting tables have been prepared mainly for studies of DLEBS and/or exoplanetary systems. Tables 1-3 summarize the input physics for each series of models, while Tables 1-69 contain the necessary theoretical inputs for the comparison with the absolute dimensions of the DLEBS as well as the necessary parameters for the apsidal motion and tidal evolution studies. ###### Acknowledgements. I thank M. Broz for his pertinent comments and suggestions that have improved this paper. The Spanish MEC (AYA2015-71718-R and ESP2017-87676-C5-2-8) is gratefully acknowledged for its support during the development of this work. AC also acknowledges financial support from the grant CEX2021-001131-S funded by MCIN/AEI/10.13039/501100011033. This research has made use of the SIMBAD database, operated at the CDS, Strasbourg, France, and of NASA's Astrophysics Data System Abstract Service.
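As a supplement to the tabulated quantities, here is a rough sketch of how the radius of gyration β (Eq. 9), the dimensionless α (Eq. 10), and Γ (Eq. 11) can be evaluated for a simple polytropic density profile in the Newtonian limit Λ ≈ 1, which, as noted above, is adequate away from the compact-object regime; the polytropic index and the grid resolution are assumptions of the example.

```python
import numpy as np
from scipy.integrate import solve_ivp

def alpha_beta_gamma(n=1.5):
    """beta (Eq. 9), alpha (Eq. 10) and Gamma (Eq. 11) for a polytrope of index n, with Lambda ~ 1."""
    def lane_emden(xi, y):
        theta, dtheta = y
        return [dtheta, -max(theta, 0.0)**n - 2.0*dtheta/xi]

    def surface(xi, y):
        return y[0]
    surface.terminal, surface.direction = True, -1

    xi0 = 1e-6
    sol = solve_ivp(lane_emden, (xi0, 20.0), [1.0 - xi0**2/6.0, -xi0/3.0],
                    events=surface, dense_output=True, rtol=1e-10, atol=1e-12)
    R = sol.t_events[0][0]
    xi = np.linspace(xi0, R, 5000)
    theta, dtheta = sol.sol(xi)
    rho = np.clip(theta, 0.0, None)**n            # density in units of the central density
    m_xi = 4.0*np.pi*(-xi**2*dtheta)              # enclosed mass from the Lane-Emden equation
    M = m_xi[-1]

    I = (8.0*np.pi/3.0)*np.trapz(rho*xi**4, xi)   # Eq. (9) in the Newtonian limit
    beta = np.sqrt(I/(M*R**2))
    Omega = -4.0*np.pi*np.trapz(rho*m_xi*xi, xi)  # Newtonian potential energy, G = 1
    alpha = -Omega*R/M**2                         # Eq. (10): Omega = -alpha G M^2 / R
    return alpha, beta, alpha*beta                # Gamma ~ alpha*beta when Lambda(R) ~ 1

print(alpha_beta_gamma(1.5))   # roughly (0.86, 0.45, 0.39), close to the ~0.40 invariant quoted above
```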
2307.00340
On relation between renormalized frequency and heat capacity for particles in an anharmonic potential
For free particles in a simple harmonic potential plus a weak anharmonicity, characterized by a set of anharmonic parameters, Newtonian mechanics asserts that there is a renormalization of the natural frequency of the periodic motion; and statistical mechanics claims that the anharmonicity causes a correction to the heat capacity of an ideal gas in the anharmonic potential. The orbital motion and thermal motion depend on the same anharmonic parameters, but in different combinations. These two manners of combinations are fundamentally different, demonstrating that statistical law can not emerge from the many-body limit of deterministic law for one-body.
Y. T. Liu, Y. H. Zhao, Y. Zhong, J. M. Shen, J. H. Zhang, Q. H. Liu
2023-07-01T13:38:11Z
http://arxiv.org/abs/2307.00340v1
On relation between renormalized frequency and heat capacity for particles in an anharmonic potential ###### Abstract For free particles in a simple harmonic potential plus a weak anharmonicity, characterized by a set of anharmonic parameters, Newtonian mechanics asserts that there is a renormalization of the natural frequency of the periodic motion; and statistical mechanics claims that the anharmonicity causes a correction to the heat capacity of an ideal gas in the anharmonic potential. The orbital motion and thermal motion depend on the same anharmonic parameters, but in different combinations. These two manners of combinations are fundamentally different, demonstrating that statistical law can not emerge from the many-body limit of deterministic law for one-body. renormalization, anharmonicity, heat capacity, Poincare-Lindstedt method, classical orbits, statistical law. ## I Introduction Anharmonicity plays a crucial role in modern physics, for instance, classical anharmonic \(\phi^{4}\) model in quantum field theory, [1] and various anharmonic effects in condensed matter physics, [2; 3; 4; 5; 6; 7; 8; 9; 10] and secular evolution in planetary orbits. [11] In the perturbation theory, the anharmonicity is usually associated with the renormalization of the natural frequency to remove the superficial divergence, i.e., to eliminate the secular term in the naive solution; [12; 13; 11; 14] and the relationship between the renormalization in quantum field theory and removal of the secular term in perturbation expansion has been a subject under intensive investigations. [15; 16; 17; 18; 19; 20; 21; 22] However, there are still problems not yet fully understood. On one hand, in Newtonian mechanics the anharmonicity usually leads to a renormalization of the natural frequency; and on the other, in statistical mechanics the simplest situation is that the anharmonicity causes a correction to the heat capacity for an ideal gas in the anharmonic potential. We have therefore two different mechanics both start from the same Hamiltonian to deal with the same many-body system: one is the Newtonian mechanics from which every particle has its own orbit; and another is the statistical mechanics from which every particle situates at a microstate with a definite probability. An immediate question then arises: Can the statistical law emerge from the _many-body limit_ of deterministic law for one-body? This question may be of _fundamental importance_, and in present paper, an exactly solvable one-dimensional system is used to understand this question in some depth. Assume that there is a particle of mass \(m\) moving in a potential field \(U\left(x\right)\) (\(x\in\left(-\infty,\infty\right)\)) given by, \[U\left(x\right)=U\left(0\right)+\frac{1}{2}m\omega_{0}^{2}\ell^{2}\left(\left( \frac{x}{\ell}\right)^{2}-a\left(\frac{x}{\ell}\right)^{3}+b\left(\frac{x}{ \ell}\right)^{4}+c\left(\frac{x}{\ell}\right)^{5}+d\left(\frac{x}{\ell} \right)^{6}\right), \tag{1}\] where \(\omega_{0}\) is the natural frequency of the unperturbed potential \(m\omega_{0}^{2}x^{2}/2\), and \(a\), \(b\), \(c\), and \(d\) are four small dimensionless parameters accounting for various orders of the anharmonicity, and \(\ell\left(\neq 0\right)\) is a characteristic length accounting for, e.g. the anharmonicity and we usually set \(U\left(0\right)=0\). We will call these four constants \(a\), \(b\), \(c\), and \(d\) as anharmonic parameters. 
Once the anharmonicity happens at infinity \(\ell\rightarrow\infty\), \(U\left(x\right)\) reproduces the usual harmonic one. The characteristic length \(\ell\) can be conveniently chosen to be the amplitude of the initial position, and can in fact be freely specified because our conclusion is independent of its specific value. When \(c=d=0\), the potential (1) becomes, \[U\left(x\right)=\frac{1}{2}m\omega_{0}^{2}\ell^{2}\left(\left(\frac{x}{\ell} \right)^{2}-a\left(\frac{x}{\ell}\right)^{3}+b\left(\frac{x}{\ell}\right)^{4} \right). \tag{2}\] Landau used such a form of potential (2) describing the anharmonicity of the vibrations and their interaction with the rotation within a diatomic molecule (see Eq. (49.11) in Ref. [23]), and we follow Landau's convention [23] to take a negative sign before first order anharmonicity \(a\left(x/\ell\right)^{3}\) in (1) and (2) though the interval of \(x\) in Landau model [23] is half space \(x\in\left(0,\infty\right)\) but what we are interested in is the full one. Every particle in the potential \(U\left(x\right)\) (1) moves along an exclusive trajectory, no matter what energy it has. However, each trajectory has its own frequency provided that it takes an exclusive value of energy. As we show shortly (c.f. Eqs. (11a) and (57)), we can expand the renormalized frequency up to order \(\left(x/\ell\right)^{4}\) in the following form, \[\omega\approx\omega_{0}\left(1+\chi^{\left(1\right)}\mu+\chi^{\left(2\right)} \mu^{2}+\chi^{\left(3\right)}\mu^{3}+\chi^{\left(4\right)}\mu^{4}\right), \tag{3}\] where parameter \(\mu\) is dimensionless parameter, defined by, \[\mu\equiv\frac{A}{\ell}\succ 0, \tag{4}\] where \(A\equiv x\left(t=0\right)\succ 0\) is initial position of the particle, which can also be used to characterize the value of the energy the particle takes, and once letting \(\ell=A\), we have \(\mu=1\). To note that the parameter \(\ell\) must be the same in both Newtonian mechanics and statistical physics, otherwise the comparison of their results is meaningless. The common features all trajectories in the potential (1) share are from (3) \(\chi^{\left(i\right)}=\chi^{\left(i\right)}\left(a,b,c,d\right)\) (\(i=1,2,3,4\)), and we call \(\chi^{\left(i\right)}\) the \(i\)-th order _orbital anharmonicity_ (OA). It is worth stressing that OAs are independent of initial conditions \(x\) and \(dx/dt\), whose possible uncertainty or stochasticity does not effect OAs. In analogue, we will introduce \(i\)-th order _thermal anharmonicity_\(\zeta^{\left(i\right)}=\zeta^{\left(i\right)}\left(a,b,c,d\right)\) in the similar expansion of the heat capacity, \[C\approx C_{0}\left(1+\alpha_{1}\zeta^{\left(1\right)}+\alpha_{2}^{\left(2 \right)}\zeta^{\left(2\right)}+\alpha_{3}\zeta^{\left(3\right)}+\alpha_{4} \zeta^{\left(4\right)}\right), \tag{5}\] where \(\alpha_{i}\) are some expansion coefficients, and \(C_{0}=Nk_{B}\) with \(N\) the number of the particle and \(k_{B}\) the Boltzmann constant. The key finding of the present study is, \[\zeta^{\left(i\right)}\text{ (}i=1,2,3,4\text{) is linearly independent of OAs (}\chi^{\left(1\right)},\chi^{\left(2\right)},\chi^{\left(3\right)},\chi^{14} \text{).} \tag{6}\] We are confident that it is a completely novel and physically significant result for OA and _thermal anharmonicity_ offer proper and faithful characterization of the anharmonicity from the point of Newtonian dynamics and thermodynamics, respectively. 
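To make the two sides of this comparison concrete, the following numerical sketch (an illustration, not part of the paper's derivation) measures the oscillation frequency of an orbit of amplitude A in the potential (2) and evaluates the heat capacity per particle at temperature T by direct quadrature of the partition function; the parameter values are arbitrary assumptions, and units with m = ω0 = ℓ = k_B = 1 are used.

```python
import numpy as np
from scipy.integrate import solve_ivp, quad

a, b = 0.05, 0.03                           # assumed anharmonic parameters (m = omega0 = ell = 1)

def U(x):                                   # the potential of Eq. (2)
    return 0.5*(x**2 - a*x**3 + b*x**4)

def orbital_frequency(A):
    """Angular frequency of the orbit with x(0) = A, dx/dt(0) = 0 (the Newtonian side)."""
    force = lambda t, y: [y[1], -(y[0] - 1.5*a*y[0]**2 + 2.0*b*y[0]**3)]
    turning = lambda t, y: y[1]             # dx/dt = 0 at the turning points
    sol = solve_ivp(force, (0.0, 400.0), [A, 0.0], events=turning, rtol=1e-10, atol=1e-12)
    half_periods = np.diff(sol.t_events[0]) # successive turning points are half a period apart
    return np.pi / half_periods.mean()

def heat_capacity(T):
    """Heat capacity per particle in units of k_B (the statistical side):
    1/2 from the kinetic term plus the configurational fluctuation term Var(U)/T^2."""
    w = lambda x: np.exp(-U(x)/T)
    Z  = quad(w, -20, 20, limit=300)[0]
    U1 = quad(lambda x: U(x)*w(x), -20, 20, limit=300)[0]
    U2 = quad(lambda x: U(x)**2*w(x), -20, 20, limit=300)[0]
    return 0.5 + (U2/Z - (U1/Z)**2)/T**2

mu, eta2 = 0.3, 0.05                        # amplitude A/ell and eta^2 = k_B T/(m omega0^2 ell^2)
print("orbital shift : omega/omega0 - 1 =", orbital_frequency(mu) - 1.0)
print("thermal shift : C/(N k_B) - 1    =", heat_capacity(eta2) - 1.0)
```

The two printed shifts are precisely the quantities whose dependence on the anharmonic parameters is worked out analytically in the following sections.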
The importance of the inequivalence between OA and _thermal anharmonicity_ can be understood from the opposite but untrue limit: If \(\chi^{\left(i\right)}=\zeta^{\left(i\right)}\), we could safely say that statistical law for the many-body system and the many-body limit of Newtonian mechanics for every particle in it are at least heavily overlapped and are even of same origin in nature. Otherwise, Eq. (6) strongly suggests that the statistical law can not emerge from the many-body limit of deterministic law for one-body. In very rough terms, the molecular dynamics can not exactly and completely reproduce all thermodynamic results. This paper is organized as follows. Section II and III give the detailed steps of calculations of \(\omega\) and \(C\) with \(c=d=0\), and Section IV presents only the final results of both \(\omega\) and \(C\) with nonvanishing \(c\) and \(d\), and all calculational steps are omitted. Explicitly, in section II, we utilize the Poincare-Lindstedt method to solve the equation of motion of position \(x\) in terms of time \(t\), from which we see in detail how the natural frequency is renormalized. In section III, the anharmonicity induced correction of the heat capacity is calculated and an order-by-order comparison between the renormalized frequency and the heat capacity is made. In section IV, the potential containing higher order anharmonicities with \(c\neq 0\) and \(d\neq 0\) in (1) is studied, and we see clearly that heat capacity \(C\) not only involves these OAs \(\chi^{\left(1\right)},\chi^{\left(2\right)},\chi^{\left(3\right)}\), and \(\chi^{\left(4\right)}\) but also the anharmonic parameters \(a,b,c\), and \(d\). In final section V, a brief conclusion is given. ## II Renormalized frequencies for second order anharmonic oscillator The equation of motion for one particle in the potential (2) is, \[m\frac{d^{2}x}{dt^{2}}=-\frac{dU\left(x\right)}{dx}=-m\omega_{0}^{2}\left(x- \frac{3}{2}a\ell\left(\frac{x}{\ell}\right)^{2}+2b\ell\left(\frac{x}{\ell} \right)^{3}\right), \tag{7}\] where the initial conditions at instant \(t=0\) are, \[x\left(0\right)=A,\frac{dx(0)}{dt}=0. \tag{8}\] Making a variable transform, \[x(t)\rightarrow\ell\varphi(t), \tag{9}\] we have from (4), (7) and (8), \[\frac{d^{2}\varphi}{dt^{2}}=-\omega_{0}^{2}\left(\varphi-\frac{3}{2}a\varphi^{2} +2b\varphi^{3}\right),\varphi\left(0\right)=\mu,\frac{d\varphi(0)}{dt}=0. \tag{10}\] To this equation, no exact solution is possible due to the nonlinearity in \(\varphi\), and even worse, the regular perturbation approaches fail for they lead to the secular term in the solutions of \(\varphi=\varphi(t)\). Instead, the Poincare-Lindstedt method gives uniformly valid asymptotic expansions for the periodic solutions of weakly nonlinear oscillations. [12; 13; 14] By the method, we mean that following three transformations must be done simultaneously, \[\omega_{0} \rightarrow \omega=\omega_{0}+\omega_{1}+\omega_{2}+..., \tag{11a}\] \[t \rightarrow \tau=\frac{\omega_{0}}{\omega}t,\] (11b) \[x\left(t\right) \rightarrow \ell\xi\left(\tau\right)=x\left(t\left(\tau\right)\right), \tag{11c}\] where \(\omega_{1}\sim O(a)\) and \(\omega_{2}\sim O(a^{2})\sim O(b)\) are the first and second order renormalization of the frequency, and so forth. In the same time, we have, \[\xi(\tau)\approx\xi_{0}(\tau)+\xi_{1}(\tau)+\xi_{2}(\tau)+... 
\tag{12}\] in which \(\xi_{0}(\tau)\) is the equation of motion for the unperturbed oscillator satisfying the initial conditions, \[\xi_{0}(0)=\mu,\frac{d\xi_{0}(0)}{d\tau}=0, \tag{13}\] and \(\xi_{1}\sim O(a)\) and \(\xi_{2}\sim O(a^{2})\) are the first and second order correction of the position \(\xi(\tau)\), with the initial conditions, respectively, \[\xi_{i}(0)=0,\frac{d\xi_{i}(0)}{d\tau}=0,(i=1,2). \tag{14}\] The correct equation of motion takes the following form, accurate up to \(O(b)\) or \(O(a^{2})\), \[\frac{d^{2}\xi}{d\tau^{2}}\approx-\left(\omega_{0}+\omega_{1}+\omega_{2} \right)^{2}\left(\xi-\frac{3}{2}a\xi^{2}+2b\xi^{3}\right), \tag{15}\] The zeroth, first, and second order equations of motion of Eq. (15) are, respectively, \[\frac{d^{2}\xi_{0}}{d\tau^{2}}+\omega_{0}^{2}\xi_{0}=0, \tag{16}\] \[\frac{d^{2}\xi_{1}}{d\tau^{2}}+\omega_{0}^{2}\xi_{1}+\left(- \frac{3}{2}a\omega_{0}^{2}\xi_{0}^{2}+2\omega_{1}\omega_{0}\xi_{0}\right)=0,\] (17) \[\frac{d^{2}\xi_{2}}{d\tau^{2}}+\omega_{0}^{2}\xi_{2}+\left(2 \omega_{1}\omega_{0}-3a\omega_{0}^{2}\xi_{0}\right)\xi_{1}+2\omega_{0}^{2}b \xi_{0}^{3}-3a\omega_{1}\omega_{0}\xi_{0}^{2}+\left(2\omega_{2}\omega_{0}+ \omega_{1}^{2}\right)\xi_{0}=0. \tag{18}\] The zeroth order equation of motion gives the usual harmonic oscillatory solution, \[\xi_{0}(\tau)=\mu\cos\left(\omega_{0}\tau\right). \tag{19}\] The naive solution of the first order equation of motion is then, \[\xi_{1}(\tau)=-\omega_{1}\tau\mu\sin\left(\omega_{0}\tau\right)+\frac{1}{4}a \mu^{2}\left(3-2\cos\left(\omega_{0}\tau\right)-\cos\left(2\omega_{0}\tau \right)\right). \tag{20}\] The first term in the right-hand side gives the divergent oscillatory amplitude \(\omega_{1}\tau\mu\) as time \(\tau\rightarrow\infty\) with \(\omega_{1}\neq 0\). To remove the divergence, we have to choose, \[\omega_{1}=0. \tag{21}\] The correct first order solution of equation of motion is thus, \[\xi_{1}(\tau)=\frac{1}{4}a\mu^{2}\left(3-2\cos\left(\omega_{0}\tau\right)-\cos \left(2\omega_{0}\tau\right)\right). \tag{22}\] The naive solution of the second order equation of motion is, \[\xi_{2}(\tau) = \frac{\mu\tau}{16}\left(3\mu^{2}\left(5a^{2}-4b\right)\omega_{0}- 16\omega_{2}\right)\sin(\omega_{0}\tau) \tag{23}\] \[+\frac{\mu^{3}}{16}\left(-12a^{2}+\left(\frac{29}{4}a^{2}-b \right)\cos\left(\omega_{0}\tau\right)+4a^{2}\cos\left(2\omega_{0}\tau\right)+ \left(b+\frac{3}{4}a^{2}\right)\cos\left(3\omega_{0}\tau\right)\right).\] The first term in the right-hand side gives also the divergent oscillatory amplitude as time \(\tau\rightarrow\infty\), and this divergence can simply be removed with \(\omega_{2}\) being selected to satisfy, \[3\mu^{2}\left(5a^{2}-4b\right)\omega_{0}-16\omega_{2}=0. \tag{24}\] We have the second order correction of the frequency \(\omega_{2}\), \[\omega_{2}=\frac{3}{16}\left(5a^{2}-4b\right)\mu^{2}\omega_{0}=\chi^{(2)}\mu^ {2}\omega_{0}. \tag{25}\] where \(\chi^{(2)}\) is the second order OF which is a combination of second order anharmonic parameters \(b\) and \(a^{2}\), \[\chi^{(2)}\equiv\frac{3}{16}\left(5a^{2}-4b\right). \tag{26}\] The correct second order solution of equation of motion (18) is, \[\xi_{2}(\tau)=\frac{\mu^{3}}{16}\left(-12a^{2}+\left(\frac{29}{4}a^{2}-b \right)\cos\left(\omega_{0}\tau\right)+4a^{2}\cos\left(2\omega_{0}\tau\right) +\left(\frac{3}{4}a^{2}+b\right)\cos\left(3\omega_{0}\tau\right)\right). 
\tag{27}\] The important result is then that the natural frequency \(\omega_{0}\) is renormalized to be, up to accuracy of second order anharmonicity \(O(a^{2})\sim O(b)\), \[\omega=\left(1+\chi^{(2)}\mu^{2}\right)\omega_{0}. \tag{28}\] The oscillation is composed of a single prime frequency \(\omega_{0}\) and its higher order harmonics, \[\xi(\tau) \approx \mu\cos\left(\omega_{0}\tau\right)+\frac{1}{4}a\mu^{2}\left(3-2 \cos\left(\omega_{0}\tau\right)-\cos\left(2\omega_{0}\tau\right)\right) \tag{29}\] \[+\frac{\mu^{3}}{16}\left(-12a^{2}+\left(\frac{29}{4}a^{2}-b \right)\cos\left(\omega_{0}\tau\right)+4a^{2}\cos\left(2\omega_{0}\tau\right) +\left(b+\frac{3}{4}a^{2}\right)\cos\left(3\omega_{0}\tau\right)\right).\] It is easily to verify that the energy is conserved for we have, \[E\left(t\right)=\frac{1}{2}m\left(\frac{dx}{dt}\right)^{2}+U\left(x\right)= \frac{1}{2}m\omega_{0}^{2}A^{2}\left(1-a\mu+b\mu^{2}\right)=E\left(t=0\right). \tag{30}\] The anharmonicity induced correction of the energy is, \[\Delta E=\frac{1}{2}m\omega_{0}^{2}A^{2}\left(-a\mu+b\mu^{2}\right). \tag{31}\] Requiring that the energy shift is small, we have, \[\frac{\left|\Delta E\right|}{\frac{1}{2}m\left(\omega_{0}A\right)^{2}}=\left| -a\mu+b\mu^{2}\right|\ll 1. \tag{32}\] The sufficient conditions for this equation are, \[\left|a\right|\ll 1,\left|b\right|\ll 1. \tag{33}\] This is what small constants \(a\) and \(b\) mean in Newtonian mechanics. Once these conditions break, the perturbation method (15) does not apply. ## III Second order anharmonicity induced correction of heat capacity For our purpose, we need to compute the partition function in Boltzmann statistical mechanics, with \(H=p^{2}/2m+U(x)\), \[Z\equiv\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\exp(-\beta H)\frac{dxdp}{h }=Z_{T}Z_{U}, \tag{34}\] where \(h\) is the Planck's constant, and \(Z_{T}\) is the momentum factor of the partition function divided by \(h\), \[Z_{T}\equiv\int_{-\infty}^{\infty}\exp(-\beta\frac{p^{2}}{2m})\frac{dp}{h}= \frac{\sqrt{2\pi}}{h}\sqrt{\frac{m}{\beta}}, \tag{35}\] and \(Z_{U}\) is the configurational factor of the partition function, with transform \(x\rightarrow\ell\xi\), \[Z_{U} = \int_{-\infty}^{\infty}\exp(-\beta U(x))dx \tag{36}\] \[= \int_{-\infty}^{\infty}\exp\left(-\beta\left(\frac{1}{2}m\omega_{ 0}^{2}\ell^{2}\left(\left(\frac{x}{\ell}\right)^{2}-a\left(\frac{x}{\ell} \right)^{3}+b\left(\frac{x}{\ell}\right)^{4}\right)\right)\right)dx\] \[= \ell\int_{-\infty}^{\infty}\exp\left(-\frac{\xi^{2}}{2\eta^{2}} \right)\exp\left(-\frac{-a\xi^{3}+b\xi^{4}}{2\eta^{2}}\right)d\xi\] \[\approx \ell\int_{-\infty}^{\infty}\exp\left(-\frac{\xi^{2}}{2\eta^{2}} \right)\left(1+\frac{a\xi^{3}-b\xi^{4}}{2\eta^{2}}+\frac{1}{8}\left(\frac{a \xi^{3}}{\eta^{2}}\right)^{2}\right)d\xi\] \[= \sqrt{2\pi}\ell\eta\left(1+\frac{3}{8}\left(5a^{2}-4b\right)\eta^ {2}\right),\] where \(\eta\) is the dimensionless parameter, defined by, \[\eta\equiv\sqrt{\frac{1}{\beta m\omega_{0}^{2}\ell^{2}}}=\sqrt{\frac{k_{B}T}{ m\omega_{0}^{2}\ell^{2}}}. \tag{37}\] The partition function is then, \[Z\equiv Z_{T}Z_{U}\approx\frac{2\pi}{\beta h\omega_{0}}\left(1+\frac{2\chi^{( 2)}}{\beta m\omega_{0}^{2}\ell^{2}}\right)=\frac{2\pi}{\beta h\omega_{0}} \left(1+2\chi^{(2)}\eta^{2}\right). \tag{38}\] Once \(\chi^{(2)}\) is negligible, the partition function reduces to be, \[Z\approx\frac{2\pi}{\beta h\omega_{0}}, \tag{39}\] which leads to the energy equipartition result for the heat capacity of the oscillatory degree of freedom, \[C_{0}=Nk_{B}. 
\tag{40}\] The anharmonicity gives rise to the correction to the internal energy, up to accuracy of second order anharmonicity \(O(a^{2})\sim O(b)\), \[\Delta U^{(2)}=-N\frac{\partial}{\partial\beta}\ln\left(1+\frac{2\chi^{(2)}}{ \beta m\omega_{0}^{2}\ell^{2}}\right)\approx-\frac{2\chi^{(2)}N}{m\omega_{0}^ {2}\ell^{2}}\frac{\partial}{\partial\beta}\frac{1}{\beta}=\frac{2\chi^{(2)}N}{ m\omega_{0}^{2}\ell^{2}\beta^{2}}. \tag{41}\] The corresponding correction to heat capacity is proportional to the first power of the temperature via \(\eta^{2}\) (37), \[\Delta C^{(2)}=\frac{\partial\Delta U}{\partial T}\approx 4\eta^{2}\chi^{(2)}Nk _{B}, \tag{42}\] which is also compatible with the Landau's result. [23] Evidently, in statistical mechanics, \(\Delta C^{(2)}\) is the second order quantity which linearly depends on \(\chi^{(2)}\). Thus, we have the heat capacity, \[C\approx\left(1+4\eta^{2}\chi^{(2)}\right)Nk_{B}. \tag{43}\] It takes the form (5) which suggests in general a _linear_ dependence of \(C\) on \(\chi^{(i)}\). Whether such a form (5) persists for the heat capacity with higher order anharmonicities is an interesting problem. In next section, we show that this _linear_ dependence on the OAs breaks. High order anharmonicities: renormalization of the natural frequency and heat capacity correction When \(c\neq 0\) and \(d\neq 0\), the equation of motion (15) becomes, \[\frac{d^{2}\xi}{d\tau^{2}}\approx-\left(\omega_{0}+\omega_{1}+\omega_{2}+\omega_ {3}+\omega_{4}\right)^{2}\left(\xi-\frac{3}{2}a\xi^{2}+2b\xi^{3}+\frac{5}{2}c \xi^{4}+3d\xi^{5}\right). \tag{44}\] Utilization of the Poincare-Lindstedt method to solve this equation of motion of position \(\xi\) in terms of time \(\tau\), Eq. (44) gives results for each order in the following. The first three solutions \((\xi_{0}(\tau),\xi_{1}(\tau),\xi_{2}(\tau))\) are already given in (29), and the third order solution \(\xi_{3}(\tau)\) is, \[\xi_{3}(\tau)=\frac{\eta^{4}}{128}(\Lambda_{0}+\Lambda_{1}\cos(\omega_{0}\tau) +\Lambda_{2}\cos(2\omega_{0}\tau)+\Lambda_{3}\cos(3\omega_{0}\tau)+\Lambda_{4} \cos(4\omega_{0}\tau)), \tag{45}\] where, \[\Lambda_{0} = 3\left(75a^{3}-84ab-40c\right), \tag{46a}\] \[\Lambda_{1} = -119a^{3}+140ab+64c,\] (46b) \[\Lambda_{2} = \frac{32}{3}\left(-9a^{3}+12ab+5c\right),\] (46c) \[\Lambda_{3} = -3a\left(3a^{2}+4b\right),\] (46d) \[\Lambda_{4} = \frac{1}{3}\left(-3a^{3}-12ab+8c\right). \tag{46e}\] The fourth order solution \(\xi_{4}(\tau)\) is, \[\xi_{4}(\tau)=\frac{3\eta^{5}}{64}(\Omega_{0}+\Omega_{1}\cos(\omega_{0}\tau)+ \Omega_{2}\cos(2\omega_{0}\tau)+\Omega_{3}\cos(3\omega_{0}\tau)+\Omega_{4} \cos(4\omega_{0}\tau)+\Omega_{5}\cos(5\omega_{0}\tau)), \tag{47}\] where, \[\Omega_{0} = -a\left(75a^{3}-116ab-56c\right), \tag{48a}\] \[\Omega_{1} = \frac{2357a^{4}}{64}+\frac{23b^{2}}{12}-\frac{1475a^{2}b}{24}- \frac{292ac}{9}-\frac{8d}{3},\] (48b) \[\Omega_{2} = \frac{16}{9}a\left(18a^{3}-30ab-13c\right),\] (48c) \[\Omega_{3} = \frac{93a^{4}}{16}-2b^{2}-\frac{11a^{2}b}{4}+\frac{3ac}{4}+\frac {5d}{2},\] (48d) \[\Omega_{4} = \frac{1}{9}a\left(3a^{3}+12ab-8c\right),\] (48e) \[\Omega_{5} = \frac{5a^{4}}{192}+\frac{b^{2}}{12}+\frac{5a^{2}b}{24}-\frac{11ac} {36}+\frac{d}{6}. 
\tag{48f}\] The third order and fourth order normalized frequencies are, respectively, \[\omega_{3}=-a\chi^{(2)}\eta^{3}\omega_{0}=\chi^{(3)}\eta^{3}\omega_{0}, \tag{49}\] where \(\chi^{(3)}\) is the third order OA, defined by, \[\chi^{(3)}\equiv-a\chi^{(2)}, \tag{50}\] and, \[\omega_{4}=\frac{3\eta^{4}}{1024}\left(1155a^{4}-2200a^{2}b-1120ac+304b^{2}-3 20d\right)\omega_{0}=\chi^{(4)}\eta^{4}\omega_{0}, \tag{51}\] where \(\chi^{(4)}\) is the fourth order OA, formed by a non-trivial combination of all fourth order parameters \((a^{4},a^{2}b,ac,b^{2},d)\), defined by, \[\chi^{(4)}\equiv\frac{3\eta^{4}}{1024}\left(1155a^{4}-2200a^{2}b-1120ac+304b^ {2}-320d\right). \tag{52}\] We anticipate that \(\chi^{(3)}\) and \(\chi^{(4)}\) will appear in the heat capacity. To see how two quantities \(\chi^{(3)}\) and \(\chi^{(4)}\) may appear in the higher order corrections to heat capacity, let us compute the partition function \(Z\equiv Z_{T}Z_{U}\) with the full form of potential (1). The result is, with calculations similar to (36)-(38), \[Z\equiv Z_{T}Z_{U}\approx\frac{2\pi}{\beta h\omega_{0}}\left(1+2\chi^{(2)}\eta ^{2}+\gamma\eta^{4}\right). \tag{53}\] where, \[\gamma = \frac{15}{128}\left(7\left(33a^{4}-72a^{2}b-32ac+16b^{2}\right)-6 4d\right) \tag{54}\] \[= -8b\chi^{(2)}+8\chi^{(4)}.\] The anharmonicity gives rise to the correction to the internal energy in the following, \[\Delta U=-N\frac{\partial}{\partial\beta}\ln\left(1+\frac{2\chi^{(2)}}{ \beta m\omega_{0}^{2}\ell^{2}}+\frac{\gamma}{\left(\beta m\omega_{0}^{2}\ell^ {2}\right)^{2}}\right)\approx Nk_{B}T\left(\frac{2\chi^{(2)}}{\beta m\omega_ {0}^{2}\ell^{2}}+2\frac{\gamma-2\left(\chi^{(2)}\right)^{2}}{\left(\beta m \omega_{0}^{2}\ell^{2}\right)^{2}\beta}\right) \tag{55}\] The corresponding correction to the heat capacity is, \[\Delta C=\frac{\partial\Delta U}{\partial T}=Nk_{B}\left(4\chi^{(2)}\eta^{2}- 12\left(\left(\chi^{(2)}\right)^{2}+4b\chi^{(2)}-4\chi^{(4)}\right)\eta^{4} \right). \tag{56}\] Thus fourth order correction to the heat capacity does not depend on the OAs alone because of presence of a term \(b\chi^{(2)}\). Collecting all results together, we have the renormalized frequency \(\omega\) and heat capacity \(C\), respectively, \[\omega \approx \omega_{0}\left(1+\chi^{(2)}\mu^{2}-a\chi^{(2)}\mu^{3}+\chi^{(4)} \mu^{4}\right),\;\text{and} \tag{57}\] \[C \approx Nk_{B}\left(1+4\chi^{(2)}\eta^{2}-12\left(\left(\chi^{(2)} \right)^{2}+4b\chi^{(2)}-4\chi^{(4)}\right)\eta^{4}\right). \tag{58}\] In terms of the thermal anharmonicity, we have two nonzero elements, the 2nd and fourth order anharmonicity \(\chi^{(2)}\) and \(\left(\chi^{(2)}\right)^{2}+4b\chi^{(2)}-4\chi^{(4)}\). It is evidently that both OA and the thermal anharmonicity have only two common elements, the first and second order anharmonicities \(\chi^{(1)}\left(=0\right)\) and \(\chi^{(2)}\), showing that from particle orbits statistical law can be largely re-constructed. However, in essence, the thermal properties are independent of the orbits for statistical mechanics dictate its own manner of dependence on the anharmonic parameters. Before closing this section, we discuss an interesting situation which a little bit deviates the theme of present study. In statistical mechanics, we can theoretically assume the characteristic length to be a "thermal one" \(\ell=\sqrt{k_{B}T/m\omega_{0}^{2}}\). Then, we have \(C=Nk_{B}\) which is irrelevant to the anharmonicity. 
The demonstration is straightforward, because the configurational factor of the partition function \(Z_{U}\) is, with transform \(x\rightarrow\ell\xi\), \[Z_{U} = \int_{-\infty}^{\infty}\exp(-\beta U(x))dx \tag{59}\] \[= \int_{-\infty}^{\infty}\exp\left(-\frac{\beta}{2}m\omega_{0}^{2} \ell^{2}\left(\left(\frac{x}{\ell}\right)^{2}+\sum_{j=3}a_{j}\left(\frac{x}{ \ell}\right)^{j}\right)\right)dx\] \[= \ell\int_{-\infty}^{\infty}\exp\left(-\frac{1}{2}\sum_{j=2}a_{j} \xi^{j}\right)d\xi\] \[= \ell f\left(\left\{a_{j}\right\}\right),\] where \(U(x)\) contains anharmonicity of arbitrarily high orders, and \(a_{2}=1\) and \(a_{j}\) (\(j\geq 3\)) are anharmonic parameters, and \(f\left(\left\{a_{j}\right\}\right)\equiv\int_{-\infty}^{\infty}\exp\left(- \frac{1}{2}\sum_{j=2}a_{j}\xi^{j}\right)d\xi\) is independent of the temperature. Since the dependence of \(Z_{U}\) on the temperature is via \(\ell=\sqrt{k_{B}T/m\omega_{0}^{2}}\) only, the partition function becomes \(Z=\left(k_{B}T/\hbar\omega_{0}\right)f\left(\left\{a_{j}\right\}\right)\). We have immediately \(C=Nk_{B}\). To note that the anharmonic potential of form (1) with \(\ell=\sqrt{k_{B}T/m\omega_{0}^{2}}\) is in fact problematic because by definition, Hamiltonian must be temperature-indepedent. However, from the pure theoretical consideration, such a potential may be considered as an effective one, which may enrich our understanding of the energy equipartition theorem. Conclusions A particle in a simple harmonic potential is fully understood. However, once the potential is added by some weakly anharmonic terms, the problem becomes highly non-trivial as we learn from Kolmogorov-Arnold-Moser theorem and Fermi-Pasta-Ulam-Tsingou nonlinear lattice oscillations. Once a particle moves in the anharmonic potential field, the natural frequency must be renormalized and the OAs can then be introduced; and the dependence of the OAs on the anharmonic parameters is dictated by the Newtonian mechanics. For a classical ideal gas in the same anharmonic potential field, the internal energy and heat capacity have their own ways of dependence on the anharmonic parameters, determined by the statistical mechanics, and the corresponding so-called thermal anharmonicities are introduced. The OAs and thermal anharmonicities reflect the anharmonicity in potential in mechanics and thermodynamics, respectively. The first two order anharmonicities in the mechanics and the thermodynamics are the same, whereas the third and fourth order anharmonicities are different. Though no higher order anharmonicities are calculated, we can conclude that the statistical law can not emerge from the _many-body limit_ of deterministic law for few-body. It seems to us that the clear difference between the OA and the thermal anharmonicity is useful in exploring the relation between a single particle that obeys the Newtonian mechanics and many particles that follows the statistical mechanics. The applications to other problems are under exploration. ###### Acknowledgements. QHL is grateful to the members, especially to Professor Hong Qian at University of Washington and Hong Zhao at Xiamen University and Professor Zhigang Zheng at Huaqiao University, of Online Club Nanothermodynamica (Founded in June 2020) for extensive discussions of various problems in statistical physics. We are indebted to Dr. Xinyuan Ai for participation of the early stage of the present work. This work is financially supported by National Natural Science Foundation of China under Grant No. 11675051 and No. 11905056.
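As a small numerical illustration of this conclusion (using the expressions quoted in Eqs. (26), (52), and (56) above, with χ(4) taken as the amplitude-independent coefficient appearing in Eq. (51)), the sketch below builds two parameter sets (a, b, c, d) that share the same χ(2) and χ(4) but differ in b, and shows that their fourth-order heat-capacity coefficients differ; the chosen target values are arbitrary assumptions.

```python
import numpy as np

def chi2(a, b):                 # Eq. (26)
    return 3.0/16.0*(5*a**2 - 4*b)

def chi4(a, b, c, d):           # amplitude-independent coefficient of Eqs. (51)-(52)
    return 3.0/1024.0*(1155*a**4 - 2200*a**2*b - 1120*a*c + 304*b**2 - 320*d)

def dC4_coeff(a, b, c, d):      # coefficient of eta^4 in C/(N k_B), from Eq. (56)
    x2 = chi2(a, b)
    return -12.0*(x2**2 + 4*b*x2 - 4*chi4(a, b, c, d))

def parameters_with(target_chi2, target_chi4, b, c=0.0):
    """Choose a and d so that (a, b, c, d) reproduces the requested chi2 and chi4."""
    a = np.sqrt((16.0/3.0*target_chi2 + 4*b)/5.0)
    d = (1155*a**4 - 2200*a**2*b - 1120*a*c + 304*b**2 - 1024.0/3.0*target_chi4)/320.0
    return a, b, c, d

# two potentials with identical chi2 and chi4 (chi3 = -a*chi2 differs along with b) ...
p1 = parameters_with(0.03, 0.01, b=0.02)
p2 = parameters_with(0.03, 0.01, b=0.08)
print("chi2:", chi2(p1[0], p1[1]), chi2(p2[0], p2[1]))
print("chi4:", chi4(*p1), chi4(*p2))
# ... but different fourth-order heat-capacity coefficients, because of the b*chi2 term
print("eta^4 heat-capacity coefficients:", dC4_coeff(*p1), dC4_coeff(*p2))
```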
2302.02389
Maximal stable quotients of invariant types in NIP theories
For a NIP theory $T$, a sufficiently saturated model $\mathfrak{C}$ of $T$, and an invariant (over some small subset of $\mathfrak{C}$) global type $p$, we prove that there exists a finest relatively type-definable over a small set of parameters from $\mathfrak{C}$ equivalence relation on the set of realizations of $p$ which has stable quotient. This is a counterpart for equivalence relations of the main result of the paper "On maximal stable quotients of definable groups in NIP theories" by M. Haskel and A. Pillay which shows the existence of maximal stable quotients of type-definable groups in NIP theories. Our proof adapts the ideas of the proof of this result, working with relatively type-definable subsets of the group of automorphisms of the monster model as defined in the paper "On first order amenability" by E. Hrushovski, K. Krupinski, and A. Pillay.
Krzysztof Krupiński, Adrián Portillo
2023-02-05T14:12:22Z
http://arxiv.org/abs/2302.02389v2
# Maximal stable quotients of invariant types in NIP theories ###### Abstract. For a NIP theory \(T\), a sufficiently saturated model \(\mathfrak{C}\) of \(T\), and an invariant (over some small subset of \(\mathfrak{C}\)) global type \(p\), we prove that there exists a finest relatively type-definable over a small set of parameters from \(\mathfrak{C}\) equivalence relation on the set of realizations of \(p\) which has stable quotient. This is a counterpart for equivalence relations of the main result of [1] on the existence of maximal stable quotients of type-definable groups in NIP theories. Our proof adapts the ideas of the proof of this result, working with relatively type-definable subsets of the group of automorphisms of the monster model as defined in [1]. Key words and phrases: Stable quotient, hyperimaginary, invariant type. 2020 Mathematics Subject Classification: 03C45. Both authors are supported by the Narodowe Centrum Nauki grant no. 2016/22/E/ST1/00450. The first author is also supported by the Narodowe Centrum Nauki grant no. 2018/31/B/ST1/00357. ## 1. Introduction Let \(X\) be an \(A\)-type-definable set and let \(E\) be an equivalence relation on \(X\) which is type-definable over \(A\). The quotient \(X/E\) is then a _hyperdefinable set_; the notions of stability and weak stability of such quotients that we use are recalled in Section 2. Throughout, \(\mathfrak{C}\) denotes a sufficiently saturated monster model of a complete theory \(T\), \(\mathfrak{C}^{\prime}\succ\mathfrak{C}\) a still more saturated monster model, and \(p(x)\in S(\mathfrak{C})\) a global type invariant over a small set \(A\subseteq\mathfrak{C}\). The main result of this paper is the following counterpart, for equivalence relations on the set \(p(\mathfrak{C}^{\prime})\) of realizations of \(p\), of the main theorem of [1] on the existence of maximal stable quotients of type-definable groups in NIP theories; it is proved as Theorem 3.8. **Main Theorem**.: _Assume \(T\) has NIP and \(\mathfrak{C}\) is at least \(\beth_{(\beth_{2}(|x|+|T|+|A|))^{+}}\)-saturated. Let \(p(x)\in S(\mathfrak{C})\) be an \(A\)-invariant type. Then there exists a finest equivalence relation \(E^{st}\) on \(p(\mathfrak{C}^{\prime})\) relatively type-definable over a small set of parameters from \(\mathfrak{C}\) and with stable quotient \(p(\mathfrak{C}^{\prime})/E^{st}\)._
Our proof is via a non-trivial adaptation of the ideas from the proof of the main theorem of [1], using relatively type-definable subsets of the group of automorphisms of the monster model (as defined in [1]). We do not know whether \(E^{st}\) is relatively type-definable over \(A\). At the end of Section 3, we will observe that if it were true, then the specific (large) saturation degree assumption in the above theorem could be removed. Another question is whether one could drop the invariance of \(p\) hypothesis from the above theorem. If such a strengthening is true, a proof would probably require some new tricks. In the last section of the paper, we compute \(E^{st}\) in two concrete examples which are expansions of local orders. In fact, in these examples, we give full classifications of all relatively type-definable (over a small subset of \(\mathfrak{C}\)) equivalence relations on \(p(\mathfrak{C}^{\prime})\) for a suitable invariant type \(p\in S(\mathfrak{C})\). ## 2. Basic results and transfers between models Let \(T\) be a complete first-order theory in a language \(L\), and \(\mathfrak{C},\mathfrak{C}^{\prime}\models T\) monster models such that \(\mathfrak{C}\) is \(\kappa\)-saturated and \(\mathfrak{C}^{\prime}\) is \(|\mathfrak{C}|^{+}\)-saturated. Note that \(|T|\) is the cardinality of the set of all formulas in \(L\). Unless stated otherwise, \(p(x)\) will always be a type in \(S_{x}(\mathfrak{C})\) invariant over some small \(A\subseteq\mathfrak{C}\) (i.e., \(|A|<\kappa\)), where \(x\) is a small tuple of variables. Whenever \(B\subseteq\mathfrak{C}^{\prime}\), by \(p\!\upharpoonright_{B}\) we mean the restriction to \(B\) of the unique extension of \(p\) to an \(A\)-invariant type in \(S(\mathfrak{C}^{\prime})\). If \(E\) is a type-definable equivalence relation and \(a\) is an element of its domain, \([a]_{E}\) denotes the \(E\)-class of \(a\). The goal of this section is to present a useful criterion that allows us to check whether a relatively type-definable equivalence relation \(E\) (over a small \(B\subseteq\mathfrak{C}\)) on \(p(\mathfrak{C}^{\prime})\) with stable quotient is, in fact, the finest one (see Lemma 2.7). As a corollary, we get the transfer to elementary extensions of \(\mathfrak{C}\) of the property of being the finest relatively type-definable equivalence relation on \(p(\mathfrak{C}^{\prime})\) (see Corollary 2.8). We also take the opportunity to prove a new characterization of stability of hyperdefinable sets in NIP theories (see Proposition 2.2). Let \(E\) be a type-definable equivalence relation on a type-definable subset \(X\) of \(\mathfrak{C}^{\lambda}\), where \(\lambda<\kappa\). The following definition is the hyperimaginary analogue of [1, Definition 1.2].
**Definition 2.1**.: _A hyperdefinable (over \(A\)) set \(X/E\) is weakly stable if for every \(A\)-indiscernible sequence \((a_{i},b_{i},c)_{i<\omega}\) with \(a_{i},b_{i}\in X/E\) for all (equivalently, some) \(i<\omega\), we have_ \[\operatorname{tp}(a_{i},b_{j},c/A)=\operatorname{tp}(a_{j},b_{i},c/A)\] _for all (some) \(i\neq j<\omega\)._ We obtain a hyperdefinable counterpart of [1, Proposition 4.2]. **Proposition 2.2**.: _A hyperdefinable set \(X/E\) which has NIP is weakly stable if and only if it is stable._ Proof.: Without loss of generality, assume that both \(X\) and \(E\) are type-definable over the empty set. It is clear that stable sets are weakly stable, even without the NIP assumption. By [1, Theorem 2.10], under the NIP assumption, the stability of \(X/E\) is equivalent to the fact that every indiscernible sequence of elements of \(X/E\) is totally indiscernible. Hence, it is enough to show that weak stability of \(X/E\) also implies this property. Suppose that the sequence \((a_{i})_{i<\omega}\) in \(X/E\) is indiscernible but not totally indiscernible. Let us, without loss of generality, replace \(\omega\) by \(\mathbb{Q}\). Then, there exist rational numbers \(i_{0}<\dots<i_{n-1}\) and a natural number \(j<n-1\) such that \[\operatorname{tp}(a_{i_{j}},a_{i_{j+1}}/A)\neq\operatorname{tp}(a_{i_{j+1}},a_{i_{j}}/A),\] where \(A\) is the set of all \(a_{i_{k}}\) with \(k<n\) distinct from \(j\) and \(j+1\). Choose any rationals \(l_{0}<l_{1}<\dots\) in the interval \((i_{j},i_{j+1})\). Let \(b_{i}=a_{l_{i}}\) for \(i<\omega\). Then, the sequence \((b_{i})_{i<\omega}\) is \(A\)-indiscernible and \(\operatorname{tp}(b_{i},b_{j}/A)\neq\operatorname{tp}(b_{j},b_{i}/A)\) for all \(i<j<\omega\). Moreover, this is witnessed by some finite tuple \(a\subseteq A\). Hence, the sequence \((b_{i},b_{i},a)_{i<\omega}\) contradicts the weak stability of \(X/E\). Next, we present a definition that we use throughout the whole section. This definition first appeared in [14, Definition 3.2]. **Definition 2.3**.: _Let \(A\subseteq\mathcal{M}\subseteq B\) and \(q(x)\in S(B)\). We say that \(q(x)\) is a strong heir extension over \(A\) of \(q\upharpoonright_{\mathcal{M}}(x)\) if for all finite \(m\subseteq\mathcal{M}\)_ \[(\forall\varphi(x,y)\in L)(\forall b\subseteq B)[\varphi(x,b)\in q(x)\implies(\exists b^{\prime}\subseteq\mathcal{M})(\varphi(x,b^{\prime})\in q(x)\wedge b\underset{Am}{\equiv}b^{\prime})].\] Note that if \(q\in S(\mathfrak{C})\) is a strong heir extension over \(A\) of \(q\upharpoonright_{\mathcal{M}}(x)\), then \(\mathcal{M}\) is an \(\aleph_{0}\)-saturated model in the language \(L_{A}\) (i.e., \(L\) expanded by constants from \(A\)). Conversely, if \(\mathcal{M}\) is an \(\aleph_{0}\)-saturated model in \(L_{A}\) and \(q(x)\in S(\mathcal{M})\), there always exists \(q^{\prime}(x)\in S(B)\) which is a strong heir extension over \(A\) of \(q\) (see [14, Lemma 3.3]). **Lemma 2.4**.: _Assume that \(q(x)\in S(\mathcal{M})\) is \(A\)-invariant (for some \(A\subseteq\mathcal{M}\)) and \(q^{\prime}(x)\in S(\mathfrak{C})\) is a strong heir extension over \(A\) of \(q(x)\). Then \(q^{\prime}(x)\) is the unique global \(A\)-invariant extension of \(q(x)\)._ Proof.: To show \(A\)-invariance, suppose for a contradiction that for some \(\sigma\in\operatorname{Aut}(\mathfrak{C}/A)\) and \(\varphi(x,a)\in q^{\prime}(x)\) we have \(\neg\varphi(x,\sigma(a))\in q^{\prime}(x)\).
Then, there exist \(a^{\prime},a^{\prime\prime}\in\mathcal{M}\) such that \(a^{\prime}\underset{A}{\equiv}a\) and \(a^{\prime\prime}\underset{A}{\equiv}\sigma(a)\) for which \(\varphi(x,a^{\prime})\in q(x)\) while \(\neg\varphi(x,a^{\prime\prime})\in q(x)\). Then \(a^{\prime}\underset{A}{\equiv}a^{\prime\prime}\), which contradicts the \(A\)-invariance of \(q(x)\). Uniqueness follows from the fact that \(\mathcal{M}\) is \(\aleph_{0}\)-saturated in \(L_{A}\). Given a partial type (possibly with parameters) \(\pi(x,y)\), we say that \(\pi(x,y)\)_defines an equivalence relation_ on a type-definable set \(X\) if \(\pi(\mathfrak{C}^{\prime},\mathfrak{C}^{\prime})\cap X(\mathfrak{C}^{\prime}) ^{2}\) is an equivalence relation. Given a relatively type-definable equivalence relation \(E\) on a type-definable set \(X\), a _partial type associated to \(E\)_ is any partial type \(\pi(x,y)\) such that \(\pi(\mathfrak{C}^{\prime},\mathfrak{C}^{\prime})\cap X(\mathfrak{C}^{\prime}) ^{2}=E\). We say that a relatively type-definable equivalence relation is _countably defined_ if some associated partial type \(\pi(x,y)\) consists of countably many formulas. Lemma 2.5 gives us a useful stability criterion when an equivalence relation on \(p(\mathfrak{C}^{\prime})\) is relatively type-definable over a sufficiently saturated small model. **Lemma 2.5**.: _Let \(\pi(x,y,z)\) be a partial type over the empty set, let \(a_{0}\subseteq\mathfrak{C}\) enumerate a small \(\aleph_{0}\)-saturated model \(\mathcal{M}_{0}\prec\mathfrak{C}\) in the language \(L_{A}\) (so containing \(A\)), and let \(\pi(x,y,a_{0})\) define an equivalence relation on \(p\upharpoonright_{a_{0}}(\mathfrak{C}^{\prime})\). Then, \(p(\mathfrak{C}^{\prime})\left/\pi(\mathfrak{C}^{\prime},\mathfrak{C}^{\prime},a_{0})\cap p(\mathfrak{C}^{\prime})^{2}\right.\) is stable if and only if \(p\upharpoonright_{a_{0}}(\mathfrak{C}^{\prime})\left/\pi(\mathfrak{C}^{\prime}, \mathfrak{C}^{\prime},a_{0})\cap p\upharpoonright_{a_{0}}(\mathfrak{C}^{\prime}) ^{2}\) is stable._ Proof.: Let \(E_{a_{0}}\) be the equivalence relation defined by \(\pi(x,y,a_{0})\) on \(p(\mathfrak{C}^{\prime})\) and let \(E^{\prime}_{a_{0}}\) be the equivalence relation defined by \(\pi(x,y,a_{0})\) on \(p\!\upharpoonright_{a_{0}}(\mathfrak{C}^{\prime})\). Assume first that \(p(\mathfrak{C}^{\prime})\left/\pi(\mathfrak{C}^{\prime},\mathfrak{C}^{\prime}, a_{0})\cap p(\mathfrak{C}^{\prime})^{2}\right.\) is unstable. Since stability does not depend on the choice of parameters over which the hyperdefinable set in question is defined, there exists a \(\mathfrak{C}\)-indiscernible sequence \((c_{i},b_{i})_{i<\omega}\) such that \(c_{i}\in p(\mathfrak{C}^{\prime})\) for all \(i<\omega\) and for all \(i\neq j\) \[\operatorname{tp}([c_{i}]_{E_{a_{0}}},b_{j}\left/\mathfrak{C}\right.)\neq \operatorname{tp}([c_{j}]_{E_{a_{0}}},b_{i}\left/\mathfrak{C}\right.).\] This implies that for all \(i\neq j\) we have \[\operatorname{tp}([c_{i}]_{E^{\prime}_{a_{0}}},b_{j}\left/\mathfrak{C}\right.) \neq\operatorname{tp}([c_{j}]_{E^{\prime}_{a_{0}}},b_{i}\left/\mathfrak{C} \right.),\] and so \(p\!\upharpoonright_{a_{0}}(\mathfrak{C}^{\prime})\left/\pi(\mathfrak{C}^{ \prime},\mathfrak{C}^{\prime},a_{0})\cap p\!\upharpoonright_{a_{0}}( \mathfrak{C}^{\prime})^{2}\,\) is unstable. Assume now that \(p\!\upharpoonright_{a_{0}}(\mathfrak{C}^{\prime})\left/\pi(\mathfrak{C}^{ \prime},\mathfrak{C}^{\prime},a_{0})\cap p\!\upharpoonright_{a_{0}}( \mathfrak{C}^{\prime})^{2}\) is unstable. 
This is witnessed by an \(a_{0}\)-indiscernible sequence \((c_{i},b_{i})_{i<\omega}\) such that \(c_{i}\in p\!\upharpoonright_{a_{0}}(\mathfrak{C}^{\prime})\) for all \(i<\omega\) and for all \(i\neq j\) \[\operatorname{tp}([c_{i}]_{E^{\prime}_{a_{0}}},b_{j}\left/a_{0}\right.)\neq \operatorname{tp}([c_{j}]_{E^{\prime}_{a_{0}}},b_{i}\left/a_{0}\right.).\] Consider \(q:=\operatorname{tp}((c_{i},b_{i})_{i<\omega}\left/a_{0}\right.)\) and let \(q^{\prime}\in S(\mathfrak{C})\) be a strong heir extension over \(A\) of \(q\). Let \((c^{\prime}_{i},b^{\prime}_{i})_{i<\omega}\) be a realization of \(q^{\prime}\). Then, 1. \((c^{\prime}_{i},b^{\prime}_{i})_{i<\omega}\) is \(\mathfrak{C}\)-indiscernible; 2. \(\operatorname{tp}(c^{\prime}_{i}/\mathfrak{C})=p(x)\) for all \(i<\omega\); 3. \(\operatorname{tp}([c^{\prime}_{i}]_{E_{a_{0}}},b^{\prime}_{j}/\mathfrak{C}) \neq\operatorname{tp}([c^{\prime}_{j}]_{E_{a_{0}}},b^{\prime}_{i}/\mathfrak{C })\). (1) If \((c^{\prime}_{i},b^{\prime}_{i})_{i<\omega}\) is not \(\mathfrak{C}\)-indiscernible, then it is witnessed by a formula (with parameters \(d\) from \(\mathfrak{C}\)) of the form \(\varphi(x_{i_{1}},y_{i_{1}},\ldots,x_{i_{n}},y_{i_{n}},d)\wedge\neg\varphi(x_{ j_{1}},y_{j_{1}},\ldots,x_{j_{n}},y_{j_{n}},d)\), for some \(i_{1}<\cdots<i_{n}\) and \(j_{1}<\cdots<j_{n}\). Now, using that \(q^{\prime}\) is a strong heir extension over \(A\) of \(q\), we can find \(d^{\prime}\subseteq\mathcal{M}_{0}\) such that \[\varphi(x_{i_{1}},y_{i_{1}},\ldots,x_{i_{n}},y_{i_{n}},d^{\prime})\wedge\neg \varphi(x_{j_{1}},y_{j_{1}},\ldots,x_{j_{n}},y_{j_{n}},d^{\prime})\in q,\] contradicting the \(a_{0}\)-indiscernibility of \((c_{i},b_{i})_{i<\omega}\). (2) follows from the fact that \(\operatorname{tp}(c^{\prime}_{i}/\mathfrak{C})\) is a strong heir extension of \(p\!\upharpoonright_{a_{0}}(x)\), which has to be \(p(x)\) by Lemma 2.4. (3) follows from (2), the fact that \(q^{\prime}\) is an extension of \(q\), and the fact that \(E_{a_{0}}\) is the restriction of \(E^{\prime}_{a_{0}}\) to \(p(\mathfrak{C}^{\prime})\). By (1), (2), and (3), \(p(\mathfrak{C}^{\prime})\left/\pi(\mathfrak{C}^{\prime},\mathfrak{C}^{\prime}, a_{0})\cap p(\mathfrak{C}^{\prime})^{2}\right.\) is unstable. Even though at first glance the requirement that \(\pi(x,y,a_{0})\) defines an equivalence relation on \(p\!\upharpoonright_{a_{0}}(\mathfrak{C}^{\prime})\) might not seem very natural, the following result shows that this can always be assumed. **Proposition 2.6**.: _Let \(E\) be a \(B\)-relatively-type-definable equivalence relation on \(p(\mathfrak{C}^{\prime})\). Then, \(E\) can be written as \(\bigcap_{i<I}E_{i}\), where \(|I|\leq|B|+|x|+|T|\), each \(E_{i}\) is a countably defined \(B_{i}\)-relatively-type-definable equivalence relation on \(p(\mathfrak{C}^{\prime})\) for some countable set \(B_{i}\subseteq\mathfrak{C}\), and a partial type associated to each \(E_{i}\) defines an equivalence relation on \(p\!\upharpoonright_{B_{i}}(\mathfrak{C}^{\prime})\). 
Thus, for \(B^{\prime}:=\bigcup_{i\in I}B_{i}\) we have \(|B^{\prime}|\leq|B|+|x|+|T|\) and a partial type over \(B^{\prime}\) associated to \(E\) defines an equivalence relation on \(p\!\upharpoonright_{B^{\prime}}(\mathfrak{C}^{\prime})\)._ _Moreover, if we start from a given partial type \(\pi\) associated to \(E\) consisting of reflexive and symmetric formulas and closed under conjunction, then the resulting partial type in the last sentence is precisely \(\pi\), and \(|B^{\prime}|\leq|\pi|\)._ Proof.: Fix a partial type associated to \(E\) which consists of reflexive and symmetric formulas and is closed under conjunction. Let \(\psi_{0}(x)\) be any formula in \(p(x)\) and \(\varphi_{0}(x,y)\) any formula in the partial type associated to \(E\). Then the partial type \[p(x)\wedge p(y)\wedge p(z)\wedge E(x,y)\wedge E(y,z)\] implies \(\psi_{0}(x)\wedge\psi_{0}(y)\wedge\psi_{0}(z)\wedge\varphi_{0}(x,z)\). By compactness, there are \(\varphi_{1}(x,y)\) in the partial type associated to \(E\) and \(\psi_{1}(x)\) in \(p(x)\) such that the formula \[\psi_{1}(x)\wedge\psi_{1}(y)\wedge\psi_{1}(z)\wedge\varphi_{1}(x,y)\wedge \varphi_{1}(y,z)\] implies \(\varphi_{0}(x,z).\) Proceeding by induction, we construct a partial type \[\{\varphi_{i}(x,y):i<\omega\}\] defining an equivalence relation on \(\bigcap_{i<\omega}\psi_{i}(\mathfrak{C}^{\prime})\). Let \(B_{\varphi_{0},\psi_{0}}\) be a countable set containing the parameters of all the constructed formulas \(\varphi_{i}(x,y)\) and \(\psi_{i}(x)\), \(i<\omega\). Then, the partial type \(\{\varphi_{i}(x,y):i<\omega\}\) clearly defines over \(B_{\varphi_{0},\psi_{0}}\) an equivalence relation on \(p\upharpoonright_{B_{\varphi_{0},\psi_{0}}}(\mathfrak{C}^{\prime})\). Applying this process separately to every \(\varphi(x,y)\) in the partial type associated to \(E\) and taking the intersections of any finitely many obtained equivalence relations gives us the desired directed family of equivalence relations. The following result is a criterion for when an equivalence relation on \(p(\mathfrak{C}^{\prime})\) relatively type-definable over a sufficiently saturated small (with respect to \(\mathfrak{C}\)) model is the finest relatively type-definable equivalence relation (over a small \(B\subseteq\mathfrak{C}\)) on \(p(\mathfrak{C}^{\prime})\) with stable quotient. **Lemma 2.7**.: _Consider \(\pi(x,y,z)\) and \(a_{0}\) as in Lemma 2.5. 
Then, the equivalence relation_ \[E_{a_{0}}:=\pi(\mathfrak{C}^{\prime},\mathfrak{C}^{\prime},a_{0})\cap p( \mathfrak{C}^{\prime})^{2}\] _is the finest relatively type-definable equivalence relation (over a small subset of parameters in \(\mathfrak{C}\)) on \(p(\mathfrak{C}^{\prime})\) whose quotient is stable if and only if_ \[E^{\prime}_{a_{0}}:=\pi(\mathfrak{C}^{\prime},\mathfrak{C}^{\prime},a_{0}) \cap p\upharpoonright_{a_{0}}(\mathfrak{C}^{\prime})^{2}\] _is an equivalence relation on \(p\upharpoonright_{a_{0}}(\mathfrak{C}^{\prime})\) with stable quotient and there is no partial type \(\rho(x,y,t)\) over the empty set (where \(|t|\leq 2^{|T|+|A|}+|a_{0}|\)) and \(a_{1}\subseteq\mathfrak{C}^{\prime}\) enumerating a small \(\aleph_{0}\)-saturated model in \(L_{A}\) containing \(\mathcal{M}_{0}\) such that the partial type \(\rho(x,y,a_{1})\) defines an equivalence relation on \(p\upharpoonright_{a_{1}}(\mathfrak{C}^{\prime})\),_ \[\rho(\mathfrak{C}^{\prime},\mathfrak{C}^{\prime},a_{1})\cap p\upharpoonright_{ a_{1}}(\mathfrak{C}^{\prime})^{2}\subsetneq\pi(\mathfrak{C}^{\prime}, \mathfrak{C}^{\prime},a_{0})\cap p\upharpoonright_{a_{1}}(\mathfrak{C}^{ \prime})^{2},\] _and \(p\upharpoonright_{a_{1}}(\mathfrak{C}^{\prime})\left/\rho(\mathfrak{C}^{\prime},\mathfrak{C}^{\prime},a_{1})\cap p\upharpoonright_{a_{1}}(\mathfrak{C}^{ \prime})^{2}\) is stable._ Proof.: \((\Leftarrow)\) By Lemma 2.5, the right hand side implies that \(E_{a_{0}}\) is stable. Assume that there exists \(E_{B}\), a relatively type-definable equivalence relation on \(p(\mathfrak{C}^{\prime})\) over some small set of parameters \(B\subseteq\mathfrak{C}\) such that the quotient \(p(\mathfrak{C}^{\prime})/E_{B}\) is stable and \(E_{B}\subsetneq E_{a_{0}}\). Take a presentation of \(E_{B}\) as \(\bigcap_{i\in I}E_{B_{i}}\) satisfying the conclusion of Proposition 2.6. Since \(p(\mathfrak{C}^{\prime})/E_{B}\) is stable, so are all \(p(\mathfrak{C}^{\prime})/E_{B_{i}}\). As \(E_{B}\subsetneq E_{a_{0}}\), there exists some \(i\in I\) such that \[E_{a_{0}}\cap E_{B_{i}}\subsetneq E_{a_{0}}.\] Choose any \(\aleph_{0}\)-saturated model \(\mathcal{M}_{1}\supseteq\mathcal{M}_{0}\cup B_{i}\) in the language \(L_{A}\) contained in \(\mathfrak{C}\) and of size at most \(2^{|T|+|A|}+|a_{0}|\). Enumerate it as \(a_{1}\). By the choice of \(E_{B_{i}}\), there is a partial type \(\delta(x,y,a_{1})\) defining \(E_{B_{i}}\) which also defines an equivalence relation on \(p\mathbin{\upharpoonright}_{a_{1}}(\mathfrak{C}^{\prime})\). Let \(\rho(x,y,a_{1}):=\pi(x,y,a_{0})\wedge\delta(x,y,a_{1})\). Then \(\rho(x,y,a_{1})\) defines an equivalence relation on \(p\mathbin{\upharpoonright}_{a_{1}}(\mathfrak{C}^{\prime})\) and \(p(\mathfrak{C}^{\prime})\mathbin{\left/\rho(\mathfrak{C}^{\prime},\mathfrak{C }^{\prime},a_{1})\cap p(\mathfrak{C}^{\prime})^{2}\right.}\) is stable. Hence, applying Lemma 2.5, we obtain that the quotient \[p\mathbin{\upharpoonright}_{a_{1}}(\mathfrak{C}^{\prime})\mathbin{\left/ \rho(\mathfrak{C}^{\prime},\mathfrak{C}^{\prime},a_{1})\cap p\mathbin{ \upharpoonright}_{a_{1}}(\mathfrak{C}^{\prime})^{2}}.\] is stable. Moreover, \[\rho(\mathfrak{C}^{\prime},\mathfrak{C}^{\prime},a_{1})\cap p\mathbin{ \upharpoonright}_{a_{1}}(\mathfrak{C}^{\prime})^{2}\subsetneq\pi(\mathfrak{C}^ {\prime},\mathfrak{C}^{\prime},a_{0})\cap p\mathbin{\upharpoonright}_{a_{1}}( \mathfrak{C}^{\prime})^{2}.\] Thus, we have proved that the right hand side of the lemma fails. 
\((\Rightarrow)\) By Lemma 2.5, the left hand side implies that \(E^{\prime}_{a_{0}}\) is stable. Assume that the right hand side does not hold. Since \(a_{1}\) is small, taking a realization of \(\operatorname{tp}(a_{1}/a_{0})\) if needed, we can assume that \(a_{1}\subseteq\mathfrak{C}\). Hence, by Lemma 2.5, the fact that the quotient \(p\mathbin{\upharpoonright}_{a_{1}}(\mathfrak{C}^{\prime})\mathbin{\left/ \rho(\mathfrak{C}^{\prime},\mathfrak{C}^{\prime},a_{1})\cap p\mathbin{ \upharpoonright}_{a_{1}}(\mathfrak{C}^{\prime})^{2}}\) is stable implies that the quotient \(p(\mathfrak{C}^{\prime})\mathbin{\left/\rho(\mathfrak{C}^{\prime},\mathfrak{C }^{\prime},a_{1})\cap p(\mathfrak{C}^{\prime})^{2}\right.}\) is stable. Let \(b_{1},b_{2}\in p\mathbin{\upharpoonright}_{a_{1}}(\mathfrak{C}^{\prime})\) be elements witnessing \[\rho(\mathfrak{C}^{\prime},\mathfrak{C}^{\prime},a_{1})\cap p\mathbin{ \upharpoonright}_{a_{1}}(\mathfrak{C}^{\prime})^{2}\subsetneq\pi(\mathfrak{C}^ {\prime},\mathfrak{C}^{\prime},a_{0})\cap p\mathbin{\upharpoonright}_{a_{1}}( \mathfrak{C}^{\prime})^{2}.\] That is, the pair \((b_{1},b_{2})\) belongs to \(\pi(\mathfrak{C}^{\prime},\mathfrak{C}^{\prime},a_{0})\) but not to \(\rho(\mathfrak{C}^{\prime},\mathfrak{C}^{\prime},a_{1})\). Let \(q:=\operatorname{tp}(b_{1},b_{2}\mathbin{\left/a_{1}\right.})\) and let \(q^{\prime}\in S(\mathfrak{C})\) be a strong heir extension over \(A\) of \(q\). By Lemma 2.4, any realization \((b^{\prime}_{1},b^{\prime}_{2})\in q^{\prime}(\mathfrak{C}^{\prime})\) satisfies \(b^{\prime}_{1},b^{\prime}_{2}\in p(\mathfrak{C}^{\prime})\), \((b^{\prime}_{1},b^{\prime}_{2})\in\pi(\mathfrak{C}^{\prime},\mathfrak{C}^{ \prime},a_{0})\), and \((b^{\prime}_{1},b^{\prime}_{2})\not\in\rho(\mathfrak{C}^{\prime},\mathfrak{C}^ {\prime},a_{1})\). Therefore, \[\rho(\mathfrak{C}^{\prime},\mathfrak{C}^{\prime},a_{1})\cap p(\mathfrak{C}^ {\prime})^{2}\subsetneq\pi(\mathfrak{C}^{\prime},\mathfrak{C}^{\prime},a_{0}) \cap p(\mathfrak{C}^{\prime})^{2},\] which contradicts the minimality of \(E_{a_{0}}\). Let \(\mathfrak{C}\prec\mathfrak{C}_{1}\prec\mathfrak{C}^{\prime}\) be such that \(\mathfrak{C}^{\prime}\) is still a monster model with respect to \(\mathfrak{C}_{1}\), and let \(p_{1}(x)\in S(\mathfrak{C}_{1})\) be the unique \(A\)-invariant extension of \(p(x)\). **Corollary 2.8**.: _Assume that \(E\) is the finest relatively type-definable (over a small subset of \(\mathfrak{C}\)) equivalence relation on \(p(\mathfrak{C}^{\prime})\) with stable quotient. Then \(E\cap p_{1}(\mathfrak{C}^{\prime})^{2}\) is the finest relatively type-definable (over a small subset of \(\mathfrak{C}_{1}\)) equivalence relation on \(p_{1}(\mathfrak{C}^{\prime})\) with stable quotient._ Proof.: Using Proposition 2.6, we can find a small \(\aleph_{0}\)-saturated model \(\mathcal{M}_{0}\prec\mathfrak{C}\) in \(L_{A}\) (so containing \(A\)) enumerated as \(a_{0}\), and a partial type \(\pi(x,y,a_{0})\) defining \(E\) and defining an equivalence relation \(E^{\prime}_{a_{0}}\) on \(p\mathbin{\upharpoonright}_{a_{0}}(\mathfrak{C}^{\prime})\). By Lemma 2.7, the right hand side of the equivalence in Lemma 2.7 holds. But this right hand side does not depend on the choice of \(\mathfrak{C}\), and so, again by Lemma 2.7, \(E\cap p_{1}(\mathfrak{C}^{\prime})^{2}\) is the finest relatively type-definable (over a small subset of \(\mathfrak{C}_{1}\)) equivalence relation on \(p_{1}(\mathfrak{C}^{\prime})\) with stable quotient. 
However, there is no obvious transfer going in the opposite direction (i.e., from \(\mathfrak{C}_{1}\) to \(\mathfrak{C}\)), as an application of Proposition 2.6 for \(p_{1}\) may produce a model \(\mathcal{M}_{0}\prec\mathfrak{C}_{1}\) whose cardinality is bigger than the degree of saturation of \(\mathfrak{C}\), and then we cannot embed it into \(\mathfrak{C}\) via an automorphism. We have only the following corollary. **Corollary 2.9**.: _Assume that \(E\) is the finest relatively type-definable (over a small subset of \(\mathfrak{C}_{1}\)) equivalence relation on \(p_{1}(\mathfrak{C}^{\prime})\) with stable quotient, and suppose that \(E\) is relatively type-definable over a set \(B\) of small cardinality with respect to \(\mathfrak{C}\). Pick, by Proposition 2.6, an \(\aleph_{0}\)-saturated model \(\mathcal{M}_{0}\prec\mathfrak{C}_{1}\) in \(L_{A}\) (so containing \(A\)) of small size with respect to \(\mathfrak{C}\), enumerated as \(a_{0}\), and such that there exists a partial type \(\pi(x,y,a_{0})\) defining \(E\) and defining an equivalence relation \(E^{\prime}_{a_{0}}\) on \(p\,\mathord{\restriction}_{a_{0}}(\mathfrak{C}^{\prime})\). Let \(\sigma\in\operatorname{Aut}(\mathfrak{C}_{1}/A)\) be such that \(\sigma(a_{0})\subseteq\mathfrak{C}\). Then \(\pi(\mathfrak{C}^{\prime},\mathfrak{C}^{\prime},\sigma(a_{0}))\cap p( \mathfrak{C}^{\prime})^{2}\) is the finest relatively type-definable (over a small subset of \(\mathfrak{C}\)) equivalence relation on \(p(\mathfrak{C}^{\prime})\) with stable quotient._ Proof.: By assumption and Lemma 2.7, the right hand side of that lemma holds for \(p_{1}\) in place of \(p\). Since \(\sigma(p_{1})=p_{1}\), it still holds for \(p_{1}\) and \(\sigma(a_{0})\) in place of \(a_{0}\). Since this right hand side does not depend on \(\mathfrak{C}_{1}\) and we have \(\sigma(a_{0})\subseteq\mathfrak{C}\), it holds for \(p\) and \(\sigma(a_{0})\), so again by Lemma 2.7, we get that \(\pi(\mathfrak{C}^{\prime},\mathfrak{C}^{\prime},\sigma(a_{0}))\cap p( \mathfrak{C}^{\prime})^{2}\) is the finest relatively type-definable (over a small subset of \(\mathfrak{C}\)) equivalence relation on \(p(\mathfrak{C}^{\prime})\) with stable quotient. ## 3. The main theorem The goal of this section is to prove the theorem stated in the introduction (see Theorem 3.8). As in the previous section, we work in a complete first-order theory \(T\) and \(\mathfrak{C},\mathfrak{C}^{\prime}\models T\) are monster models such that \(\mathfrak{C}\) is \(\kappa\)-saturated and \(\mathfrak{C}^{\prime}\) is \(|\mathfrak{C}|^{+}\)-saturated. During this section, \(p(x)\) will always be a type in \(S_{x}(\mathfrak{C})\) invariant over some small \(A\subseteq\mathfrak{C}\) (i.e., \(|A|<\kappa\)), where \(x\) is a small tuple. In this section, we use results on relatively type-definable subsets of the group of automorphisms of \(\mathfrak{C}^{\prime}\) extracted from [10]. The following is Definition 2.14 of [10], which extends the notion of relatively definable subset of the monster model from [10, Appendix A]. 
**Definition 3.1**.: _By a relatively type-definable subset of \(\operatorname{Aut}(\mathfrak{C}^{\prime})\), we mean a subset of the form \(\{\sigma\in\operatorname{Aut}(\mathfrak{C}^{\prime}):\mathfrak{C}^{\prime}\models\pi(\sigma(a),b)\}\) for some partial type \(\pi(x,y)\) (without parameters), where \(x\) and \(y\) are short tuples of variables, and \(a\), \(b\) are corresponding tuples from \(\mathfrak{C}^{\prime}\)._ In particular, given a partial type over the empty set \(\pi(x,y,z)\) and (short) tuples \(a,b,c\) in \(\mathfrak{C}^{\prime}\) corresponding to \(x,y,z\), respectively, we have a relatively type-definable subset of \(\operatorname{Aut}(\mathfrak{C}^{\prime})\) of the form \[A_{\pi(x;y,z),a,b,c}:=\{\sigma\in\operatorname{Aut}(\mathfrak{C}^{\prime}):\mathfrak{C}^{\prime}\models\pi(\sigma(a),b,c)\}.\] In this section, when it is clear enough what the type \(\pi(x;y,z)\) is, we will denote sets of the form \(A_{\pi(x;y,z_{i}),a,a,a_{i}}\) as \(A_{\pi,a,a_{i}}\). We use relatively type-definable sets of the group \(\operatorname{Aut}(\mathfrak{C}^{\prime})\) to prove the following: **Lemma 3.2**.: _Let \(a\in\mathfrak{C}^{\prime}\) and \((a_{i})_{i<\omega}\subseteq\mathfrak{C}^{\prime}\) be such that \(a_{0}\underset{a}{\equiv}a_{i}\) for all \(i<\omega\) and \(a\models p\,\mathord{\restriction}_{a_{<\omega}}\). Let \(\pi(x,y,z)\) be a partial type over the empty set such that for every \(i<\omega\) the partial type \(\pi(x,y,a_{i})\) defines an equivalence relation on \(p\,\mathord{\restriction}_{a_{i}}(\mathfrak{C}^{\prime})\). Assume that there is a formula \(\varphi(x,y,z)\) implied by \(\pi(x,y,z)\) such that for every \(i<\omega\)_ \[\bigcap_{j\neq i}\pi(\mathfrak{C}^{\prime},\mathfrak{C}^{\prime},a_{j})\cap(p\,\mathord{\restriction}_{a_{<\omega}}(\mathfrak{C}^{\prime}))^{2}\not\subseteq\varphi(\mathfrak{C}^{\prime},\mathfrak{C}^{\prime},a_{i}).\] _Then, \(T\) has IP._ To prove this result, we need the following three observations on relatively type-definable subsets of \(\operatorname{Aut}(\mathfrak{C}^{\prime})\) of a special kind. **Claim 3.3**.: _Let \(a\), \((a_{i})_{i<\omega}\), and \(\pi(x,y,z)\) be as in Lemma 3.2, and let \(E_{a_{i}}\) be the equivalence relation on \(p\,\mathord{\restriction}_{a_{i}}(\mathfrak{C}^{\prime})\) defined by \(\pi(x,y,a_{i})\). Then, for all \(i<\omega\), \(\operatorname{Aut}(\mathfrak{C}^{\prime}/a_{i})\cap A_{\pi,a,a_{i}}\) is the stabilizer of the class \([a]_{E_{a_{i}}}\) under the action of \(\operatorname{Aut}(\mathfrak{C}^{\prime}/a_{i})\), and \(\operatorname{Aut}(\mathfrak{C}^{\prime}/a_{<\omega})\cap A_{\pi,a,a_{i}}\) is the stabilizer of the class \([a]_{E_{a_{i}}}\) under the action of \(\operatorname{Aut}(\mathfrak{C}^{\prime}/a_{<\omega})\)._ Proof.: It is clear that \(\operatorname{Aut}(\mathfrak{C}^{\prime}/a_{i})\) preserves both \(p\,\mathord{\restriction}_{a_{i}}(\mathfrak{C}^{\prime})\) and \(E_{a_{i}}\). Let \(\sigma\in\operatorname{Aut}(\mathfrak{C}^{\prime}/a_{i})\cap A_{\pi,a,a_{i}}\). By the definition of \(A_{\pi,a,a_{i}}\), we have \(\models\pi(\sigma(a),a,a_{i})\). Hence, \(\sigma(a)\in[a]_{E_{a_{i}}}\), and so \(\sigma([a]_{E_{a_{i}}})=[a]_{E_{a_{i}}}\). Thus, we have proved that \[\operatorname{Aut}(\mathfrak{C}^{\prime}/a_{i})\cap A_{\pi,a,a_{i}}\subseteq\operatorname{Stab}_{\operatorname{Aut}(\mathfrak{C}^{\prime}/a_{i})}([a]_{E_{a_{i}}}).\] Conversely, let \(\sigma\in\operatorname{Stab}_{\operatorname{Aut}(\mathfrak{C}^{\prime}/a_{i})}([a]_{E_{a_{i}}})\). This implies \(\sigma(a)E_{a_{i}}a\).
Hence, \(\models\pi(\sigma(a),a,a_{i})\), and so \(\sigma\in\operatorname{Aut}(\mathfrak{C}^{\prime}/a_{i})\cap A_{\pi,a,a_{i}}\). Thus, \[\operatorname{Stab}_{\operatorname{Aut}(\mathfrak{C}^{\prime}/a_{i})}([a]_{E_ {a_{i}}})\subseteq\operatorname{Aut}(\mathfrak{C}^{\prime}/a_{i})\cap A_{\pi, a,a_{i}}.\] The same proof works for \(\operatorname{Stab}_{\operatorname{Aut}(\mathfrak{C}^{\prime}/a_{<\omega})}([a]_{E_{a_{i}}})\). **Claim 3.4**.: _Let \(a\), \(a_{0}\), and \(\pi(x,y,z)\) be as in Lemma 3.2. Then, for each formula \(\varphi(x,y,z)\) implied by \(\pi(x,y,z)\) there is a formula \(\theta(x,y,z)\) implied by \(\pi(x,y,z)\) such that_ \[(\operatorname{Aut}(\mathfrak{C}^{\prime}/a_{0})\cap A_{\pi,a,a_{0}})\cdot( \operatorname{Aut}(\mathfrak{C}^{\prime}/a_{0})\cap A_{\theta,a,a_{0}})\cdot( \operatorname{Aut}(\mathfrak{C}^{\prime}/a_{0})\cap A_{\pi,a,a_{0}})\subseteq \operatorname{Aut}(\mathfrak{C}^{\prime}/a_{0})\cap A_{\varphi,a,a_{0}}.\] Proof.: Let us consider the type \(\pi^{\prime}(x_{1},x_{2};y,z):=\pi(x_{1},y,z)\cup\{x_{2}=z\}\). Then, \[\operatorname{Aut}(\mathfrak{C}^{\prime}/a_{0})\cap A_{\pi,a,a_{0}}=A_{\pi^{ \prime}(x_{1},x_{2};y,z),aa_{0},a,a_{0}}.\] Hence, by the previous claim, \(A_{\pi^{\prime}(x_{1},x_{2};y,z),aa_{0},a,a_{0}}\) is a group, so it satisfies \[A_{\pi^{\prime}(x_{1},x_{2};y,z),aa_{0},a,a_{0}}^{3}=A_{\pi^{\prime}(x_{1},x_ {2};y,z),aa_{0},a,a_{0}}.\] For any formula \(\varphi(x,y,z)\) implied by \(\pi(x,y,z)\) we have \[A_{\pi^{\prime}(x_{1},x_{2};y,z),aa_{0},a,a_{0}}^{3}\subseteq A_{\varphi(x;y, z),a,a,a_{0}}.\] Applying compactness ([1, Corollary 4.8]), for each \(\varphi(x,y,z)\) implied by \(\pi(x,y,z)\) there is some \(\theta(x,y,z)\) implied by \(\pi(x,y,z)\) such that \[A_{\pi^{\prime}(x_{1},x_{2};y,z),aa_{0},a,a_{0}}\cdot A_{\{x_{2}=z\}\wedge \theta(x_{1};y,z),aa_{0},a,a_{0}}\cdot A_{\pi^{\prime}(x_{1},x_{2};y,z),aa_{0}, a,a_{0}}\subseteq A_{\varphi(x;y,z),a,a,a_{0}}.\] Finally, since every automorphism on the left hand side belongs to \(\operatorname{Aut}(\mathfrak{C}^{\prime}/a_{0})\), we conclude that \[(\operatorname{Aut}(\mathfrak{C}^{\prime}/a_{0})\cap A_{\pi,a,a_{0}})\cdot( \operatorname{Aut}(\mathfrak{C}^{\prime}/a_{0})\cap A_{\theta,a,a_{0}})\cdot( \operatorname{Aut}(\mathfrak{C}^{\prime}/a_{0})\cap A_{\pi,a,a_{0}})\subseteq \operatorname{Aut}(\mathfrak{C}^{\prime}/a_{0})\cap A_{\varphi,a,a_{0}}.\] **Claim 3.5**.: _Let \(a\), \((a_{i})_{i<\omega}\), and \(\pi(x,y,z)\) be as in Lemma 3.2. Then, for any formulas \(\varphi(x,y,z)\) and \(\theta(x,y,z)\) implied by \(\pi(x,y,z)\), for every \(i<\omega\):_ \[(\operatorname{Aut}(\mathfrak{C}^{\prime}/a_{0})\cap A_{\pi,a,a_{0}})\cdot( \operatorname{Aut}(\mathfrak{C}^{\prime}/a_{0})\cap A_{\theta,a,a_{0}})\cdot( \operatorname{Aut}(\mathfrak{C}^{\prime}/a_{0})\cap A_{\pi,a,a_{0}})\subseteq \operatorname{Aut}(\mathfrak{C}^{\prime}/a_{0})\cap A_{\varphi,a,a_{0}}\] _if and only if_ \[(\operatorname{Aut}(\mathfrak{C}^{\prime}/a_{i})\cap A_{\pi,a,a_{i}})\cdot( \operatorname{Aut}(\mathfrak{C}^{\prime}/a_{i})\cap A_{\theta,a,a_{i}})\cdot( \operatorname{Aut}(\mathfrak{C}^{\prime}/a_{i})\cap A_{\pi,a,a_{i}})\subseteq \operatorname{Aut}(\mathfrak{C}^{\prime}/a_{i})\cap A_{\varphi,a,a_{i}}.\] Proof.: Let \(\tau\in\operatorname{Aut}(\mathfrak{C}^{\prime}/a)\) be such that \(\tau(a_{0})=a_{i}\). 
The conjugation by \(\tau\) \[\operatorname{Aut}(\mathfrak{C}^{\prime}/a_{0}) \rightarrow\operatorname{Aut}(\mathfrak{C}^{\prime}/a_{i})\] \[\sigma \mapsto\tau\sigma\tau^{-1}\] is a bijection whose inverse is the conjugation by \(\tau^{-1}\). Moreover, \[\models\pi(\tau\sigma\tau^{-1}(a),a,a_{i})\iff\models\pi(\sigma\tau^{-1}(a),a,a_ {0})\iff\models\pi(\sigma(a),a,a_{0}).\] Analogous equivalences also hold for \(\varphi\) and for \(\theta\) in place of \(\pi\). Hence, the desired equivalence follows by applying the conjugation by \(\tau\). We are now ready to prove Lemma 3.2. **Proof of Lemma 3.2.** Note that for all \(i<\omega\), using automorphisms of \(\mathfrak{C}^{\prime}\) fixing \((a_{i})_{i<\omega}\), we can reduce the condition \[\bigcap_{j\neq i}\pi(\mathfrak{C}^{\prime},\mathfrak{C}^{\prime},a_{j})\cap(p \upharpoonright_{a_{<\omega}}(\mathfrak{C}^{\prime}))^{2}\not\subseteq\varphi( \mathfrak{C}^{\prime},\mathfrak{C}^{\prime},a_{i})\] to \[\bigcap_{j\neq i}\pi(\mathfrak{C}^{\prime},a,a_{j})\cap p\upharpoonright_{a_{< \omega}}(\mathfrak{C}^{\prime})\not\subseteq\varphi(\mathfrak{C}^{\prime},a,a_ {i}),\] because, given a pair \((c,d)\) witnessing the former condition, there exists some \(\sigma\in\operatorname{Aut}(\mathfrak{C}^{\prime}/a_{<\omega})\) such that \(\sigma(d)=a\), and then the pair \((\sigma(c),a)\) witnesses the latter condition. Moreover, using the same approach, one can see that the latter condition can be expressed using relatively type-definable subsets of \(\operatorname{Aut}(\mathfrak{C}^{\prime})\) as \[\operatorname{Aut}(\mathfrak{C}^{\prime}/a_{<\omega})\cap A_{\bigwedge_{j\neq i }\pi(x;y,z_{j}),a,a,(a_{j})_{j\neq i}}\not\subseteq A_{\varphi(x;y,z_{i}),a,a,a_{i}}.\] For every \(i<\omega\), choose some \[\sigma_{i}\in\operatorname{Aut}(\mathfrak{C}^{\prime}/a_{<\omega})\cap A_{ \bigwedge_{j\neq i}\pi(x;y,z_{j}),a,a,(a_{j})_{j\neq i}}\setminus A_{\varphi( x;y,z_{i}),a,a,a_{i}},\] and let \(\sigma_{I}\) denote the composition \(\prod_{i\in I}\sigma_{i}\), for any finite \(I\subseteq\omega\). By Claims 3.4 and 3.5, there is a formula \(\theta(x,y,z)\) implied by \(\pi(x,y,z)\) such that for all \(i<\omega\) \[(\operatorname{Aut}(\mathfrak{C}^{\prime}/a_{i})\cap A_{\pi,a,a_{i}})\cdot( \operatorname{Aut}(\mathfrak{C}^{\prime}/a_{i})\cap A_{\theta,a,a_{i}})\cdot( \operatorname{Aut}(\mathfrak{C}^{\prime}/a_{i})\cap A_{\pi,a,a_{i}})\subseteq \operatorname{Aut}(\mathfrak{C}^{\prime}/a_{i})\cap A_{\varphi,a,a_{i}}.\] **Claim**.: _For any finite \(I\subseteq\omega\)_ \[\models\theta(\sigma_{I}(a),a,a_{i})\iff i\notin I.\] Proof of claim.: Firstly, take \(i\not\in I\). Then, for every \(j\in I\), \(\sigma_{j}\) belongs to the set \(\operatorname{Aut}(\mathfrak{C}^{\prime}/a_{<\omega})\cap A_{\pi,a,a_{i}}\). By Claim 3.3, the set \(\operatorname{Aut}(\mathfrak{C}^{\prime}/a_{<\omega})\cap A_{\pi,a,a_{i}}\) is a group, and so we get \(\sigma_{I}\in\operatorname{Aut}(\mathfrak{C}^{\prime}/a_{<\omega})\cap A_{\pi,a,a_{i}}\). Hence, \(\theta(\sigma_{I}(a),a,a_{i})\) holds. Now take \(i\in I\) and write \(I:=I_{0}\sqcup\{i\}\sqcup I_{1}\). For each \(j\in I_{0}\cup I_{1}\) we have \(\sigma_{j}\in\operatorname{Aut}(\mathfrak{C}^{\prime}/a_{<\omega})\cap A_{\pi,a,a_{i}}\). Then, \(\theta(\sigma_{I}(a),a,a_{i})\) does not hold. 
Otherwise, \[\sigma_{I}=\sigma_{I_{0}}\sigma_{i}\sigma_{I_{1}}\in\operatorname{Aut}(\mathfrak{C}^{\prime}/a_{<\omega})\cap A_{\theta,a,a_{i}},\] which, by Claim 3.3 and the statement before the claim, implies \[\sigma_{i}\in\operatorname{Aut}(\mathfrak{C}^{\prime}/a_{<\omega})\cap A_{\varphi,a,a_{i}},\] a contradiction with our choice of \(\sigma_{i}\). The formula \(\theta\) witnesses that \(T\) has IP. When we write (NIP) in the statement of a result, it means that we assume that the theory \(T\) has NIP. **Lemma 3.6** (NIP).: _Let \(p(x)\in S(\mathfrak{C})\) be an \(A\)-invariant type, let \(\pi(x,y,z)\) be a partial type over the empty set, and let \(a_{0}\subseteq\mathfrak{C}^{\prime}\) be such that \(\pi(x,y,a_{0})\) defines an equivalence relation on \(p\!\upharpoonright_{a_{0}}(\mathfrak{C}^{\prime})\). Then, for any \((a_{i})_{i<\lambda}\), where \(\lambda\geq\beth_{(2^{|a_{0}|+|x|+|T|+|A|})^{+}}\), satisfying \(a_{i}\underset{A}{\equiv}a_{0}\) for all \(i<\lambda\), there exists \(i<\lambda\) such that_ \[\bigcap_{j\neq i}\pi(\mathfrak{C}^{\prime},\mathfrak{C}^{\prime},a_{j})\cap(p\!\upharpoonright_{a_{<\lambda}}(\mathfrak{C}^{\prime}))^{2}\subseteq\pi(\mathfrak{C}^{\prime},\mathfrak{C}^{\prime},a_{i}).\] Proof.: Assume the conclusion does not hold. Then, for every \(i<\lambda\) \[\bigcap_{j\neq i}\pi(\mathfrak{C}^{\prime},\mathfrak{C}^{\prime},a_{j})\cap(p\!\upharpoonright_{a_{<\lambda}}(\mathfrak{C}^{\prime}))^{2}\not\subseteq\pi(\mathfrak{C}^{\prime},\mathfrak{C}^{\prime},a_{i}).\] Take pairs \((b_{i},c_{i})_{i<\lambda}\) witnessing it. Let \((a^{\prime}_{i},b^{\prime}_{i},c^{\prime}_{i})_{i<\omega}\subseteq\mathfrak{C}^{\prime}\) be an \(A\)-indiscernible sequence obtained by extracting indiscernibles from the sequence \((a_{i},b_{i},c_{i})_{i<\lambda}\) (see [1, Lemma 1.2]). Then, since \(p\) is \(A\)-invariant, for all \(i<\omega\) the elements \((a^{\prime}_{i},b^{\prime}_{i},c^{\prime}_{i})\) satisfy: \[(b^{\prime}_{i},c^{\prime}_{i})\in\bigcap_{j\neq i}\pi(\mathfrak{C}^{\prime},\mathfrak{C}^{\prime},a^{\prime}_{j})\cap(p\!\upharpoonright_{a^{\prime}_{<\omega}}(\mathfrak{C}^{\prime}))^{2};\] \[(b^{\prime}_{i},c^{\prime}_{i})\not\in\pi(\mathfrak{C}^{\prime},\mathfrak{C}^{\prime},a^{\prime}_{i});\] \[a^{\prime}_{i}\equiv_{A}a^{\prime}_{0}\equiv_{A}a_{0}.\] Hence, by the indiscernibility of the sequence \((a^{\prime}_{i},b^{\prime}_{i},c^{\prime}_{i})_{i<\omega}\), there exists a formula \(\varphi(x,y,z)\) implied by \(\pi(x,y,z)\) such that for all \(i<\omega\) \[(b^{\prime}_{i},c^{\prime}_{i})\not\in\varphi(\mathfrak{C}^{\prime},\mathfrak{C}^{\prime},a^{\prime}_{i}).\] Take any \(a\models p\!\upharpoonright_{a^{\prime}_{<\omega}}\). Since \(p\) is \(A\)-invariant, \(a^{\prime}_{i}\equiv_{A}a^{\prime}_{j}\) implies \(a^{\prime}_{i}\equiv_{a}a^{\prime}_{j}\). Moreover, since \(a^{\prime}_{i}\equiv_{A}a^{\prime}_{0}\equiv_{A}a_{0}\), \(\pi(x,y,a_{0})\) defines an equivalence relation on \(p\!\upharpoonright_{a_{0}}(\mathfrak{C}^{\prime})\), and \(p\) is \(A\)-invariant, we get that \(\pi(x,y,a^{\prime}_{i})\) defines an equivalence relation on \(p\!\upharpoonright_{a^{\prime}_{i}}(\mathfrak{C}^{\prime})\) for all \(i<\omega\). Hence, the sequence \((a^{\prime}_{i})_{i<\omega}\) together with \(a\), \(\pi(x,y,z)\), and \(\varphi(x,y,z)\) satisfies the assumptions of Lemma 3.2, and so we get IP, which is a contradiction.
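Here and below (in particular in Lemma 3.6 above and in Theorem 3.8), the beth function with an argument is understood in the standard way; since the saturation bounds depend on this convention, we spell it out explicitly for the reader's convenience: \[\beth_{0}(\kappa)=\kappa,\qquad\beth_{\alpha+1}(\kappa)=2^{\beth_{\alpha}(\kappa)},\qquad\beth_{\delta}(\kappa)=\sup_{\alpha<\delta}\beth_{\alpha}(\kappa)\ \text{for limit}\ \delta,\] so that, in particular, \(\beth_{2}(\kappa)=2^{2^{\kappa}}\); as usual, \(\beth_{\alpha}\) with no argument stands for \(\beth_{\alpha}(\aleph_{0})\).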
**Remark 3.7**.: _Let \(E\) and \(F\) be type-definable equivalence relations on a type-definable set \(X\), where \(E\) is finer than \(F\) (i.e., \(E\subseteq F\)). We can define an equivalence relation \(F\!/\!_{E}\) on \(X/E\) by: \([a]_{E}F\!/\!_{E}[b]_{E}\) if and only if \(aFb\)._ The next theorem is the main result of this paper. **Theorem 3.8** (NIP).: _Let \(p(x)\in S(\mathfrak{C})\) be an \(A\)-invariant type. Assume that \(\mathfrak{C}\) is at least \(\beth_{(\beth_{2}(|x|+|T|+|A|))^{+}}\)-saturated. Then, there exists a finest equivalence relation \(E^{st}\) on \(p(\mathfrak{C}^{\prime})\) relatively type-definable over a small set of parameters from \(\mathfrak{C}\) and with stable quotient \(p(\mathfrak{C}^{\prime})/E^{st}\)._ Proof.: Let \(\nu:=\beth_{(\beth_{2}(|x|+|T|+|A|))^{+}}\). **Claim**.: _If for every countable partial type \(\pi(x,y,z)\) over the empty set and countable tuple \(a_{0}\) from \(\mathfrak{C}\) such that \(\pi(x,y,a_{0})\) defines an equivalence relation \(E_{a_{0}}\) on \(p(\mathfrak{C}^{\prime})\) with stable quotient there is no sequence \((a_{i})_{i<\nu}\) of (countable) tuples \(a_{i}\) in \(\mathfrak{C}\) such that for all \(i<\nu\) we have \(a_{i}\underset{A}{\equiv}a_{0}\) and \(\bigcap_{j<i}E_{a_{j}}\not\subseteq E_{a_{i}}\), then the theorem holds._ Proof of claim.: Consider an arbitrary collection \((E_{i})_{i\in I}\) of relatively type-definable equivalence relations (over small subsets of \(\mathfrak{C}\)) on \(p(\mathfrak{C}^{\prime})\) with stable quotient. Our goal is to prove that the intersection \(\bigcap_{i\in I}E_{i}\) is a relatively type-definable over a small subset of \(\mathfrak{C}\) equivalence relation on \(p(\mathfrak{C}^{\prime})\) with stable quotient. Using Proposition 2.6, we can write each \(E_{j}\) as \(\bigcap_{i\in I_{j}}F_{j}^{i}\), where each \(F_{j}^{i}\) is a countably defined relatively type-definable (over a countable subset of \(\mathfrak{C}\)) equivalence relation on \(p(\mathfrak{C}^{\prime})\). Since the \(F_{j}^{i}\)'s are coarser than the corresponding \(E_{j}\), each \(F_{j}^{i}\) also has stable quotient. We can now write \[\bigcap_{j\in I}E_{j}=\bigcap_{j\in I}\bigcap_{i\in I_{j}}F_{j}^{i}.\] Note that the number of possible countable types \(\pi_{j}^{i}(x,y,z)\) over \(\emptyset\) associated to the \(F_{j}^{i}\)'s is bounded by \(2^{|x|+|T|}\), and the set of types over \(A\) of the countable tuples of parameters used in the definitions of the \(F_{j}^{i}\)'s is bounded by \(2^{|T|+|A|}\). Hence, by the assumptions of the claim, the intersection \(\bigcap_{j\in I}E_{j}\) coincides with an intersection \(\bigcap_{k\in K}F_{j_{k}}^{i_{k}}\), where \(|K|\leq 2^{|T|+|A|}\times 2^{|T|+|x|}\times\nu=\nu\). In fact, since \(2^{|T|+|A|+|x|}\) is strictly smaller than the cofinality of \(\nu\), we can even get \(|K|<\nu\). Finally, by [1, Remark 1.4], \(\bigcap_{k\in K}F_{j_{k}}^{i_{k}}\) is a relatively type-definable over a small subset of \(\mathfrak{C}\) (as \(\mathfrak{C}\) is \(\nu\)-saturated) equivalence relation on \(p(\mathfrak{C}^{\prime})\) with stable quotient. Suppose the theorem fails.
By the claim, there exists a countable type \(\pi(x,y,z)\) over \(\emptyset\) and a countable tuple \(a_{0}\) in \(\mathfrak{C}\) such that \(\pi(x,y,a_{0})\) defines an equivalence relation on \(p(\mathfrak{C}^{\prime})\) with \(p(\mathfrak{C}^{\prime})\left/\pi(\mathfrak{C}^{\prime},\mathfrak{C}^{\prime},a_{0})\cap p(\mathfrak{C}^{\prime})^{2}\right.\) stable and there is \((a_{i})_{i<\nu}\subseteq\mathfrak{C}\) such that for all \(i<\nu\), \(a_{i}\equiv_{A}a_{0}\) and \(\bigcap_{j<i}\pi(\mathfrak{C}^{\prime},\mathfrak{C}^{\prime},a_{j})\cap p(\mathfrak{C}^{\prime})^{2}\not\subseteq\pi(\mathfrak{C}^{\prime},\mathfrak{C}^{\prime},a_{i})\). By Proposition 2.6, enlarging \(a_{0}\), we can assume that \(a_{0}\) enumerates an \(\aleph_{0}\)-saturated model in \(L_{A}\) of size at most \(2^{|T|+|A|}\) and \(\pi(x,y,a_{0})\) defines an equivalence relation on \(p\!\upharpoonright_{a_{0}}(\mathfrak{C}^{\prime})\); by Lemma 2.5, this relation also yields stable quotient. Let \((b_{i},c_{i})_{i<\nu}\) be a sequence witnessing that \(\bigcap_{j<i}\pi(\mathfrak{C}^{\prime},\mathfrak{C}^{\prime},a_{j})\cap p(\mathfrak{C}^{\prime})^{2}\not\subseteq\pi(\mathfrak{C}^{\prime},\mathfrak{C}^{\prime},a_{i})\). Let \((a_{i}^{\prime},b_{i}^{\prime},c_{i}^{\prime})_{i<\nu}\subseteq\mathfrak{C}^{\prime}\) be an \(A\)-indiscernible sequence extracted from \((a_{i},b_{i},c_{i})_{i<\nu}\). Then, since \(p\) is \(A\)-invariant, we get that for all \(i<\nu\) \[(b_{i}^{\prime},c_{i}^{\prime})\in\left(\bigcap_{j<i}\pi(\mathfrak{C}^{\prime},\mathfrak{C}^{\prime},a_{j}^{\prime})\cap(p\!\upharpoonright_{a^{\prime}_{<\nu}}(\mathfrak{C}^{\prime}))^{2}\right)\setminus\pi(\mathfrak{C}^{\prime},\mathfrak{C}^{\prime},a_{i}^{\prime}).\] Moreover, since \(a_{0}^{\prime}\equiv_{A}a_{0}\), we get that \(\pi(x,y,a_{0}^{\prime})\) defines an equivalence relation on \(p\!\upharpoonright_{a_{0}^{\prime}}(\mathfrak{C}^{\prime})\), and we also have \(a_{i}^{\prime}\equiv_{A}a_{0}^{\prime}\) for all \(i<\nu\). Therefore, by Lemma 3.6, there exists some \(\beta<\nu\) such that \[(*)\quad\quad\bigcap_{\alpha\neq\beta}\pi(\mathfrak{C}^{\prime},\mathfrak{C}^{\prime},a_{\alpha}^{\prime})\cap(p\!\upharpoonright_{a^{\prime}_{<\nu}}(\mathfrak{C}^{\prime}))^{2}\subseteq\pi(\mathfrak{C}^{\prime},\mathfrak{C}^{\prime},a_{\beta}^{\prime}).\] In the sequence \((a_{i}^{\prime},b_{i}^{\prime},c_{i}^{\prime})_{i<\nu}\), let us insert a sequence \((d_{i}^{\prime},e_{i}^{\prime},f_{i}^{\prime})_{i<\omega}\) from \(\mathfrak{C}^{\prime}\) in place of the element \((a_{\beta}^{\prime},b_{\beta}^{\prime},c_{\beta}^{\prime})\) so that the resulting sequence is still \(A\)-indiscernible. Then, since \(p\) is \(A\)-invariant, for all \(i<\omega\) \[(**)\quad\quad(e_{i}^{\prime},f_{i}^{\prime})\in\left(\bigcap_{j<i}\pi(\mathfrak{C}^{\prime},\mathfrak{C}^{\prime},d_{j}^{\prime})\cap(p\!\upharpoonright_{\begin{subarray}{c}a^{\prime}_{\alpha<\nu},d^{\prime}_{<\omega}\\ \alpha\neq\beta\end{subarray}}(\mathfrak{C}^{\prime}))^{2}\right)\setminus\pi(\mathfrak{C}^{\prime},\mathfrak{C}^{\prime},d_{i}^{\prime}).\] Hence, due to the \(A\)-indiscernibility of the sequence \((d^{\prime}_{i},e^{\prime}_{i},f^{\prime}_{i})_{i<\omega}\), there exists some formula \(\varphi\) implied by \(\pi\) such that for all \(i<\omega\) we have \((e^{\prime}_{i},f^{\prime}_{i})\not\in\varphi(\mathfrak{C}^{\prime},\mathfrak{C}^{\prime},d^{\prime}_{i})\).
Moreover, since \(d^{\prime}_{i}\equiv_{A}a^{\prime}_{0}\equiv_{A}a_{0}\) and using the \(A\)-invariance of \(p\), we get that \(\pi(x,y,d^{\prime}_{i})\) defines an equivalence relation on \(p\!\upharpoonright_{d^{\prime}_{i}}(\mathfrak{C}^{\prime})\). Let us consider the set \[X:=p\!\upharpoonright_{\begin{subarray}{c}a^{\prime}_{\alpha<\nu}\\ \alpha\neq\beta\end{subarray}}\!(\mathfrak{C}^{\prime}).\] By the above choices and \(A\)-invariance of \(p\), the type \(\bigcap_{\begin{subarray}{c}\alpha<\nu\\ \alpha\neq\beta\end{subarray}}\pi(x,y,a^{\prime}_{\alpha})\) defines an equivalence relation \(E\) on \(X\) with stable quotient, and the sequence \((d^{\prime}_{i},[e^{\prime}_{i}]_{E},[f^{\prime}_{i}]_{E})\) is indiscernible over \[B:=A\cup\{a^{\prime}_{\alpha}:\alpha<\nu;\alpha\neq\beta\}.\] Let \(E_{i}\) be the equivalence relation defined by \(\pi(x,y,d^{\prime}_{i})\) on \(p\!\upharpoonright_{\begin{subarray}{c}a^{\prime}_{\alpha<\nu},d^{\prime}_{i}\\ \alpha\neq\beta\end{subarray}}\!(\mathfrak{C}^{\prime})\) and \(E\!\upharpoonright_{p_{d^{\prime}_{i}}}\) the restriction of \(E\) to \(p\!\upharpoonright_{\begin{subarray}{c}a^{\prime}_{\alpha<\nu},d^{\prime}_{i}\\ \alpha\neq\beta\end{subarray}}\!(\mathfrak{C}^{\prime})\). By \((**)\), the elements \([e^{\prime}_{i}]_{E\!\upharpoonright_{p_{d^{\prime}_{j}}}}\) and \([f^{\prime}_{i}]_{E\!\upharpoonright_{p_{d^{\prime}_{j}}}}\) are \(E_{j}/E\!\upharpoonright_{p_{d^{\prime}_{j}}}\)-related for all \(j<i\) (recall Remark 3.7, and note that \(E\!\upharpoonright_{p_{d^{\prime}_{j}}}\subseteq E_{j}\) by \((*)\)). This is coded in \[\operatorname{tp}((d^{\prime}_{j},[e^{\prime}_{i}]_{E},[f^{\prime}_{i}]_{E})/B).\] Since \(X/E\) is stable, we know that \[\operatorname{tp}((d^{\prime}_{j},[e^{\prime}_{i}]_{E},[f^{\prime}_{i}]_{E})/B)=\operatorname{tp}((d^{\prime}_{i},[e^{\prime}_{j}]_{E},[f^{\prime}_{j}]_{E})/B),\] so we conclude that the elements \([e^{\prime}_{j}]_{E\!\upharpoonright_{p_{d^{\prime}_{i}}}}\) and \([f^{\prime}_{j}]_{E\!\upharpoonright_{p_{d^{\prime}_{i}}}}\) are \(E_{i}/E\!\upharpoonright_{p_{d^{\prime}_{i}}}\)-related for all \(j<i\). Hence, the elements \(e^{\prime}_{j}\) and \(f^{\prime}_{j}\) are \(E_{i}\)-related for all \(j<i\). We have shown that the sequence \((d^{\prime}_{i},e^{\prime}_{i},f^{\prime}_{i})\) satisfies: \[\pi(e^{\prime}_{j},f^{\prime}_{j},d^{\prime}_{i})\text{ for all }i\neq j;\ i,j<\omega;\] \[\neg\varphi(e^{\prime}_{i},f^{\prime}_{i},d^{\prime}_{i})\text{ for all }i<\omega.\] Take any \(a\models p\!\upharpoonright_{d^{\prime}_{<\omega}}\). Since \(d^{\prime}_{i}\equiv_{A}d^{\prime}_{0}\) for all \(i<\omega\), we get that \(d^{\prime}_{i}\equiv_{a}d^{\prime}_{0}\) for all \(i<\omega\). Thus, the sequence \((d^{\prime}_{i})_{i<\omega}\) satisfies the assumption of Lemma 3.2, and so we get IP, a contradiction. We end this section with some comments on whether the large saturation condition in Theorem 3.8 is necessary or could be eliminated. Note that in the above proof, in order to apply extracting indiscernibles to the sequence \((a_{i},b_{i},c_{i})_{i<\nu}\), we need to know that \(\nu\) is at least \(\beth_{(2^{2^{|T|+|A|}+|x|+|T|+|A|})^{+}}=\beth_{(2^{2^{|T|+|A|}+|x|})^{+}}\). On the other hand, the proof of the claim requires that any number smaller than \(\nu\) is bounded in \(\mathfrak{C}\).
That is why the whole proof requires that \(\mathfrak{C}\) is at least \(\beth_{(2^{2^{|T|+|A|}+|x|})^{+}}\)-saturated. In the statement of the theorem, it is enough to assume that \(\mathfrak{C}\) is \(\beth_{(2^{2^{|T|+|A|}+|x|})^{+}}\)-saturated; we used a bigger degree of saturation, which is notationally more concise. Although our proof uses essentially the assumption on the degree of saturation, one could still try to transfer the existence of the finest relatively definable equivalence relation from big models to their elementary substructures. Let \(\mathfrak{C}\prec\mathfrak{C}_{1}\prec\mathfrak{C}^{\prime}\) be such that \(\mathfrak{C}^{\prime}\) is still a monster with respect to \(\mathfrak{C}_{1}\) and let \(p_{1}(x)\in S(\mathfrak{C}_{1})\) be the unique \(A\)-invariant extension of \(p(x)\). While Corollary 2.8 allows us to transfer the existence of the finest relatively type-definable (over a small subset of \(\mathfrak{C}\)) equivalence relation on \(p(\mathfrak{C}^{\prime})\) with stable quotient to the finest relatively type-definable (over a small subset of \(\mathfrak{C}_{1}\)) equivalence relation on \(p_{1}(\mathfrak{C}^{\prime})\), in order to eliminate the specific saturation assumption in Theorem 3.8, we would need to have a transfer going in the other direction. In Corollary 2.9, we proved such a transfer but only under the additional assumption that the finest relatively type-definable (over a small subset of \(\mathfrak{C}_{1}\)) equivalence relation \(E\) on \(p_{1}(\mathfrak{C}^{\prime})\) is defined by a type over a set of parameters of small size with respect to \(\mathfrak{C}\). Therefore, the specific saturation assumption could be eliminated if we could answer positively the following question. **Question 3.9**.: _In the context of Theorem 3.8, is \(E^{st}\) always relatively type-definable over \(A\)?_ In the examples studied in the next section, this turns out to be true. Also, in the context of type-definable groups studied in [10], \(G^{st}\) is type-definable over the parameters over which \(G\) is type-definable. ## 4. Examples We present two examples where \(E^{st}\) is computed explicitly; the second example is based on [11, 12]. In fact, in both examples, we give full classifications of all relatively type-definable (over small subsets of \(\mathfrak{C}\)) equivalence relations on \(p(\mathfrak{C}^{\prime})\), for suitably chosen \(p\in S(\mathfrak{C})\). **Example 1**.: Let our language be \(L:=\{R_{r}(x,y),f_{s}(x):r\in\mathbb{Q}^{+},s\in\mathbb{Q}\}\) and \(T\) be the theory of \((\mathbb{R},R_{r},f_{s})_{r\in\mathbb{Q}^{+},s\in\mathbb{Q}}\), where \(f_{s}(x):=x+s\) and \(R_{r}(x,y)\) holds if and only if \(0\leq y-x\leq r\).
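To fix intuition before introducing the directed distance, here are a few concrete instances in the standard model \((\mathbb{R},R_{r},f_{s})_{r\in\mathbb{Q}^{+},s\in\mathbb{Q}}\); the particular numbers are illustrative only and are not used later: \[R_{1}(0,\tfrac{1}{2}),\qquad R_{1}(0,1),\qquad\neg R_{1}(0,2),\qquad\neg R_{1}(\tfrac{1}{2},0),\qquad f_{3/2}(\tfrac{1}{2})=2.\] In particular, each \(R_{r}\) is reflexive but not symmetric, and each \(f_{s}\) acts by translation by the rational \(s\).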
We define the directed distance between two points as a function \[d:\mathfrak{C}\times\mathfrak{C}\to\mathbb{R}\cup\mathbb{Q}_{+}\cup\mathbb{Q }_{-}\cup\{\infty\}\] satisfying \[d(x,y) =q\in\mathbb{Q}\iff y=f_{q}(x);\] \[d(x,y) =r\in\mathbb{R}^{+}\setminus\mathbb{Q}\iff\forall s_{1},s_{2}\in \mathbb{Q}^{+}\text{ such that }s_{1}<r<s_{2},\neg R_{s_{1}}(x,y)\wedge R_{s_{2}}(x,y);\] \[d(x,y) =q_{+}\in\mathbb{Q}_{+}\iff y\neq f_{q}(x)\text{ is infinitely close to }f_{q}(x)\text{ on the right};\] \[d(x,y) =q_{-}\in\mathbb{Q}_{-}\iff y\neq f_{q}(x)\text{ is infinitely close to }f_{q}(x)\text{ on the left};\] \[d(x,y) =\infty\iff\neg(R_{s}(x,y)\lor R_{s}(y,x))\text{ for all }s\in\mathbb{Q}^{+}.\] We complete the definition of \(d\) extending it symmetrically in the negative irrational case. **Lemma 4.1**.: _Properties of the distance:_ 1. \(d(a,f_{q}(b))=q+d(a,b)\) _and_ \(d(f_{q}(a),b)=-q+d(a,b)\)_;_ 2. _For any distinct real numbers_ \(r_{1},r_{2}\)_, if_ \(d(a,b)=r_{1}\) _and_ \(d(a,c)=r_{2}\)_, then_ \(d(b,c)=r_{2}-r_{1}\)_;_ 3. _For any irrational_ \(r\)_, if_ \(d(a,b)=r\) _and_ \(d(b,c)=0_{\pm}\)_, then_ \(d(a,c)=r\)_;_ 4. _For any irrational_ \(r\)_, if_ \(d(a,b)=r=d(a,c)\)_, then_ \(d(b,c)=0_{\pm}\)_._ Proof.: (1) follows from the definition of the distance. (2) Since the rational case is covered in (1), we can assume that \(r_{1},r_{2}\) are irrationals. Consider the case \(0<r_{1}<r_{2}\); other cases are similar. Let \(q\) be any rational bigger than \(r_{2}-r_{1}\). We can write \(q\) as \(q_{2}-q_{1}\), where \(q_{1},q_{2}\) are rationals, \(q_{1}>r_{1}\), \(q_{2}>r_{2}\), and \(q_{2}>q_{1}\). Since \(R_{q_{1}}(a,b)\) and \(R_{q_{2}}(a,c)\) hold, so does \(R_{q_{2}-q_{1}}(b,c)\). Hence, \(d(b,c)\leq q\). Let now \(q\) be any positive rational smaller than \(r_{2}-r_{1}\). We can write \(q\) as \(q_{2}-q_{1}\), where \(q_{1},q_{2}\) are rationals, \(q_{1}>r_{1}\), \(q_{2}<r_{2}\), and \(q_{2}>q_{1}\). Since \(R_{q_{1}}(a,b)\) holds, \(R_{q_{2}-q_{1}}(b,c)\) cannot hold; otherwise, \(R_{q_{2}}(a,c)\) would hold, contradicting \(d(a,c)=r_{2}\). Hence, \(d(b,c)\geq q\). (3) Consider the case \(r>0\) and \(d(b,c)=0_{+}\); the other cases are analogous. Let \(q\) be any rational bigger than \(r\). We can write \(q\) as \(q_{1}+q_{2}\), where \(q_{1},q_{2}\) are rationals, \(q_{1}>r\), and \(q_{2}>0\). Then, \(R_{q_{1}}(a,b)\) and \(R_{q_{2}}(b,c)\) hold, hence, so does \(R_{q_{1}+q_{2}}(a,c)\). This implies that \(d(a,c)\leq q\). Let now \(q\) be any positive rational smaller than \(r\). Then, \(R_{q}(a,c)\) cannot hold; otherwise it would imply \(R_{q}(a,b)\), a contradiction. (4) Consider the case \(r>0\); the other case is similar. Consider any rationals \(q_{1}\), \(q_{2}\) satisfying \(0<q_{1}<r<q_{2}\). Then, \(R_{q_{2}-q_{1}}(f_{q_{1}}(a),b)\wedge R_{q_{2}-q_{1}}(f_{q_{1}}(a),c)\) holds, which imply \(R_{q_{2}-q_{1}}(b,c)\lor R_{q_{2}-q_{1}}(c,b)\). Since \(q_{2}\) and \(q_{1}\) were arbitrary, this means that \(b\) and \(c\) are infinitesimally close. It is clear that the distance determines the quantifier-free type of a pair \((a,b)\). Since our language only contains unary and binary symbols, the collection of distances between elements of a given \(n\)-tuple determines its quantifier-free type. **Proposition 4.2**.: _The theory \(T\) has NIP and quantifier elimination._ Proof.: \(T\) has NIP, because it is a reduct of an o-minimal theory. We prove quantifier elimination using a back and forth argument. 
Let \(\mathcal{M}\) and \(\mathcal{N}\) be two \(\aleph_{0}\)-saturated models of \(T\) and let \((a_{1},\dots,a_{n})\) and \((b_{1},\dots,b_{n})\) be tuples of elements of \(\mathcal{M}\) and \(\mathcal{N}\), respectively, satisfying the same quantifier free type. Choose a new element \(a_{n+1}\in\mathcal{M}\). There are three cases: 1. \(a_{n+1}\) is infinitely far from \(a_{1},\dots,a_{n}\); 2. \(a_{n+1}=f_{q}(a_{i})\) for some \(q\in\mathbb{Q}\) and \(i=1,\dots,n\); 3. \(a_{n+1}\) is related (i.e., at finite distance) to some of the \(a_{i}\)'s but is not equal to \(f_{q}(a_{i})\) for any \(q\in\mathbb{Q}\) and \(i=1,\dots,n\). In the first two cases, by \(\aleph_{0}\)-saturation, we can clearly choose \(b_{n+1}\in\mathcal{N}\) such that \((a_{1},\dots,a_{n+1})\) and \((b_{1},\dots,b_{n+1})\) have the same quantifier-free type. Now, let us tackle the third case. In the third case, by removing the elements of the sequence \((a_{1},\dots,a_{n})\) which are at infinite distance from \(a_{n+1}\) as well as the corresponding elements of the sequence \((b_{1},\dots,b_{n})\), we may assume that no \(a_{i}\) is infinitely far from \(a_{n+1}\). Note also that for each \(i<n\) there is at most one \(q_{i}\in\mathbb{Q}\) such that \(f_{q_{i}}(a_{i})\) is infinitesimally close to \(a_{n+1}\). Let \(A\) be the set of all such \(f_{q_{i}}(a_{i})\)'s. First, consider the case when \(A\neq\emptyset\). Note that \(A\) is totally ordered (for example, by the relation \(R_{1}(x,y)\)). Let \(B:=\{f_{q_{i}}(b_{i}):f_{q_{i}}(a_{i})\in A\}\). Then, there exists \(b_{n+1}\) with the same distances to the elements in \(B\) as \(a_{n+1}\) to the corresponding elements in \(A\). By Lemma 4.1, \[\operatorname{tp}^{\mathrm{qf}}(b_{1},\dots,b_{n},b_{n+1})=\operatorname{tp}^ {\mathrm{qf}}(a_{1},\dots,a_{n},a_{n+1}).\] In the case when \(A=\emptyset\), \(d(a_{i},a_{n+1})\) is irrational for every \(i\leq n\). Pick \(b_{n+1}\) so that \(d(b_{1},b_{n+1})=d(a_{1},a_{n+1})\). Since \(A=\emptyset\), by Lemma 4.1(4), we get that \(d(a_{1},a_{n+1})\neq d(a_{1},a_{i})\) for all \(1<i\leq n\). Hence, Lemma 4.1(2) implies that \[\operatorname{tp}^{\mathrm{qf}}(b_{1},\dots,b_{n},b_{n+1})=\operatorname{tp}^ {\mathrm{qf}}(a_{1},\dots,a_{n},a_{n+1}).\qed\] Let \(\mathfrak{C},\mathfrak{C}^{\prime}\) be monster models of \(T\), where \(\mathfrak{C}^{\prime}\) is also a monster model with respect to \(\mathfrak{C}\), and let \(p\in S_{x}(\mathfrak{C})\) be the complete global type determined by \[\bigwedge_{c\in\mathfrak{C}}\bigwedge_{n\in\omega}(\neg R_{n}(x,c)\wedge\neg R_ {n}(c,x)).\] We denote by \(E(x,y)\) the equivalence relation on \(\mathfrak{C}^{\prime}\) defined by \[\bigwedge_{r\in\mathbb{Q}^{+}}(R_{r}(x,y)\lor R_{r}(y,x))\] and by \(E\!\upharpoonright\!_{p}\) the equivalence relation on \(p(\mathfrak{C}^{\prime})\) defined by the same partial type. **Lemma 4.3**.: _The hyperdefinable set \(\mathfrak{C}^{\prime}/E(\mathfrak{C}^{\prime},\mathfrak{C}^{\prime})\) is stable._ Proof.: By [13, Theorem 2.10], it is enough to prove that for any \(A\subseteq\mathfrak{C}^{\prime}\) with \(|A|\leq\mathfrak{c}\) we have \(|S_{\mathfrak{C}^{\prime}/E}(A)|\leq\mathfrak{c}\). Clearly, the elements \(c\) and \(c^{\prime}\) are in the same \(E\)-class if and only if \(c=c^{\prime}\) or \(d(c,c^{\prime})=0_{\pm}\). Note that whenever \(d(c,a)=d(c^{\prime},a)\neq\infty\), then \(cEc^{\prime}\). Therefore, specifying the distance \(d(c,a)\neq\infty\) from \(c\) to a given element \(a\in A\) determines the class \([c]_{E}\). 
On the other hand, by q.e., the condition saying that \(d(c,a)=\infty\) for all \(a\in A\) determines \(\operatorname{tp}(c/A)\). Therefore, \(|S_{\mathfrak{C}^{\prime}/E}(A)|\leq\mathfrak{c}\times\mathfrak{c}+1= \mathfrak{c}\). **Proposition 4.4**.: _The only equivalence relations on \(p(\mathfrak{C}^{\prime})\) relatively type-definable over a small subset of \(\mathfrak{C}\) are equality, \(E\!\upharpoonright\!_{p}\), and the total equivalence relation._ Proof.: Let \(F(x,y)\) be any equivalence relation on \(p(\mathfrak{C}^{\prime})\) relatively type-definable over a small subset of \(\mathfrak{C}\). Let \(S_{n}(x,y):=R_{n}(x,y)\lor R_{n}(y,x)\). There are two cases. Case 1: \(\exists a,b\in p(\mathfrak{C}^{\prime})\) such that \(aFb\) and \(\models\bigwedge_{n\in\mathbb{N}}\neg S_{n}(a,b)\). For any \(c,d\in p(\mathfrak{C}^{\prime})\) we can find \(e\in p(\mathfrak{C}^{\prime})\) such that \(\models\bigwedge_{n\in\mathbb{N}}\neg S_{n}(c,e)\wedge\bigwedge_{n\in\mathbb{ N}}\neg S_{n}(d,e)\). Hence, by q.e., \[(d,e)\equiv_{\mathfrak{C}}(a,b)\equiv_{\mathfrak{C}}(c,e).\] As \(F\) is \(\mathfrak{C}\)-invariant, we conclude that \(cFd\). This implies that \(F\) is the total relation. Case 2: \(\forall a,b\in p(\mathfrak{C}^{\prime})\) if \(aFb\), then there exists \(n\in\mathbb{N}\) such that \(\models S_{n}(a,b)\). First, we show that \(aFb\) implies \(aE\!\upharpoonright\!_{p}b\). Assume that it is not the case. Then there exists \(m\in\mathbb{Q}^{+}\) such that \(aFb\) and \(\neg S_{m}(a,b)\). On the other hand, \(S_{n}(a,b)\) for some \(n\in\mathbb{N}\). Since \(a\equiv_{\mathfrak{C}}b\), there is \(\sigma\in\operatorname{Aut}(\mathfrak{C}^{\prime}/\mathfrak{C})\) satisfying \(\sigma(a)=b\). Let \(b_{i}:=\sigma^{i}(a)\) for \(i<\omega\). Clearly, \[(a,b)\equiv_{\mathfrak{C}}(b,b_{2})\equiv_{\mathfrak{C}}(b_{2},b_{3})\equiv_ {\mathfrak{C}}\cdots.\] We deduce that for all \(k\in\mathbb{N}\), \(aFb_{k}\) and \(\models\neg S_{km}(a,b_{k})\). Hence, by compactness, there exists \(b^{\prime}\in p(\mathfrak{C}^{\prime})\) such that \(aFb^{\prime}\) and \(\models\neg S_{n}(a,b^{\prime})\) for all \(n\in\mathbb{N}\), contradicting the hypothesis of the second case. Finally, if \(F\) is not equality, there exist elements \(a\neq b\in p(\mathfrak{C}^{\prime})\) such that \(aFb\), and so \(aE\!\upharpoonright\!_{p}b\) by the last paragraph. Take any distinct \(c,d\in p(\mathfrak{C}^{\prime})\) satisfying \(cE\!\upharpoonright\!_{p}d\). Then, by q.e., either \((a,b)\equiv_{\mathfrak{C}}(c,d)\) or \((a,b)\equiv_{\mathfrak{C}}(d,c)\). Both cases imply \(cFd\), which means that \(F\) and \(E\!\upharpoonright\!_{p}\) are the same equivalence relation. Since \(p(\mathfrak{C}^{\prime})\) is not stable, we obtain the following: **Corollary 4.5**.: _The equivalence relation \(E\!\upharpoonright\!_{p}\) is the finest equivalence relation on \(p(\mathfrak{C}^{\prime})\) relatively type definable over a small set of parameters from \(\mathfrak{C}\) and with stable quotient, that is \(E^{st}=E\!\upharpoonright\!_{p}\)._ **Example 2**.: This example is based on [14, Section 4]. We work in the language \(L:=\{+,-,1,R_{r}(x,y):r\in\mathbb{Q}^{+}\}\) and our theory is \(\operatorname{Th}((\mathbb{R},+,-,1,R_{r}(x,y))_{r\in\mathbb{Q}^{+}})\). The next result was proven in [14, Proposition 4.1, Proposition 4.8]. 
**Fact 4.6**.: _The theory \(T\) has NIP and quantifier elimination._ As in the previous example, let \(\mathfrak{C},\mathfrak{C}^{\prime}\) be monster models of \(T\), where \(\mathfrak{C}^{\prime}\) is also a monster model with respect to \(\mathfrak{C}\), and let \(p\in S_{x}(\mathfrak{C})\) be the complete global type determined by \[\{\neg R_{r}(x,c)\wedge\neg R_{r}(c,x):c\in\mathfrak{C},r\in\mathbb{Q}^{+}\}.\] Without loss of generality, for convenience we can assume that \(\mathfrak{C}^{\prime}\) is a reduct of a monster model of \(\operatorname{Th}(\mathbb{R},+,-,1,\leq)\). So it makes sense to use \(\leq\). As in the previous example, let \(S_{r}(x,y):=R_{r}(x,y)\lor R_{r}(y,x)\). We say that \(x,y\) are _related_ if \(S_{r}(x,y)\) holds for some \(r\in\mathbb{Q}^{+}\). We denote by \(E(x,y)\) the equivalence relation on \(\mathfrak{C}^{\prime}\) defined by \[\bigwedge_{r\in\mathbb{Q}^{+}}S_{r}(x,y),\] and by \(E\operatorname{\upharpoonright}_{p}\) the equivalence relation on \(p(\mathfrak{C}^{\prime})\) defined by the same partial type. In other words, this is the relation on \(p(\mathfrak{C}^{\prime})\) of lying in the same coset modulo the subgroup of all infinitesimals in \(\mathfrak{C}^{\prime}\), which will be denoted by \(\mu\). Other possible relatively type-definable (over a small subset of \(\mathfrak{C}\)) equivalence relations on \(p(\mathfrak{C}^{\prime})\) are as follows. Take any \(c\in\mathfrak{C}\). Let \(E_{c}\) be the equivalence relation on \(p(\mathfrak{C}^{\prime})\) with classes \(\{a,-a+c\}\), where \(a\) ranges over \(p(\mathfrak{C}^{\prime})\). It is clear that this is an equivalence relation on \(p(\mathfrak{C}^{\prime})\) defined by a type over \(c\). We also have the equivalence relation \(E_{c}^{\mu}\) with classes \((a+\mu)\cup(-a+c+\mu)\), where \(a\) ranges over \(p(\mathfrak{C}^{\prime})\), which is also defined by a type over \(c\). For any non-empty small set \(A\) of positive infinitesimals in \(\mathfrak{C}\) we will consider the equivalence relation \(E_{A}\) on \(p(\mathfrak{C}^{\prime})\) given as \[\bigwedge_{a\in A}\bigwedge_{n\in\mathbb{N}^{+}}|x-y|\leq\frac{1}{n}a.\] Note that this relation is relatively type-definable over \(A\) on \(p(\mathfrak{C}^{\prime})\) in the original language \(L\) by the following condition \[\bigwedge_{a\in A}\bigwedge_{n\in\mathbb{N}^{+}}R_{1}(n(x-y),a)\wedge R_{1}(n(y-x),a).\] One can also combine the above examples to produce one more class of equivalence relations on \(p(\mathfrak{C}^{\prime})\). Take any \(c\in\mathfrak{C}\) and any non-empty small set \(A\) of positive infinitesimals in \(\mathfrak{C}\). Let \(\mu_{A}\) be the infinitesimals in \(\mathfrak{C}^{\prime}\) defined by \[\bigwedge_{a\in A}\bigwedge_{n\in\mathbb{N}^{+}}|x|\leq\frac{1}{n}a.\] Then we have the equivalence relation \(E_{A,c}\) on \(p(\mathfrak{C}^{\prime})\) with classes \[(a+\mu_{A})\cup(-a+c+\mu_{A}),\] which is clearly defined on \(p(\mathfrak{C}^{\prime})\) by a type over \(Ac\). 
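For a concrete instance of the last two constructions, fix a single positive infinitesimal \(\varepsilon\in\mathfrak{C}\) and take \(A=\{\varepsilon\}\). Then \[\mu_{\{\varepsilon\}}=\Big\{x:\bigwedge_{n\in\mathbb{N}^{+}}|x|\leq\tfrac{1}{n}\varepsilon\Big\}\subsetneq\mu,\] since \(\varepsilon\in\mu\setminus\mu_{\{\varepsilon\}}\) (already \(|\varepsilon|\leq\tfrac{1}{2}\varepsilon\) fails). Hence, for any \(a\in p(\mathfrak{C}^{\prime})\), the elements \(a\) and \(a+\varepsilon\) both realize \(p\) (as \(\varepsilon\in\mathfrak{C}\)), they are \(E\operatorname{\upharpoonright}_{p}\)-equivalent, but they are not \(E_{\{\varepsilon\}}\)-equivalent; so \(E_{\{\varepsilon\}}\) is strictly finer than \(E\operatorname{\upharpoonright}_{p}\). 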
**Theorem 4.7**.: _The only equivalence relations on \(p(\mathfrak{C}^{\prime})\) relatively type-definable over a small subset of \(\mathfrak{C}\) are: the total equivalence relation, equality, \(E\upharpoonright_{p}\), the relations of the form \(E_{c}\) or \(E_{c}^{\mu}\) (where \(c\in\mathfrak{C}\)), and the relations of the form \(E_{A}\) or \(E_{A,c}\) for any non-empty small set \(A\) of positive infinitesimals in \(\mathfrak{C}\) and any \(c\in\mathfrak{C}\)._ In the proof below, by a non-trivial term \(t(x,y)\) (in the language \(L\)) we mean an expression \(nx+my+k\), where \(m,n,k\in\mathbb{Z}\) and \(m\neq 0\) or \(n\neq 0\). Proof.: Let \(F(x,y)\) be an arbitrary equivalence relation on \(p(\mathfrak{C}^{\prime})\) relatively type-definable over a small subset of \(\mathfrak{C}\). **Claim**.: _Either \(F\) is the total equivalence relation, or \(F\) is finer than \(E_{c}^{\mu}\) (i.e., \(F\subseteq E_{c}^{\mu}\)) for some \(c\in\mathfrak{C}\)._ Proof of Claim.: We consider two cases. Case 1: \(\exists a\), \(b\in p(\mathfrak{C}^{\prime})\) such that \(aFb\) and \(\models\bigwedge_{n\in\mathbb{Q}^{+}}\bigwedge_{c\in\mathfrak{C}}\neg S_{n}(t(a,b),c)\) for all non-trivial terms \(t(x,y)\). Take any \(a^{\prime},b^{\prime}\in p(\mathfrak{C}^{\prime})\). We can find \(d^{\prime}\in p(\mathfrak{C}^{\prime})\) such that \(\models\bigwedge_{n\in\mathbb{Q}^{+}}\bigwedge_{c\in\mathfrak{C}}\neg S_{n}(t(a^{\prime},d^{\prime}),c)\) and \(\models\bigwedge_{n\in\mathbb{Q}^{+}}\bigwedge_{c\in\mathfrak{C}}\neg S_{n}(t(b^{\prime},d^{\prime}),c)\) for all non-trivial terms \(t(x,y)\). Then, by q.e., \((a^{\prime},d^{\prime})\equiv_{\mathfrak{C}}(a,b)\equiv_{\mathfrak{C}}(b^{\prime},d^{\prime})\). Since \(F\) is \(\mathfrak{C}\)-invariant, we conclude that \(a^{\prime}Fb^{\prime}\), hence \(F\) is the total equivalence relation. Case 2: \(\forall a\), \(b\in p(\mathfrak{C}^{\prime})\) if \(aFb\), then there are \(n\in\mathbb{Q}^{+}\), \(c\in\mathfrak{C}\), and a non-trivial term \(t(x,y)\) such that \(\models S_{n}(t(a,b),c)\). Suppose that for every \(c\in\mathfrak{C}\), \(F\) is not finer than \(E_{c}^{\mu}\). We will reach a contradiction, but this will require quite a bit of work. First, we claim that there are \(a,b\in p(\mathfrak{C}^{\prime})\) such that \[(*)\quad\quad aFb\;\;\text{and}\;\;\models\bigwedge_{q\in\mathbb{Q}^{+}}\neg S_{q}(a,b)\;\;\text{and}\;\;\models\bigwedge_{q\in\mathbb{Q}^{+}}\bigwedge_{c\in\mathfrak{C}}\neg S_{q}(a,-b+c).\] To show it, notice that since \(F\) is not contained in any \(E_{c}^{\mu}\), either we get a pair \((a,b)\in F\) satisfying the above condition, or such that \(S_{m}(a,b)\) and \(\neg S_{n}(a,b)\) for some positive rationals \(n,m\), or we get two pairs \((a,b),(a^{\prime},b^{\prime})\in F\) and elements \(c,c^{\prime}\in\mathfrak{C}\) such that \(c-c^{\prime}\notin\mu\) and \(a+b-c\in\mu\) and \(a^{\prime}+b^{\prime}-c^{\prime}\in\mu\). In this third case, applying an automorphism of \(\mathfrak{C}^{\prime}\) over \(\mathfrak{C}\) mapping \(a^{\prime}\) to \(a\), we may assume that \(a^{\prime}=a\), and so we get \(F(b,b^{\prime})\) and \(b-b^{\prime}\in c-c^{\prime}+\mu\). 
Then \(b+b^{\prime}\in 2b^{\prime}+c-c^{\prime}+\mu\) is not related to any element of \(\mathfrak{C}\) (as \(2b^{\prime}\) is not related), so \(\models\bigwedge_{q\in\mathbb{Q}^{+}}\bigwedge_{c^{\prime\prime}\in\mathfrak{C}}\neg S_{q}(b,-b^{\prime}+c^{\prime\prime})\), and either \(\models\bigwedge_{q\in\mathbb{Q}^{+}}\neg S_{q}(b,b^{\prime})\) (so we are done), or there are positive rationals \(m,n\) such that \(S_{m}(b,b^{\prime})\) and \(\neg S_{n}(b,b^{\prime})\). In this way, the whole third case reduces to the second one, i.e., we have a pair \((a,b)\in F\) with \(S_{m}(a,b)\) and \(\neg S_{n}(a,b)\) for some positive rationals \(n,m\). Let \(\sigma\in\operatorname{Aut}(\mathfrak{C}^{\prime}/\mathfrak{C})\) be such that \(\sigma(a)=b=:b_{1}\). We produce an infinite sequence by putting \(b_{k}:=\sigma^{k}(a)\) for \(k\in\mathbb{N}^{+}\). Then for all \(k\in\mathbb{N}^{+}\), \(aFb_{k}\) and \(\models S_{km}(a,b_{k})\) and \(\models\neg S_{kn}(a,b_{k})\). Since \(\models S_{km}(a,b_{k})\) and \(b_{k}\) is not related to anything in \(\mathfrak{C}\), we get \(\bigwedge_{q\in\mathbb{Q}^{+}}\bigwedge_{c\in\mathfrak{C}}\neg S_{q}(a,-b_{k}+c)\). As we can use arbitrarily large \(k\), the desired \(b\) (i.e., such that \((a,b)\) satisfies \((*)\)) exists by compactness (or rather \(|\mathfrak{C}|^{+}\)-saturation of \(\mathfrak{C}^{\prime}\)). We will show now that there is \(b^{\prime}\in p(\mathfrak{C}^{\prime})\) such that \[(**)\quad\quad aFb^{\prime}\;\;\text{and}\;\;\models\bigwedge_{c\in\mathfrak{C}}\bigwedge_{q\in\mathbb{Q}^{+}}\neg S_{q}(a-b^{\prime},c)\wedge\neg S_{q}(a+b^{\prime},c).\] Namely, either \(b^{\prime}:=b\) already satisfies it, or \(a-b\) is related to some infinite \(c\in\mathfrak{C}\). In the latter case, \(a-b\) is related precisely to the elements from the set \(c+\mathbb{R}+\mu\). Let \(\sigma\in\operatorname{Aut}(\mathfrak{C}^{\prime}/\mathfrak{C})\) be such that \(\sigma(a)=b=:b_{1}\), and again put \(b_{k}:=\sigma^{k}(a)\) for \(k\in\mathbb{N}^{+}\). Then \(aFb_{k}\), and one easily checks that \(a-b_{k}\) is related precisely to the elements from the set \(kc+\mathbb{R}+\mu\), and so \(a+b_{k}\) is not related to anything in \(\mathfrak{C}\). Since \(c\) is infinite, the sets \(kc+\mathbb{R}+\mu\) are pairwise disjoint for different \(k\)'s, and so we find the desired \(b^{\prime}\) using compactness (or rather \(|\mathfrak{C}|^{+}\)-saturation of \(\mathfrak{C}^{\prime}\)). Since we are working in a divisible group, we can replace the terms \(t(x,y)\) in the statement of Case 2 by expressions \(t_{q}(x,y):=x-qy\) for \(q\in\mathbb{Q}\) or \(t_{q}(x,y):=qy\) for \(q\in\mathbb{Q}\setminus\{0\}\). Note that for \(t_{z}(x,y):=x-zy\) and \(d\in p(\mathfrak{C}^{\prime})\), there exists at most one rational \(q\) such that \[S_{n}(t_{q}(a,d),c)\] holds for some \(c\in\mathfrak{C}\) and \(n\in\mathbb{Q}^{+}\). For if there existed \(q\neq q^{\prime}\in\mathbb{Q}\), \(n,n^{\prime}\in\mathbb{Q}^{+}\), and \(c,c^{\prime}\in\mathfrak{C}\) such that \(S_{n}(t_{q}(a,d),c)\) and \(S_{n^{\prime}}(t_{q^{\prime}}(a,d),c^{\prime})\), then \((q-q^{\prime})d\) would be related to some element of \(\mathfrak{C}\). Hence, so would be \(d\), a contradiction with \(d\in p(\mathfrak{C}^{\prime})\). This also shows that for every \(q\in\mathbb{Q}\setminus\{0\}\), \(qd\) is not related to anything in \(\mathfrak{C}\). 
We will show now that there exists \(b^{\prime\prime}\in p(\mathfrak{C}^{\prime})\) such that \[aFb^{\prime\prime}\ \text{ and }\ \models\bigwedge_{q\in\mathbb{Q}}\bigwedge_{n\in\mathbb{Q}^{+}}\bigwedge_{c\in\mathfrak{C}}\neg S_{n}(t_{q}(a,b^{\prime\prime}),c),\] where \(t_{z}(x,y):=x-zy\), contradicting the assumption of Case 2. Namely, either \(b^{\prime\prime}:=b^{\prime}\) does the job, or there are \(q\in\mathbb{Q}\), \(n\in\mathbb{Q}^{+}\), and \(c\in\mathfrak{C}\) such that \(S_{n}(t_{q}(a,b^{\prime}),c)\). By the choice of \(a\) and \(b^{\prime}\) satisfying \((**)\), we have that \(q\notin\{-1,0,1\}\). Again, let \(\sigma\in\operatorname{Aut}(\mathfrak{C}^{\prime}/\mathfrak{C})\) be such that \(\sigma(a)=b^{\prime}=:b^{\prime}_{1}\). We get the sequence \(b^{\prime}_{k}:=\sigma^{k}(a)\), \(k\in\mathbb{N}^{+}\). Then \(aFb^{\prime}_{k}\) for all \(k\in\mathbb{N}^{+}\). On the other hand, applying powers of \(\sigma\), we easily conclude that for every \(k\in\mathbb{N}^{+}\), \(t_{q^{k}}(a,b^{\prime}_{k})\) is related to some element of \(\mathfrak{C}\). Hence, by an observation above, we get that for all rationals \(r\neq q^{k}\), \(t_{r}(a,b^{\prime}_{k})\) is not related to anything in \(\mathfrak{C}\). Since \(q\notin\{-1,0,1\}\), we know that \(q,q^{2},\dots\) are pairwise distinct. So the desired \(b^{\prime\prime}\) exists by compactness. **Claim**.: \(F\cap E\upharpoonright_{p}\) _is either equality, or \(E\upharpoonright_{p}\), or \(E_{A}\) for some non-empty small set \(A\) of positive infinitesimals in \(\mathfrak{C}\)._ Proof of Claim.: We may assume that \(F\subseteq E\upharpoonright_{p}\), and just work with \(F\). Let \(B\) be a small dcl-closed subset of \(\mathfrak{C}\) over which \(F\) is relatively defined on \(p(\mathfrak{C}^{\prime})\). Extending the notation from before the statement of Theorem 4.7, for any \(B^{\prime}\subseteq B\) put \[E_{B^{\prime}}:=\{(x,y)\in p(\mathfrak{C}^{\prime})^{2}:\bigwedge_{b\in B^{\prime+}}\bigwedge_{n\in\mathbb{N}^{+}}|y-x|\leq\frac{1}{n}b\},\] where \(B^{\prime+}:=\{b\in B^{\prime}:b>0\}\). Let \(A:=\bigcup\{B^{\prime}\subseteq B:F\subseteq E_{B^{\prime}}\}\). Then \[F\subseteq\bigcap\{E_{B^{\prime}}:B^{\prime}\subseteq B\;\;\text{such that}\;\;F\subseteq E_{B^{\prime}}\}=E_{A},\] and, as \(F\subseteq E\upharpoonright_{p}\), we have that \(1\in A\). We will show that either \(F\) is equality, or \(F=E_{A}\). This will clearly complete the proof of the claim (note that if \(A\) does not contain any positive infinitesimals, then \(E_{A}=E\upharpoonright_{p}\)). Suppose \(F\) is not the equality. It remains to show that \(F\supseteq E_{A}\). Case 1: \(A=B\). Pick any distinct \(\alpha,\beta\in p(\mathfrak{C}^{\prime})\) such that \(\alpha F\beta\). Then \[\bigwedge_{a\in A^{+}}|\alpha-\beta|\leq a.\] Consider any \(\alpha^{\prime},\beta^{\prime}\in p(\mathfrak{C}^{\prime})\) with \(\alpha^{\prime}E_{A}\beta^{\prime}\). Then either \(\alpha^{\prime}=\beta^{\prime}\) (and so \(\alpha^{\prime}F\beta^{\prime}\)), or \(\bigwedge_{a\in A^{+}}0<|\beta^{\prime}-\alpha^{\prime}|\leq a\). In the latter case, it remains to show that \(\alpha\beta\equiv_{A}\alpha^{\prime}\beta^{\prime}\) or \(\alpha\beta\equiv_{A}\beta^{\prime}\alpha^{\prime}\) (as then \(\alpha^{\prime}F\beta^{\prime}\), since \(F\) is relatively type-definable over \(A\)). Without loss of generality, \(\beta>\alpha\) and \(\beta^{\prime}>\alpha^{\prime}\); equivalently, \(R_{1}(\alpha,\beta)\) and \(R_{1}(\alpha^{\prime},\beta^{\prime})\) both hold. Since \(\alpha\equiv_{\mathfrak{C}}\alpha^{\prime}\), we can assume that \(\alpha=\alpha^{\prime}\). 
It suffices to show that \[\{0<t-\alpha\leq a:a\in A^{+}\}\] determines a complete type over \(\operatorname{dcl}(A,\alpha)\). By the o-minimality of \((\mathbb{R},+,-,1,\leq)\), this boils down to showing that there is no \(b\in\operatorname{dcl}^{*}(A,\alpha)\) with \(\bigwedge_{a\in A^{+}}\alpha<b\leq\alpha+a\), where \(\operatorname{dcl}^{*}\) is computed in the language \(\{+,-,1,\leq\}\). If there were such a \(b\), then, by q.e. for the theory of divisible ordered abelian groups, it would be of the form \(\gamma+q\alpha\) for some \(\gamma\in A\) and \(q\in\mathbb{Q}\), and we would have \(\bigwedge_{a\in A^{+}}0<\gamma+(q-1)\alpha\leq a\). If \(q=1\), we get \(0<\gamma<\gamma\), a contradiction. If \(q\neq 1\), we get that \(\alpha\) is related to an element of \(A\), which contradicts the fact that \(\alpha\in p(\mathfrak{C}^{\prime})\). Case 2: \(A\subsetneq B\). Then we can find \(b\in B^{+}\) such that \(\bigwedge_{a\in A^{+}}b<a\). Consider any such \(b\). Then, \(F(x,y)\) holds for some \(x,y\in p(\mathfrak{C}^{\prime})\) such that \(y-x>\frac{1}{n}b\) for some \(n\in\mathbb{N}^{+}\). Indeed, otherwise \(F\subseteq E_{A\cup\{b\}}\subsetneq E_{A}\), which contradicts the minimality of \(E_{A}\) (note that \(E_{A\cup\{b\}}\subsetneq E_{A}\) is witnessed by \((c,c+b)\) for any \(c\in p(\mathfrak{C}^{\prime})\), as \(A\) is closed under taking fractions). Let \(\sigma\in\operatorname{Aut}(\mathfrak{C}^{\prime}/\mathfrak{C})\) be such that \(\sigma(x)=y=:y_{1}\). We get the sequence \(y_{k}:=\sigma^{k}(x)\), \(k\in\mathbb{N}^{+}\). We easily conclude that \(F(x,y_{k})\) and \(y_{k}-x>\frac{k}{n}b\) for all \(k\); in particular, \(y_{n}-x>b\). By compactness (or rather \(|\mathfrak{C}|^{+}\)-saturation of \(\mathfrak{C}^{\prime}\)), there exist \(x^{\prime},y^{\prime}\in p(\mathfrak{C}^{\prime})\) such that \(F(x^{\prime},y^{\prime})\) and: 1. \(\bigwedge_{a\in A^{+}}0<y^{\prime}-x^{\prime}<a\); 2. \(\bigwedge_{b\in B^{+}}(\text{if}\;\;\bigwedge_{a\in A^{+}}b<a,\;\;\text{then}\;\;b<y^{\prime}-x^{\prime})\). We will check now that whenever \(x^{\prime\prime},y^{\prime\prime}\in p(\mathfrak{C}^{\prime})\) satisfy (1) and (2), then \(x^{\prime}y^{\prime}\equiv_{B}x^{\prime\prime}y^{\prime\prime}\). For that, without loss of generality, we can assume that \(x^{\prime}=x^{\prime\prime}\). It remains to show that the partial type \[\pi(t):=\{0<t-x^{\prime}<a:a\in A^{+}\}\cup\{b<t-x^{\prime}:b\in B^{+}\;\;\text{such that}\;\;\bigwedge_{a\in A^{+}}b<a\}\] determines a complete type over \(\operatorname{dcl}(B,x^{\prime})\). By the o-minimality of \((\mathbb{R},+,-,1,\leq)\), this boils down to showing that there is no \(c\in\operatorname{dcl}^{*}(B,x^{\prime})\) realizing \(\pi(t)\), where \(\operatorname{dcl}^{*}\) is computed in the language \(\{+,-,1,\leq\}\). If there were such a \(c\), then, by q.e. for the theory of divisible ordered abelian groups, it would be of the form \(\beta+qx^{\prime}\) for some \(\beta\in B\) and \(q\in\mathbb{Q}\), so \[0<\beta+(q-1)x^{\prime}<A^{+}\ \text{ and }\ \bigwedge_{b\in B^{+}}(\text{if }\,b<A^{+},\ \text{ then }\,b<\beta+(q-1)x^{\prime}).\] If \(q=1\), then \(0<\beta<A^{+}\), so \(\beta<\beta\), a contradiction. If \(q\neq 1\), then \(x^{\prime}\) is related to an element of \(B\), which contradicts the fact that \(x^{\prime}\in p(\mathfrak{C}^{\prime})\). Finally, consider any \((\alpha,\beta)\in E_{A}\), say with \(\beta>\alpha\). 
Applying \(\sigma\in\operatorname{Aut}(\mathfrak{C}^{\prime}/\mathfrak{C})\) mapping \(y^{\prime}\) to \(\alpha\), we obtain \(\gamma:=\sigma(x^{\prime})\) such that the pair \((\gamma,\alpha)\) satisfies (1) and (2). Since \(0<\beta-\alpha<A^{+}\) and \(A\) is closed under taking fractions, the pair \((\gamma,\beta)\) also satisfies (1) and (2). Therefore, \(\gamma\alpha\equiv_{B}x^{\prime}y^{\prime}\equiv_{B}\gamma\beta\). As \(x^{\prime}Fy^{\prime}\), we conclude that \(\alpha F\beta\), which completes the proof of the claim. The above two claims easily imply the classification given in the theorem. **Corollary 4.8**.: _The equivalence relation \(E\!\upharpoonright_{p}\) is the finest equivalence relation on \(p(\mathfrak{C}^{\prime})\) relatively type-definable over a small set of parameters of \(\mathfrak{C}\) and with stable quotient, that is \(E^{st}=E\!\upharpoonright_{p}\)_ Proof.: The quotient \(p(\mathfrak{C}^{\prime})/E\!\upharpoonright_{p}\) is stable by [13, Proposition 4.9]. Let \(F\) be a relatively type-definable (over a small subset of \(\mathfrak{C}\)) equivalence relation on \(p(\mathfrak{C}^{\prime})\) strictly finer than \(E\!\upharpoonright_{p}\). By Theorem 4.7, \(F=E_{A}\) for some non-empty small set \(A\) of positive infinitesimals in \(\mathfrak{C}\). One easily concludes that for every \(a,d,e\in p(\mathfrak{C}^{\prime})\) and infinitesimal \(c\in\mathfrak{C}^{\prime}\) bigger than all infinitesimals in \(\mathfrak{C}\) such that \(dFa\) and \(eF(a+c)\) we have \(d<e\), and so \(\neg R_{1}(e,d)\). Take any \(a\in p(\mathfrak{C}^{\prime})\) and infinitesimal \(c\in\mathfrak{C}^{\prime}\) bigger than all infinitesimals in \(\mathfrak{C}\). Using Ramsey's theorem and compactness, we extract a \(\mathfrak{C}\)-indiscernible sequence \((a^{\prime}_{i})_{i<\omega}\) from the sequence \((a+kc)_{k<\omega}\). Then, the sequence \(([a^{\prime}_{i}]_{F})_{i<\omega}\) is \(\mathfrak{C}\)-indiscernible but not totally \(\mathfrak{C}\)-indiscernible, since the formula \(R_{1}(x,y)\) witnesses that \[\operatorname{tp}([a^{\prime}_{i}]_{F},[a^{\prime}_{i+1}]_{F}/\mathfrak{C}) \neq\operatorname{tp}([a^{\prime}_{i+1}]_{F},[a^{\prime}_{i}]_{F}/\mathfrak{C}).\] Thus, \(p(\mathfrak{C}^{\prime})/F\) is unstable by virtue of [13, Theorem 2.10].
2305.06148
A semi-automatic method for document classification in the shipping industry
In the shipping industry, document classification plays a crucial role in ensuring that the necessary documents are properly identified and processed for customs clearance. OCR technology is being used to automate the process of document classification, which involves identifying important documents such as Commercial Invoices, Packing Lists, Export/Import Customs Declarations, Bills of Lading, Sea Waybills, Certificates, Air or Rail Waybills, Arrival Notices, Certificate of Origin, Importer Security Filings, and Letters of Credit. By using OCR technology, the shipping industry can improve accuracy and efficiency in document classification and streamline the customs clearance process. The aim of this study is to build a robust document classification system based on keyword frequencies. The research is carried out by analyzing Contract-Breach law documents available with IN-D. The documents were collected by scraping the Singapore Government Judiciary website. The database developed has 250 Contract-Breach documents. These documents are split to generate 200 training documents and 50 test documents. A semi-automatic approach is used to select keyword vectors for document classification. The accuracy of the reported model is 92.00 %.
Narayanan Arvind
2023-03-29T10:00:43Z
http://arxiv.org/abs/2305.06148v1
# A Semi-Automatic Method for Document Classification in the Shipping Industry ###### Abstract In the shipping industry, document classification plays a crucial role in ensuring that the necessary documents are properly identified and processed for customs clearance. OCR technology is being used to automate the process of document classification, which involves identifying important documents such as Commercial Invoices, Packing Lists, Export/Import Customs Declarations, Bills of Lading, Sea Waybills, Certificates, Air or Rail Waybills, Arrival Notices, Certificate of Origin, Importer Security Filings, and Letters of Credit. By using OCR technology, the shipping industry can improve accuracy and efficiency in document classification and streamline the customs clearance process. The aim of this study is to build a robust document classification system based on keyword frequencies. The research is carried out by analyzing "Contract-Breach" law documents available with IN-D. The documents were collected by scraping the Singapore Government Judiciary website. The database developed has 250 "Contract-Breach" documents. These documents are split to generate 200 training documents and 50 test documents. A semi-automatic approach is used to select keyword vectors for document classification. The accuracy of the reported model is 92.00 %. **Keywords:** Document classification, Maritime industry, Shipping industry, Shipping documents, Customs clearance, Computer vision, Python, optical character recognition, OCR ## 1 Introduction Document classification is an essential task in the shipping industry as it involves processing and managing vast amounts of information related to shipping operations. To ensure compliance with regulations and to enhance efficiency, shipping companies need to classify various documents such as bills of lading, cargo manifests, and customs declarations accurately [1]. With the advent of digitalization, these documents are increasingly being submitted in a digital format, which necessitates the use of automated techniques for document classification.
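The text describes the keyword-frequency approach only at this level of detail. As a rough, illustrative sketch of a classifier of the kind described, the following Python snippet scores a document against per-class keyword lists and picks the best-scoring class; the class names, keyword lists, and threshold here are hypothetical placeholders and are not taken from the study.

```python
import re
from collections import Counter

# Hypothetical keyword vectors (illustrative only; the study selects its
# keyword vectors semi-automatically from the training documents).
CLASS_KEYWORDS = {
    "contract_breach": ["contract", "breach", "damages", "plaintiff", "defendant"],
    "bill_of_lading": ["bill of lading", "carrier", "consignee", "shipper", "vessel"],
    "commercial_invoice": ["invoice", "unit price", "total amount", "buyer", "seller"],
}


def keyword_frequencies(text: str, keywords: list[str]) -> Counter:
    """Count how often each keyword occurs in the (lower-cased) OCR text."""
    text = text.lower()
    return Counter({kw: len(re.findall(re.escape(kw), text)) for kw in keywords})


def classify(text: str, min_score: int = 1) -> str:
    """Return the class whose keyword list matches the document most often."""
    scores = {
        label: sum(keyword_frequencies(text, kws).values())
        for label, kws in CLASS_KEYWORDS.items()
    }
    best_label, best_score = max(scores.items(), key=lambda item: item[1])
    return best_label if best_score >= min_score else "unknown"


if __name__ == "__main__":
    sample = "The plaintiff claims damages for breach of contract against the defendant."
    print(classify(sample))  # -> contract_breach
```

In the semi-automatic setting described above, the keyword lists would be curated from frequency statistics over the 200 training documents rather than hard-coded as in this sketch.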
2310.02198
Strong Faithfulness for ELH Ontology Embeddings
Ontology embedding methods are powerful approaches to represent and reason over structured knowledge in various domains. One advantage of ontology embeddings over knowledge graph embeddings is their ability to capture and impose an underlying schema to which the model must conform. Despite advances, most current approaches do not guarantee that the resulting embedding respects the axioms the ontology entails. In this work, we formally prove that normalized ${\cal ELH}$ has the strong faithfulness property on convex geometric models, which means that there is an embedding that precisely captures the original ontology. We present a region-based geometric model for embedding normalized ${\cal ELH}$ ontologies into a continuous vector space. To prove strong faithfulness, our construction takes advantage of the fact that normalized ${\cal ELH}$ has a finite canonical model. We first prove the statement assuming (possibly) non-convex regions, allowing us to keep the required dimensions low. Then, we impose convexity on the regions and show the property still holds. Finally, we consider reasoning tasks on geometric models and analyze the complexity in the class of convex geometric models used for proving strong faithfulness.
Victor Lacerda, Ana Ozaki, Ricardo Guimarães
2023-10-03T16:47:35Z
http://arxiv.org/abs/2310.02198v1
# Strong Faithfulness for \(\mathcal{ELH}\) Ontology Embeddings ###### Abstract Ontology embedding methods are powerful approaches to represent and reason over structured knowledge in various domains. One advantage of ontology embeddings over knowledge graph embeddings is their ability to capture and impose an underlying schema to which the model must conform. Despite advances, most current approaches do not guarantee that the resulting embedding respects the axioms the ontology entails. In this work, we formally prove that normalized \(\mathcal{ELH}\) has the strong faithfulness property on convex geometric models, which means that there is an embedding that precisely captures the original ontology. We present a region-based geometric model for embedding normalized \(\mathcal{ELH}\) ontologies into a continuous vector space. To prove strong faithfulness, our construction takes advantage of the fact that normalized \(\mathcal{ELH}\) has a finite canonical model. We first prove the statement assuming (possibly) non-convex regions, allowing us to keep the required dimensions low. Then, we impose convexity on the regions and show the property still holds. Finally, we consider reasoning tasks on geometric models and analyze the complexity in the class of convex geometric models used for proving strong faithfulness. **Keywords:** Geometric Models, Description Logic, Knowledge Graph Embeddings, Faithfulness ## 1 Introduction Knowledge Graphs (KGs) are a popular method for representing knowledge using triples of the form (subject, predicate, object), called _facts_. Although public KGs, such as Wikidata (Vrandecic and Krotzsch, 2014), contain a large number of facts, they are incomplete. This has sparked interest in using machine learning methods to suggest plausible facts to add to the KG based on patterns found in the data. Such methods are based on knowledge graph embedding (KGE) techniques, which aim to create representations of KGs in vector spaces, where a notion of similarity between individuals applies. Many attempts have been made to learn representations of knowledge graphs for use in downstream tasks (Dai et al., 2020). These methods have traditionally focused only on embedding triples (facts), ignoring the knowledge about relations in general, possibly combined with logical operators (we may write _concepts_ for unary relations and _roles_ for binary relations). The latter corresponds to the "_TBox_ part" of the knowledge, which is a quite established notion in the fields of Description Logic and Semantic Web (Baader et al., 2017; Hitzler et al., 2009). Embeddings that consider TBoxes are a more recent phenomenon (see Section 2), we refer to them as _ontology embeddings_, where the _ontology_ can have both facts and a TBox. Ontology embeddings offer advantages over traditional KGEs as they exploit the semantic relationships between concepts and roles. This enables ontology embeddings to better capture rich and nuanced relationships between concepts, making them good candidates for tasks requiring fine-grained reasoning, such as hierarchical reasoning and logical inference. One question that arises is how similar to the source ontology these embeddings are, and, more strictly, whether the generated embeddings are _guaranteed_ to precisely represent the meaning of the source ontology and its entailments (of particular interest, the TBox entailments). This property is called the _strong_ model faithfulness property (Ozcep et al., 2020). 
So far, no previous work for ontology embeddings for fragments of \(\mathcal{EL}^{++}\) has attempted to prove this property holds for their embedding method, nor has its existence been formally proven for the \(\mathcal{ELH}\) language. Additionally, in the literature, no work has tried to tackle the full problem of embedding role inclusions. Only _EmEL_ (Mondal et al., 2021) has acknowledged the issue, but in fact only role equivalence is included in their framework (not strict role inclusions). _Contribution_ We investigate whether \(\mathcal{ELH}\) has the strong faithfulness property over convex geometric models. We first prove the statement for embeddings in low dimensions, considering a region-based representation for (possibly) non-convex regions (Section 4). Also, we prove that the same property does not hold when we consider convex regions and only 1 dimension. We then investigate strong faithfulness on convex geometric models with more dimensions (Section 5). We do so including embeddings for role inclusions, a problem that has not been well studied in the \(\mathcal{ELH}\) ontology embedding literature. Additionally, we consider model checking in convex geometric models (Section 6). ## 2 Ontology Embeddings Various methods for embedding ontologies have been proposed, with fragments of \(\mathcal{EL}^{++}\) being their primary targets. \(\mathcal{EL}^{++}\) is a simple yet powerful language. These embedding methods are _region-based_, that is, they map concepts to regions and entities to vectors (in some cases, entities are transformed into nominals and also embedded as regions), and represent roles using translations or regions within the vector space. The precise shape of the embedding regions varies depending on the method. In _EmEL_ (Mondal et al., 2021) and _ELem_ (Kulmanov et al., 2019), the embeddings map concepts to \(n\)-dimensional _balls_. One disadvantage of this approach is that the intersection between two balls is not itself a ball. Newer approaches addressing this issue such as _BoxEL_, _Box2EL_, and _ELBE_ (Xiong et al., 2022; Jackermeier et al., 2023; Peng et al., 2022), starting with _BoxE_ (Abboud et al., 2020), represent concepts as \(n\)-dimensional _boxes_, in some cases using so-called "translational bumps" to capture relations between entities. Another language, \(\mathcal{ALC}\), has been studied under a _cone semantics_ (Ozcep et al., 2020), which uses _axis-aligned cones_ as its geometric interpretation. In the context of KGEs, \(n\)-dimensional _parallelograms_ have also been used in _ExpressivE_ (Pavlovic and Sallinger, 2023). Other approaches for accommodating TBox axioms in the embeddings have also been considered. Approaching the problem from a different direction, _OWL2Vec*_ (Chen et al., 2021) targets the DL language underlying OWL and does not rely on regions, but uses the NLP algorithm _word2vec_ to include lexical information (such as annotations) along with the graph structure of an OWL ontology. Another framework, _TransOWL_ (d'Amato et al., 2021), uses background knowledge injection to improve link prediction for models such as _TransE_ and _TransR_. Although expressively powerful and well performing in tasks such as subsumption checking and link prediction (deductive reasoning has been understudied), the generated embeddings often lack formal guarantees with respect to the source ontology. 
In the KGE literature, it is a well known that, e.g., _TransE_(Bordes et al., 2013) is unable to model one-to-many relations (a difficulty present even in recent ontology embedding methods such as _BoxEL_) or symmetric relations. This has spurted a quest for more expressive models, with the intention of capturing an increasing list of relation types and properties such as composition, intersection, hierarchy of relations, among others (Lin et al., 2015; Yang et al., 2015; Trouillon et al., 2016; Pavlovic and Sallinger, 2023). Expressivity is a key notion in ontology embedding methods, which often also feature these relation types and potentially other forms of constraints. For example, in _Box2EL_, _ELem_, and _ELBE_(Jackermeier et al., 2023; Kulmanov et al., 2019; Peng et al., 2022), axioms of the form \(\exists r.C\sqsubseteq\bot\) are only approximated by \(\exists r.\top\sqsubseteq\bot\). This means that strong TBox faithfulness is not respected. Moreover, only _EmEL_(Mondal et al., 2021) includes embeddings for role inclusions, with severe restrictions, namely that \(r\sqsubseteq s\) also enforces \(s\sqsubseteq r\), so it is not strong faithful. ## 3 Basic Notions _The Description Logic \(\mathcal{ELH}\)_ Let \(N_{C}\), \(N_{R}\), and \(N_{I}\) be countably infinite and pairwise disjoint sets of _concept names_, _role names_, and _individual names_, respectively. \(\mathcal{ELH}\)_concepts_\(C,D\) are built according to the syntax rule \[C,D::=\top\mid\bot\mid A\ \mid(C\sqcap D)\mid\exists r.C\] where \(A\in N_{C}\) and \(r\in N_{R}\). \(\mathcal{ELH}\)_concept inclusions_ (CIs) are of the form \(C\sqsubseteq D\), _role inclusions_ (RIs) are of the form \(r\sqsubseteq s\), \(\mathcal{ELH}\)_concept assertions_ are of the form \(A(a)\) and _role assertions_ are of the form \(r(a,b)\), where \(A\in N_{C}\), \(a,b\in N_{I}\), \(r,s\in N_{R}\), and \(C\), \(D\) range over \(\mathcal{ELH}\) concepts. _Instance queries_ (IQs) are role assertions or of the form \(C(a)\), with \(C\) being an arbitrary \(\mathcal{ELH}\) concept. An \(\mathcal{ELH}\)_axiom_ is an \(\mathcal{ELH}\) CI, an RI, or an IQ. A _normalized \(\mathcal{ELH}\)_ TBox is one that only contains CIs of the following forms: \(A_{1}\sqcap A_{2}\sqsubseteq B\), \(\exists r.A\sqsubseteq B\), and \(A\sqsubseteq\exists r.B\) where \(A_{1},A_{2},A,B\in N_{C}\) and \(r\in N_{R}\). We say that an \(\mathcal{ELH}\) concept is in _normal form_ if it is of the form \(A\), \(\exists r.A\), or \(A\sqcap B\), with \(A,B\in N_{C}\) and \(r\in N_{R}\). Similarly, an \(\mathcal{ELH}\) ontology is in _normal form_ if its TBox part is a normalized \(\mathcal{ELH}\) TBox. An IQ is in _normal form_ if it is a role assertion or of the form \(C(a)\) with \(C\) being a concept in normal form. The semantics of \(\mathcal{ELH}\) is defined classically by means of _interpretations_\(\mathcal{I}=(\Delta^{\mathcal{I}},{}^{\mathcal{I}})\), where \(\Delta^{\mathcal{I}}\) is a non-empty countable set called the _interpretation domain_, and \({}^{\mathcal{I}}\) is an _interpretation function_ mapping each concept name \(A\) in \(N_{C}\) to a subset \(A^{\mathcal{I}}\) of \(\Delta^{\mathcal{I}}\), each role name \(r\) in \(N_{R}\) to a binary relation \(r^{\mathcal{I}}\subseteq\Delta^{\mathcal{I}}\times\Delta^{\mathcal{I}}\), and each individual name \(a\) in \(N_{I}\) to an element \(a^{\mathcal{I}}\in\Delta^{\mathcal{I}}\). 
We extend the function \({}^{\mathcal{I}}\) inductively to arbitrary concepts by setting \(\top^{\mathcal{I}}:=\Delta^{\mathcal{I}}\), \(\bot^{\mathcal{I}}:=\emptyset\), and \[(C\sqcap D)^{\mathcal{I}} :=C^{\mathcal{I}}\cap D^{\mathcal{I}},\text{ and }\] \[(\exists r.C)^{\mathcal{I}} :=\{d\in\Delta^{\mathcal{I}}\mid\exists e\in C^{\mathcal{I}}\text{ such that }(d,e)\in r^{\mathcal{I}}\}.\] An interpretation \(\mathcal{I}\)_satisfies_: (1) \(C\sqsubseteq D\) iff \(C^{\mathcal{I}}\subseteq D^{\mathcal{I}}\); (2) \(C(a)\) iff \(a^{\mathcal{I}}\in C^{\mathcal{I}}\); (3) \(r(a,b)\) iff \((a^{\mathcal{I}},b^{\mathcal{I}})\in r^{\mathcal{I}}\). An \(\mathcal{ELH}\)_TBox_\(\mathcal{T}\) (Termological Box) is a finite number of \(\mathcal{ELH}\) concept and role inclusions. An \(\mathcal{ELH}\)_ABox_\(\mathcal{A}\) is a finite number of \(\mathcal{ELH}\) concept and role assertions. The union of a TBox and an ABox forms an \(\mathcal{ELH}\) ontology. An \(\mathcal{ELH}\) ontology \(\mathcal{O}\)_entails_ an \(\mathcal{ELH}\) axiom \(\alpha\), in symbols \(\mathcal{O}\models\alpha\) if for every interpretation \(\mathcal{I}\), we have that \(\mathcal{I}\models\mathcal{O}\) implies \(\mathcal{I}\models\alpha\) (we may write similarly for the CI and RI entailments of a TBox). We denote by \(N_{C}(\mathcal{O}),N_{R}(\mathcal{O}),N_{I}(\mathcal{O})\) the set of concept names, role names, and individual names occurring in an ontology \(\mathcal{O}\). We may also write \(N_{I}(\mathcal{A})\) for the set of individual names occurring in an ABox \(\mathcal{A}\). The _signature_ of an ontology \(\mathcal{O}\), denoted \(\mathsf{sig}(\mathcal{O})\), is the union of \(N_{C}(\mathcal{O}),N_{R}(\mathcal{O})\), and \(N_{I}(\mathcal{O})\). _Geometric models_ We go from the traditional model-theoretic interpretation of the \(\mathcal{ELH}\) language to geometric interpretations, using definitions from previous works by Gutierrez-Basulto and Schockaert (2018) and Bourgaux et al. (2021). Let \(m\) be a natural number and \(f\colon\mathbb{R}^{m}\times\mathbb{R}^{m}\mapsto\mathbb{R}^{2\cdot m}\) a fixed but arbitrary linear map satisfying the following: 1. the restriction of \(f\) to \(\mathbb{R}^{m}\times\{0\}^{m}\) is injective; 2. the restriction of \(f\) to \(\{0\}^{m}\times\mathbb{R}^{m}\) is injective; 3. \(f(\mathbb{R}^{m}\times\{0\}^{m})\cap f(\{0\}^{m}\times\mathbb{R}^{m})=\{0^{2 \cdot m}\}\); where \(0^{m}\) denotes the vector \((0,...,0)\) with \(m\) zeros. For instance, the concatenation function is a linear map that satisfies Points 1, 2, and 3. We say that a linear map that satisfies Points 1, 2, and 3 is an _isomorphism preserving linear map_. **Definition 1** (Geometric Interpretation).: _Let \(f\) be an isomorphism preserving linear map and \(m\) a natural number. An \(m\)-dimensional \(f\)-geometric interpretation \(\eta\) of \((N_{C},N_{R},N_{I})\) assigns to each_ * \(A\in N_{C}\) _a region_ \(\eta(A)\subseteq\mathbb{R}^{m}\)__ * \(r\in N_{R}\) _a region_ \(\eta(r)\subseteq\mathbb{R}^{2\cdot m}\)_, and_ * \(a\in N_{I}\) _a vector_ \(\eta(a)\in\mathbb{R}^{m}\)_._ _We now extend the definition for arbitrary \(\mathcal{ELH}\) concepts:_ \[\eta(\bot) :=\emptyset\] \[\eta(\top) :=\mathbb{R}^{m},\] \[\eta(C\sqcap D) :=\eta(C)\cap\eta(D)\text{, and}\] \[\eta(\exists r.C) :=\{v\in\mathbb{R}^{m}\mid\exists u\in\eta(C)\text{ with }f(v,u)\in\eta(r)\}.\] _Intuitively, the function \(f\) combines two vectors that represent a pair of elements in a classical interpretation relation. 
An \(m\)-dimensional \(f\)-geometric interpretation \(\eta\) satisfies_ * _an_ \(\mathcal{ELH}\) _concept assertion_ \(A(a)\)_, if_ \(\eta(a)\in\eta(A)\)_,_ * _a role assertion_ \(r(a,b)\)_, if_ \(f(\eta(a),\eta(b))\in\eta(r)\)_,_ * _an_ \(\mathcal{ELH}\) _IQ_ \(C(a)\)_, if_ \(\eta(a)\in\eta(C)\)_,_ * _an_ \(\mathcal{ELH}\) _CI_ \(C\sqsubseteq D\)_, if_ \(\eta(C)\subseteq\eta(D)\)_, and_ * _an_ \(\text{RI}\ r\sqsubseteq s\)_, if_ \(\eta(r)\subseteq\eta(s)\)_._ _We write \(\eta\models\alpha\) if \(\eta\) satisfies an \(\mathcal{ELH}\) axiom \(\alpha\). When speaking of \(m\)-dimensional \(f\)-geometric interpretations, we may omit \(m\)-dimensional and \(f\)-, as well as use the term "model" instead of "interpretation". A geometric interpretation satisfies an ontology \(\mathcal{O}\), in symbols \(\eta\models\mathcal{O}\), if it satisfies all axioms in \(\mathcal{O}\). We say that a geometric interpretation is finite if the regions associated with concept and role names have a finite number of vectors and we only need to consider a finite number of individual names, which is the case when considering the individuals that occur in an ontology._ Motivated by the theory of conceptual spaces and findings on cognitive science (Gardenfors, 2000; Zenker and Gardenfors, 2015), and also by previous work on ontology embeddings for quasi-chained rules (Gutierrez-Basulto and Schockaert, 2018), we consider convexity as an interesting restriction for the regions associated with concepts and relations in a geometric model. **Definition 2**.: _A geometric interpretation \(\eta\) is convex if, for every \(E\in N_{C}\cup N_{R}\), every \(v_{1},v_{2}\in\eta(E)\) and every \(\lambda\in[0,1]\), if \(v_{1},v_{2}\in\eta(E)\) then \((1-\lambda)v_{1}+\lambda v_{2}\in\eta(E)\)._ **Definition 3**.: _Let \(S=\{v_{1},\ldots,v_{m}\}\subseteq\mathbb{R}^{d}\). A vector \(v\) is in the convex hull \(S^{*}\) of \(S\) iff there exist \(v_{1},\ldots,v_{n}\in S\) and scalars \(\lambda_{1},\lambda_{2},...,\lambda_{n}\in\mathbb{R}\) such that_ \[v=\sum_{i=1}^{n}\lambda_{i}v_{i}=\lambda_{1}v_{1}+\lambda_{2}v_{2}+...+\lambda _{n}v_{n},\] _where \(\lambda_{i}\geq 0\), for \(i=1,\ldots,n\), and \(\sum_{i=1}^{n}\lambda_{i}=1\)._ Apropos of convexity, we highlight and prove some of its properties used later in our results. **Proposition 1**.: _For finite \(S_{1},S_{2}\subseteq\mathbb{R}^{d}\), where \(d\) is an arbitrary dimension, we have that \(S_{1}\subseteq S_{2}\) implies \(S_{1}^{*}\subseteq S_{2}^{*}\)._ In the following, whenever we say a vector is _binary_, we mean that its values in each dimension can only be \(0\) or \(1\). **Theorem 2**.: _Let \(S\subseteq\{0,1\}^{d}\) where \(d\) is an arbitrary dimension. For any \(n\in\mathbb{N}\), for any \(v=\sum_{i=1}^{n}\lambda_{i}v_{i}\), such that \(v_{i}\in S\), if \(v\in S^{*}\setminus S\) then \(v\) is non-binary._ **Corollary 3**.: _If \(v\) is binary and \(v\in S^{*}\) then \(v\in S\)._ Finally, we define model faithfulness based on the work by Ozcep et al. (2020). **Definition 4** (Faithfulness).: _Let \(\mathcal{O}\) be a satisfiable ontology (or any other representation allowing the distinction between IQs and TBox axioms). 
Given an \(m\)-dimensional \(f\)-geometric interpretation \(\eta\), we say that:_ * \(\eta\) _is a_ strongly concept-faithful model _of_ \(\mathcal{O}\) _iff, for every concept_ \(C\) _and individual name_ \(b\)_, if_ \(\eta(b)\in\eta(C)\) _then_ \(\mathcal{O}\models C(b)\)_;_ * \(\eta\) _is a_ strongly IQ-faithful model _of_ \(\mathcal{O}\) _iff it is strongly concept-faithful and for each role_ \(r\) _and individual names_ \(a,b\)_: if_ \(f(\eta(a),\eta(b))\in\eta(r)\)_, then_ \(\mathcal{O}\models r(a,b)\)_;_ * \(\eta\) _is a_ strongly TBox-faithful model _of_ \(\mathcal{O}\) _iff for all TBox axioms_ \(\tau\)_: if_ \(\eta\models\tau\)_, then_ \(\mathcal{O}\models\tau\)_._ _We say that an ontology language has the strong faithfulness property over a class of geometric interpretations \(\mathcal{C}\) if for every satisfiable ontology \(\mathcal{O}\) in this language there is a geometric interpretation in \(\mathcal{C}\) that is both a strongly IQ-faithful and a strongly TBox-faithful model of \(\mathcal{O}\)._ The range of concepts, roles, and individual names in Definition 4 varies depending on the language and setting studied. We omit the notion of weak faithfulness by Ozcep et al. (2020) as it does not apply for \(\mathcal{ELH}\) since ontologies in this language are always satisfiable (there is no negation). Note that the "if-then" statements in Definition 4 become "if and only if" when \(\eta\) satisfies the ontology. Intuitively, model faithfulness expresses how similar is the embedding with respect to the original ontology. While strong faithfulness for the TBox is easily justifiable, this seems counter-intuitive for IQs, since embeddings are often intended for KG completion. ## 4 Strong Faithfulness In this section we prove initial results about faithfulness for \(\mathcal{ELH}\). In particular, we prove that \(\mathcal{ELH}\) has the strong faithfulness property over \(m\)-dimensional \(f\)-geometric interpretations for any \(m\geq 1\) but this is not the case if we require that regions in the geometric interpretations are convex. We first introduce a mapping from classical interpretation to (possibly) non-convex geometric interpretations and then use it with the notion of canonical model to establish strong faithfulness for \(\mathcal{ELH}\). Definition 5: Let \(\mathcal{I}=(\Delta^{\mathcal{I}},{}^{\mathcal{I}})\) be a classical \(\mathcal{ELH}\) interpretation, and we assume without loss of generality, since \(\Delta^{\mathcal{I}}\) is non-empty and countable, that \(\Delta^{\mathcal{I}}\) is a (possibly infinite) interval in \(\mathbb{N}\) starting on \(0\). Let \(\bar{\mu}\colon\Delta^{\mathcal{I}}\mapsto\mathbb{R}^{1}\) be a mapping from our classical interpretation domain to a vector space where: \[\bar{\mu}(d)=\begin{cases}(-\infty,-d]\cup[d,\infty),&\text{if $\Delta^{ \mathcal{I}}$ is finite and $d=max(\Delta^{\mathcal{I}})$},\\ (-d-1,-d]\cup[d,d+1),&\text{otherwise.}\end{cases}\] where \(d\in\mathbb{N}\) and \((-d-1,-d]\) and \([d,d+1)\) are intervals over \(\mathbb{R}^{1}\), closed on \(d\) and \(-d\), and open on \(d+1\) and \(-d-1\). Remark 1: For any interpretation \(\mathcal{I}\), \(\bar{\mu}\) covers the real line, that is, \(\bigcup_{d\in\Delta^{\mathcal{I}}}\bar{\mu}(d)=\mathbb{R}^{1}\). Definition 6: We call \(\bar{\eta}_{\mathcal{I}}\) the _geometric interpretation_ of \(\mathcal{I}\) and define it as follows. Let \(\mathcal{I}\) be a classical \(\mathcal{ELH}\) interpretation. 
The _geometric interpretation_ of \(\mathcal{I}\), denoted \(\bar{\eta}_{\mathcal{I}}\), is defined as: \[\bar{\eta}_{\mathcal{I}}(a) :=d\text{, such that $d=a^{\mathcal{I}}$, for all $a\in N_{I}$},\] \[\bar{\eta}_{\mathcal{I}}(A) :=\{v\in\bar{\mu}(d)\mid d\in A^{\mathcal{I}}\}\text{, for all $A\in N_{C}$},\text{ and}\] \[\bar{\eta}_{\mathcal{I}}(r) :=\{f(v,e)\mid v\in\bar{\mu}(d)\text{ for $(d,e)\in r^{\mathcal{I}}$}\}\text{, for all $r\in N_{R}$}.\] In Figure 1, we illustrate with an example the mapping in Definition 6. We now show that for (possibly) non-convex geometric models, a classical interpretation \(\mathcal{I}\) models arbitrary IQs and arbitrary TBox axioms if and only if their geometrical interpretation \(\bar{\eta}_{\mathcal{I}}\) also models them. Theorem 4.1: _For all \(\mathcal{ELH}\) axioms \(\alpha\), \(\mathcal{I}\models\alpha\) iff \(\bar{\eta}_{\mathcal{I}}\models\alpha\)._ We now provide a definition of canonical model for \(\mathcal{ELH}\) ontologies inspired by a standard chase procedure. In our definition, we use a _tree shaped interpretation_\(\mathcal{I}_{D}\) of an \(\mathcal{ELH}\) concept \(D\), with the root denoted \(\rho_{D}\). This is defined inductively. For \(D\) a concept name \(A\in N_{C}\) we define \(\mathcal{I}_{A}\) as the interpretation with \(\Delta^{\mathcal{I}_{A}}:=\{\rho_{A}\}\), \(A^{\mathcal{I}_{A}}:=\{\rho_{A}\}\), and all other concept and role names interpreted as the empty set. For \(D=\exists r.C\), we define \(\mathcal{I}_{D}\) as the interpretation with \(\Delta^{\mathcal{I}_{D}}:=\{\rho_{D}\}\cup\Delta^{\mathcal{I}_{C}}\), all concept and role name interpretations are as for \(\mathcal{I}_{C}\) except that we add \((\rho_{D},\rho_{C})\) to \(r^{\mathcal{I}_{D}}\) and assume \(\rho_{D}\) is fresh (i.e., it is not in \(\Delta^{\mathcal{I}_{C}}\)). Finally, for \(D=C_{1}\cap C_{2}\) we define \(\Delta^{\mathcal{I}_{D}}:=\Delta^{\mathcal{I}_{C_{1}}}\cup(\Delta^{\mathcal{ I}_{C_{2}}}\setminus\{\rho_{C_{2}}\})\), assuming \(\Delta^{\mathcal{I}_{C_{1}}}\) and \(\Delta^{\mathcal{I}_{C_{2}}}\) are disjoint, and with all concept and role name interpretations as in \(\mathcal{I}_{C_{1}}\) and \(\mathcal{I}_{C_{2}}\), except that we connect \(\rho_{C_{1}}\) with the elements of \(\Delta^{\mathcal{I}_{C_{2}}}\) in the same way as \(\rho_{C_{2}}\) is connected. In other words, we _identify \(\rho_{C_{1}}\)_ with the root \(\rho_{C_{2}}\) of \(\mathcal{I}_{D_{2}}\). Definition 7: The canonical model \(\mathcal{I}_{\mathcal{O}}\) of a satisfiable \(\mathcal{ELH}\) ontology \(\mathcal{O}\) is defined as the union of a sequence of interpretations \(\mathcal{I}_{0},\mathcal{I}_{1},\ldots\), where \(\mathcal{I}_{0}\) is defined as: \[\Delta^{\mathcal{I}_{0}} :=\{a\mid a\in N_{I}(\mathcal{A})\},\] \[A^{\mathcal{I}_{0}} :=\{a\mid A(a)\in\mathcal{A}\}\text{ for all }A\in N_{C},\text{ and }\] \[r^{\mathcal{I}_{0}} :=\{(a,b)\mid r(a,b)\in\mathcal{A}\},\text{ for all }r\in N_{R}.\] Suppose \(\mathcal{I}_{n}\) is defined. 
We define \(\mathcal{I}_{n+1}\) by choosing a CI or an RI in \(\mathcal{O}\) and applying one of the following rules: * if \(C\sqsubseteq D\in\mathcal{O}\) and \(d\in C^{\mathcal{I}_{n}}\setminus D^{\mathcal{I}_{n}}\) then define \(\mathcal{I}_{n+1}\) as the result of adding to \(\mathcal{I}_{n}\) a copy of the tree shaped interpretation \(\mathcal{I}_{D}\) and identifying \(d\) with the root of \(\mathcal{I}_{D}\) (assume that the elements in \(\Delta^{\mathcal{I}_{D}}\) are fresh, that is, \(\Delta^{\mathcal{I}_{D}}\cap\Delta^{\mathcal{I}_{n}}=\emptyset\)); * if \(r\sqsubseteq s\in\mathcal{O}\) and \((d,e)\in r^{\mathcal{I}_{n}}\setminus s^{\mathcal{I}_{n}}\) then set \(\mathcal{I}_{n+1}\) as the result of adding \((d,e)\) to \(s^{\mathcal{I}_{n}}\). We assume the choice of CIs and RIs and corresponding rule above to be fair, i.e., if a CI or RI applies at a certain place, it will eventually be applied there. Theorem 4.1: _Let \(\mathcal{O}\) be a satisfiable \(\mathcal{ELH}\) ontology and let \(\bar{\mathcal{I}}_{\mathcal{O}}\) be the canonical model of \(\mathcal{O}\) (Definition 7). Then,_ * _for all_ \(\mathcal{ELH}\) _IQs and CIs_ \(\alpha\) _over_ \(\mathsf{sig}(\mathcal{O})\)_,_ \(\bar{\mathcal{I}}_{\mathcal{O}}\models\alpha\) _iff_ \(\mathcal{O}\models\alpha\)_; and_ * _for all RIs_ \(\alpha\) _over_ \(\mathsf{sig}(\mathcal{O})\)_,_ \(\bar{\mathcal{I}}_{\mathcal{O}}\models\alpha\) _iff_ \(\mathcal{O}\models\alpha\)_._ We are now ready to state our theorem combining the results of Theorems 4.1 and 4.2 and the notion of strong faithfulness for IQs and TBox axioms. Theorem 4.2: _Let \(\mathcal{O}\) be a satisfiable \(\mathcal{ELH}\) ontology and let \(\bar{\mathcal{I}}_{\mathcal{O}}\) be the canonical model of \(\mathcal{O}\) (see Definition 7). The \(m\)-dimensional \(f\)-geometric interpretation of \(\bar{\mathcal{I}}_{\mathcal{O}}\)_ _(see Definition 6) is a strongly IQ and TBox faithful model of \(\mathcal{O}\)._ What Theorem 4.2 demonstrates is that the existence of canonical models for \(\mathcal{ELH}\) allows us to connect our result relating classical and geometric interpretations to faithfulness. This property of canonical models is crucial and can potentially be extended to other description logics that also have canonical models (however, many of such logics do not have polynomial size canonical models, a property we use in the next section, so we focus on \(\mathcal{ELH}\) in this work). Corollary 7: _For all \(m\geq 1\) and isomorphism preserving linear maps \(f\), \(\mathcal{ELH}\) has the strong faithfulness property over \(m\)-dimensional \(f\)-geometric interpretations._ However, requiring that the regions of the geometric model are convex makes strong faithfulness more challenging. The next theorem hints that such models require more dimensions and a more principled approach to map \(\mathcal{ELH}\) ontologies in a continuous vector space. **Theorem 8**.: \(\mathcal{ELH}\) _does not have the strong faithfulness property over convex \(1\)-dimensional \(f\)-geometric models._ Proof.: We reason by cases in order to show impossibility of the strong model faithfulness property for the class of _convex_\(1\)-dimensional \(f\)-geometric model for arbitrary \(\mathcal{ELH}\) ontologies. Let \(\mathcal{O}\) be an \(\mathcal{ELH}\) ontology, \(A\), \(B\), \(C\in N_{C}\) concept names, \(a,b\in N_{I}\) individuals, and let \(\eta(A)\), \(\eta(B)\), \(\eta(C)\), \(\eta(a)\), and \(\eta(b)\) be their corresponding geometric interpretations to \(\mathbb{R}^{1}\). 
Assume \(\mathcal{O}\models A\sqcap B(a)\). There are three initial cases on how to choose the interval placement of \(\eta(A)\) and \(\eta(B)\): * **Null intersection:** \(\eta(A)\cap\eta(B)=\emptyset\). If \(\eta(A)\cap\eta(B)=\emptyset\), then either \(\eta(a)\in\eta(A)\) and \(\eta(a)\not\in\eta(B)\), or \(\eta(a)\in\eta(B)\) and \(\eta(a)\not\in\eta(A)\). Recall the definition of satisfiability for concept assertions. Since we assumed \(\mathcal{O}\models A\sqcap B(a)\), we would want our geometric interpretation to be such that \(\eta(a)\in\eta(A)\cap\eta(B)\), a contradiction. * **Total inclusion:** \(\eta(A)\subseteq\eta(B)\) and/or \(\eta(B)\subseteq\eta(A)\). Consider an extension \(\mathcal{O}^{\prime}\) of our ontology where \(\mathcal{O}^{\prime}\models A(c)\) and \(\mathcal{O}^{\prime}\not\models B(c)\). If we let \(\eta(A)\subseteq\eta(B)\), it is clear that our ontology cannot be faithfully modeled, since by our assumption of total inclusion, we would have that \(\eta(c)\in\eta(A)\) and \(\eta(c)\in\eta(B)\), which goes against \(\mathcal{O}^{\prime}\not\models B(c)\). The same holds for total inclusion in the other direction, where \(\eta(B)\subseteq\eta(A)\). Therefore, we go to our last initial case to be considered. * **Partial intersection:** \(\eta(A)\cap\eta(B)\neq\emptyset\) and neither region is contained in the other. This is in fact the only way of faithfully giving a geometric interpretation to our concept assertion \(A\sqcap B(a)\), while still leaving room for ABox axioms such that an arbitrary element could belong to one of our classes \(A\) or \(B\) without necessarily belonging to both of them. Then, \(\eta(A)\cap\eta(B)\neq\emptyset\), \(\eta(A)\not\subseteq\eta(B)\), and \(\eta(B)\not\subseteq\eta(A)\). After having forced the geometric interpretation of our two initial concepts \(A\) and \(B\) to partially intersect, we now show that by adding a third concept \(C\) with \(\mathcal{O}\models A\sqcap B\sqcap C(a)\), either \(\eta(A)\subset\eta(B)\cup\eta(C)\) or \(\eta(B)\subset\eta(A)\cup\eta(C)\), even though this inclusion is not entailed by our original ontology. We are unable to include a concept assertion \(A(a)\in\mathcal{O}\) without also having that \(\eta(a)\in\eta(C)\) in our geometric interpretation, or likewise for the case in which \(B(a)\in\mathcal{O}\). Stemming from the fact that our geometric interpretation must be convex, and it is modeled in a Euclidean \(\mathbb{R}^{1}\) space, we can visualize our classes \(A\), \(B\), and \(C\) as intervals on the real line. Assume, without loss of generality, that \(\eta(A)\) is placed to the left of \(\eta(B)\) (see Fig. 2). Then, \(C\) can only be placed either to the right of \(B\) or to the left of \(A\). By reasoning in the same way as before, we know that \(\eta(C)\) must partially intersect with either \(\eta(A)\) or \(\eta(B)\), so one end of the interval representing \(C\) must be placed in \(\eta(A)\cap\eta(B)\), without us having that either \(\eta(C)\subseteq\eta(A)\), \(\eta(C)\subseteq\eta(B)\), \(\eta(C)\subseteq\eta(A)\cap\eta(B)\), or \(\eta(C)\subseteq\eta(A)\cup\eta(B)\).
This last requirement is due to the fact that we want to be able to have an ontology such that \(\mathcal{O}\models C(a)\) and where \(\mathcal{O}\not\models A(a)\), \(\mathcal{O}\not\models B(a)\), or \(\mathcal{O}\not\models(A\sqcap B)(a)\). Assuming that \(\eta(A)\cap\eta(B)\neq\emptyset\), there are three more cases to be considered: * **\(C\) is in the intersection of \(A\) and \(B\), \(\eta(C)\subseteq\eta(A)\cap\eta(B)\) (Fig. 2 (a)):** If \(\eta(C)\subseteq\eta(A)\cap\eta(B)\), it is immediately clear that by extending \(\mathcal{O}\) such that \(\mathcal{O}\models C(b)\) but \(\mathcal{O}\not\models A(b)\), we would end up with \(\eta(b)\in\eta(C)\). But since we assumed that \(\eta(C)\subseteq\eta(A)\cap\eta(B)\), this means that \(\eta(b)\in\eta(A)\), and therefore our geometric interpretation would model the concept assertion \(A(b)\), a contradiction. * **\(C\) goes from the intersection \(\eta(A)\cap\eta(B)\) to \(\eta(A)\setminus\eta(B)\) (Fig. 2 (b)):** In this situation, we would have \(\eta(C)\subseteq\eta(A)\), and if \(\mathcal{O}\models C(a)\), we would necessarily have that \(\eta(a)\in\eta(C)\), but this means we would also have \(\eta(a)\in\eta(A)\), leading to the unwarranted consequence that our geometric interpretation models \(A(a)\). There is one last case. * **\(C\) is placed in a region such that \(\eta(C)\cap(\eta(A)\cup\eta(B))\neq\emptyset\) and \(\eta(C)\setminus(\eta(A)\cup\eta(B))\neq\emptyset\) (Fig. 2 (c)):** This would mean that \(\eta(B)\subseteq\eta(A)\cup\eta(C)\), and that any concept assertion \(B(a)\) would entail either \(C(a)\) or \(A(a)\) in our geometric interpretation, while it is not necessary that \(\mathcal{O}\models A(a)\) or \(\mathcal{O}\models C(a)\). Since we are in \(\mathbb{R}^{1}\), this placement can happen either to the right or to the left on the number line. By the assumption that \(\eta(A)\) has been placed to the left of \(\eta(B)\), as shown in Fig. 2, we have just shown that placing \(\eta(C)\) to the right of \(\eta(B)\) leads to a contradiction. The same reasoning applies if we choose to place it to the left of \(\eta(A)\). There are no more cases to be considered. Figure 3: The three possible cases when there is an element in the intersection of \(A,B,C\). The problem illustrated in Theorem 8 arises even if the ontology language does not have roles (as it is the case, e.g., of Boolean \(\mathcal{ALC}\), investigated by Ozcep et al. (2020)). It also holds if we restrict to normalized \(\mathcal{ELH}\). We address the problem of mapping normalized \(\mathcal{ELH}\) ontologies to convex geometric models in the next section.
## 5 Strong Faithfulness on Convex Models
We prove that normalized \(\mathcal{ELH}\) has the strong faithfulness property over a class of _convex_ geometric models. We introduce a new mapping \(\mu\) from the domain of a classical interpretation \(\mathcal{I}\) to a vector space and a new geometric interpretation \(\eta_{\mathcal{I}}\) based on this mapping. Our proofs now require us to fix the isomorphism preserving linear map \(f\) used in the definition of geometric interpretations (Definition 1). We choose the concatenation function, denoted \(\oplus\), as done in the work by Gutierrez-Basulto and Schockaert (2018). The strategy for proving strong faithfulness for normalized \(\mathcal{ELH}\) requires us to (a) find a suitable non-convex geometric interpretation for concepts and roles, and (b) show that taking the convex hull of each region preserves the property.
**Definition 8**.: _Let \(\mathcal{I}=(\Delta^{\mathcal{I}},\cdot^{\mathcal{I}})\) be a classical \(\mathcal{ELH}\) interpretation, and \(\mathcal{O}\) an \(\mathcal{ELH}\) ontology. We start by defining a new map \(\mu\colon\Delta^{\mathcal{I}}\to\mathbb{R}^{\mathsf{d}}\), where \(\mathsf{d}\) corresponds to \(|N_{I}(\mathcal{O})|+|N_{C}(\mathcal{O})|+|N_{R}(\mathcal{O})|\cdot|\Delta^{\mathcal{I}}|\). We assume, without loss of generality, a fixed ordering in our indexing system for positions in vectors, where indices \(0\) to \(|N_{I}(\mathcal{O})|-1\) correspond to the indices for individual names; \(|N_{I}(\mathcal{O})|\) to \(k=|N_{I}(\mathcal{O})|+|N_{C}(\mathcal{O})|-1\) correspond to the indices for concept names; and \(k+1\) to \(k+(|N_{R}(\mathcal{O})|\cdot|\Delta^{\mathcal{I}}|)\) correspond to the indices for role names together with an element of \(\Delta^{\mathcal{I}}\). We adopt the notation \(v[a]\), \(v[A]\), and \(v[r,d]\) to refer to the position in a vector \(v\) corresponding to \(a\), \(A\), and \(r\) together with an element \(d\), respectively (according to our indexing system). For example, \(v[a]=0\) means that the value at the index corresponding to the individual name \(a\) is \(0\). A vector \(v\) is binary iff \(v\in\{0,1\}^{\mathsf{d}}\). We now define \(\mu\) using binary vectors. For all \(d\in\Delta^{\mathcal{I}}\), \(a\in N_{I}\), \(A\in N_{C}\) and \(r\in N_{R}\):_ * \(\mu(d)[a]=1\) _if_ \(d=a^{\mathcal{I}}\)_, otherwise_ \(\mu(d)[a]=0\)_,_ * \(\mu(d)[A]=1\) _if_ \(d\in A^{\mathcal{I}}\)_, otherwise_ \(\mu(d)[A]=0\)_, and_ * \(\mu(d)[r,e]=1\) _if_ \((d,e)\in r^{\mathcal{I}}\)_, otherwise_ \(\mu(d)[r,e]=0\)_._ Fig. 4 illustrates a possible mapping for an element \(d\in\Delta^{\mathcal{I}}\), where \(d=a_{0}^{\mathcal{I}}\), \(d\in A_{0}^{\mathcal{I}}\) and \((d,d_{0})\in r_{0}^{\mathcal{I}}\). We now introduce a definition for (possibly) non-convex geometric interpretations, in line with the mapping \(\mu\) above. **Definition 9**.: _Let \(\mathcal{I}\) be a classical \(\mathcal{ELH}\) interpretation. The geometric interpretation of \(\mathcal{I}\), denoted \(\eta_{\mathcal{I}}\), is defined as:_ \[\eta_{\mathcal{I}}(a):=\mu(a^{\mathcal{I}})\text{, for all }a\in N_{I},\] \[\eta_{\mathcal{I}}(A) :=\{\mu(d)\mid\mu(d)[A]=1,d\in\Delta^{\mathcal{I}}\}\text{, for all }A\in N_{C}\text{,}\] \[\eta_{\mathcal{I}}(r) :=\{\mu(d)\oplus\mu(e)\mid\mu(d)[r,e]=1,d,e\in\Delta^{\mathcal{I}}\} \text{, for all }r\in N_{R}\text{.}\] An intuitive way of thinking about our definition of \(\mu\) is that it maps domain elements to a subset of the vertex set of the \(\mathsf{d}\)-dimensional unit hypercube (see Example 1). **Example 1**.: _Consider \(A,B\in N_{C}\) and \(a\in N_{I}\). Let \(\mathcal{I}\) be an interpretation with \(d,e\in\Delta^{\mathcal{I}}\) such that \(d=a^{\mathcal{I}}\), \(d\in A^{\mathcal{I}}\), and \(e\in A^{\mathcal{I}}\cap B^{\mathcal{I}}\). We illustrate \(\mu(d)\) and \(\mu(e)\) in Fig. 5. In symbols, \(\mu(d)[a]=1\), \(\mu(d)[A]=1\), and \(\mu(d)[B]=0\), while \(\mu(e)[a]=0\), \(\mu(e)[A]=1\), and \(\mu(e)[B]=1\)._ Before proving strong faithfulness with convex geometric models, we show that \(\eta_{\mathcal{I}}\) preserves the axioms that hold in the original interpretation \(\mathcal{I}\). It is possible for two elements \(d,e\in\Delta^{\mathcal{I}}\) to be mapped to the same vector \(v\) as a result of our mapping \(\mu\). This may happen when \(d,e\not\in\{a^{\mathcal{I}}\mid a\in N_{I}\}\), but it does not hinder our results.
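To make the mapping \(\mu\) from Definition 8 concrete, the following Python sketch (our own illustration; identifiers such as `individuals`, `con_interp`, and `mu` are ours and not part of the formalization) reproduces Example 1 by computing the binary vectors of \(d\) and \(e\).

```python
# A minimal sketch (our illustration) of the mapping mu from Definition 8,
# reproducing Example 1: d = a^I, d in A^I, and e in A^I and B^I (no roles).
individuals = ["a"]          # N_I(O)
concepts    = ["A", "B"]     # N_C(O)
roles       = []             # N_R(O)
domain      = ["d", "e"]     # Delta^I

ind_interp  = {"a": "d"}                      # a^I = d
con_interp  = {"A": {"d", "e"}, "B": {"e"}}   # A^I, B^I
role_interp = {}

dim = len(individuals) + len(concepts) + len(roles) * len(domain)

def index(name, elem=None):
    """Fixed ordering: individual names, then concept names, then (role, element) pairs."""
    if name in individuals:
        return individuals.index(name)
    if name in concepts:
        return len(individuals) + concepts.index(name)
    return (len(individuals) + len(concepts)
            + roles.index(name) * len(domain) + domain.index(elem))

def mu(d):
    v = [0] * dim
    for a in individuals:
        v[index(a)] = 1 if ind_interp[a] == d else 0
    for A in concepts:
        v[index(A)] = 1 if d in con_interp[A] else 0
    for r in roles:
        for e in domain:
            v[index(r, e)] = 1 if (d, e) in role_interp[r] else 0
    return v

print(mu("d"))  # [1, 1, 0]  -> coordinates a, A, B
print(mu("e"))  # [0, 1, 1]
```

With one individual name, two concept names, and no roles we have \(\mathsf{d}=3\), and the two elements land on distinct vertices of the unit cube, as in Fig. 5.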
**Proposition 9**.: _If \(\mu(d)=\mu(e)\), then \(d\in C^{\mathcal{I}}\) iff \(e\in C^{\mathcal{I}}\)._ We use a similar strategy as before to prove our result. **Theorem 10**.: _For all \(\mathcal{ELH}\) axioms \(\alpha\), \(\mathcal{I}\models\alpha\) iff \(\eta_{\mathcal{I}}\models\alpha\)._ Since the definition of \(\eta_{\mathcal{I}}\) uses vectors in a dimensional space that depends on the size of \(\Delta^{\mathcal{I}}\) and \(\mathcal{O}\), we need the canonical models to be finite. Therefore, we employ _finite_ canonical models for normalized \(\mathcal{ELH}\) because canonical models for arbitrary \(\mathcal{ELH}\) CIs are not guaranteed to be finite. Our definition of canonical model is a non-trivial adaptation of other definitions found in the literature (e.g., (Borgwardt and Thost, 2015; Lutz and Wolter, 2010)). Let \(\mathcal{A}\) be an \(\mathcal{ELH}\) ABox, \(\mathcal{T}\) a normalized \(\mathcal{ELH}\) TBox, and \(\mathcal{O}:=\mathcal{A}\cup\mathcal{T}\). We first define: \[\Delta_{u}^{\mathcal{I}_{\mathcal{O}}}:=\{c_{A}\,|\,A\in N_{C}(\mathcal{O})\cup\{\top\}\}\text{ and}\] \[\Delta_{u+}^{\mathcal{I}_{\mathcal{O}}}:=\Delta_{u}^{\mathcal{I}_{\mathcal{O}}}\cup\{c_{A\sqcap B}\,|\,A,B\in N_{C}(\mathcal{O})\}\ \cup\ \{c_{\exists r.B}\,|\,r\in N_{R}(\mathcal{O}),B\in N_{C}(\mathcal{O})\cup\{\top\}\}.\] Figure 5: A mapping of \(\mu(d)\) and \(\mu(e)\) according to interpretation \(\mathcal{I}\). The axes colored in red, blue, and green correspond to the dimensions associated with \(a\), \(A\), and \(B\), respectively. **Definition 10**.: _The canonical model \(\mathcal{I}_{\mathcal{O}}\) of \(\mathcal{O}\) is defined as_ \[\Delta^{\mathcal{I}_{\mathcal{O}}} :=N_{I}(\mathcal{A})\cup\Delta_{u+}^{\mathcal{I}_{\mathcal{O}}}, \qquad a^{\mathcal{I}_{\mathcal{O}}}:=a,\] \[A^{\mathcal{I}_{\mathcal{O}}} :=\{a\in N_{I}(\mathcal{A})\,|\,\mathcal{O}\models A(a)\}\ \cup\ \{c_{D}\in\Delta_{u+}^{\mathcal{I}_{\mathcal{O}}}\,|\,\mathcal{T}\models D\sqsubseteq A\}\text{, and}\] \[r^{\mathcal{I}_{\mathcal{O}}} :=\{(a,b)\in N_{I}(\mathcal{A})\times N_{I}(\mathcal{A})\,|\,\mathcal{O}\models r(a,b)\}\ \cup\] \[\{(a,c_{B})\in N_{I}(\mathcal{A})\times\Delta_{u}^{\mathcal{I}_{\mathcal{O}}}\,|\,\mathcal{O}\models\exists r.B(a)\}\cup\{(c_{\exists s.B},c_{B})\in\Delta_{u+}^{\mathcal{I}_{\mathcal{O}}}\times\Delta_{u}^{\mathcal{I}_{\mathcal{O}}}\,|\,\mathcal{T}\models s\sqsubseteq r\}\] \[\cup\{(c_{D},c_{B})\in\Delta_{u+}^{\mathcal{I}_{\mathcal{O}}}\times\Delta_{u}^{\mathcal{I}_{\mathcal{O}}}\,|\,\mathcal{T}\models D\sqsubseteq A,\ \mathcal{T}\models A\sqsubseteq\exists r.B,\ \text{for some }A\in N_{C}(\mathcal{O})\},\] _for all \(a\in N_{I}\), \(A\in N_{C}\), and \(r\in N_{R}\)._ The following holds for the canonical model just defined. **Theorem 11**.: _Let \(\mathcal{O}\) be a normalized \(\mathcal{ELH}\) ontology. The following holds_ * _for all_ \(\mathcal{ELH}\) _IQs and CIs_ \(\alpha\) _in normal form over_ \(\mathsf{sig}(\mathcal{O})\)_,_ \(\mathcal{I}_{\mathcal{O}}\models\alpha\) _iff_ \(\mathcal{O}\models\alpha\)_; and_ * _for all RIs_ \(\alpha\) _over_ \(\mathsf{sig}(\mathcal{O})\)_,_ \(\mathcal{I}_{\mathcal{O}}\models\alpha\) _iff_ \(\mathcal{O}\models\alpha\)_._ The main difference between our definition and others in the literature relates to our purposes of proving strong faithfulness, as we discuss in Section 5.
We require that the CIs and RIs (in normal form and in \(\mathsf{sig}(\mathcal{O})\)) entailed by the ontology are exactly those that hold in the canonical model. **Theorem 12**.: _Let \(\mathcal{O}\) be an \(\mathcal{ELH}\) ontology and let \(\mathcal{I}_{\mathcal{O}}\) be the canonical model of \(\mathcal{O}\) (Definition 10). The \(\mathsf{d}\)-dimensional (possibly non-convex) \(\oplus\)-geometric interpretation \(\eta_{\mathcal{I}_{\mathcal{O}}}\) of \(\mathcal{I}_{\mathcal{O}}\) is a strongly IQ and TBox faithful model of \(\mathcal{O}\)._ We now proceed with the main theorems of this section. Note that the dimensionality of the image domain of \(\mu\) can be much higher than the one for \(\bar{\mu}\) in Section 4 (which can be as low as just 1, see Corollary 7). We use the results until now as intermediate steps to bridge the gap between classical and convex geometric interpretations. In our construction of convex geometric interpretations, the vectors mapped by \(\mu\) and the regions given by the non-convex geometric interpretation \(\eta_{\mathcal{I}}\) are the anchor points for the convex closure of these sets. We introduce the notion of the _convex hull_ of a geometric interpretation \(\eta_{\mathcal{I}}\) using Definition 3. **Definition 11**.: _We denote by \(\eta_{\mathcal{I}}^{*}\) the convex hull of the geometric interpretation \(\eta_{\mathcal{I}}\) and define \(\eta_{\mathcal{I}}^{*}\) as:_ \[\eta_{\mathcal{I}}^{*}(a) :=\mu(a^{\mathcal{I}})\text{, for all }a\in N_{I};\] \[\eta_{\mathcal{I}}^{*}(A) :=\{\mu(d)\mid d\in A^{\mathcal{I}}\}^{*}\text{, for all }A\in N_{C};\text{ and}\] \[\eta_{\mathcal{I}}^{*}(r) :=\{\mu(d)\oplus\mu(e)\mid(d,e)\in r^{\mathcal{I}}\}^{*}\text{, for all }r\in N_{R}.\] **Remark 2**.: _In Definition 11, \(\eta_{\mathcal{I}}^{*}(a)=\eta_{\mathcal{I}}(a)\) for all \(a\in N_{I}\). We include the star symbol in the notation to make it clear that we are referring to the geometric interpretation of individual names in the context of convex regions for concepts and roles._ **Theorem 13**.: _Let \(\eta_{\mathcal{I}}\) be a geometric interpretation as in Definition 9. If \(\alpha\) is an \(\mathcal{ELH}\) CI, an \(\mathcal{ELH}\) RI, or an \(\mathcal{ELH}\) IQ in normal form then \(\eta_{\mathcal{I}}\models\alpha\) iff \(\eta_{\mathcal{I}}^{*}\models\alpha\)._ We are now ready to consider strong IQ and TBox faithfulness for convex regions. **Theorem 14**.: _Let \(\mathcal{O}\) be a normalized \(\mathcal{ELH}\) ontology and let \(\mathcal{I}_{\mathcal{O}}\) be the canonical model of \(\mathcal{O}\) (Definition 10). The \(\mathsf{d}\)-dimensional convex \(\oplus\)-geometric interpretation of \(\mathcal{I}_{\mathcal{O}}\) (Definition 11) is a strongly IQ and TBox faithful model of \(\mathcal{O}\)._ We now state a corollary analogous to Corollary 7, though here we cannot state it for all classes of \(m\)-dimensional \(f\)-geometric interpretations (we know by Theorem 8 that this is impossible for any class of \(1\)-dimensional geometric interpretations). We omit "\(m\)-dimensional" in Corollary 15 to indicate that this holds for the larger class containing geometric interpretations with an arbitrary number of dimensions (necessary to cover the whole language).
**Corollary 15**.: _Normalized \(\mathcal{ELH}\) has the strong faithfulness property over \(\oplus\)-geometric interpretations._
## 6 Model Checking on Geometric Models
Here we study upper bounds for the complexity of model checking problems using convex geometric models such as those defined in Definition 11 and normalized \(\mathcal{ELH}\) axioms. The results and algorithms in this section are underpinned by Theorem 13, which allows us to use \(\eta_{\mathcal{I}}\) instead of \(\eta_{\mathcal{I}}^{*}\) for model checking purposes. The advantage of using \(\eta_{\mathcal{I}}\) instead of \(\eta_{\mathcal{I}}^{*}\) is that the algorithms need to inspect only finitely many elements in the extension of each concept and each role, as long as the original interpretation \(\mathcal{I}\) has finite domain (and we only need to consider a finite number of concept, role, and individual names). For example, let \(\mathcal{I}=(\Delta^{\mathcal{I}},\cdot^{\mathcal{I}})\) with \(\Delta^{\mathcal{I}}\) finite. If \(A\in\mathsf{N_{C}}\) then \(\eta_{\mathcal{I}}^{*}(A)\) can have infinitely many elements, while \(\eta_{\mathcal{I}}(A)\) will have at most \(|\Delta^{\mathcal{I}}|\) elements (by Definition 9). Before presenting the algorithms, we discuss some assumptions that facilitate our analysis: 1. indexing vectors and comparing primitive types use constant time; 2. accessing the extension of an individual name, concept name, or role name in \(\eta_{\mathcal{I}}\) takes constant time; 3. iterating over \(\eta_{\mathcal{I}}(A)\) (\(\eta_{\mathcal{I}}(r)\)) consumes time \(O(|\Delta^{\mathcal{I}}|)\) (\(O(|\Delta^{\mathcal{I}}|\cdot|\Delta^{\mathcal{I}}|)\)) for all \(A\in\mathsf{N_{C}}\) (\(r\in\mathsf{N_{R}}\)); and 4. if \(A\in\mathsf{N_{C}}\) (\(r\in\mathsf{N_{R}}\)), testing if \(v\in\eta_{\mathcal{I}}(A)\) (\(v\in\eta_{\mathcal{I}}(r)\)) consumes time \(O(\mathsf{d}\cdot|\Delta^{\mathcal{I}}|)\) (\(O(\mathsf{d}\cdot|\Delta^{\mathcal{I}}|\cdot|\Delta^{\mathcal{I}}|)\)). Assumption (1) is standard when analysing worst-case complexity. The others are pessimistic assumptions on the implementation of \(\eta_{\mathcal{I}}\) (and \(\eta_{\mathcal{I}}^{*}\)). E.g., encoding the binary vectors as integers and implementing bitwise operations could reduce the complexity of membership access and iteration. Also, using a hash map with a perfect hash function would decrease the membership check to constant time. We are now ready to present our upper bounds. For normalized \(\mathcal{ELH}\) CIs, we provide Algorithm 1 to decide if a concept inclusion holds in a convex geometric model built as in Definition 11. Theorem 13 guarantees that \(\eta_{\mathcal{I}}^{*}\models C\sqsubseteq D\) iff \(\eta_{\mathcal{I}}\models C\sqsubseteq D\) for any CI in normalized \(\mathcal{ELH}\). Thus, as long as \(\Delta^{\mathcal{I}}\) is finite, Algorithm 1 terminates and outputs whether \(\eta_{\mathcal{I}}^{*}\models C\sqsubseteq D\). Theorem 16 establishes that Algorithm 1 runs in polynomial time in the size of \(\Delta^{\mathcal{I}}\) and the dimension of vectors in \(\eta_{\mathcal{I}}^{*}\). **Theorem 16**.: _Given a finite geometric interpretation \(\eta_{\mathcal{I}}\) and an \(\mathcal{ELH}\) CI in normal form, Algorithm 1 runs in time in \(O(\mathsf{d}\cdot\mathsf{n}^{4})\), where \(\mathsf{d}\) is as in Definition 8 and \(\mathsf{n}=|\Delta^{\mathcal{I}}|\)._ Note that \(\mathsf{d}\) depends linearly on \(|\Delta^{\mathcal{I}}|\) and the size of the signature.
If the latter is regarded as a constant, we can simply say that Algorithm 1 has time in \(O(\mathsf{n}^{5})\), where \(\mathsf{n}=|\Delta^{\mathcal{I}}|\). Similarly as for Algorithm 1, Theorem 13 allows us to design an algorithm to determine if a convex geometric model \(\eta_{\mathcal{I}}^{*}\) satisfies an IQ in normal form \(\alpha\), as we show in Algorithm 2. ``` 0: a convex geometric interpretation \(\eta_{\mathcal{I}}\) and an \(\mathcal{ELH}\) concept inclusion in normal form \(\alpha\) 0: returns True if \(\eta_{\mathcal{I}}^{*}\models\alpha\), False otherwise 1:if\(\alpha=A\sqsubseteq B\)then\(\triangleright\)\(A,B\in\mathsf{N}_{\mathsf{C}}\) 2:for\(v\in\eta_{\mathcal{I}}(A)\)do 3:if\(v[B]=0\)then return False 4:elseif\(\alpha=A_{1}\sqcap A_{2}\sqsubseteq B\)then\(\triangleright\)\(A_{1},A_{2},B\in\mathsf{N}_{\mathsf{C}}\) 5:for\(v\in\eta_{\mathcal{I}}(A_{1})\)do 6:if\(v[A_{2}]=1\wedge v[B]=0\)then return False 7:elseif\(\alpha=A\sqsubseteq\exists r.B\)then\(\triangleright\)\(A,B\in\mathsf{N}_{\mathsf{C}},r\in\mathsf{N}_{\mathsf{R}}\) 8:for\(v\in\eta_{\mathcal{I}}(A)\)do 9:for\(u\in\eta_{\mathcal{I}}(B)\)do 10:if\(v\oplus u\in\eta_{\mathcal{I}}(r)\)then 11:\(count\gets count+1\) 12:if\(\text{count = 0}\)then return False 13:elseif\(\alpha=\exists r.A\sqsubseteq B\)then\(\triangleright\)\(A,B\in\mathsf{N}_{\mathsf{C}},r\in\mathsf{N}_{\mathsf{R}}\) 14:for\(v\oplus u\in\eta_{\mathcal{I}}(r)\)do 15:if\(u[A]=1\) and \(v[B]=0\)then return False 16:return True ``` **Algorithm 1** Check if a convex geometric model (Definition 11) satisfies an \(\mathcal{ELH}\) CI in normal form Theorem 17 shows that Algorithm 2 runs in time polynomial in \(\mathsf{d}\cdot|\Delta^{\mathcal{I}}|\). **Theorem 17**.: _Given a finite geometric interpretation \(\eta_{\mathcal{I}}\) and an \(\mathcal{ELH}\) IQ in normal form, Algorithm 2 runs in time \(O(\mathsf{d}\cdot\mathsf{n}^{3})\), with \(\mathsf{d}\) as in Definition 8 and \(\mathsf{n}=|\Delta^{\mathcal{I}}|\)._ Next, we present Algorithm 3, which handles RIs. Again, as a consequence of Theorem 13, we only need to check the inclusion between two finite sets of vectors in \(\mathbb{R}^{2\mathsf{d}}\). Finally, we show an upper bound using Algorithm 3. **Theorem 18**.: _Given a finite geometric interpretation \(\eta_{\mathcal{I}}\) and an \(\mathcal{ELH}\) role inclusion, Algorithm 3 runs in time in \(O(\mathsf{d}\cdot\mathsf{n}^{4})\), where \(\mathsf{d}\) is as in Definition 8 and \(\mathsf{n}=|\Delta^{\mathcal{I}}|\)._ The three algorithms presented in this section run in polynomial time in \(\mathsf{d}\cdot|\Delta^{\mathcal{I}}|\). We recall that the construction of \(\eta_{\mathcal{I}}\) (and by extension of \(\eta_{\mathcal{I}}^{*}\)) requires that both the signature and \(\Delta^{\mathcal{I}}\) are finite (which is reasonable for normalized \(\mathcal{ELH}\)), otherwise the vectors in \(\eta_{\mathcal{I}}\) would have infinite dimension. 
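As a complement to the pseudocode of Algorithm 1 above, here is a small Python sketch of its vector-membership tests. It is our own rendering under stated assumptions (concept regions stored as finite sets of tuples, role regions as sets of concatenated tuples, an index map `idx` from concept names to vector coordinates, and our own encoding of the four CI shapes); it is not the authors' implementation.

```python
def split(vu):
    """Split a concatenated vector v (+) u into its two halves of equal length."""
    h = len(vu) // 2
    return vu[:h], vu[h:]

def satisfies_ci(eta, idx, ci):
    """Check one normalized ELH concept inclusion against a finite geometric model."""
    kind = ci[0]
    if kind == "A<=B":                                    # A ⊑ B
        _, A, B = ci
        return all(v[idx[B]] == 1 for v in eta[A])
    if kind == "A1&A2<=B":                                # A1 ⊓ A2 ⊑ B
        _, A1, A2, B = ci
        return all(v[idx[B]] == 1 for v in eta[A1] if v[idx[A2]] == 1)
    if kind == "A<=Er.B":                                 # A ⊑ ∃r.B
        _, A, r, B = ci
        return all(any(v + u in eta[r] for u in eta[B]) for v in eta[A])
    if kind == "Er.A<=B":                                 # ∃r.A ⊑ B
        _, r, A, B = ci
        return all(u[idx[A]] != 1 or v[idx[B]] == 1 for v, u in map(split, eta[r]))
    raise ValueError(kind)

# Tiny usage example with d = 3 coordinates (a, A, B) and a single role r:
eta = {"A": {(1, 1, 0), (0, 1, 1)}, "B": {(0, 1, 1)}, "r": {(1, 1, 0, 0, 1, 1)}}
idx = {"A": 1, "B": 2}
print(satisfies_ci(eta, idx, ("A<=B", "A", "B")))          # False: (1,1,0) is not in B
print(satisfies_ci(eta, idx, ("A<=Er.B", "A", "r", "B")))  # False: (0,1,1) has no r-successor in B
```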
```
0: a convex geometric interpretation \(\eta_{\mathcal{I}}^{*}\) and an \(\mathcal{ELH}\) IQ in normal form \(\alpha\)
0: returns True if \(\eta_{\mathcal{I}}^{*}\models\alpha\), False otherwise
1: if \(\alpha=A(a)\) then                \(\triangleright\) \(A\in\mathsf{N}_{\mathsf{C}}\), \(a\in\mathsf{N}_{\mathsf{I}}\)
2:     if \(\eta_{\mathcal{I}}(a)[A]=1\) then return True
3: else if \(\alpha=(A\sqcap B)(a)\) then \(\triangleright\) \(A,B\in\mathsf{N}_{\mathsf{C}}\), \(a\in\mathsf{N}_{\mathsf{I}}\)
4:     if \((\eta_{\mathcal{I}}(a)[A]=1)\land(\eta_{\mathcal{I}}(a)[B]=1)\) then return True
5: else if \(\alpha=(\exists r.A)(a)\) then \(\triangleright\) \(A\in\mathsf{N}_{\mathsf{C}}\), \(r\in\mathsf{N}_{\mathsf{R}}\), \(a\in\mathsf{N}_{\mathsf{I}}\)
6:     for \(u\in\eta_{\mathcal{I}}(A)\) do
7:         if \(\eta_{\mathcal{I}}(a)\oplus u\in\eta_{\mathcal{I}}(r)\) then return True
8: else if \(\alpha=r(a,b)\) then          \(\triangleright\) \(r\in\mathsf{N}_{\mathsf{R}}\), \(a,b\in\mathsf{N}_{\mathsf{I}}\)
9:     if \(\eta_{\mathcal{I}}(a)\oplus\eta_{\mathcal{I}}(b)\in\eta_{\mathcal{I}}(r)\) then return True
```
**Algorithm 2** Check if a convex geometric model (as in Definition 11) satisfies an \(\mathcal{ELH}\) IQ in normal form
## 7 Conclusion
We have proven that \(\mathcal{ELH}\) has the strong faithfulness property over (possibly) non-convex geometric models, and that normalized \(\mathcal{ELH}\) has the strong faithfulness property over convex geometric models. Furthermore, we give upper bounds for the complexity of checking satisfaction for \(\mathcal{ELH}\) axioms in normal form in the class of convex geometric models that we use for strong faithfulness. As future work, we would like to implement an embedding method that is formally guaranteed to generate strongly TBox faithful embeddings for normalized \(\mathcal{ELH}\) ontologies. Furthermore, showing that other ontology languages (for which finite canonical models exist) have the strong faithfulness property using a similar strategy is promising.
2306.07706
Towards Explainable TOPSIS: Visual Insights into the Effects of Weights and Aggregations on Rankings
Multi-Criteria Decision Analysis (MCDA) is extensively used across diverse industries to assess and rank alternatives. Among numerous MCDA methods developed to solve real-world ranking problems, TOPSIS remains one of the most popular choices in many application areas. TOPSIS calculates distances between the considered alternatives and two predefined ones, namely the ideal and the anti-ideal, and creates a ranking of the alternatives according to a chosen aggregation of these distances. However, the interpretation of the inner workings of TOPSIS is difficult, especially when the number of criteria is large. To this end, recent research has shown that TOPSIS aggregations can be expressed using the means (M) and standard deviations (SD) of alternatives, creating MSD-space, a tool for visualizing and explaining aggregations. Even though MSD-space is highly useful, it assumes equally important criteria, making it less applicable to real-world ranking problems. In this paper, we generalize the concept of MSD-space to weighted criteria by introducing the concept of WMSD-space defined by what is referred to as weight-scaled means and standard deviations. We demonstrate that TOPSIS and similar distance-based aggregation methods can be successfully illustrated in a plane and interpreted even when the criteria are weighted, regardless of their number. The proposed WMSD-space offers a practical method for explaining TOPSIS rankings in real-world decision problems.
Robert Susmaga, Izabela Szczech, Dariusz Brzezinski
2023-06-13T11:49:44Z
http://arxiv.org/abs/2306.07706v1
# Towards Explainable TOPSIS: ###### Abstract Multi-Criteria Decision Analysis (MCDA) is extensively used across diverse industries to assess and rank alternatives. Among numerous MCDA methods developed to solve real-world ranking problems, TOPSIS remains one of the most popular choices in many application areas. TOPSIS calculates distances between the considered alternatives and two predefined ones, namely the ideal and the anti-ideal, and creates a ranking of the alternatives according to a chosen aggregation of these distances. However, the interpretation of the inner workings of TOPSIS is difficult, especially when the number of criteria is large. To this end, recent research has shown that TOPSIS aggregations can be expressed using the means (M) and standard deviations (SD) of alternatives, creating MSD-space, a tool for visualizing and explaining aggregations. Even though MSD-space is highly useful, it assumes equally important criteria, making it less applicable to real-world ranking problems. In this paper, we generalize the concept of MSD-space to weighted criteria by introducing the concept of WMSD-space defined by what is referred to as weight-scaled means and standard deviations. We demonstrate that TOPSIS and similar distance-based aggregation methods can be successfully illustrated in a plane and interpreted even when the criteria are weighted, regardless of their number. The proposed WMSD-space offers a practical method for explaining TOPSIS rankings in real-world decision problems. keywords: TOPSIS, weighted criteria ranking, interpretability, visualization, aggregated distance ranking + Footnote †: journal: Applied Soft Computing [ the alternatives and produces non-negative real values, which determine a linear pre-order that can be used for ranking. The TOPSIS method has been widely used in many applications, including logistics (Bottani and Rizzi, 2006), manufacturing (Wang, 2009; Zhang et al., 2023), marketing (Yu et al., 2011), sustainable development (Piwowarski et al., 2018), and engineering (Lin et al., 2023); for a much broader survey of TOPSIS and its applications, see, e.g., the review of Behzadian et al. (2012); Zavadskas et al. (2016); Zyoud and Fuchs-Hanusch (2017). In particular, there have been several studies focusing on the normalization and weighting procedures used in TOPSIS. The research of Opricovic and Tzeng (2004) has analyzed the impact of different normalization procedures and different aggregation functions on the final ranking obtained using the TOPSIS and VIKOR methods. Similarly, Zavadskas et al. (2006) describe the influence of a normalization method on the final TOPSIS rankings. An alternative approach to criterion weighting is considered by Chakraborty and Yeh (2009). The topic of weights is also an important part of studies on the Relative Ratio method (Li, 2009), which estimates differences between alternatives to create a ranking that balances the distance from the ideal solution and the distance from the anti-ideal solution. Similar approaches to weight balancing and relative closeness have been also proposed by Kuo (2017) and Abootalebi et al. (2019). With the use of the ROR methodology (Greco et al. (2010)), TOPSIS has also been adapted by Zielniewicz (2017) to incorporate predefined relations between alternatives as a form of preferential information from the decision maker. 
Many other interesting issues relating to TOPSIS, including its combinations with other methods, its variations and adaptations, are described in works by Corrente and Tasiou (2023); Yu et al. (2015); Chen (2019); Tian et al. (2018); Yoon and Kim (2017). Last but not least are the attempts aimed at visualizing the results of TOPSIS, e.g. Walesiak (2016), where a multidimensional scaling approach (Borg and Groenen (2005)) has been employed. The bulk of the research on TOPSIS, as indicated by the referenced works, is mainly focused on practical use cases and different ways of performing criteria weighting and normalization. In addition to these application-oriented studies, recently, we have formalized the inner workings of TOPSIS by describing aggregations using the mean (M) and standard deviation (SD) of each alternative. This allowed us to propose a space for visualizing multi-criteria aggregations called MSD-space (Susmaga et al., 2023). However, the MSD-space assumes equally weighted criteria, making it less applicable to modern TOPSIS variations and real-world ranking tasks. In this paper, we generalize the MSD-space methodology to problems with arbitrarily defined criteria weights. We show how weights affect rankings of alternatives under various TOPSIS aggregations and how the effects of weights provided by multiple experts can be compared. The detailed contributions of this paper are as follows: * In Sections 2 and 3, we formalize the TOPSIS procedure. We recall the definitions of utility space, MSD-space, and their properties. Finally, we show how arbitrary criteria weights can be re-scaled and used to generalize utility space into weighted utility space. * In Section 4, we define weight-scaled means and standard deviations as equivalents of alternative means and standard deviations in the weighted utility space. As a result, we prove the IA-WMSD property and introduce WMSD-space that represents alternatives in two dimensions regardless of the number of analyzed criteria and their weights. We also visualize WMSD-space and show how it can be used to express various aggregation functions using the weight-scaled means and standard deviations of alternatives. * In Section 5, we apply the proposed WMSD visualization to two case studies. We show how WMSD-space can be used to explore the properties of a given dataset, compare the effects of weights defined by different experts and underline the implications of using different aggregations. * In Section 6, we summarize the paper, discuss our findings, and suggest further research. ## 2 Preliminaries The majority of research on the Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS) method involves a predefined, finite set of \(m\) entities (referred to as _alternatives_) described by a set of \(n\) features (_criteria_). Consequently, the information can be effectively represented in a \(m\times n\) matrix of values, commonly known as the _decision matrix_. An example of such a decision matrix \(\mathbf{X}\) is illustrated in Figure 1A, comprising four alternatives (students) characterized by three criteria (final grades in subjects). Unlike research papers that focus on specific applications of TOPSIS, our study will not be confined to a specific set of \(m\) alternatives. Instead, we will delve into the general characteristics of all possible alternatives given a set of \(n\) criteria. 
Our analysis of all conceivable alternative representations is influenced by strategies designed for visually inspecting general properties of machine learning metrics (Brzezinski et al., 2018, 2017; Susmaga and Szczech, 2015, 2015). To conduct this analysis, in this section, we will provide the essential definitions needed to formalize the TOPSIS procedure. Furthermore, we will review the conclusions from our previous research on interpreting TOPSIS by revisiting the definitions of utility space, IA-MSD property, and MSD-space (Susmaga et al., 2023). The notation introduced in the following paragraphs will be used to generalize utility space and MSD-space into their weighted counterparts in the subsequent sections of this paper.
### Formalizing TOPSIS Using the Utility Space
TOPSIS (Hwang and Yoon, 1981) is a multi-criteria decision analysis (MCDA) method that ranks objects (_alternatives_) from the best to the worst in terms of their distance to ideal and anti-ideal points. Figure 1: The dataset that will serve as the running example for explaining different representations of objects analyzed in this paper. (A) The original dataset (decision matrix) describing four students (alternatives) using final grades from three subjects (criteria). (B) The same dataset depicted as a subset of the criteria space, i.e., of all possible alternatives described by the three criteria describing students. (C) The same alternatives presented as a subset of utility space, the re-scaled equivalent of criteria space. (D) The analyzed students represented in MSD-space, a space defined by the mean (M) and standard deviation (SD) of the utility space descriptions of the alternatives. (E) Alternatives represented in weighted utility space, with weights \(\mathbf{w}=[0.5,0.6,1.0]\). (F) Alternatives represented in WMSD-space, a space defined by the weight-scaled mean (WM) and weight-scaled standard deviation (WSD) of the weighted utility space descriptions of the alternatives. The description of alternatives with respect to considered attributes is commonly given in the form of a vector. Among attributes typically used in MCDA, there are _criteria_, characterized by preference-ordered domains. The main actions performed by the TOPSIS method can be summarized as: 1. **prepare the representations** of alternatives in terms of criteria. Apart from forming the decision matrix, this part of the procedure may also normalize the criteria and incorporate the user-given weights that constitute the decision maker's preferential information; 2. **determine two reference points**, ideal and anti-ideal, and verify how far each alternative is from them; 3. **rank the alternatives** with respect to some aggregation function that combines the distances between the alternatives and the ideal/anti-ideal points. Preparing representations of alternatives. TOPSIS starts with encoding real-world objects (e.g., students described by criteria referring to their grades) into a decision matrix \(\mathbf{X}\) (Figure 1A). The decision matrix is a finite subset of the criterion space \(CS\) (Figure 1B), where if a criterion \(\mathcal{K}\) belongs to the set of all possible criteria \(\mathbb{K}\) (\(\mathcal{K}\in\mathbb{K}\)), then its domain is a real-valued interval \(\mathcal{V}=[v_{min},v_{max}]\). Since TOPSIS is based on calculating distances, the bounds of the interval need to be finite.
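The three steps listed above can be previewed with a short NumPy sketch (ours; the numbers below are made up and do not reproduce the grades of Figure 1), assuming equally weighted 'gain' criteria and the relative-closeness aggregation formalized later in this section.

```python
import numpy as np

# Hypothetical decision matrix: 4 alternatives, 3 'gain' criteria with domain [2, 5].
X = np.array([[3.0, 4.5, 5.0],
              [5.0, 3.0, 3.5],
              [2.0, 2.0, 2.0],
              [4.0, 5.0, 4.5]])
v_min, v_max = 2.0, 5.0

# Step 1: map criterion values to the utility space US = [0, 1]^n.
U = (X - v_min) / (v_max - v_min)

# Step 2: re-scaled Euclidean distances to the ideal (1) and anti-ideal (0) points.
n = U.shape[1]
d_ideal = np.linalg.norm(U - 1.0, axis=1) / np.sqrt(n)
d_anti  = np.linalg.norm(U - 0.0, axis=1) / np.sqrt(n)

# Step 3: aggregate (here: relative closeness) and rank, larger is better.
R = d_anti / (d_ideal + d_anti)
print(np.argsort(-R))  # indices of alternatives from best to worst
```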
Additionally, criteria may differ in their preference types (gain or cost), with the least preferred value denoted as \(v_{*}\) and the most preferred value as \(v^{*}\). Vectors \([v_{1}^{*},v_{2}^{*},...,v_{n}^{*}]\) and \([v_{1*},v_{2*},...,v_{n*}]\) will be referred to as the ideal (\(I\)) and anti-ideal (\(A\)) points, respectively. Working on criteria with varying domains and types can make the analysis more troublesome and reduce the meaningfulness of the results, thus a criteria transformation is often applied. In this paper, we will use a min-max re-scaling that transforms the criteria space into the utility space _US_ (Figure 1C) using the function \(\mathcal{U}:\mathcal{V}\rightarrow[0,1]\). Precisely, given: * a domain \(\mathcal{V}=[v_{min},v_{max}]=[v_{*},v^{*}]\) of a criterion \(\mathcal{K}\in\mathbb{K}\) of type 'gain', the re-scaling function \(\mathcal{U}\) associated with \(\mathcal{K}\) is defined as \(\mathcal{U}(v)=\frac{v-v_{*}}{v^{*}-v_{*}}\) for \(v\in\mathcal{V}\), * a domain \(\mathcal{V}=[v_{min},v_{max}]=[v^{*},v_{*}]\) of a criterion \(\mathcal{K}\in\mathbb{K}\) of type 'cost', the re-scaling function \(\mathcal{U}\) associated with \(\mathcal{K}\) is defined as \(\mathcal{U}(v)=\frac{v_{*}-v}{v_{*}-v^{*}}\) for \(v\in\mathcal{V}\). The \(\mathcal{U}(\cdot)\) function is introduced to simplify further TOPSIS processing without the loss of generality and is independent of decision matrix normalization that could be performed by a user. After the \([0,1]\) re-scaling, the criteria will all be of type 'gain' and have \([0,1]\) domains. Since the _US_ is the space of all conceivable alternative representations (images), particular decision matrices are simply represented by subsets of _US_. Aiming at formalizing general dataset-independent properties, we shall deploy _US_ in all further considerations. Determination of the ideal/anti-ideal points and distance calculationGiven a set of criteria \(\mathscr{K}\), \(|\mathscr{K}|=n\geq 1\), the utility space is an \(n\)-dimensional hypercube \([0,1]\times[0,1]\times\cdots\times[0,1]\) with \(2^{n}\) vertices of the form \([z_{1},z_{2},...,z_{n}]\), where \(z_{j}\in\{0,1\}\). Moreover, for each alternative representation \(E\in CS\) there exists \(\mathbf{u}\in\textit{US}\) such that \(\mathbf{u}\) is the image of \(E\) under the re-scaling transformation--if \(E=[v_{1},v_{2},...,v_{n}]\in CS\), then \([\mathcal{U}_{1}(v_{1}),\mathcal{U}_{2}(v_{2}),...,\mathcal{U}_{n}(v_{n})]\in \textit{US}\). In particular, _US_ contains vectors \(\mathbf{1}=[1,1...,1]\) and \(\mathbf{0}=[0,0,...,0]\), which are the respective images of the ideal point and anti-ideal point. In our running example, there are three criteria, thus the points \(\mathbf{1}=[1,1,1]\) and \(\mathbf{0}=[0,0,0]\) represent in \(US\) the ideal point and anti-ideal point, respectively (see Figure 1C). With the \(\mathbf{1}\) and \(\mathbf{0}\) points at hand, TOPSIS calculates how far each alternative is from them. To perform this operation, the Euclidean distance measure is used: given vectors \(\mathbf{a}=[a_{1},a_{2},...,a_{n}]\), \(\mathbf{b}=[b_{1},b_{2},...,b_{n}]\), the Euclidean distance between them is defined as \(\delta_{2}(\mathbf{a},\mathbf{b})=\sqrt{\sum_{j=1}^{n}|a_{j}-b_{j}|^{2}}\). Since the maximal Euclidean distance in _US_ extends between vectors \(\mathbf{1}\) and \(\mathbf{0}\), it is dependent on \(n\) and equals \(\sqrt{n}\). 
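Returning for a moment to the criterion re-scaling introduced above, the two cases of \(\mathcal{U}\) can be written compactly as a small helper (an illustration of ours; the function name `to_utility` is not notation used by the method):

```python
def to_utility(v, v_min, v_max, ctype):
    """Min-max re-scaling U: V -> [0, 1]; after it, every criterion is of type 'gain'."""
    if ctype == "gain":     # v_* = v_min, v^* = v_max
        return (v - v_min) / (v_max - v_min)
    if ctype == "cost":     # v^* = v_min, v_* = v_max
        return (v_max - v) / (v_max - v_min)
    raise ValueError(ctype)

print(to_utility(4.0, 2.0, 6.0, "gain"))  # 0.5
print(to_utility(4.0, 2.0, 6.0, "cost"))  # 0.5
```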
For our analyses to be \(n\)-independent and easily interpretable regardless of \(n\), we define a re-scaled Euclidean distance as \(\delta_{2}^{01}(\mathbf{a},\mathbf{b})=\frac{\delta_{2}(\mathbf{a},\mathbf{b}) }{\sqrt{n}}\), ranging always between \([0,1]\) (instead of \([0,\sqrt{n}]\)). The re-scaled distances of an alternative's image \(\mathbf{u}\in\mathit{US}\) to the ideal and anti-ideal point will be denoted as \(\delta_{2}^{01}(\mathbf{u},\mathbf{1})\) and \(\delta_{2}^{01}(\mathbf{u},\mathbf{0})\), respectively. Ranking alternatives according to an aggregationThe distances of each alternative's representation to the reference points are aggregated with respect to some chosen aggregation function, the value of which naturally forms a ranking of the alternatives. We shall focus on three classic TOPSIS aggregations, defined in terms of \(\delta_{2}^{01}(\mathbf{u},\mathbf{1})\) and \(\delta_{2}^{01}(\mathbf{u},\mathbf{0})\) as: \[\mathsf{I}(\mathbf{u}) =1-\delta_{2}^{01}(\mathbf{u},\mathbf{1}),\] \[\mathsf{A}(\mathbf{u}) =\delta_{2}^{01}(\mathbf{u},\mathbf{0}),\] \[\mathsf{R}(\mathbf{u}) =\frac{\delta_{2}^{01}(\mathbf{u},\mathbf{0})}{\delta_{2}^{01}( \mathbf{u},\mathbf{1})+\delta_{2}^{01}(\mathbf{u},\mathbf{0})},\] where \(\mathbf{u}\in\mathit{US}\) is the image of an alternative. The first (\(\mathsf{I}\)) is based solely on the distance to the _ideal_ point, next (\(\mathsf{A}\)) on the distance to the _anti_-ideal point, whereas the _relative_ distance, denoted as \(\mathsf{R}\), takes both previous distances into account. Using \(1-\delta_{2}^{01}(\mathbf{u},\mathbf{1})\) instead of a straightforward distance \(\delta_{2}^{01}(\mathbf{u},\mathbf{1})\) in the \(\mathsf{I}(\mathbf{u})\) aggregation, serves only as a means to have all aggregations as functions to be maximized. Although only the \(\mathsf{R}(\mathbf{u})\) aggregation is predominantly deployed in TOPSIS, it is defined on the basis of \(\mathsf{I}(\mathbf{u})\) and \(\mathsf{A}(\mathbf{u})\), and as such inherits from them its main properties. For this reason, all three aggregations will be examined in this paper. ### The IA-MSD Property and MSD-space Given a representation of an alternative in utility space \(\mathbf{u}\in\mathit{US}\), let: \[sum(\mathbf{u}) =\sum_{j=1}^{n}u_{j},\] \[mean(\mathbf{u}) =\frac{sum(\mathbf{u})}{n},\] \[var(\mathbf{u}) =\frac{\left\|\mathbf{u}-\overline{\mathbf{u}}\right\|_{2}^{2}}{ n},\text{ with }\overline{\mathbf{u}}=[mean(\mathbf{u}),mean(\mathbf{u}),...,mean( \mathbf{u})],\] \[std(\mathbf{u}) =\sqrt{var(\mathbf{u})}.\] Using the above notation, in our previous paper (Susmaga et al., 2023), we have employed the fact that for every \(\mathbf{u}\in\mathit{US}\) vectors \(\overline{\mathbf{u}}-\mathbf{0}\) and \(\mathbf{u}-\overline{\mathbf{u}}\) and \(\mathbf{u}-\overline{\mathbf{u}}\) and \(\mathbf{1}-\overline{\mathbf{u}}\) are orthogonal, and, therefore, one can apply the Pythagorean theorem to relate these vectors (Figure 2A). Figure 2: A depiction of the IA-MSD Property in \(\mathit{US}\) and MSD-space for a three-dimensional problem. (A) Vector orthogonality depicted in \(\mathit{US}\). (B) Illustration of the IA-MSD property in MSD-space. The re-scaled \(\delta_{2}^{01}\) lengths of vectors \(\overline{\mathbf{u}}\) and \(\mathbf{u}-\overline{\mathbf{u}}\) from panel \(\mathsf{A}\) correspond to the values of \(mean(\mathbf{u})\) and \(std(\mathbf{u})\) depicted in MSD-space. 
(C) Color encoding of the aggregation function \(\mathsf{R}(\mathbf{u})\), with blue representing the least preferred and red the most preferred values. Moreover, in (Susmaga et al., 2023) we have shown that the lengths of the above-mentioned vectors can be defined as follows: * \(\delta_{2}^{01}(\overline{\textbf{u}},\textbf{0})=mean(\textbf{u})\), * \(\delta_{2}^{01}(\overline{\textbf{u}},\textbf{1})=1-mean(\textbf{u})\), * \(\delta_{2}^{01}(\textbf{u},\overline{\textbf{u}})=std(\textbf{u})\). These characteristics of _US_ allowed us to formulate the _IA-MSD property_. **Definition 1** (IA-MSD Property).: \[\delta_{2}^{01}(\textbf{u},\textbf{0})=\sqrt{mean(\textbf{u})^{2 }+std(\textbf{u})^{2}},\] \[\delta_{2}^{01}(\textbf{u},\textbf{1})=\sqrt{(1-mean(\textbf{u} ))^{2}+std(\textbf{u})^{2}}.\] The IA-MSD property shows that the distances of an alternative to the ideal and anti-ideal point are functions of the mean and standard deviation of the alternative. This interesting dependency between the distances of alternatives to the predefined ideal (\(I\)) and anti-ideal (\(A\)) points on the one hand and \(mean(\textbf{u})\) and \(std(\textbf{u})\) on the other, inspired us to define _MSD-space_. This space uses the mean (M) and standard deviation (SD) of an alternative's _US_ representation as its constituents (Figure 2B). **Definition 2** (MSD-space).: \[\text{MSD-space}=\{[mean(\textbf{u}),std(\textbf{u})]|\textbf{u}\in US\}\] The MSD-space is a two-dimensional space and, therefore, can be visualized on a plane where the mean (M) of an alternative defines its position on the x-axis and the standard deviation (SD) the position on the y-axis. Since MSD-space is a transformation of _US_ which is \([0,1]\)-bounded, the range of values of M and SD is also bounded. As a result, for a given number of criteria \(n\), there is a limited range of attainable means and standard deviations, which forms the boundary (shape) of MSD-space (Figure 2B). Moreover, the IA-MSD property makes it possible to define all TOPSIS aggregations in terms of \(mean(\textbf{u})\) and \(std(\textbf{u})\): \[\mathsf{I}(\textbf{u}) =1-\sqrt{(1-mean(\textbf{u}))^{2}+std(\textbf{u})^{2}},\] \[\mathsf{A}(\textbf{u}) =\sqrt{mean(\textbf{u})^{2}+std(\textbf{u})^{2}},\] \[\mathsf{R}(\textbf{u}) =\frac{\sqrt{mean(\textbf{u})^{2}+std(\textbf{u})^{2}}}{\sqrt{( 1-mean(\textbf{u}))^{2}+std(\textbf{u})^{2}}+\sqrt{mean(\textbf{u})^{2}+std (\textbf{u})^{2}}}.\] Since all the discussed TOPSIS aggregations are functions of merely two parameters, \(mean(\textbf{u})\) and \(std(\textbf{u})\), one can visualize their values in MSD-space using a color map (Figure 2C). Using such a visualization, one can analyze how the preferences expressed by different aggregations change with varying values of M and SD. ## 3 Criteria Weights and Weighted Utility Space The utility space and MSD-space proposed in (Susmaga et al., 2023), and recalled in the previous section, assumed that all of the criteria are equally important. In practice, this is rarely the case, as criteria are very often assigned different weights by experts. In this section, we formalize criteria weighting and show how the utility space _US_ can be transformed to its weighted counterpart _VS_. ### Normalized Criteria Weights Let \(\mathbf{w}=[w_{1},w_{2},...,w_{n}]\) be a vector of real values, acting as criterion weights (shortly: weights), where \(n\) is the number of criteria. 
These weights will be subjected to the following three assumptions: * all non-negative, * at least one non-zero, * all finite. These assumptions have the following justifications. First, we consider only non-negative weights. This non-negativity results from the fact that the weight expresses the magnitude of the criterion's relative importance (greater weight, greater importance). In particular, if \(w_{i}>w_{j}\), then the \(i\)-th criterion is expected to have more influence on the final result of the method than the \(j\)-th criterion. Therefore, the first considered assumption is: \(w_{i}\geq 0\) for all \(i\). Next, notice that relation '\(\geq\)' admits two disjoint sub-cases: '\(=\)' and '\(>\)'. Whenever \(w_{i}=0\), then the \(i\)-th criterion is in practice 'zeroed' and thus not taken into account in any further considerations. On the other hand, \(w_{i}\neq 0\) means that the \(i\)-th criterion is taken into account. This explains the second assumption, namely: \(w_{i}\neq 0\) for at least one \(i\) ('at least one non-zero'). The assumption ensures that the undesired case of all zero weights cannot occur. Finally, while it is in general possible to consider weights unbounded from above, we shall consider bounded, and thus finite, weights ('all finite'). Such a constraint may be implied by different 'normalizing' conditions, as e.g. \(\sum_{i=1}^{n}w_{i}=1\) ('sum is 1') or \(\max_{i=1}^{n}w_{i}=1\) ('max is 1'). These different expression forms may be unified using a \(p\)-parametrized Minkowski circle of a finite, positive radius (in this case radius is 1). In particular, choosing a Minkowski circle for \(p=1\) requires that \(\mathbf{w}\) satisfies \(\left\|\mathbf{w}\right\|_{1}=1\) ('sum is 1'), while choosing a Minkowski circle for \(p=\infty\) requires that \(\mathbf{w}\) satisfies \(\left\|\mathbf{w}\right\|_{\infty}=1\) ('max is 1'). Bounding all the weights from above, or considering exclusively finite weights, constitutes thus the third assumption. Notice that using the Minkowski circles is advantageous enough to ensure satisfying not only the third assumption (because no vector containing any infinite value is a part of any Minkowski circle of finite, positive radius), but also the second assumption (because the vector of exclusively zero weights is not a part of any Minkowski circle of finite, positive radius). As a result, the three assumptions ('all non-negative', 'at least one non-zero', and 'all finite'), are expressed with only two conditions: * \(\mathbf{w}\geq\mathbf{0}\) ('all non-negative'), * \(\left\|\mathbf{w}\right\|_{\infty}=1\) ('max is 1', ensuring 'at least one non-zero' and 'all finite'). It should be stressed that even though the postulated assumptions exclude situations in which _all_ weights are zero, they do not exclude situations in which _some_ weights are zero. As stated above, in such a situation, the criteria corresponding to zero weights are in practice eliminated from all further considerations. In result, 'zeroing' weights may be viewed as a form of 'criterion selection' (only criteria corresponding to positive weights are selected). In the following sections, we will assume that criteria weights adhere to the 'all non-negative' and'max is 1' conditions. This can be easily implemented in practice, as any set of real values satisfying 'all non-negative', 'at least one non-zero' and 'all finite' can be re-scaled to be 'all non-negative' and'max is 1'. 
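In practice, the 'max is 1' convention can be enforced by a one-line re-scaling, sketched below (our own helper, assuming the raw weights already satisfy 'all non-negative', 'at least one non-zero', and 'all finite'):

```python
import numpy as np

def normalize_weights(raw_w):
    """Re-scale non-negative, not-all-zero, finite weights so that max(w) = 1
       (the p = infinity Minkowski-circle convention adopted in this paper)."""
    w = np.asarray(raw_w, dtype=float)
    assert np.all(w >= 0) and np.any(w > 0) and np.all(np.isfinite(w))
    return w / w.max()

print(normalize_weights([2.0, 4.0, 0.0, 3.0]))  # [0.5  1.   0.   0.75]
```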
Given a particular \(\mathbf{w}\), the vector of weights, we will often consider a coefficient \(s\), defined as follows: \(s=\frac{\left\|\mathbf{w}\right\|}{mean(\mathbf{w})}\). Because \(\left\|\mathbf{w}\right\|>0\) and \(mean(\mathbf{w})>0\) are guaranteed by \(\mathbf{w}\neq\mathbf{0}\) (implied by the assumptions concerning \(\mathbf{w}\)), the value of \(s\): * always exists (because its denominator is non-zero), * never equals zero (because its nominator is non-zero). In particular, for \(\mathbf{w}=\mathbf{1}\), \(s\) becomes \(s=\frac{\left\|\mathbf{w}\right\|}{mean(\mathbf{w})}=\frac{\left\|\mathbf{1} \right\|}{mean(\mathbf{1})}=\frac{\sqrt{n}}{1}=\sqrt{n}\). ### VS: The Weighted Utility Space While _US_ has the shape of a hypercube, it changes as soon as it becomes non-uniformly weighted, i.e., as soon as non-uniform criteria weights are applied. By weighing the criteria, one introduces preferential information from the decision maker (which is of a different origin than elements of _US_) and alters the influence of the criteria on the final TOPSIS ranking. Thus, the shape of the weighted version of _US_ (in which all TOPSIS operations are actually performed) generalizes to a hyperrectangle, with the special case of the hypercube obtained for all weights equal to one. More precisely, every case of weights equal to a predefined positive constant (not necessarily one) would also result in a hypercube. Given vectors \(\mathbf{a}=[a_{1},a_{2},...,a_{n}]\) and \(\mathbf{b}=[b_{1},b_{2},...,b_{n}]\), let \(\mathbf{a}\circ\mathbf{b}\) denote their element-wise (Hadamard) product, i.e., \(\mathbf{a}\circ\mathbf{b}=[a_{1}\cdot b_{1},a_{2}\cdot b_{2},...,a_{n}\cdot b_{n}]\). Now, let \(\mathbf{w}=[w_{1},w_{2},...,w_{n}]\) be a vector of weights. Given these weights and an \(n\)-dimensional _US_ we define \(\textit{VS}=\{\mathbf{v}:\mathbf{v}=\mathbf{w}\circ\mathbf{u},\mathbf{u}\in US\}\) (Figure 3). _VS_ is thus the image of _US_, with (in particular): * \(\mathbf{0}\in\textit{VS}\) being the image of \(\mathbf{0}\in\textit{US}\), * \(\mathbf{w}\in\textit{VS}\) being the image of \(\mathbf{1}\in\textit{US}\). By the assumptions of \(\mathbf{w}\) ('max is 1'), if \(\mathbf{u}\in\textit{US}\), then \(\mathbf{v}=\mathbf{w}\circ\mathbf{u}\leq\mathbf{u}\). In result, \(\mathbf{1}\not\in\textit{VS}\) in general (this only happens when \(\mathbf{w}=\mathbf{1}\), since then _VS_ = _US_ and \(\mathbf{1}\in\textit{VS}\) is the image of \(\mathbf{1}\in\textit{US}\)). Clearly, \(\textit{VS}\subseteq\textit{US}\), with \(\textit{VS}=\textit{US}\) only for \(\mathbf{w}=\mathbf{1}\); in all other cases \(\textit{VS}\subset\textit{US}\). Additionally, while _US_ is an \(n\)-dimensional hypercube, _VS_ is represented by a \(n_{p}\)-dimensional hyperrectangle, where \(n_{p}=|\{i:w_{i}>0\}|\). The assumption 'at least one non-zero' of \(\mathbf{w}\) ensures that \(n_{p}\geq 1\), so in general, \(1\leq n_{p}\leq n\). In the most often satisfied case of \(w_{i}>0\) for all \(i\), \(n_{p}=n\). Similarly to _US_, two vertices, namely \(\mathbf{w}\) and \(\mathbf{0}\) (images of \(\mathbf{1}\) and \(\mathbf{0}\) from \(US\)), are of special interest in _VS_, as they constitute the endpoints of the segment that will be referred to as the main diagonal of _VS_ and denoted as \(D_{\mathbf{0}}^{\mathbf{w}}\). 
The diagonal \(D_{\mathbf{0}}^{\mathbf{w}}\) is an image of \(D_{\mathbf{0}}^{\mathbf{1}}\) (the diagonal of _US_), since it contains all vectors \(\{\mathbf{w}\circ\mathbf{d}|\mathbf{d}\in D_{\mathbf{0}}^{\mathbf{1}}\}\), in particular \(\mathbf{w}\) (for \(\mathbf{d}=\mathbf{1}\)) and \(\mathbf{0}\) (for \(\mathbf{d}=\mathbf{0}\)). And similarly to \(D_{\mathbf{0}}^{\mathbf{1}}\), \(D_{\mathbf{0}}^{\mathbf{w}}\) satisfies \(D_{\mathbf{0}}^{\mathbf{w}}\subseteq\textit{VS}\), but is dependent on \(n_{p}\) (rather than \(n\)), as \(D_{\mathbf{0}}^{\mathbf{w}}\subset\textit{VS}\) for \(n_{p}>1\) and \(D_{\mathbf{0}}^{\mathbf{w}}=\textit{VS}\) for \(n_{p}=1\).
## 4 The IA-WMSD Property and WMSD-space
Fully analogously to _US_, the space _VS_ cannot be visualized for more than three criteria. As a result, for \(n>3\), it is difficult to visually compare and analyze aggregations of weight-based TOPSIS. To solve this problem, in this section, we introduce two features of the weighted alternatives: their weight-scaled mean (WM) and weight-scaled standard deviation (WSD), and describe dependencies between them and the distances to the images of the chosen points, namely the ideal point \(I\) (image \(\mathbf{w}\)) and the anti-ideal point \(A\) (image \(\mathbf{0}\)). We also introduce a weighted version of the MSD-space, called the WMSD-space, which is based on WM and WSD instead of M and SD. Being 2-dimensional, the new space may successfully be used to visualize crucial aspects of different aggregations of the weight-based TOPSIS. Finally, in this section we will formalize the relation between the distances to predefined points and WM and WSD into the IA-WMSD property and WMSD-space. Figure 3: Vector orthogonality presented in (A) _US_ and (B) _VS_, for \(n=n_{p}=2\). The weight vector used to transform the presented _US_ into _VS_ is \(\mathbf{w}=[1.0,0.5]\).
### Distance Calculation in the VS space
The maximal Euclidean distance \(\delta_{2}\) in _VS_ is that of \(D_{\mathbf{0}}^{\mathbf{w}}\), which extends between vectors \(\mathbf{0}\) and \(\mathbf{w}\) (Figure 3). This maximal distance equals \(\|\mathbf{w}\|\), which makes it heavily dependent on \(\mathbf{w}\). To make the maximal distance in _VS_ independent of at least some characteristics of \(\mathbf{w}\), we define the re-scaled weighted Euclidean distance, \(\delta_{\mathbf{w}}^{01}\), which is a generalization (and thus a full analog) of the re-scaled Euclidean distance \(\delta_{2}^{01}\) (see Section 2). Given \(s=\frac{\|\mathbf{w}\|}{mean(\mathbf{w})}\), let \(\delta_{\mathbf{w}}^{01}\) be defined as \(\delta_{\mathbf{w}}^{01}(\mathbf{a},\mathbf{b})=\frac{\delta_{2}(\mathbf{a},\mathbf{b})}{s}\). In result, given any \(\mathbf{w}\) and any \(\mathbf{a},\mathbf{b}\in\textit{VS}\): \(\delta_{2}(\mathbf{a},\mathbf{b})\in[0,\|\mathbf{w}\|]\), but \(\delta_{\mathbf{w}}^{01}(\mathbf{a},\mathbf{b})\in[0,\frac{\|\mathbf{w}\|}{s}]=[0,\frac{\|\mathbf{w}\|}{\frac{\|\mathbf{w}\|}{mean(\mathbf{w})}}]=[0,mean(\mathbf{w})]\). Notice that the assumption 'max is 1' ensures \(0<mean(\mathbf{w})\leq 1\), so \([0,mean(\mathbf{w})]\) is a proper interval, additionally satisfying \([0,mean(\mathbf{w})]\subseteq[0,1]\). It is also clear that for \(\mathbf{w}=\mathbf{1}\), in which case \(s=\sqrt{n}\), \(\delta_{\mathbf{w}}^{01}\) becomes \(\delta_{2}^{01}\): \(\delta_{\mathbf{w}}^{01}(\mathbf{a},\mathbf{b})=\delta_{\mathbf{1}}^{01}(\mathbf{a},\mathbf{b})=\frac{\delta_{2}(\mathbf{a},\mathbf{b})}{\sqrt{n}}=\delta_{2}^{01}(\mathbf{a},\mathbf{b})\).
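A direct NumPy transcription of \(\mathit{VS}\) images and \(\delta_{\mathbf{w}}^{01}\) (a sketch of ours, using the vectors \(\mathbf{u}=[0.75,0.50]\) and \(\mathbf{w}=[1.0,0.5]\) that reappear in the next subsection) looks as follows:

```python
import numpy as np

u = np.array([0.75, 0.50])        # image of an alternative in US
w = np.array([1.0, 0.5])          # criteria weights ('max is 1')

v = w * u                         # Hadamard product: image of u in VS -> [0.75, 0.25]
s = np.linalg.norm(w) / w.mean()  # s = ||w|| / mean(w) ~= 1.49

def delta_w01(a, b, s=s):
    """Re-scaled weighted Euclidean distance; its maximum over VS is mean(w)."""
    return np.linalg.norm(a - b) / s

print(delta_w01(np.zeros(2), w))  # 0.75 = mean(w): length of the main diagonal D_0^w
print(delta_w01(v, np.zeros(2)))  # ~0.53: distance of v to the image of the anti-ideal point
```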
### The IA-WMSD Property in VS

Given two (column) vectors \(\mathbf{a}\) and \(\mathbf{b}\neq\mathbf{0}\), let us define (Meyer, 2000):

* vector \(\mathbf{a}_{\parallel\mathbf{b}}=\frac{\mathbf{a}\cdot\mathbf{b}}{\|\mathbf{b}\|^{2}}\mathbf{b}\), the _vector projection_ of \(\mathbf{a}\) onto \(\mathbf{b}\),
* vector \(\mathbf{a}_{\perp\mathbf{b}}=\mathbf{a}-\mathbf{a}_{\parallel\mathbf{b}}\), the _vector rejection_ of \(\mathbf{a}\) from \(\mathbf{b}\).

Notice that \(\|\mathbf{b}\|\neq 0\) is guaranteed by \(\mathbf{b}\neq\mathbf{0}\), so the projection vector always exists, and this means that also the rejection vector always exists. By definition, vectors \(\mathbf{a}_{\parallel\mathbf{b}}\) and \(\mathbf{a}_{\perp\mathbf{b}}\) are orthogonal.

Let us illustrate the notions of projection/rejection with an example in _VS_ (Figure 3 and Figure 4A). Given \(n=2\), which implies a two-dimensional _VS_, consider an exemplary \(\mathbf{u}=[0.75,0.50]\in\textit{US}\) and the vector of weights \(\mathbf{w}=[1.0,0.5]\). Applying the weights from \(\mathbf{w}\) to \(\mathbf{u}\) results in vector \(\mathbf{v}=\mathbf{w}\circ\mathbf{u}=[1.0,0.5]\circ[0.75,0.50]=[0.75,0.25]\in\textit{VS}\). Now, projecting \(\mathbf{v}=[0.75,0.25]\) onto \(\mathbf{w}=[1.0,0.5]\) produces \(\mathbf{v}_{\parallel\mathbf{w}}=\frac{\mathbf{v}\cdot\mathbf{w}}{\|\mathbf{w}\|^{2}}\mathbf{w}=\frac{0.75\cdot 1.0+0.25\cdot 0.5}{1.0\cdot 1.0+0.5\cdot 0.5}\cdot[1.0,0.5]=\frac{0.875}{1.25}\cdot[1.0,0.5]=0.70\cdot[1.0,0.5]=[0.70,0.35]\). Simultaneously, rejecting \(\mathbf{v}=[0.75,0.25]\) from \(\mathbf{w}=[1.0,0.5]\) produces \(\mathbf{v}_{\perp\mathbf{w}}=\mathbf{v}-\mathbf{v}_{\parallel\mathbf{w}}=[0.75,0.25]-[0.70,0.35]=[0.05,-0.10]\). Following the inherent properties of projection and rejection, vectors \(\mathbf{v}_{\parallel\mathbf{w}}\) and \(\mathbf{v}_{\perp\mathbf{w}}\) are orthogonal: \((\mathbf{v}_{\parallel\mathbf{w}})\cdot(\mathbf{v}_{\perp\mathbf{w}})=[0.70,0.35]\cdot[0.05,-0.10]=0.035-0.035=0\).

Now, recall that \(s=\frac{\|\mathbf{w}\|}{mean(\mathbf{w})}\) for any weight vector \(\mathbf{w}\). Given any \(\mathbf{v}=\mathbf{w}\circ\mathbf{u}\in\textit{VS}\) we define:

* \(mean_{\mathbf{w}}^{01}(\mathbf{v})=\frac{\|\mathbf{v}_{\parallel\mathbf{w}}\|}{s}\), which will be referred to as the _weight-scaled mean_ (WM) of \(\mathbf{v}\),
* \(std_{\mathbf{w}}^{01}(\mathbf{v})=\frac{\|\mathbf{v}_{\perp\mathbf{w}}\|}{s}\), which will be referred to as the _weight-scaled standard deviation_ (WSD) of \(\mathbf{v}\).

Notice that both \(mean_{\mathbf{w}}^{01}(\mathbf{v})\) and \(std_{\mathbf{w}}^{01}(\mathbf{v})\) always exist, which is guaranteed by the existence of \(\mathbf{v}_{\parallel\mathbf{w}}\) and \(\mathbf{v}_{\perp\mathbf{w}}\) and by the fact that \(s\neq 0\). Moreover, \(mean_{\mathbf{w}}^{01}(\mathbf{0})=0\) and \(mean_{\mathbf{w}}^{01}(\mathbf{w})=mean(\mathbf{w})\), whereas \(std_{\mathbf{w}}^{01}(\mathbf{0})=0\) and \(std_{\mathbf{w}}^{01}(\mathbf{w})=0\).
Continuing our _VS_-based example, we get \(s=\frac{\|\mathbf{w}\|}{mean(\mathbf{w})}=\frac{\|[1.0,0.5]\|}{mean([1.0,0.5])}=\frac{1.12}{0.75}=1.49\), therefore:

* \(mean_{\mathbf{w}}^{01}(\mathbf{v})=\frac{\|\mathbf{v}_{\parallel\mathbf{w}}\|}{s}=\frac{\|[0.70,0.35]\|}{1.49}=\frac{0.783}{1.49}=0.525\),
* \(std_{\mathbf{w}}^{01}(\mathbf{v})=\frac{\|\mathbf{v}_{\perp\mathbf{w}}\|}{s}=\frac{\|[0.05,-0.10]\|}{1.49}=\frac{0.112}{1.49}=0.075\).

Now, given \(\mathbf{v}\in\textit{VS}\), let us observe how the diagonal \(D_{\mathbf{0}}^{\mathbf{w}}\) relates \(mean_{\mathbf{w}}^{01}(\mathbf{v})\) and \(std_{\mathbf{w}}^{01}(\mathbf{v})\): \(mean_{\mathbf{w}}^{01}(\mathbf{v})\) specifies how far away \(\mathbf{v}\) is from \(\mathbf{0}\) when measured _along_ \(D_{\mathbf{0}}^{\mathbf{w}}\), while \(std_{\mathbf{w}}^{01}(\mathbf{v})\) specifies how far away \(\mathbf{v}\) is from \(D_{\mathbf{0}}^{\mathbf{w}}\) when measured along a direction that is _perpendicular_ to it. More formally, let \(\overline{\mathbf{v}}=\mathbf{v}_{\parallel\mathbf{w}}\). In this case:

* \(\delta_{\mathbf{w}}^{01}(\overline{\mathbf{v}},\mathbf{0})=mean_{\mathbf{w}}^{01}(\mathbf{v})\),
* \(\delta_{\mathbf{w}}^{01}(\overline{\mathbf{v}},\mathbf{w})=mean(\mathbf{w})-mean_{\mathbf{w}}^{01}(\mathbf{v})\),
* \(\delta_{\mathbf{w}}^{01}(\overline{\mathbf{v}},\mathbf{v})=std_{\mathbf{w}}^{01}(\mathbf{v})\).

In our example \(\overline{\mathbf{v}}=\mathbf{v}_{\parallel\mathbf{w}}=[0.70,0.35]\), while \(\mathbf{v}_{\perp\mathbf{w}}=[0.05,-0.10]\), therefore:

* \(\delta_{\mathbf{w}}^{01}(\overline{\mathbf{v}},\mathbf{0})=\frac{\|[0.70,0.35]-[0.00,0.00]\|}{s}=\frac{\|[0.70,0.35]\|}{s}=\frac{0.783}{1.49}=0.525\) (clearly, \(\delta_{\mathbf{w}}^{01}(\overline{\mathbf{v}},\mathbf{0})=mean_{\mathbf{w}}^{01}(\mathbf{v})=0.525\)),
* \(\delta_{\mathbf{w}}^{01}(\overline{\mathbf{v}},\mathbf{w})=\frac{\|[0.70,0.35]-[1.00,0.50]\|}{s}=\frac{\|[-0.30,-0.15]\|}{s}=\frac{0.335}{1.49}=0.225\) (clearly, \(\delta_{\mathbf{w}}^{01}(\overline{\mathbf{v}},\mathbf{w})=mean(\mathbf{w})-mean_{\mathbf{w}}^{01}(\mathbf{v})=0.750-0.525=0.225\)),
* \(\delta_{\mathbf{w}}^{01}(\overline{\mathbf{v}},\mathbf{v})=\frac{\|\overline{\mathbf{v}}-\mathbf{v}\|}{s}=\frac{\|[0.70,0.35]-[0.75,0.25]\|}{s}=\frac{\|[-0.05,0.10]\|}{s}=\frac{0.112}{1.49}=0.075\) (clearly, \(\delta_{\mathbf{w}}^{01}(\overline{\mathbf{v}},\mathbf{v})=std_{\mathbf{w}}^{01}(\mathbf{v})=0.075\)).
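The same worked example can be reproduced with a few lines of NumPy; this is our own sketch, not the authors' software, and simply encodes the projection/rejection definitions above.

```python
import numpy as np

w = np.array([1.0, 0.5])            # weight vector from the running example
u = np.array([0.75, 0.50])          # alternative in US
v = w * u                           # its image in VS: [0.75, 0.25]

s = np.linalg.norm(w) / w.mean()    # ~1.49
v_par = (v @ w) / (w @ w) * w       # projection of v onto w: [0.70, 0.35]
v_perp = v - v_par                  # rejection of v from w:  [0.05, -0.10]

WM = np.linalg.norm(v_par) / s      # weight-scaled mean,      ~0.525
WSD = np.linalg.norm(v_perp) / s    # weight-scaled std. dev., ~0.075
print(round(WM, 3), round(WSD, 3))
```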
What is important, because \(\mathbf{v}=\mathbf{w}\circ\mathbf{u}\) and for \(\mathbf{w}=\mathbf{1}\) we get \(s=\sqrt{n}\), it may be shown that:

* \(mean_{\mathbf{1}}^{01}(\mathbf{v})=\frac{\|\mathbf{v}_{\parallel\mathbf{1}}\|}{\sqrt{n}}=\frac{\|(\mathbf{1}\circ\mathbf{u})_{\parallel\mathbf{1}}\|}{\sqrt{n}}=\frac{\|\mathbf{u}_{\parallel\mathbf{1}}\|}{\sqrt{n}}=\frac{\|mean(\mathbf{u})\cdot\mathbf{1}\|}{\sqrt{n}}=\frac{mean(\mathbf{u})\cdot\sqrt{n}}{\sqrt{n}}=mean(\mathbf{u})\),
* \(std_{\mathbf{1}}^{01}(\mathbf{v})=\frac{\|\mathbf{v}_{\perp\mathbf{1}}\|}{\sqrt{n}}=\frac{\|\mathbf{u}-mean(\mathbf{u})\cdot\mathbf{1}\|}{\sqrt{n}}=\sqrt{\frac{(\mathbf{u}-mean(\mathbf{u})\cdot\mathbf{1})^{T}(\mathbf{u}-mean(\mathbf{u})\cdot\mathbf{1})}{n}}=\sqrt{var(\mathbf{u})}=std(\mathbf{u})\),

which means that \(mean_{\mathbf{w}}^{01}(\mathbf{v})\) and \(std_{\mathbf{w}}^{01}(\mathbf{v})\) constitute natural generalizations of \(mean(\mathbf{u})\) and \(std(\mathbf{u})\). As can be noticed in Figure 4A, WM satisfies \(mean_{\mathbf{w}}^{01}(\mathbf{v})=\delta_{\mathbf{w}}^{01}(\mathbf{v}_{\parallel\mathbf{w}},\mathbf{0})\). Simultaneously, WSD satisfies \(std_{\mathbf{w}}^{01}(\mathbf{v})=\delta_{\mathbf{w}}^{01}(\mathbf{v}_{\perp\mathbf{w}},\mathbf{0})=\delta_{\mathbf{w}}^{01}(\mathbf{v},\mathbf{v}_{\parallel\mathbf{w}})\). All of the abovementioned considerations allow us to formulate the IA-WMSD property.

**Definition 3** (IA-WMSD Property).: For every \(\mathbf{w}\) defining \(\mathbf{v}\in\textit{VS}\):

\[\delta_{\mathbf{w}}^{01}(\mathbf{v},\mathbf{0}) =\sqrt{mean_{\mathbf{w}}^{01}(\mathbf{v})^{2}+std_{\mathbf{w}}^{01}(\mathbf{v})^{2}},\]
\[\delta_{\mathbf{w}}^{01}(\mathbf{v},\mathbf{w}) =\sqrt{\left(mean(\mathbf{w})-mean_{\mathbf{w}}^{01}(\mathbf{v})\right)^{2}+std_{\mathbf{w}}^{01}(\mathbf{v})^{2}}.\]

Notice that \(mean(\mathbf{w})\) in the above may also be expressed as \(mean(\mathbf{w})=\frac{\|\mathbf{w}\|}{\frac{\|\mathbf{w}\|}{mean(\mathbf{w})}}=\frac{\|\mathbf{w}\|}{s}\), which emphasizes the divisor \(s\), common to the core definitions of this paper. It should also be additionally stressed that for \(\mathbf{w}=\mathbf{1}\) the IA-WMSD property becomes the IA-MSD property.
Finalizing the _VS_-based example, with \(\mathbf{w}=[1.0,0.5]\), \(\mathbf{v}=[0.75,0.25]\) and \(s=1.49\), we get:

* \(\delta_{\mathbf{w}}^{01}(\mathbf{v},\mathbf{0})=\frac{\|\mathbf{v}-\mathbf{0}\|}{s}=\frac{\|[0.75,0.25]-[0.00,0.00]\|}{1.49}=\frac{\|[0.75,0.25]\|}{1.49}=\frac{0.79}{1.49}=0.53\),
* \(\delta_{\mathbf{w}}^{01}(\mathbf{v},\mathbf{w})=\frac{\|[0.75,0.25]-[1.00,0.50]\|}{s}=\frac{\|[-0.25,-0.25]\|}{1.49}=\frac{0.35}{1.49}=0.24\),

which allows us to verify the IA-WMSD property:

* \(\sqrt{mean_{\mathbf{w}}^{01}(\mathbf{v})^{2}+std_{\mathbf{w}}^{01}(\mathbf{v})^{2}}=\sqrt{0.525^{2}+0.075^{2}}=\sqrt{0.281}=0.53=\delta_{\mathbf{w}}^{01}(\mathbf{v},\mathbf{0})\),
* \(\sqrt{\left(mean(\mathbf{w})-mean_{\mathbf{w}}^{01}(\mathbf{v})\right)^{2}+std_{\mathbf{w}}^{01}(\mathbf{v})^{2}}=\sqrt{(0.750-0.525)^{2}+0.075^{2}}=\sqrt{0.225^{2}+0.075^{2}}=\sqrt{0.056}=0.24=\delta_{\mathbf{w}}^{01}(\mathbf{v},\mathbf{w})\).

Although the IA-WMSD property is independent of the number of criteria \(n\), it can still be visualized in _VS_ for \(n=2\) (Figure 4A). In the next section, we will discuss how the IA-WMSD property can be used to create \(n\)-independent visualizations of weight-based TOPSIS aggregations.

### The WMSD-space

Analogously to the case of unweighted criteria and the resulting MSD-space (Susmaga et al., 2023), the relation between the re-scaled weighted distances of an alternative to the predefined reference points allows us to propose a new space called _WMSD-space_ that uses WM and WSD as its components.

**Definition 4** (WMSD-space).:

\[\text{WMSD-space}=\{[mean_{\mathbf{w}}^{01}(\mathbf{v}),std_{\mathbf{w}}^{01}(\mathbf{v})]\,|\,\mathbf{v}\in\textit{VS}\}\]

The WMSD-space can be represented in 2D, wherein the weight-scaled mean (WM) of the alternatives is presented on the x-axis and the weight-scaled standard deviation (WSD) of the alternatives on the y-axis. As depicted in Figure 4, WMSD-space may be treated as an image of _VS_ under a two-dimensional transformation by functions \(mean_{\mathbf{w}}^{01}(\mathbf{v})\) and \(std_{\mathbf{w}}^{01}(\mathbf{v})\) and thus expressed as _WMSD\((VS)\)_. It is worth underlining that the IA-WMSD property holds in WMSD-space, where it follows the Pythagorean theorem for two right triangles (pink and green triangles in Figure 4B). This is a result of WMSD-space being a 'rotational' projection of _VS_ into two dimensions that retains the IA-WMSD property. Since WMSD-space is actually based on the weighted utility space, which is bounded, the extreme values of WM and WSD are also bounded. In other words, for a given set of criteria weights, there is only a limited range of attainable WM and WSD values. As a result, one can depict the boundary of WMSD-space, which depends on \(\mathbf{w}\), and thus also on the number of criteria \(n_{p}\). Figure 5 presents the shape of WMSD-space for different numbers of criteria \(n\) and weight vectors \(\mathbf{w}\). Owing to the symmetry of the 'rotational' projections, the boundary of the WMSD-space does not depend on the order of the elements of \(\mathbf{w}\), which means that the shape of the WMSD-space remains the same for every permutation of these elements. Looking at Figure 5, it can be noticed that setting \(\mathbf{w}=\mathbf{1}\) makes WMSD-space equivalent to MSD-space. Indeed, the scaling \(s=\frac{\|\mathbf{w}\|}{mean(\mathbf{w})}\) applied to WM, WSD, and the distance measure \(\delta_{\mathbf{w}}^{01}\) has been chosen to replicate the relation between _VS_ and _US_ in the relation between WMSD-space and MSD-space.
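A quick numerical check of Definition 3 for the running example can be done as follows; this sketch is ours and only re-derives the quantities computed by hand above.

```python
import numpy as np

w, v = np.array([1.0, 0.5]), np.array([0.75, 0.25])
s = np.linalg.norm(w) / w.mean()

v_par = (v @ w) / (w @ w) * w                       # projection of v onto w
WM, WSD = np.linalg.norm(v_par) / s, np.linalg.norm(v - v_par) / s

d_anti = np.linalg.norm(v) / s                      # delta_w^01(v, 0)
d_ideal = np.linalg.norm(v - w) / s                 # delta_w^01(v, w)

print(np.isclose(d_anti, np.hypot(WM, WSD)))                 # True
print(np.isclose(d_ideal, np.hypot(w.mean() - WM, WSD)))     # True
```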
Let us recall that MSD-space is based on a re-scaled distance measure \(\delta_{2}^{01}\), which simply divides the Euclidean distance \(\delta_{2}\) by \(\sqrt{n}\). By doing so, \(\delta_{2}^{01}\) is independent of the number of criteria \(n\), making the maximum distance in MSD-space always \(1\). Notice that \(\sqrt{n}\) is a special case of \(\|\mathbf{w}\|\) for \(\mathbf{w}=\mathbf{1}\), which emphasizes the fact that \(\delta_{\mathbf{w}}^{01}\) is a natural generalization of \(\delta_{2}^{01}\) for the case of \(\mathbf{w}\neq\mathbf{1}\). Although setting \(s=\|\mathbf{w}\|\) would suffice to ensure the IA-WMSD property, additionally scaling by \(mean(\mathbf{w})\) makes the sizes of WMSD-space and MSD-space follow the relation \(mean(\mathbf{w})<mean(\mathbf{1})=1\). As a result, in WMSD-space the maximal value of WM is \(mean(\mathbf{w})\) instead of \(1\). It is also worth noticing that the number of non-zero criteria and the particular values of their weights (i.e. the size and the values of \(\mathbf{w}\)) affect the number of vertices of the WMSD-space boundary (Figure 5). Finally, as was the case for MSD-space, WMSD-space can always be depicted in two dimensions because, as opposed to _VS_, the WMSD-space is by definition two-dimensional (or one-dimensional when \(n_{p}=1\)). We will use this property to visualize alternatives and values of TOPSIS aggregation functions in WMSD-space.

Figure 4: An illustration of the IA-WMSD property in (A) _VS_ and (B) WMSD-space, for \(\mathbf{w}=[1.0,0.5]\) and an exemplary point \(\mathbf{v}=[0.75,0.25]\). The illustration shows how the re-scaled lengths \(\delta_{\mathbf{w}}^{01}\) of vectors \(\overline{\mathbf{v}}\) and \(\mathbf{v}-\overline{\mathbf{v}}\) are equal to the weight-scaled mean (WM) and standard deviation (WSD) which define WMSD-space.

Figure 5: Visualizations of WMSD-space for the number of criteria (A) \(n=3\), (B) \(n=4\), (C) \(n=5\), each for three different sets of weights depicted by different line types. Notice that the dotted light gray line on each subplot corresponds to uniform weights and, therefore, the special case of WMSD-space, which is MSD-space. It is also worth noting how the arithmetic mean of the weights (\(mean(\mathbf{w})\)) corresponds to the maximal x-axis coordinate of WMSD-space.

### TOPSIS Aggregations in WMSD-space

The application of the weights, while changing _US_ into _VS_, necessarily influences the image of the ideal point, as the image moves from \(\mathbf{1}\) in _US_ to \(\mathbf{w}\) in _VS_. The same does not concern the image of the anti-ideal point, as it remains the same, being equal to \(\mathbf{0}\) in _US_ and to \(\mathbf{w}\circ\mathbf{0}=\mathbf{0}\) in _VS_. As a result, TOPSIS utilizes \(\mathbf{w}\) instead of \(\mathbf{1}\) when computing the 'distance to the ideal'. This means that new versions of the aggregation functions, denoted by \(\mathsf{l}_{\mathbf{w}}\), \(\mathsf{A}_{\mathbf{w}}\) and \(\mathsf{R}_{\mathbf{w}}\), must be introduced. When expressed in terms of \(\delta_{\mathbf{w}}^{01}(\mathbf{v},\mathbf{w})\) ('distance to the ideal') and \(\delta_{\mathbf{w}}^{01}(\mathbf{v},\mathbf{0})\) ('distance to the anti-ideal'), where \(\mathbf{u}\in\textit{US}\) and \(\mathbf{v}=\mathbf{w}\circ\mathbf{u}\in\textit{VS}\), they
are defined as follows: \[\mathsf{l}_{\mathbf{w}}(\mathbf{v}) =1-\frac{\delta^{01}_{\mathbf{w}}(\mathbf{v},\mathbf{w})}{mean( \mathbf{w})},\] \[\mathsf{A}_{\mathbf{w}}(\mathbf{v}) =\frac{\delta^{01}_{\mathbf{w}}(\mathbf{v},\mathbf{0})}{mean( \mathbf{w})},\] \[\mathsf{R}_{\mathbf{w}}(\mathbf{v}) =\frac{\delta^{01}_{\mathbf{w}}(\mathbf{v},\mathbf{0})}{\delta^{ 01}_{\mathbf{w}}(\mathbf{v},\mathbf{w})+\delta^{01}_{\mathbf{w}}(\mathbf{v}, \mathbf{0})}.\] As was the case with \(\mathsf{l}(\mathbf{u})\) (see Section 2), aggregation \(\mathsf{l}_{\mathbf{w}}(\mathbf{v})\) features a reversal \((1-\frac{\delta^{01}_{\mathbf{w}}(\mathbf{v},\mathbf{w})}{mean(\mathbf{w})}\) instead of \(\frac{\delta^{01}_{\mathbf{w}}(\mathbf{v},\mathbf{w})}{mean(\mathbf{w})}\)). This modification was introduced to ensure that all aggregations are interpreted as functions that need to be maximized. The distances from the ideal and anti-ideal points in \(\mathsf{l}_{\mathbf{w}}(\mathbf{v})\) and \(\mathsf{A}_{\mathbf{w}}(\mathbf{v})\) have been divided by \(mean(\mathbf{w})\) to make the values of these aggregations fall between 0 and 1, just as is the case for \(\mathsf{l}(\mathbf{u}),\mathsf{A}(\mathbf{u}),\mathsf{R}(\mathbf{u}),\mathsf{ R}_{\mathbf{w}}(\mathbf{v})\). Now, when visualizing WMSD-space it is useful to show alternatives against the values of TOPSIS aggregations (\(\mathsf{l}_{\mathbf{w}}(\mathbf{v})\), \(\mathsf{A}_{\mathbf{w}}(\mathbf{v})\) and \(\mathsf{R}_{\mathbf{w}}(\mathbf{v})\)), which can be color-coded as was done for MSD-space (Susmaga et al., 2023). As seen in Figure 6, coloring WMSD-space in a way that represents the values of the aggregation functions reveals the interplay of \(mean^{01}_{\mathbf{w}}(\mathbf{v})\) and \(std^{01}_{\mathbf{w}}(\mathbf{v})\). The WM-WSD interplay for different aggregation functions in WMSD-space resembles that described for MSD-space. This is to be expected as MSD-space constitutes a special case of WMSD-space where \(\mathbf{w}=\mathbf{1}\). Table 1, shows which aggregation functions act like type 'cost' or type 'gain' criteria depending on \(mean^{01}_{\mathbf{w}}(\mathbf{v})\) and \(std^{01}_{\mathbf{w}}(\mathbf{v})\). Notice that the mean weight \(mean(\mathbf{w})\) plays a role in the properties of aggregation \(\mathsf{R}_{\mathbf{w}}(\mathbf{v})\). In the following section, we will apply such color-coded visualizations of WMSD-space to practical ranking problems. \begin{table} \begin{tabular}{c c c} \hline \hline aggregation & \(mean^{01}_{\mathbf{w}}(\mathbf{v})\) & \(std^{01}_{\mathbf{w}}(\mathbf{v})\) \\ \hline \(\mathsf{l}_{\mathbf{w}}(\mathbf{u})\) & gain & cost \\ \(\mathsf{A}_{\mathbf{w}}(\mathbf{u})\) & gain & gain \\ & & \(mean^{01}_{\mathbf{w}}(\mathbf{v})<\frac{mean(\mathbf{w})}{2}\): gain \\ \(\mathsf{R}_{\mathbf{w}}(\mathbf{u})\) & gain & \(mean^{01}_{\mathbf{w}}(\mathbf{v})=\frac{mean(\mathbf{w})}{2}\): neutrality \\ & & \(mean^{01}_{\mathbf{w}}(\mathbf{v})>\frac{mean(\mathbf{w})}{2}\): cost \\ \hline \hline \end{tabular} \end{table} Table 1: The relation between \(mean^{01}_{\mathbf{w}}(\mathbf{v})\) and \(std^{01}_{\mathbf{w}}(\mathbf{v})\) for the analyzed aggregation functions. Figure 6: An exemplary point \(\mathbf{v}=[0.75,0.25]\) depicted in WMSD-space defined by \(\mathbf{w}=[1.0,0.5]\) for aggregations (A) \(\mathsf{l}_{\mathbf{w}}(\mathbf{v})\), (B) \(\mathsf{A}_{\mathbf{w}}(\mathbf{v})\) and (C) \(\mathsf{R}_{\mathbf{w}}(\mathbf{v})\). 
Color encodes the aggregation value, with blue representing the least preferred and red the most preferred values.

## 5 Case Studies

In this section, we present two case studies conducted on a dataset of students described in terms of school grades and on a dataset of countries described in terms of factors constituting the Index of Economic Freedom. The goal of the case studies is to visualize the alternatives within the MSD-space and WMSD-space, present the impact of introducing weights, and discuss how the two spaces depict the relations between each alternative's properties and their aggregation values. In the next subsections, the following notation shall be used to present alternative rankings: \(\mathbf{X}_{i}\succ_{agg}\mathbf{X}_{j}\): alternative \(\mathbf{X}_{i}\) is preferred over \(\mathbf{X}_{j}\) under every aggregation from \(agg\); \(\mathbf{X}_{i}\sim_{agg}\mathbf{X}_{j}\): \(\mathbf{X}_{i}\) and \(\mathbf{X}_{j}\) are indifferent under \(agg\); \(\mathbf{X}_{i}\prec_{agg}\mathbf{X}_{j}\): \(\mathbf{X}_{j}\) is preferred over \(\mathbf{X}_{i}\) under \(agg\). Moreover, for an alternative \(\mathbf{X}_{i}\) we will use \(mean(\mathbf{X}_{i})\) as a shorthand for \(mean(\mathit{US}(\mathbf{X}_{i}))\) and \(mean_{\mathbf{w}}^{01}(\mathbf{X}_{i})\) as a shorthand for \(mean_{\mathbf{w}}^{01}(\mathit{VS}(\mathbf{X}_{i}))\). Analogously, \(std(\mathbf{X}_{i})\) and \(std_{\mathbf{w}}^{01}(\mathbf{X}_{i})\) will denote the re-scaled standard deviations calculated for the images of the alternative in \(\mathit{US}\) and \(\mathit{VS}\), respectively.

### Student Grades

The first dataset contains \(15\) alternatives, i.e., students described by three criteria which are the average grades obtained by these students in Maths, Biology, and Art. The domains of the criteria are \([0,100]\) for Maths, \([1,6]\) for Biology, and \([1,6]\) for Art. The alternatives are presented in Table 2. The rankings of the alternatives are considered in two scenarios: when all criteria are of equal importance (i.e., \(\mathbf{w}=[1.0,1.0,1.0]\)) and when \(\mathbf{w}=[0.5,0.6,1.0]\). The description of the alternatives in terms of \(\mathit{US}\) (equal weights), \(\mathit{VS}\) (unequal weights), MSD-space, WMSD-space and the three aggregations is given in Table 3. The alternatives have been chosen to cover different areas of the MSD-space and WMSD-space (see Figure 7) and to represent some of their characteristic points, e.g. the worst possible alternative (\(\mathbf{S}_{10}\)) or the best possible alternative (\(\mathbf{S}_{12}\)). The visualizations in Figure 7 confront the shapes of MSD-space and WMSD-space. The latter can be regarded as a natural generalization of the MSD-space that incorporates different weights assigned to the criteria. For \(\mathbf{w}\neq\mathbf{1}\) the WMSD-space is characterized by a potentially larger number of vertices and a smaller range than the MSD-space.
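Color-coded views such as those in Figures 6 and 7 follow directly from WM and WSD thanks to the IA-WMSD property. The rough matplotlib sketch below is ours (not the authors' software): it colors a simple rectangular grid in the (WM, WSD) plane by the value of \(\mathsf{R}_{\mathbf{w}}\); the true WMSD-space is a polygon inside this rectangle whose boundary is not computed here.

```python
import numpy as np
import matplotlib.pyplot as plt

w = np.array([0.5, 0.6, 1.0])       # student-case weights
mw = w.mean()

# rectangular grid in the (WM, WSD) plane; attainable points lie in a polygon inside it
WM, WSD = np.meshgrid(np.linspace(0, mw, 400), np.linspace(0, 0.5 * mw, 200))
d0 = np.hypot(WM, WSD)              # re-scaled distance to the anti-ideal image (IA-WMSD property)
dw = np.hypot(mw - WM, WSD)         # re-scaled distance to the ideal image
R = d0 / np.maximum(d0 + dw, 1e-12) # the R_w aggregation expressed through WM and WSD

plt.contourf(WM, WSD, R, levels=20, cmap="RdBu_r")
plt.colorbar(label=r"$\mathsf{R}_{\mathbf{w}}$")
plt.xlabel("WM"); plt.ylabel("WSD")
plt.show()
```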
\begin{table} \begin{tabular}{l c c c} \hline \hline & Math & Bio & Art \\ \hline \(\mathbf{S}_{1}\) & 29.11 & 2.46 & 2.46 \\ \(\mathbf{S}_{2}\) & 49.37 & 3.53 & 3.47 \\ \(\mathbf{S}_{3}\) & 70.89 & 4.54 & 4.54 \\ \(\mathbf{S}_{4}\) & 40.51 & 3.53 & 1.89 \\ \(\mathbf{S}_{5}\) & 35.44 & 4.80 & 3.22 \\ \(\mathbf{S}_{6}\) & 59.49 & 3.47 & 5.11 \\ \(\mathbf{S}_{7}\) & 44.30 & 4.80 & 1.38 \\ \(\mathbf{S}_{8}\) & 93.67 & 5.05 & 2.39 \\ \(\mathbf{S}_{9}\) & 55.70 & 2.20 & 5.62 \\ \(\mathbf{S}_{10}\) & 0.00 & 1.00 & 1.00 \\ \(\mathbf{S}_{11}\) & 0.00 & 5.56 & 1.13 \\ \(\mathbf{S}_{12}\) & 100.00 & 6.00 & 6.00 \\ \(\mathbf{S}_{13}\) & 100.00 & 1.44 & 5.87 \\ \(\mathbf{S}_{14}\) & 70.71 & 4.84 & 3.22 \\ \(\mathbf{S}_{15}\) & 89.90 & 3.32 & 4.79 \\ \hline \hline \end{tabular} \end{table} Table 2: Descriptions of alternatives for the first case study; students described by their average grades from Maths \([0-100]\), Biology \([1-6]\), and Art \([1-6]\).

Naturally, preference information given by the decision maker in the form of weights influences not only the shape of the space but also the position of the alternatives within those spaces. For example, the positions of \(\mathbf{S}_{8}\) and \(\mathbf{S}_{9}\) change quite drastically, as the two alternatives almost swap their positions: they are both characterized by a very similar \(std(\mathbf{u})\) and \(std_{\mathbf{w}}^{01}(\mathbf{v})\) but \(mean(\mathbf{S}_{8})=0.68>mean(\mathbf{S}_{9})=0.57\) whereas \(mean_{\mathbf{w}}^{01}(\mathbf{S}_{8})=0.35<mean_{\mathbf{w}}^{01}(\mathbf{S}_{9})=0.50\). As expected, the images of the best possible alternative (\(\mathbf{1}\)) and the worst possible one (\(\mathbf{0}\)) have fixed relative positions no matter what the values of weights are, in the sense that they are always situated, respectively, in the rightmost and leftmost vertices of MSD-space and WMSD-space (see \(\mathbf{S}_{12}\) and \(\mathbf{S}_{10}\) in Figure 7). The change of alternative position imposed by the incorporation of weights can also be observed for alternatives that are characterized by different vectors in _US_, but a single point in MSD-space. To illustrate this case, let us consider \(\mathbf{S}_{6}\) and \(\mathbf{S}_{14}\). As shown in Table 3 and Figure 7, vectors \([0.59,0.49,0.82]\) (the image of \(\mathbf{S}_{6}\)) and \([0.71,0.77,0.44]\) (the image of \(\mathbf{S}_{14}\)) are characterized by \(mean(\mathbf{S}_{6})=mean(\mathbf{S}_{14})=0.64\) and \(std(\mathbf{S}_{6})=std(\mathbf{S}_{14})=0.14\). As a result, those vectors share the same point in MSD-space and are thus identically evaluated by the aggregations \(\mathsf{l}(\mathbf{u})\), \(\mathsf{A}(\mathbf{u})\) and \(\mathsf{R}(\mathbf{u})\). However, the two alternatives do not share the same point in WMSD-space, as their positions in WMSD-space are influenced by the weights. The weights affect the vectors in _US_, which are different for \(\mathbf{S}_{6}\) and \(\mathbf{S}_{14}\). As a result, \(mean_{\mathbf{w}}^{01}(\mathbf{S}_{6})=0.50\neq mean_{\mathbf{w}}^{01}(\mathbf{S}_{14})=0.39\), which implies that the two alternatives are not identically evaluated by the considered aggregations (even though \(std_{\mathbf{w}}^{01}(\mathbf{S}_{6})=std_{\mathbf{w}}^{01}(\mathbf{S}_{14})=0.10\)). In particular, \(\mathbf{S}_{6}\succ_{\mathsf{l}_{\mathbf{w}}(\mathbf{v}),\mathsf{A}_{\mathbf{w}}(\mathbf{v}),\mathsf{R}_{\mathbf{w}}(\mathbf{v})}\mathbf{S}_{14}\) (see Table 3).
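The _US_ images quoted above can be obtained from the raw grades of Table 2 by a simple min-max rescaling onto \([0,1]\) over the stated criterion domains; the exact normalization used by the method is defined in the earlier sections of this paper, and the sketch below (ours) only shows that this assumed rescaling reproduces the US columns of Table 3.

```python
import numpy as np

# raw grades of selected students from Table 2 and the stated criterion domains
domains = np.array([[0.0, 100.0], [1.0, 6.0], [1.0, 6.0]])   # Maths, Biology, Art
grades = {"S1": [29.11, 2.46, 2.46],
          "S6": [59.49, 3.47, 5.11],
          "S14": [70.71, 4.84, 3.22]}

def to_us(x, dom):
    # min-max rescaling of gain-type criteria onto [0, 1]
    lo, hi = dom[:, 0], dom[:, 1]
    return (np.asarray(x, float) - lo) / (hi - lo)

for name, x in grades.items():
    print(name, np.round(to_us(x, domains), 2))
# S1 -> [0.29 0.29 0.29], S6 -> [0.59 0.49 0.82], S14 -> [0.71 0.77 0.44], cf. Table 3
```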
Similarly, alternatives that are characterized by the same point in WMSD-space do not share, in general, the same point in MSD-space. In particular, \(\mathbf{S}_{6}\) and \(\mathbf{S}_{15}\) are characterized by different vectors in _US_, namely \([0.59,0.49,0.82]\) for \(\mathbf{S}_{6}\) and \([0.90,0.46,0.76]\) for \(\mathbf{S}_{15}\), but are characterized by \(mean_{\mathbf{w}}^{01}(\mathbf{S}_{6})=mean_{\mathbf{w}}^{01}(\mathbf{S}_{15})=0.50\) and \(std_{\mathbf{w}}^{01}(\mathbf{S}_{6})=std_{\mathbf{w}}^{01}(\mathbf{S}_{15})=0.10\), which puts them in the very same point of WMSD-space. This, however, does not imply sharing the same point in MSD-space, since this position is influenced by the vector of weights (\(\mathbf{w}=[1.0,1.0,1.0]\) in MSD-space vs \(\mathbf{w}=[0.5,0.6,1.0]\) in WMSD-space). Consequently, \(\mathbf{S}_{6}\) and \(\mathbf{S}_{15}\) are evaluated identically by \(\mathsf{l}_{\mathbf{w}}(\mathbf{v})\), \(\mathsf{A}_{\mathbf{w}}(\mathbf{v})\) and \(\mathsf{R}_{\mathbf{w}}(\mathbf{v})\), but differently by \(\mathsf{l}(\mathbf{u})\), \(\mathsf{A}(\mathbf{u})\) and \(\mathsf{R}(\mathbf{u})\).

Although the MSD-space and WMSD-space share the same character of the isolines under a particular aggregation, the change of the alternatives' position across those spaces caused by the weights influences the final ratings of the alternatives. Let us look again at \(\mathbf{S}_{8}\) and \(\mathbf{S}_{9}\). It is clear that because of the weights, their rankings reversed: \(\mathbf{S}_{8}\succ_{\mathsf{l}(\mathbf{u}),\mathsf{A}(\mathbf{u}),\mathsf{R}(\mathbf{u})}\mathbf{S}_{9}\) whereas \(\mathbf{S}_{9}\succ_{\mathsf{l}_{\mathbf{w}}(\mathbf{v}),\mathsf{A}_{\mathbf{w}}(\mathbf{v}),\mathsf{R}_{\mathbf{w}}(\mathbf{v})}\mathbf{S}_{8}\). Alternative \(\mathbf{S}_{8}\) is much better than \(\mathbf{S}_{9}\) on the first two criteria, but their importance was diminished by the weights being 0.5 and 0.6, causing \(\mathbf{S}_{9}\) to climb higher in those rankings that take weights into account. It should be stressed that if the decision maker does not treat all the criteria as equally important, and thus defines some of the weights to be different from one, the MSD-space and the WMSD-space will differ.
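As a concrete check of this weight-induced reversal, the following sketch (ours, using the rounded _US_ images of \(\mathbf{S}_{8}\) and \(\mathbf{S}_{9}\) from Table 3) evaluates the three weighted aggregations defined above and confirms that \(\mathbf{S}_{9}\) overtakes \(\mathbf{S}_{8}\) once the weights are applied.

```python
import numpy as np

w = np.array([0.5, 0.6, 1.0])
s = np.linalg.norm(w) / w.mean()

# rounded US images from Table 3
alternatives = {"S8": [0.94, 0.81, 0.28], "S9": [0.56, 0.24, 0.92]}

def weighted_aggregations(u, w):
    v = w * np.asarray(u, float)
    d0 = np.linalg.norm(v) / s          # re-scaled distance to the anti-ideal image 0
    dw = np.linalg.norm(v - w) / s      # re-scaled distance to the ideal image w
    return 1 - dw / w.mean(), d0 / w.mean(), d0 / (d0 + dw)   # l_w, A_w, R_w

for name, u in alternatives.items():
    print(name, np.round(weighted_aggregations(u, w), 2))
# S9 obtains the higher value of every weighted aggregation, i.e. the S8/S9 order reverses
```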
Therefore, even though it is the WMSD-space that is used in all the subsequent computations and thus alone influences the final results of the method, it is created using both the descriptions of the alternatives as well as the weights. As a result, WMSD-space is a preference-influenced, or biased, version of MSD-space. This means that while it is useful to consider WMSD-space to explain the final ranking of the method, it may also be useful to consider MSD-space, which always shows the data without the preferential bias introduced by the weights.

\begin{table} \begin{tabular}{l c c c c c c c c c c c c c c c c} \hline \hline & \multicolumn{3}{c}{_US_} & \multicolumn{3}{c}{_VS_} & \multicolumn{2}{c}{MSD} & \multicolumn{2}{c}{WMSD} & \multicolumn{6}{c}{Aggregations} \\ & \(U_{1}\) & \(U_{2}\) & \(U_{3}\) & \(V_{1}\) & \(V_{2}\) & \(V_{3}\) & M & SD & WM & WSD & \(\mathsf{l}(\mathbf{u})\) & \(\mathsf{A}(\mathbf{u})\) & \(\mathsf{R}(\mathbf{u})\) & \(\mathsf{l}_{\mathbf{w}}(\mathbf{v})\) & \(\mathsf{A}_{\mathbf{w}}(\mathbf{v})\) & \(\mathsf{R}_{\mathbf{w}}(\mathbf{v})\) \\ \hline \(\mathbf{S}_{1}\) & 0.29 & 0.29 & 0.29 & 0.15 & 0.17 & 0.29 & 0.29 & 0.00 & 0.20 & 0.00 & 0.29 & 0.29 & 0.29 & 0.29 & 0.29 & 0.29 \\ \(\mathbf{S}_{2}\) & 0.49 & 0.51 & 0.49 & 0.25 & 0.30 & 0.49 & 0.50 & 0.01 & 0.35 & 0.00 & 0.50 & 0.50 & 0.50 & 0.50 & 0.50 & 0.50 \\ \(\mathbf{S}_{3}\) & 0.71 & 0.71 & 0.71 & 0.35 & 0.43 & 0.71 & 0.71 & 0.00 & 0.50 & 0.00 & 0.71 & 0.71 & 0.71 & 0.71 & 0.71 & 0.71 \\ \(\mathbf{S}_{4}\) & 0.41 & 0.51 & 0.18 & 0.20 & 0.30 & 0.18 & 0.36 & 0.14 & 0.20 & 0.10 & 0.35 & 0.39 & 0.37 & 0.27 & 0.31 & 0.31 \\ \(\mathbf{S}_{5}\) & 0.35 & 0.76 & 0.44 & 0.18 & 0.46 & 0.44 & 0.52 & 0.17 & 0.35 & 0.10 & 0.49 & 0.55 & 0.52 & 0.49 & 0.51 & 0.50 \\ \(\mathbf{S}_{6}\) & 0.59 & 0.49 & 0.82 & 0.30 & 0.30 & 0.82 & 0.64 & 0.14 & 0.50 & 0.10 & 0.61 & 0.65 & 0.63 & 0.69 & 0.73 & 0.69 \\ \(\mathbf{S}_{7}\) & 0.44 & 0.76 & 0.08 & 0.22 & 0.46 & 0.08 & 0.43 & 0.28 & 0.20 & 0.20 & 0.36 & 0.51 & 0.44 & 0.23 & 0.40 & 0.34 \\ \(\mathbf{S}_{8}\) & 0.94 & 0.81 & 0.28 & 0.47 & 0.49 & 0.28 & 0.68 & 0.29 & 0.35 & 0.20 & 0.57 & 0.73 & 0.63 & 0.43 & 0.57 & 0.50 \\ \(\mathbf{S}_{9}\) & 0.56 & 0.24 & 0.92 & 0.28 & 0.14 & 0.92 & 0.57 & 0.28 & 0.50 & 0.20 & 0.49 & 0.64 & 0.56 & 0.60 & 0.77 & 0.66 \\ \(\mathbf{S}_{10}\) & 0.00 & 0.00 & 0. \\ \hline \hline \end{tabular} \end{table} Table 3: Description of the alternatives of the first case study in terms of _US_, _VS_, MSD-space, WMSD-space and the three aggregations.

After comparing the MSD-space and WMSD-space from the viewpoint of the impact that weights have on their shapes and on the alternatives' position within them, let us now illustrate some trade-offs and compensations between the values of \(mean_{\mathbf{w}}^{01}(\mathbf{v})\) and \(std_{\mathbf{w}}^{01}(\mathbf{v})\) in WMSD-space, and show how they influence the final rankings under different aggregations. The discussion will concentrate on WMSD-space; however, analogous considerations can be conducted for MSD-space with \(mean(\mathbf{u})\) and \(std(\mathbf{u})\) (see Susmaga et al. (2023)), as \(mean_{\mathbf{w}}^{01}(\mathbf{v})\) and \(std_{\mathbf{w}}^{01}(\mathbf{v})\) are natural generalizations of \(mean(\mathbf{u})\) and \(std(\mathbf{u})\), respectively. The interplay of \(mean_{\mathbf{w}}^{01}(\mathbf{v})\) and \(std_{\mathbf{w}}^{01}(\mathbf{v})\) in the context of preferences, as formalized in Table 1, is illustrated in the WMSD-space by the color reflecting the aggregation value imposed for each point of the space. Getting a higher ranking position requires an increase in the aggregation value, which is reflected by a change of the alternative's color towards dark red.
This can naturally be achieved when the alternative obtains more desirable values on the criteria. In our example from Table 2, this would mean that a student should get better marks in some subjects while not worsening them in any other subject. This would result in an increase of \(mean_{\mathbf{w}}^{01}(\mathbf{v})\), which can, however, be hard to achieve, or in some cases even impossible. The preference-related interplay in the WMSD-space between \(mean_{\mathbf{w}}^{01}(\mathbf{v})\) and \(std_{\mathbf{w}}^{01}(\mathbf{v})\) thus shows other ways to influence the ranking even without increasing \(mean_{\mathbf{w}}^{01}(\mathbf{v})\).

Figure 7: Students depicted in MSD-space (left) and WMSD-space defined for weights 0.5, 0.6, 1.0 (right) for three different aggregation functions: \(\mathsf{l}(\mathbf{u})\) (top), \(\mathsf{A}(\mathbf{u})\) (middle) and \(\mathsf{R}(\mathbf{u})\) (bottom). Color encodes the aggregation value, with blue representing the least preferred and red the most preferred values.

First, let us focus on three alternatives characterized by the same \(mean_{\mathbf{w}}^{01}(\mathbf{v})=0.5\): \(\mathbf{S}_{3}\), \(\mathbf{S}_{6}\) and \(\mathbf{S}_{9}\) (an analogous discussion is valid for, e.g., \(\mathbf{S}_{1}\), \(\mathbf{S}_{4}\) and \(\mathbf{S}_{7}\)). The ranking under the \(\mathsf{l}_{\mathbf{w}}(\mathbf{v})\) aggregation is the following: \(\mathbf{S}_{3}\succ_{\mathsf{l}_{\mathbf{w}}(\mathbf{v})}\mathbf{S}_{6}\succ_{\mathsf{l}_{\mathbf{w}}(\mathbf{v})}\mathbf{S}_{9}\), as opposed to \(\mathsf{A}_{\mathbf{w}}(\mathbf{v})\), where \(\mathbf{S}_{9}\succ_{\mathsf{A}_{\mathbf{w}}(\mathbf{v})}\mathbf{S}_{6}\succ_{\mathsf{A}_{\mathbf{w}}(\mathbf{v})}\mathbf{S}_{3}\). Clearly, a change in \(std_{\mathbf{w}}^{01}(\mathbf{v})\), with no change of \(mean_{\mathbf{w}}^{01}(\mathbf{v})\), is enough to influence the rankings. Under aggregation \(\mathsf{l}_{\mathbf{w}}(\mathbf{v})\) less varied values of the criteria are preferred, as \(std_{\mathbf{w}}^{01}(\mathbf{v})\) is of cost type for this aggregation. In contrast, under the \(\mathsf{A}_{\mathbf{w}}(\mathbf{v})\) aggregation an increase of \(std_{\mathbf{w}}^{01}(\mathbf{v})\) results in an increase of the aggregation function. Aggregation \(\mathsf{R}_{\mathbf{w}}(\mathbf{v})\), on the other hand, resembles aggregation \(\mathsf{A}_{\mathbf{w}}(\mathbf{v})\) when \(mean_{\mathbf{w}}^{01}(\mathbf{v})<\frac{mean(\mathbf{w})}{2}\) and aggregation \(\mathsf{l}_{\mathbf{w}}(\mathbf{v})\) when \(mean_{\mathbf{w}}^{01}(\mathbf{v})>\frac{mean(\mathbf{w})}{2}\). In the very middle of WMSD-space, i.e. when \(mean_{\mathbf{w}}^{01}(\mathbf{v})=\frac{mean(\mathbf{w})}{2}\), a change of \(std_{\mathbf{w}}^{01}(\mathbf{v})\) has no effect on the ranking at all. This brings us to the conclusion that the isolines visualized in the WMSD-space can guide the decision maker as to what actions need to be taken in order to influence the ranking without changing the \(mean_{\mathbf{w}}^{01}(\mathbf{v})\) of the alternative. Now, let us focus on another set of alternatives: \(\mathbf{S}_{4}\), \(\mathbf{S}_{5}\) and \(\mathbf{S}_{6}\) (an analogous discussion is valid for, e.g., \(\mathbf{S}_{10}\), \(\mathbf{S}_{1}\), \(\mathbf{S}_{2}\), \(\mathbf{S}_{3}\) and \(\mathbf{S}_{12}\)). They are characterized by the same \(std_{\mathbf{w}}^{01}(\mathbf{v})\), but different \(mean_{\mathbf{w}}^{01}(\mathbf{v})\).
Under all the three considered aggregations, the alternatives are ranked the same: \(\mathbf{S}_{6}\succ_{\mathsf{l}_{\mathbf{w}}(\mathbf{v}),\mathsf{A}_{\mathbf{w}}(\mathbf{v}),\mathsf{R}_{\mathbf{w}}(\mathbf{v})}\mathbf{S}_{5}\succ_{\mathsf{l}_{\mathbf{w}}(\mathbf{v}),\mathsf{A}_{\mathbf{w}}(\mathbf{v}),\mathsf{R}_{\mathbf{w}}(\mathbf{v})}\mathbf{S}_{4}\). This results from the fact that \(mean_{\mathbf{w}}^{01}(\mathbf{v})\) is of gain type for all the considered aggregations. Thus, moving to the right in the WMSD-space (i.e. keeping the same \(std_{\mathbf{w}}^{01}(\mathbf{v})\) and only increasing \(mean_{\mathbf{w}}^{01}(\mathbf{v})\)) always increases the aggregation functions. This explains how intervention actions based on the increase of \(mean_{\mathbf{w}}^{01}(\mathbf{v})\) can be formed. Last but not least, a change in the alternative's rating can be caused by a simultaneous change in \(mean_{\mathbf{w}}^{01}(\mathbf{v})\) and \(std_{\mathbf{w}}^{01}(\mathbf{v})\). A rather hard to predict compensation of those two is clearly visible in the WMSD-space, as the change in rating is equivalent to 'switching between' isolines. Let us consider \(\mathbf{S}_{9}\) and \(\mathbf{S}_{13}\) (an analogous discussion is valid for \(\mathbf{S}_{7}\) and \(\mathbf{S}_{11}\)). Observe that the rankings of those two alternatives are different under different aggregations: \(\mathbf{S}_{9}\succ_{\mathsf{l}_{\mathbf{w}}(\mathbf{v})}\mathbf{S}_{13}\) but \(\mathbf{S}_{13}\succ_{\mathsf{R}_{\mathbf{w}}(\mathbf{v})}\mathbf{S}_{9}\). It results from the fact that the isolines for the \(\mathsf{R}_{\mathbf{w}}(\mathbf{v})\) aggregation 'straighten up' while moving towards the middle of the \(mean_{\mathbf{w}}^{01}(\mathbf{v})\) range, while the isolines for the \(\mathsf{l}_{\mathbf{w}}(\mathbf{v})\) aggregation keep the same concentric character. The isolines are thus a visual representation of the trade-offs between \(mean_{\mathbf{w}}^{01}(\mathbf{v})\) and \(std_{\mathbf{w}}^{01}(\mathbf{v})\) for different aggregations.

### Index of Economic Freedom

The second case study is based on publicly available data from the Index of Economic Freedom2, which covers 12 freedoms--from property rights to tax burdens--in 184 countries. The data has been collected annually for almost 30 years now by The Heritage Foundation (Kim, 2023) and has served as the basis for many case studies and analyses, e.g., de Lima Silva et al. (2023); Puska et al. (2023); Dinc and Erilli (2022); de Lima Silva and de Almeida Filho (2020); Brkic et al. (2020). In particular, our case study is based on the data gathered for the \(25^{th}\) anniversary of the Index in 2019, which were used by de Lima Silva and de Almeida Filho (2020).

Footnote 2: [https://www.heritage.org/index/](https://www.heritage.org/index/)

Economic freedom is understood as the right of every human to control their own labor and property. Within the Index, 12 factors are measured and grouped into four categories: Rule of Law, Government Size, Regulatory Efficiency, Open Markets. There are three factors per category, and each factor is graded on a 0-100 scale of type gain. Details on how the values of the factors are determined for the considered countries are available in (Miller et al., 2019). For the purpose of this case study, we have limited the Index only to the 12 countries of South America and aggregated the criteria by taking the mean of the factors forming each category (Table 4).
To ensure reproducibility, the raw data from the Heritage Foundation, its transformations, and the final rankings under different aggregations and weights are available in the online supplementary materials. The case study focuses on the \(\mathsf{R_{w}}(\mathbf{v})\) aggregation, as the one most commonly used in practice. The following four sets of weights are considered as examples of preference information given by different decision-makers:

* \(\mathbf{w_{1}}=[1.00,1.00,1.00,1.00]\),
* \(\mathbf{w_{2}}=[0.25,1.00,0.25,0.50]\),
* \(\mathbf{w_{3}}=[0.50,1.00,0.25,0.25]\),
* \(\mathbf{w_{4}}=[1.00,0.66,0.33,0.00]\).

In particular, the first weight vector expresses equal importance of all criteria, allowing for the visualization of the alternatives in MSD-space, as opposed to the other sets of weights, which require WMSD-space. Interestingly, the last weight vector eliminates the influence of the Open Markets criterion on the final country rankings by setting its weight to zero; this results in \(n_{p}=3<n=4\). As a result, the weighted utility space _VS_ shrinks to 3D for \(\mathbf{w_{4}}\). The preference expressed by \(\mathsf{R_{w}}(\mathbf{v})\) under the four considered weight vectors in WMSD-space, with the countries of South America superimposed on it, is presented in Figure 8. The final country rankings are gathered in Table 5.

As can be noticed in Figure 8, the weights have a clear influence both on the shape of the WMSD-spaces and on the rankings of the alternatives. First, observe that \(mean(\mathbf{w_{2}})=mean(\mathbf{w_{3}})=mean(\mathbf{w_{4}})=0.5\), resulting in the same WM range (x-axis) in Figures 8B, C and D. Additionally, vector \(\mathbf{w_{3}}\) is simply a permutation of \(\mathbf{w_{2}}\), thus the whole shape of the respective WMSD-spaces is exactly the same (see Figure 8B and C). Nonetheless, the position of the alternatives within those shapes differs, leading to different rankings. The weights also naturally influence the number of vertices in the WMSD-spaces, which is, in particular, reflected by a smaller number of vertices when some criteria are given weights equal to zero (compare Figure 8B and D). Looking at the rankings under different weight vectors (Table 5), one notices the changes in country ratings imposed by incorporating weights. For example, Uruguay shifts from the second position under \(\mathbf{w_{1}}\) or \(\mathbf{w_{4}}\) to as far as the fifth position under \(\mathbf{w_{2}}\). Interestingly, for particular weight vectors, some countries are almost indiscernible, as their values of the \(\mathsf{R_{w}}(\mathbf{v})\) aggregation differ only slightly, e.g., Argentina, Ecuador, and Suriname under \(\mathbf{w_{3}}\). The WMSD visualizations (Figure 8), however, provide a much deeper explanation than the raw values of the aggregation function of why the ranking positions of those countries are (almost) the same. Observe that the considered countries are located in the green region, i.e., very close to the middle of the WM range (x-axis), being \(\frac{mean(\mathbf{w_{3}})}{2}=0.25\), which happens to be the region where \(std_{\mathbf{w}}^{01}(\mathbf{v})\) hardly influences the rankings (recall the preference-related interplay of \(mean_{\mathbf{w}}^{01}(\mathbf{v})\) and \(std_{\mathbf{w}}^{01}(\mathbf{v})\) in Table 1). Thus, despite occupying different points in the WMSD-space, the countries are ranked almost equally.
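The rankings of Table 5 can be approximated with a few lines of NumPy. The sketch below is ours, not the supplementary software of the paper, and it assumes that the raw 0-100 scores of Table 4 are mapped to _US_ simply by dividing by 100; under that assumption the printed ordering for \(\mathbf{w_{1}}\) agrees with the first column of Table 5.

```python
import numpy as np

# Table 4 values (each criterion is of type gain on a 0-100 scale)
data = {"ARG": [41.93, 50.60, 54.50, 61.67], "BOL": [17.50, 49.77, 60.17, 41.80],
        "BRA": [45.70, 43.87, 61.77, 56.33], "CHL": [62.43, 82.43, 75.37, 81.27],
        "COL": [42.33, 76.17, 75.17, 75.33], "ECU": [27.13, 54.87, 58.60, 47.13],
        "GUY": [39.27, 71.33, 66.07, 50.60], "PRY": [31.67, 90.50, 54.50, 70.53],
        "PER": [40.63, 85.07, 71.73, 73.80], "SUR": [35.60, 52.57, 59.27, 44.87],
        "URY": [65.47, 71.53, 73.03, 64.53], "VEN": [9.53, 50.13, 20.63, 23.33]}
weight_sets = {"w1": [1.00, 1.00, 1.00, 1.00], "w2": [0.25, 1.00, 0.25, 0.50],
               "w3": [0.50, 1.00, 0.25, 0.25], "w4": [1.00, 0.66, 0.33, 0.00]}

def r_w(u, w):
    v = w * u
    d0, dw = np.linalg.norm(v), np.linalg.norm(v - w)   # the common 1/s factor cancels in R_w
    return d0 / (d0 + dw)

for label, wv in weight_sets.items():
    wv = np.array(wv)
    scores = {c: r_w(np.array(x) / 100.0, wv) for c, x in data.items()}  # assumed 0-100 rescaling
    ranking = sorted(scores, key=scores.get, reverse=True)
    print(label, ranking[:3], "...", ranking[-1])   # e.g. w1: ['CHL', 'URY', 'PER'] ... 'VEN'
```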
\begin{table} \begin{tabular}{l l c c c c} \hline \hline ID & Country & Rule of Law & Gov. Size & Reg. Eff. & Open Markets \\ \hline ARG & Argentina & 41.93 & 50.60 & 54.50 & 61.67 \\ BOL & Bolivia & 17.50 & 49.77 & 60.17 & 41.80 \\ BRA & Brazil & 45.70 & 43.87 & 61.77 & 56.33 \\ CHL & Chile & 62.43 & 82.43 & 75.37 & 81.27 \\ COL & Colombia & 42.33 & 76.17 & 75.17 & 75.33 \\ ECU & Ecuador & 27.13 & 54.87 & 58.60 & 47.13 \\ GUY & Guyana & 39.27 & 71.33 & 66.07 & 50.60 \\ PRY & Paraguay & 31.67 & 90.50 & 54.50 & 70.53 \\ PER & Peru & 40.63 & 85.07 & 71.73 & 73.80 \\ SUR & Suriname & 35.60 & 52.57 & 59.27 & 44.87 \\ URY & Uruguay & 65.47 & 71.53 & 73.03 & 64.53 \\ VEN & Venezuela & 9.53 & 50.13 & 20.63 & 23.33 \\ \hline \hline \end{tabular} \end{table} Table 4: Descriptions of alternatives for the second case study; South American countries described by four criteria of type gain: Rule of Law, Government Size, Regulatory Efficiency, Open Markets.

Figure 8: The visualization of preference as expressed by \(\mathsf{R_{w}}(\mathbf{v})\) aggregation in WMSD-spaces defined for four different weight vectors. The color map reflects the preference: dark blue—the least preferred, dark red—the most preferred.

\begin{table} \begin{tabular}{l c l c l c l c} \hline \hline \multicolumn{2}{c}{\(\mathbf{w_{1}}\)} & \multicolumn{2}{c}{\(\mathbf{w_{2}}\)} & \multicolumn{2}{c}{\(\mathbf{w_{3}}\)} & \multicolumn{2}{c}{\(\mathbf{w_{4}}\)} \\ \multicolumn{2}{c}{\([1.00,1.00,1.00,1.00]\)} & \multicolumn{2}{c}{\([0.25,1.00,0.25,0.50]\)} & \multicolumn{2}{c}{\([0.50,1.00,0.25,0.25]\)} & \multicolumn{2}{c}{\([1.00,0.66,0.33,0.00]\)} \\ \hline CHL & 0.746 & CHL & 0.806 & CHL & 0.775 & CHL & 0.684 \\ URY & 0.685 & PER & 0.787 & PER & 0.725 & URY & 0.677 \\ PER & 0.659 & PRY & 0.785 & PRY & 0.713 & PER & 0.548 \\ COL & 0.658 & COL & 0.738 & URY & 0.701 & COL & 0.539 \\ PRY & 0.599 & URY & 0.700 & COL & 0.685 & GUY & 0.503 \\ GUY & 0.564 & GUY & 0.652 & GUY & 0.634 & PRY & 0.501 \\ ARG & 0.521 & ARG & 0.524 & ARG & 0.497 & BRA & 0.464 \\ BRA & 0.519 & ECU & 0.523 & ECU & 0.497 & ARG & 0.453 \\ SUR & 0.481 & SUR & 0.507 & SUR & 0.494 & SUR & 0.424 \\ ECU & 0.471 & BOL & 0.474 & BRA & 0.456 & ECU & 0.382 \\ BOL & 0.430 & BRA & 0.471 & BOL & 0.444 & BOL & 0.321 \\ VEN & 0.283 & VEN & 0.426 & VEN & 0.412 & VEN & 0.262 \\ \hline \hline \end{tabular} \end{table} Table 5: Rankings of South American countries resulting from \(\mathsf{R_{w}}(\mathbf{v})\) aggregation under different weight vectors.

Since the Index of Economic Freedom is updated annually, it would be interesting to visually compare the data from the year 2019 with the current one from 2023. This can easily be done using WMSD-space, as shown in Figure 9 for four exemplary countries: Chile, Uruguay, Suriname, and Venezuela. Observe that under a particular aggregation (\(\mathsf{R_{w}}(\mathbf{v})\)) and weight vector (\(\mathbf{w_{3}}=[0.50,1.00,0.25,0.25]\)) the shape of the WMSD-space and the isolines of the aggregation function are fixed. Thus, the comparison of the data from various years only requires superimposing that data on the WMSD-space. The countries' positions in the year 2019 are depicted by solid circles, whereas empty circles mark the countries' positions in 2023. This dynamic perspective shows that Venezuela's and Uruguay's positions improved, while Suriname's and Chile's got worse.
As a result, Venezuela outranks Suriname, and Uruguay outranks Chile in 2023, which was not the case in 2019. All of the considered ranking transitions resulted from changes in both \(mean_{\mathbf{w}}^{01}(\mathbf{v})\) and \(std_{\mathbf{w}}^{01}(\mathbf{v})\). A closer inspection reveals that out of the four considered countries, Chile is characterized by the biggest decrease in \(mean_{\mathbf{w}}^{01}(\mathbf{v})\), which directly caused Chile's drop in the ranking. On the other hand, the description of Venezuela in terms of the analyzed criteria became notably diversified, which was reflected by the biggest change in \(std_{\mathbf{w}}^{01}(\mathbf{v})\). Since Venezuela is situated on the left-hand side of the WMSD-space (\(mean_{\mathbf{w}}^{01}(\mathbf{VEN})<\frac{mean(\mathbf{w_{3}})}{2}=0.25\)), such an increase in diversity had a positive effect on the country's ranking position. The above considerations show that WMSD-space is a useful tool not only for visualizing the impact that weights and aggregations have on the final rankings, but also for analyzing changes in rankings over time.

## 6 Conclusions

Explainability, regarded as methods that allow humans to understand and trust the results of algorithms, keeps gaining a lot of attention in artificial intelligence (Guidotti et al., 2018; Pradhan et al., 2023; Itani et al., 2020) and multi-criteria decision support (Ziemba et al., 2023; Cerneviciene and Kabasinskas, 2022). The ability to explain the method's internal logic is of particular importance in practical applications that impact society. However, such applications often involve domain experts who incorporate their preferential bias. In particular, in ranking methods like TOPSIS such preference information given by decision-makers is expressed in the form of weights imposed on the criteria. Since the existing MSD-based method for visualizing TOPSIS was designed for unweighted criteria, our goal was to propose a visualization method that generalizes to weighted criteria. In this paper, we have put forward a visual-based method for explaining TOPSIS rankings in practical decision support applications with expert-defined criteria weights. To this end, weight-scaled means and standard deviations of alternatives were defined as generalizations of means and standard deviations. Formalizing their relationship with the distances of an alternative to the predefined ideal/anti-ideal points (the IA-WMSD property), a two-dimensional WMSD-space was proposed. It is based on weight-scaled means and standard deviations of alternatives and is capable of representing alternatives and aggregation functions in a plane regardless of the number of considered criteria and their weights. As such, WMSD-space is a tool for visual-based comparisons of different aggregation functions and of the impact that weights defined by experts have on the final rankings. To highlight the practical usefulness of the proposed visualization, two case studies were conducted on a dataset of students described in terms of school grades and on a dataset of countries described in terms of factors constituting the Index of Economic Freedom. Using WMSD-space visualizations we discussed how weights affected the rankings of alternatives under various TOPSIS aggregations and compared the effects of weights provided by multiple experts.
Figure 9: The visualization in WMSD-space of the change of the countries’ positions between 2019 (solid circles) and 2023 (empty circles) under \(\mathsf{R}_{\mathsf{w}}(\mathbf{v})\) aggregation and weight vector \(\mathbf{w_{3}}=[0.50,1.00,0.25,0.25]\). Future research could include the development of TOPSIS modifications that would control the impact that weight-scaled means and standard deviations have on the final rankings. So far, the weights influence the very shape of WMSD-space and the positions of alternatives within it. An interesting next step would be to introduce a user-provided parameter affecting the shapes of the isolines. That would lead to a new TOPSIS aggregation, where the user could choose whether the rankings should be more influenced by the weight-scaled means or standard deviations. Similarly, an aggregation based on lexicographic ordering of weight-scaled means and standard deviations seems to have a lot of practical potential. Finally, meeting the needs of practitioners, we plan to develop a publicly available open source software library for WMSD visualizations of user-provided datasets. Combined with new spectrum of aggregations and possible improvement actions, it would make a valuable tool for hands-on decision makers. ## Acknowledgments This research was partly funded by the National Science Centre, Poland, grant number: 2022/47/D/ST6/01770.
2305.08165
Thermal instabilities in accretion disks II: Numerical Experiments for the Goldreich-Schubert-Fricke Instability and the Convective Overstability in disks around young stars
The linear stability analysis of a stratified rotating fluid (see paper I) showed that disks with a baroclinic stratification under the influence of thermal relaxation will become unstable to thermal instabilities. One instability is the Goldreich-Schubert-Fricke instability (GSF), which is the local version of the Vertical Shear Instability (VSI) and the other is a thermal overstability, the Convective Overstability (COS). In the present paper we reproduce the analytic predicted growth rates for both instabilities in numerical experiments of small axisymmetric sections of vertically isothermal disks with a radial temperature gradient, especially for cooling times longer than the critical cooling time for VSI. In this cooling time regime our simulations reveal the simultaneous and independent growth of both modes: COS and GSF. We consistently observe that GSF modes exhibit a faster growth rate compared to COS modes. Near the midplane, GSF modes eventually stop growing, while COS modes continue to grow and ultimately dominate the flow pattern. Away from the midplane, we find GSF modes to saturate, when bands of constant angular momentum have formed. In these bands we observe the formation and growth of eddies driven by the baroclinic term, further enhancing the velocity perturbations. In geophysics this effect is known as horizontal convection or sea-breeze instability. Three-dimensional simulations will have to show whether similar effects will occur when axisymmetry is not enforced. Our local simulations help to reveal the numerical resolution requirements to observe thermal instabilities in global simulations of disks around young stars.
Hubert Klahr, Hans Baehr, Julio David Melon Fuksman
2023-05-14T14:23:55Z
http://arxiv.org/abs/2305.08165v1
# Thermal instabilities in accretion disks II: ###### Abstract The linear stability analysis of a stratified rotating fluid (see paper I) showed that disks with a baroclinic stratification under the influence of thermal relaxation will become unstable to thermal instabilities. One instability is the Goldreich-Schubert-Fricke instability (GSF), which is the local version of the Vertical Shear Instability (VSI) and the other is a thermal overstability, the Convective Overstability (COS). In the present paper we reproduce the analytic predicted growth rates for both instabilities in numerical experiments of small axisymmetric sections of vertically isothermal disks with a radial temperature gradient, especially for cooling times longer than the critical cooling time for VSI. In this cooling time regime our simulations reveal the simultaneous and independent growth of both modes: COS and GSF. We consistently observe that GSF modes exhibit a faster growth rate compared to COS modes. Near the midplane, GSF modes eventually stop growing, while COS modes continue to grow and ultimately dominate the flow pattern. Away from the midplane, we find GSF modes to saturate, when bands of constant angular momentum have formed. In these bands we observe the formation and growth of eddies driven by the baroclinic term, further enhancing the velocity perturbations. In geophysics this effect is known as horizontal convection or sea-breeze instability. Three-dimensional simulations will have to show whether similar effects will occur when axisymmetry is not enforced. Our local simulations help to reveal the numerical resolution requirements to observe thermal instabilities in global simulations of disks around young stars. accretion, accretion disks -- circumstellar matter -- hydrodynamics -- instabilities -- turbulence -- methods: numerical -- solar system: formation -- planetary systems + Footnote †: journal: ApJ 0000-0002-0002-0883]Hubert Klahr 0000-0002-0002-3883]Hans Baehr 0000-0002-4880-0888]Julio David Melon Fuksman ## 1 Introduction The planet forming dusty disks around young stars are subject to a range of magnetic and non-magnetic instabilities (Lesur et al., 2022). The turbulence emerging from these instabilities has a profound impact on the planet formation process, via transporting and mixing dust, generating collisions among grains and ultimately influencing the migration of planets. Among the pure hydro instabilities there are non-linear instabilities like the Zombie Vortex Instability (ZVI) (Marcus et al., 2015), the stratorotational instability (Shalybkov & Rudiger, 2005) and the subcritical baroclinc instability (SBI) (Petersen et al., 2007). The Rossby wave instability is a linear yet radially global instability (Lovelace et al., 1999) and likewise the vertical shear instability (VSI) (Urpin & Brandenburg, 1998; Nelson et al., 2016) is a linear vertically global instability. The latter instability is actually an extension of the Goldreich-Schubert-Fricke instability (GSF) as studied in rotating stars (Goldreich & Schubert, 1967; Fricke, 1968) for the geometrically thin accretion disk. GSF operates the best for short cooling times, but operates actually for any cooling rate (Tassoul, 2000), albeit at slower growth rates as discussed in Urpin (2003). 
The original VSI work (Urpin & Brandenburg, 1998; Urpin, 2003) considers local modes, whereas all recent analytic work on VSI considers vertically global modes (Nelson et al., 2016; Lin & Youdin, 2015; Barker & Latter, 2015; Latter & Papaloizou, 2017; Cui & Latter, 2022; Latter & Kunz, 2022). To separate the local treatment in the present paper from the global treatment in these papers, we refer to the GSF, by which we mean growth rates derived from the local dispersion relation including thermal relaxation (Klahr, 2023). By this definition GSF is a local instability, whereas VSI is an overstability, even though the stability criterion is the same. In Tassoul (2000) the GSF is classified as a thermal instability, which together with the "Vibrational Instability of Rotating Stars" by Shibahashi (1980) (aka a thermal overstability) forms a set of "thermal instabilities". The Shibahashi process was rediscovered for disks around young stars by Klahr & Hubbard (2014) and dubbed the convective overstability. Initial studies neglected the vertical stratification (Lyra, 2014), yet lately it was shown that COS will also exist in a vertically stratified disk (Klahr, 2023) (Paper I). This paper predicts growth rates for both GSF and COS modes as a function of disk stratification and cooling rate. The formalism to determine these growth rates was already derived in Urpin (2003), yet the existence of convective modes was not further investigated there. In Paper I it was finally shown that a baroclinic atmosphere, with its non-parallel contours of pressure and entropy, must also possess directions in which the stratification is super-adiabatic. While this super-adiabatic stratification does not directly lead to convection, as discussed in the Solberg-Hoiland criterion (Rudiger et al., 2002), because of the stabilizing effect of the epicyclic term in the absence of thermal relaxation, it is sufficient to amplify epicyclic oscillations for a thermal relaxation time on the order of the epicyclic oscillation time, i.e. the Keplerian period. Also the GSF (Urpin, 2003) and the VSI (Lin & Youdin, 2015) possess growth rates in this cooling time regime beyond the critical cooling time \(\tau_{c}\) for VSI. Yet, numerical studies were not able to reproduce them in global simulations of disks (Manger et al., 2021). The VSI seemed to be suppressed in the case of cooling times longer than the critical cooling time. The growth rates of the fundamental large-scale VSI modes (Nelson et al., 2013) scale in proportion to the disk aspect ratio \(h=H/R\), radial temperature stratification \(q\), and Keplerian frequency \(\Omega\),

\[\Gamma\approx h|q|\Omega, \tag{1}\]

which can be reproduced in numerical simulations (Nelson et al., 2013; Stoll & Kley, 2014; Richard et al., 2016; Stoll et al., 2017; Manger et al., 2021) as long as the cooling time is shorter than the critical value \(\tau_{c}\) derived by Lin & Youdin (2015) (see also the local variant in Urpin (2003)). In the present paper we show that, with sufficient resolution in a domain of limited radial and vertical extent, we can reproduce the predicted growth rates of Paper I. Translating our local resolution of 256 cells per pressure scale height \(H\) to a global simulation covering \(\pm 3.5H\) would need 1792 cells in the vertical direction to reproduce the growth of GSF and COS modes for long cooling times. Both GSF and COS modes can only be avoided in a barotropic atmosphere, i.e.
if pressure is a function of density only, which means either globally constant entropy or globally constant temperature in the disk. Any temperature structure that is not barotropic (aka baroclinic) will lead to instability of both GSF and COS modes. For short cooling times \(\tau<\tau_{c}\) GSF will dominate, but for longer cooling times both GSF and COS have very similar growth rates. Thus, in disks around young stars with thermal relaxation, both instabilities will always co-exist. For the purpose of a numerical experiment we will use a slightly non-physical gravity law to either suppress COS or GSF, but for conservative gravity, this is not possible. Even in regions which are radially stably stratified (with respect to convection) one can observe the development of slanted COS modes. It is a major result of Paper I that COS does not strictly depend on radially unstable modes, but that slanted modes also play an important role. An effect of the limited vertical extent of our simulation domain can be observed in the saturation behavior of the linear growth phase. Whereas in global simulations Kelvin-Helmholtz eddies are eventually created between the vertical VSI modes, as shown by Melon Fuksman et al. (A&A, submitted), we see a rather smooth formation of radially confined regions of constant angular momentum. In these bands there is no vertical shear and thus GSF growth does not exist anymore. Also the COS does not persist anymore, as the radial epicyclic frequency vanishes. Convection as described in the Solberg-Hoiland criterion does not occur either, as far as we are able to measure. It is the baroclinic term itself that is now driving clockwise rotating eddies within the bands of constant angular momentum, similar to horizontal convection or the "Sea-breeze" mechanism in geophysics (Holton & Hakim, 2012). The mechanism is also related to the SBI (Petersen et al., 2007), yet in contrast to that it operates also for very short cooling times. The SBI, on the other hand, operates in the barotropic background of the disk, with the baroclinicity only generated by the rotation of the in-plane vortex itself. This hysteresis effect only occurs when the cooling rate is of the order of the rotation frequency of the vortex (Lesur & Papaloizou, 2010; Raettig, 2012). We test our predicted growth rates in non-linear hydrodynamic simulations for sufficiently small sections of a disk atmosphere (to narrow down the possible range of growth rates) in Section 2.2. In Section 2.3, we perform test simulations in which we suppress either COS or GSF by modifying stellar gravity to explore the unstable modes in their linear evolution and non-linear saturation independently. Finally, in Section 2.4, we give an example of the development of diagonal COS modes in radially stably stratified disks with a steep temperature gradient, as normal temperature gradients lead to growth rates too low to be handled with our dissipative numerical scheme. In Section 3 we discuss eddies that we find in the non-linear state of our simulations, amplified by a process separate from COS and VSI. We identify the driving process as horizontal convection, similar to a "Sea-Breeze" in geophysics. Section 4 summarizes our findings and gives an outlook to future work. Some details on the four movies we present along with this paper can be found in the appendix.
## 2 Numerical Experiments

\begin{table} \begin{tabular}{l l l} \hline \hline Symbol & Definition & Description \\ \hline \(R\),\(z\),\(\phi\) & & cylindrical coordinates \\ \(\rho\), \(P\) & & density, pressure \\ \(c_{v}\) & & specific heat \\ \(E\), \(T\) & \(E=c_{v}\rho T\) & internal energy, temperature \\ \(\gamma\) & & adiabatic index \\ \(K\) & \(=P\rho^{-\gamma}\) & specific entropy \\ \(p\) & \(=\frac{\mathrm{d}\log\rho(R,0)}{\mathrm{d}\log R}\) & global density gradient \\ \(q\) & \(=\frac{\mathrm{d}\log T(R,0)}{\mathrm{d}\log R}\) & global temperature gradient \\ \(a_{R},a_{z}\) & \(\nabla\log\rho\) & local density stratification \\ \(b_{R},b_{z}\) & \(\nabla\log P\) & local pressure stratification \\ \(s_{R},s_{z}\) & \(\nabla\log K\) & local entropy stratification \\ \(\Omega\) & & Keplerian frequency \\ \(c\) & \(=\sqrt{P/\rho}\) & isothermal speed of sound \\ \(H\) & \(=c/\Omega\) & pressure scale height \\ \(h\) & \(=H/R\) & aspect ratio \\ \(\tau\) & & thermal relaxation time \\ \(\tau_{c}\) & & critical \(\tau\) for VSI \\ \(\tau^{*}\) & \(=\tau\gamma\Omega\) & dimensionless cooling time \\ \(k_{R},k_{z}\) & & radial, vertical wave number \\ \(\mathbf{k}\) & & wave number vector \\ \(\mathbf{a}\) & \(\mathbf{a}\cdot\mathbf{k}=0\) & direction vector for velocity perturbation \\ \(\kappa_{R}^{2}\) & \(=\frac{1}{R^{3}}\partial_{R}\Omega^{2}R^{4}\) & radial angular momentum gradient: epicyclic frequency \\ \(\kappa_{z}^{2}\) & \(=\frac{1}{R^{3}}\partial_{z}\Omega^{2}R^{4}\) & vertical angular momentum gradient: vertical shear \\ \(\kappa_{\mathbf{k}}^{2}\) & \(=\frac{k_{z}^{2}}{k^{2}}\left(\kappa_{R}^{2}-\frac{k_{R}}{k_{z}}\kappa_{z}^{2}\right)\) & oscillation frequency (OF) for \(\mathbf{k}\) \\ \(N^{2}\) & \(=-\frac{1}{\rho Tc_{v}}\nabla P\cdot\nabla K\) & buoyancy frequency (BF) \\ \(N_{R}^{2},N_{z}^{2}\) & & radial, vertical BF \\ \(N_{\mathbf{k}}^{2}\) & \(=\frac{N_{R}^{2}\left(1-\frac{b_{z}k_{R}}{b_{R}k_{z}}\right)k_{z}^{2}+N_{z}^{2}\left(1-\frac{b_{R}k_{z}}{b_{z}k_{R}}\right)k_{R}^{2}}{k^{2}}\) & BF for \(\mathbf{k}\), aka Brunt-Väisälä frequency \\ \(N_{-}^{2}\) & \(=\min_{\mathbf{k}}(N_{\mathbf{k}}^{2})\) & lowest local BF \\ \(N_{+}^{2}\) & \(=\max_{\mathbf{k}}(N_{\mathbf{k}}^{2})\) & largest local BF \\ \(\kappa_{-}^{2}\) & \(=\min_{\mathbf{k}}(\kappa_{\mathbf{k}}^{2})\) & lowest local OF \\ \(\kappa_{+}^{2}\) & \(=\max_{\mathbf{k}}(\kappa_{\mathbf{k}}^{2})\) & largest local OF \\ \(\Gamma_{\mathrm{VSI}}\) & \(=\frac{H}{R}|q|\Omega\) & typical VSI growth rate \\ \(\Gamma_{\mathrm{GSF}}(\tau<\tau_{c})\) & \(=\frac{|q|}{2}\frac{|z|}{R}\Omega\) & GSF growth rate for \(\tau\to 0\) \\ \(\Gamma_{\mathrm{GSF}}\) & \(=\frac{h^{2}q^{2}}{4}\frac{\gamma}{\gamma-1}\frac{1}{\gamma\Omega\tau_{\mathrm{c,GSF}}+\tau^{*}}\Omega\) & approx. GSF growth rate \\ \(\Gamma_{\mathrm{COS}}\) & \(=\frac{h^{2}q^{2}}{8}\frac{\gamma}{\gamma-1}\frac{\tau^{*}}{1+\tau^{*2}}\Omega\) & approx. COS growth rate \\ \hline \end{tabular} \end{table}

Table 1: Used symbols and definitions.

In Paper I, we have seen that even for a given local stratification and thermal relaxation time, more than one instability can grow. In order to disentangle the various mechanisms, vertical shear vs. super-adiabatic stratification, we perform a limited set of numerical experiments. Specifically we want to restrict ourselves to three key questions in the present paper:

1. Can we reproduce the predicted growth rates in numerical simulations of stratified accretion disks using the full non-linear set of hydrodynamic equations?
2. Can we distinguish between COS and GSF modes in these simulations?
3.
Can we reproduce the inclined COS modes in simulations with radially stable stratification?

For all these tests we will apply axisymmetric setups and also restrict the simulation area to just a fraction of the disk atmosphere. This will 1) allow for a controlled experiment, as the growth rates are height (\(z\)) dependent, and 2) allow us to ensure sufficient numerical resolution and a reasonable run time. This means we will not be able to address the full development of a saturated three dimensional turbulent state and the angular momentum transport associated with it. Initial tests indicate a large numerical expense for such a study, especially if one wishes to adopt realistic values for the radial temperature gradient. We shall postpone these simulations to a future paper. We will address the first question with a setup similar to Manger et al. (2021), yet apply a much smaller computational domain.

Figure 1: Background structure of the simulation area around the midplane (\(z_{0}=0\)) for a disk with \(q=-1\) and \(p=-1.5\). Density \(\rho\), temperature \(T\), iso-contours and specific angular momentum \(j=\Omega R^{2}\). We emphasize the iso-contours for pressure (red) and entropy (blue). At the disk midplane, both gradients in entropy and pressure point radially inward (red and blue arrow). Away from the midplane, the iso-contours and thus the gradients bend with respect to each other and the direction of largest unstable buoyancy (green arrows) is no longer strictly radial but points towards and away from the midplane. This is the direction in which the COS would operate. The opposite direction is stably stratified (grey arrows). The non-alignment of density and pressure (baroclinic structure) is the reason the specific angular momentum decreases with height.

### Initial and Equilibrium Disk Structure

Using the same grid size and resolution as in our global simulations (Manger et al., 2020, 2021; Pfeil and Klahr, 2021) does not provide sufficient resolution to study the linear development of some of the slowly growing instabilities we consider in this paper. Furthermore, a global disk has a wide range of growth rates, depending on distance to the star as well as distance to the midplane. We will therefore pick two small sections of the disk, roughly covering a vertical range either sitting in the midplane (\(z_{0}=0\)): \(z=-0.5H\) to \(z=0.5H\), or in the atmosphere (\(z_{0}=H\)): \(z=0.5H\) to \(z=1.5H\). The radial range will then span from \(R=1-0.5H\) to \(R=1+0.5H\), giving an axisymmetric region of height and width \(H\). This shall allow us to study the effect of stratification on the unstable modes without mixing too many different conditions for instability and growth rates in one simulation, and thus we can compare growth rates with analytical predictions.

Figure 2: Background structure of the simulation area around one pressure scale height above the midplane (\(z_{0}=H\)) for a disk with \(q=-1\) and \(p=-1.5\). Density \(\rho\), temperature \(T\), iso-contours and specific angular momentum \(j=\Omega R^{2}\). We emphasize the iso-contours for density (black), pressure (red) and entropy (blue). At the midplane, both gradients in density and pressure point radially inward.
Away from the midplane, the iso-contours and thus the gradients in density (black arrow) and pressure (red arrow) bend with respect to each other (baroclinic structure), which is the reason the specific angular momentum \(j\) decreases with height, or respectively the reason its iso-contours bend outward, which is the cause of the GSF and VSI.

Using small computational domains on the order of a pressure scale height makes one wonder why we do not use shearing sheet coordinates, as was successfully done for the MRI (magneto rotational instability (Balbus & Hawley, 1991)). For the SBI (subcritical baroclinic instability) in Lyra & Klahr (2011) it was already necessary to introduce the global gradient of entropy in a linearized way. The same setup for the COS in Lyra (2014) did not consider vertical stratification. The fact that vertical density stratification, in combination with radial temperature stratification, leads to vertical shear would then also have to be incorporated by linearizing certain terms, which may even be prone to numerical artifacts once radial periodicity is applied (McNally & Pessah, 2015). Thus, in the spirit of Klahr & Hubbard (2014), which used a cylindrical yet vertically unstratified setup, we now go straight for a "global" setup in terms of applied equations, but use a small simulation domain.

We use the PLUTO code (version 4.3) (Mignone et al., 2007) in a spherical (or cylindrical) axisymmetric setup using special reflective boundary conditions that invert the normal component of velocities at the boundaries (both radial and polar, respectively vertical). In a departure from normal reflective boundaries, we impose the initial values of all other quantities (pressure, density, rotation profile) on the ghost cells. The radial velocity at the vertical boundaries is defined as free slip, i.e., there is zero gradient towards the ghost cells. We treat the vertical velocities at the radial boundaries in the same fashion. These boundaries ensure that we lose no mass and that even a strongly perturbed disk can decay towards the initial and equilibrium state.

Figure 3: Background structure of the simulation area around the midplane (\(z_{0}=0\)) for a disk with \(q=-0.5\) and \(p=-1.5\). Density \(\rho\), temperature \(T\), iso-contours and specific angular momentum \(j=\Omega R^{2}\). We emphasize the iso-contours for pressure (red) and entropy (blue). Due to the shallower temperature gradient, the entropy gradient in the midplane (blue arrow) points radially outward, whereas the gradient in pressure points inward (red arrow), thus the midplane is radially stable with respect to convective modes. Outside the midplane, the iso-contours and thus the gradients twist with respect to each other and they do not point strictly in opposite directions. Thus, there is now a direction of unstable buoyancy (green arrow) very similar to the case with \(q=-1\). This is the direction in which the COS would operate. The opposite direction is stably stratified (grey arrows). The bending of the iso-contours in specific angular momentum as a cause for VSI and GSF modes is also present, only weaker due to the shallower temperature profile.
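To illustrate what these quasi-reflective boundaries amount to in practice, the following is a minimal Python sketch, not the actual PLUTO implementation; the array names, the number of ghost cells, and the dictionary of primitive variables are illustrative assumptions.

```python
NG = 2  # assumed number of ghost cells on each side

def apply_boundaries(vR, vz, prim, prim_init):
    """Quasi-reflective boundaries on a 2D (R, z) array layout:
    invert the normal velocity component, copy the tangential one
    (free slip), and reset all other primitives to their initial
    equilibrium values in the ghost cells."""
    # radial boundaries (axis 0): invert v_R, free-slip v_z
    for g in range(NG):
        vR[g, :]      = -vR[2 * NG - 1 - g, :]
        vR[-1 - g, :] = -vR[-2 * NG + g, :]
        vz[g, :]      =  vz[NG, :]
        vz[-1 - g, :] =  vz[-1 - NG, :]
    # vertical boundaries (axis 1): invert v_z, free-slip v_R
    for g in range(NG):
        vz[:, g]      = -vz[:, 2 * NG - 1 - g]
        vz[:, -1 - g] = -vz[:, -2 * NG + g]
        vR[:, g]      =  vR[:, NG]
        vR[:, -1 - g] =  vR[:, -1 - NG]
    # density, pressure, rotation profile: impose the initial values
    for name in prim:
        prim[name][:NG, :]  = prim_init[name][:NG, :]
        prim[name][-NG:, :] = prim_init[name][-NG:, :]
        prim[name][:, :NG]  = prim_init[name][:, :NG]
        prim[name][:, -NG:] = prim_init[name][:, -NG:]
```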
For the other details of our simulations we refer to Manger & Klahr (2018). In spherical coordinates, the small disk region is defined as \(r_{\rm min}=0.95\) to \(r_{\rm max}=1.05\) in radius, and from either A: \(\theta_{\rm min}=\frac{\pi}{2}-0.5h\) to \(\theta_{\rm max}=\frac{\pi}{2}+0.5h\) or B: \(\theta_{\rm min}=\frac{\pi}{2}+0.5h\) to \(\theta_{\rm max}=\frac{\pi}{2}+1.5h\). The standard resolution is \(256^{2}\) cells, i.e. 256 cells per pressure scale height. Lower resolution than this weakens the growth rates significantly. For a few cases (\(q=-0.5\)), we doubled the resolution to 512 cells per pressure scale height for better convergence towards the predicted growth rates. The boundary conditions could in principle affect our simulations, but at least in the linear and axisymmetric regime, we find no severe impact of the quasi-reflective boundary conditions on the evolution of perturbations. We tested that in simulations without thermal relaxation as well as with thermal relaxation, but no temperature gradient, and found that perturbations always decayed. During the non-linear stage of the instability we start to see some reflection of waves at the boundaries, which implies that for larger scales and especially full three dimensional simulations one will have to introduce damping layers as in Manger & Klahr (2018).

We assume that our disks are vertically isothermal, such that the slope of midplane density \(p\) and temperature \(q\), as well as the aspect ratio \(h_{0}=H/R\) at radius \(R_{0}\), defines our initial and background state of temperature \(T(R,z)\) and density \(\rho(R,z)\) in our simulations as \[\rho(R,0)=\rho_{0}\left(\frac{R}{R_{0}}\right)^{p},\;\;\;\;\;T(R,0)=T_{0}\left(\frac{R}{R_{0}}\right)^{q}. \tag{2}\] The vertical density structure is then \[\rho(R,z)=\rho(R,0)e^{\frac{R^{2}}{H^{2}}\left(\frac{R}{\sqrt{R^{2}+z^{2}}}-1\right)}, \tag{3}\] and the equilibrium rotation profile is \[\Omega^{2}=\frac{MG}{R^{3}}\sqrt{1+q\left(1-\frac{R}{\sqrt{R^{2}+z^{2}}}\right)+(p+q)\frac{H^{2}}{R^{2}}}. \tag{4}\] Our simulations are dimensionless, thus the gravitational constant \(G\) and the stellar mass \(M\) are both equal to \(1\). Then the Keplerian speed at radius \(R=1\) is also \(1\), as is the Keplerian frequency \(\Omega_{\rm Kepler}=1\), which defines the orbital period as \(t_{\rm Orbit}=2\pi\).

In Figure 1 and Figure 2 we show the initial density and temperature structure for model A with a radial temperature gradient of \(q=-1\) and density gradient \(p=-1.5\). Note the non-alignment of density, pressure and entropy contours, which constitutes the baroclinic state of the disk. This baroclinicity is likewise the cause for vertical shear and for convectively unstable directions. Note that neither the GSF nor VSI have strictly vertical modes, nor does COS have strictly radial modes. Nevertheless, GSF and COS modes are typically orthogonal to each other. In Figure 1, we emphasize that even close to the midplane, convective modes are not strictly radial, and the vertical shear is weaker than in the atmosphere, as can be seen in Figure 2. Our models with a shallower temperature gradient \(q=-0.5\) are convectively stable in the midplane, as the entropy increases with distance to the star, but the pressure drops (see Figure 3). Outside the midplane, the pressure gradient turns towards the midplane, whereas the entropy gradient turns away from the midplane. As a result, outside the midplane, there are directions which are convectively unstable in the spirit of the COS.
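As an illustration of how this background state can be initialized, here is a minimal Python sketch of Eqs. (2)-(4); the function and variable names are our own, and the normalization \(T_{0}=h_{0}^{2}\) follows from the dimensionless units quoted above (Keplerian speed of unity at \(R_{0}=1\), so that the isothermal sound speed is \(c_{0}=h_{0}\)).

```python
import numpy as np

# illustrative parameters for the fiducial model
p, q = -1.5, -1.0        # midplane density and temperature slopes
h0 = 0.1                 # aspect ratio H/R at R0
R0, GM = 1.0, 1.0
rho0 = 1.0
T0 = h0**2               # T = c^2 in code units, c0 = h0 * v_K(R0)

def equilibrium(R, z):
    """Vertically isothermal background of Eqs. (2)-(4):
    returns rho, T and Omega at (R, z)."""
    T = T0 * (R / R0)**q                                   # Eq. (2)
    H = np.sqrt(T) / np.sqrt(GM / R**3)                    # H = c / Omega_K
    rho_mid = rho0 * (R / R0)**p                           # Eq. (2)
    rho = rho_mid * np.exp((R**2 / H**2) *
                           (R / np.sqrt(R**2 + z**2) - 1.0))   # Eq. (3)
    Om2 = (GM / R**3) * np.sqrt(
        1.0 + q * (1.0 - R / np.sqrt(R**2 + z**2))
        + (p + q) * H**2 / R**2)                           # Eq. (4) as written
    return rho, T, np.sqrt(Om2)

# e.g. the state at R = 1, one pressure scale height above the midplane
rho, T, Om = equilibrium(1.0, 0.1)
```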
We omit a figure for the same parameters for the upper atmosphere as it looks qualitatively like the model with the steeper temperature gradient.

### Growth Rates: Local spherical simulations

The full growth rates for COS and GSF have been determined by numerically solving the dispersion relation from Paper I, which is equivalent to the one in the paper by Goldreich & Schubert (1967) as well as in the book by Tassoul (2000): \[\omega^{3}+\omega^{2}\frac{i}{\gamma\tau}-\omega\left[N_{\bf k}^{2}+\kappa_{\bf k}^{2}\right]-\frac{i}{\gamma\tau}\kappa_{\bf k}^{2}=0. \tag{5}\] This involves identifying the optimum wavenumber vector \({\bf k}=(k_{R},k_{z})\) for either COS or GSF. Additionally, analytic approximations have been given in Paper I, as was already suggested by Urpin (2003). Stability and growth rates are determined by the oscillation frequency\(^{1}\) \(\kappa_{\bf k}^{2}\), based on the gradient of specific angular momentum \(j=R^{2}\Omega\):

Footnote 1: This is equal to \(Q^{2}\) in Urpin (2003). See our discussion in Paper I.

\[\kappa_{\bf k}^{2}=\frac{k_{z}^{2}}{k^{2}}\left(\kappa_{R}^{2}-\frac{k_{R}}{k_{z}}\kappa_{z}^{2}\right)=\frac{1}{R^{3}}\frac{k_{z}^{2}}{k^{2}}\left(\partial_{R}j^{2}-\frac{k_{R}}{k_{z}}\partial_{z}j^{2}\right), \tag{6}\] and the projected buoyancy frequency\(^{2}\) \(N_{\mathbf{k}}^{2}\):

Footnote 2: In the notation of Urpin (2003) this is the term \(\omega_{g}^{2}\), but we want to stick with the notation of Brunt-Väisälä frequencies.

\[N_{\mathbf{k}}^{2}=-\frac{c^{2}}{\gamma}\frac{\left(k_{R}b_{z}-k_{z}b_{R}\right)\left(k_{R}s_{z}-k_{z}s_{R}\right)}{k^{2}}=-\frac{c^{2}}{\gamma}\frac{\left(\mathbf{k}\times\mathbf{b}\right)\cdot\left(\mathbf{k}\times\mathbf{s}\right)}{k^{2}}, \tag{7}\] with the logarithmic pressure gradient \(\mathbf{b}\) and logarithmic entropy gradient \(\mathbf{s}\). Both \(\kappa_{\mathbf{k}}^{2}\) and \(N_{\mathbf{k}}^{2}\) are functions of \(R\) and \(z\), and depend on the direction of the velocity perturbation, which occurs perpendicular to \(\mathbf{k}\). The indicated analytic growth rates in Figure 4 are the fastest rates for either instability, i.e. their individual optimum \(\mathbf{k}\), see Paper I. As GSF is an instability with no real part in \(\omega\) and COS is an overstable oscillation with the real part of \(\omega\approx\Omega\), i.e. the local epicyclic frequency, it is straightforward to separate the COS and GSF growth rates.

To explain the shape of the growth rates as function of \(\tau\) and \(z\) it is helpful to analyze the approximate solutions of the dispersion relation. We find the local GSF growth rates for instantaneous cooling \[\Gamma_{\mathrm{GSF}}(\tau\ll\tau_{c})=\frac{1}{2}\frac{|\kappa_{z}^{2}|}{\Omega^{2}}\Omega=\frac{|q|}{2}\frac{|z|}{R}\Omega, \tag{8}\] thus height dependent and proportional to \(q\). For GSF as a function of normalized thermal relaxation time \(\tau^{*}=\tau\Omega\gamma\), we found \[\Gamma_{\mathrm{GSF}}=\frac{h^{2}q^{2}}{4}\frac{\gamma}{\gamma-1}\frac{1}{\gamma\Omega\tau_{\mathrm{c,GSF}}+\tau^{*}}\Omega, \tag{9}\] with the critical cooling time for GSF: \[\tau_{\mathrm{c,GSF}}=h\frac{H}{2|z|}\frac{|q|}{\gamma-1}\Omega^{-1}. \tag{10}\]

Figure 4: Numerically determined growth rates compared to analytic growth rates \(\Gamma\) for COS and GSF for \(p=-1.5\) as function of cooling time \(\tau\) for various heights above the midplane. (a): \(q=-1\) and (b): \(q=-0.5\). In ascending order \(z=0.1H,0.5H,H,1.5H,2H\) for a disk with \(H/R=0.1\) and the adiabatic index of \(\gamma=1.4\).
The solid blue line corresponds to \(z=H\). The strength of COS is mostly independent of height, which makes the red lines almost indistinguishable. The brown \(+\) symbols are measured growth rates in local axisymmetric hydrodynamic simulations (at 256/\(H\)) for a box centered around \(z_{0}=H\) and the magenta \(X\) symbols the same for a box at \(z_{0}=0\). For \(q=-0.5\) we added some runs at double resolution (512/\(H\)), indicated by magenta squares, which produces a better agreement.

For \(z=H/2\) this critical cooling time leads to the same result as the critical time for VSI, \(\tau_{c}\). Thus for long cooling times the growth rate is independent of \(z\) and proportional to \(q^{2}\) and \(h^{2}\). Likewise for the COS we derived: \[\Gamma_{\rm COS}=\frac{h^{2}q^{2}}{8}\frac{\gamma}{\gamma-1}\frac{\tau^{*}}{1+{\tau^{*}}^{2}}\Omega, \tag{11}\] which has a maximum for \(\tau^{*}=1\). For \(\tau^{*}>1\), the GSF and COS growth rates attain a constant ratio of \(2\) with respect to each other and this result is largely independent of the radial density structure.

We plot the predicted growth rates as a function of cooling time for a selected range of heights in Fig. 4 for \(q=-1\) and for \(q=-0.5\). We also include the estimates for the critical VSI cooling time from Lin & Youdin (2015) and the growth rates from Nelson et al. (2013) as vertical and horizontal yellow lines, respectively, and find them to be good indicators for the asymptotic behavior of the GSF growth rates at large heights for short cooling times. We find that the GSF growth rates for \(\tau^{*}=1\) are approximately one order of magnitude smaller than the optimal growth rates for GSF at the respective height, and COS is therefore 40 times slower. For much longer cooling times like \(\tau^{*}=10\), both GSF and COS growth times are about two orders of magnitude longer than for the optimal VSI. We also added our measured values for growth rates from our numerical experiments in the following section to these plots and will discuss them in that section.

The analytic growth rates are predictions for the linear phase of growth. Based on these growth rates we cannot estimate what the saturated level of turbulence may be or, even harder, what the contribution to turbulent angular momentum transport will be. Both questions, even though very relevant, are beyond the scope of this paper. The fact that we thought disks to be stable with respect to VSI beyond the critical cooling time in our recent simulations (Manger et al., 2021) hints at the problem that low growth rates need low numerical dissipation. Thus a careful choice of solving scheme and sufficient spatial resolution is key. A first attempt at establishing this knowledge is part of the following section.

All models for \(q=-1\) (and respectively \(q=-0.5\)) use the same initial density and temperature structure, which also defines the temperature towards which temperature fluctuations are relaxed. In the PLUTO code we control the cooling by updating the pressure according to: \[P^{t+dt}=\rho T_{0}-\left(\rho T_{0}-P^{t}\right)e^{-dt/\tau}, \tag{12}\] which is stable for an arbitrarily short cooling time. The Strang splitting method is typically applied in the PLUTO code, i.e. one switches the order of hydrodynamic versus cooling operators each timestep.
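As a concrete illustration, a minimal sketch of this relaxation update and of the alternating operator order could look as follows; `hydro_step` is a pure placeholder for the full hydrodynamic update, and the refinement actually used for very short cooling times is described next.

```python
import numpy as np

def relax_pressure(P, rho, T0, dt, tau):
    """Exponential relaxation of the pressure towards rho * T0 over a
    cooling time tau, as in Eq. (12); exact for any dt/tau and hence
    stable for arbitrarily short cooling times."""
    return rho * T0 - (rho * T0 - P) * np.exp(-dt / tau)

def advance(P, rho, T0, dt, tau, hydro_step, even_step):
    """Alternate the order of the cooling and hydro operators every
    time step, as described above (hydro_step is a placeholder)."""
    if even_step:
        P = relax_pressure(P, rho, T0, dt, tau)
        P = hydro_step(P, dt)
    else:
        P = hydro_step(P, dt)
        P = relax_pressure(P, rho, T0, dt, tau)
    return P
```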
However, this had to be modified to a leap-frog type of splitting, to properly handle cooling times that are smaller than the hydrodynamic step: \[P^{t+0.5dt} = \rho T_{0}-\left(\rho T_{0}-P^{t}\right)e^{-0.5dt/\tau}, \tag{13}\] \[P^{*} = f_{\rm hydro}(P^{t+0.5dt}), \tag{14}\] \[P^{t+dt} = \rho T_{0}-\left(\rho T_{0}-P^{*}\right)e^{-0.5dt/\tau}. \tag{15}\] The cooling time is varied from \(\tau^{*}=10^{-5}\) to \(\tau^{*}=\infty\) (no cooling) and the initial perturbation is \(10^{-4}\) of density, but the pressure remains unchanged, i.e. we perturb the temperature inversely to the density.

It is not trivial to study the linear evolution phase of an instability with a Godunov solver aimed to be stable and accurate in the presence of shocks. One would actually wish to use a low Mach-number code (Almgren et al., 2006; Edelmann et al., 2021), which is a topic for future projects, or a high-order finite difference method such as the Pencil Code (Lyra and Klahr, 2011; Lyra, 2014). In addition, a spectral code with a Boussinesq ansatz, as in the code SNOOPY, has been successfully used for VSI simulations (Latter and Kunz, 2022) as well as for unstratified COS simulations (Teed and Latter, 2021). Unfortunately, in quasi-incompressible local Boussinesq simulations the ability of sound waves to carry angular momentum (Heinemann and Papaloizou, 2009) is suppressed. Thus one needs global compressible simulations to study transport processes as well as the possible formation of particle traps like zonal flows and vortices. For the moment, we use a code that we know can handle global VSI simulations (Manger et al., 2021; Pfeil and Klahr, 2021). If our code could not reproduce the linear growth rates, we would also not trust the results of a three dimensional simulation.

With the PLUTO 4.3 code (Mignone et al., 2007) we found that high order time integration (Runge Kutta 3) and space interpolation (WENO3) in combination with an accurate Riemann solver (in our case a Roe solver) is essential for converging results at reasonable resolution. Parabolic interpolation of 5th order in space is slightly less suitable. We tried both conservation of total energy and entropy conservation for the energy equation, with little difference in the linear phase. Thus our simulations use the total energy scheme of PLUTO, which is numerically slightly cheaper than the entropy conserving scheme.

Figure 5: Classical GSF for short cooling times: \(\tau^{*}=10^{-5}\). Simulation snapshots for \(q=-1,p=-1.5\), (a): for the midplane (\(z=[-0.5H,0.5H]\)) and (b): for the atmosphere (\(z=[0.5H,1.5H]\)). We plot the vertical velocities (\(v_{z}\)) indicated with a red (positive = upward) and black/cyan (negative = downward) color scheme. Same simulations as in Fig. 6.

Figure 6: Evolution of r.m.s. and maximal velocities for \(q=-1,p=-1.5,\tau^{*}=10^{-5}\). (a): for the midplane (\(z=[-0.5H,0.5H]\)) and (b): for the atmosphere (\(z=[0.5H,1.5H]\)), in units of the speed of sound and as function of time in units of orbits. We show the r.m.s. velocities (in the \(R\)-\(z\) plane) as solid black line. The largest radial velocity is indicated with the red line and the largest vertical velocity with the blue line. The magenta and yellow lines are the fitted growth rates \(\Gamma_{1}\) and \(\Gamma_{2}\) for two different time intervals, given in the plot along with the analytic growth rate range for the simulated domain.
In Fig. 4, we plot the measured growth rates for \(q=-1\) for the midplane centered boxes (models A: crosses \(+\)) and the \(z_{0}=H\) centered boxes (models B: \(X\)) along with the analytic predictions for GSF (blue lines) and COS (red lines) for various heights. The dotted line indicates the separation between the two sets of simulations, i.e. the growth rate for \(z=0.5H\). Thus, we find a good reproduction of the predicted growth rate. For short cooling times, the GSF modes clearly must be the drivers of growth. For longer cooling times, the simulations in the higher atmosphere clearly show the right amount of decrease in growth as expected for GSF, yet for the midplane boxes the measured growth rates could be produced by either COS or GSF. We therefore inspect a trio of models with a range of cooling times \(\tau^{*}=10^{-5},\,1,\,10\) more closely, which will also show how we obtained the plotted growth rates for Fig. 4.

#### 2.2.1 Models: \(\tau^{*}=10^{-5}\)

Models with extremely short cooling timescales are the closest to the classical simulations of VSI (Nelson et al., 2016; Stoll and Kley, 2014; Manger and Klahr, 2018; Manger et al., 2020, 2021; Pfeil and Klahr, 2021). In Fig. 5, we show snapshots of the vertical velocities and temperature perturbations during the linear growth phase. One clearly recognizes the radially alternating vertical motion of the gas driven by the VSI/GSF. With careful inspection one finds that the direction of the vertical motion is neither purely vertical nor along contours of constant angular momentum, but half-way between both. This elucidates the cause of the linear GSF modes, as a flow of higher angular momentum material upward and more importantly outward into a region of lower angular momentum, just as in the classical Rayleigh criterion for rotational stability. This is the fundamental cause of the GSF and thus VSI.

In Fig. 6, we show the time evolution of the velocities. We plot for both models the overall r.m.s. velocities of the radial and vertical components in order to measure the growth rates, as well as the largest velocities. We plot vertical velocities in blue and radial velocities in red; once more one recognizes that for both the midplane and the atmosphere models the GSF dominates, even at the midplane with the smaller expected growth rate. The saturation level of the r.m.s. speed in both cases is very similar, despite the difference in growth rate. Interestingly, we observe a different saturation effect for GSF modes than the effects of saturation discussed for global VSI simulations by Latter and Papaloizou (2018) and Cui and Latter (2022), probably due to the local nature of our simulations. In fact, after the linear growth phase, clockwise rotating eddies emerge, very similar to the ones we show in the following paragraph, where we will discuss them some more.

#### 2.2.2 Models: \(\tau^{*}=1\)

For a cooling time \(\tau^{*}=1\), the COS modes are expected to reach their fastest growth and the GSF modes grow significantly slower than in the \(\tau^{*}=10^{-5}\) case, yet still about four times faster than the COS (see Fig. 7). In Fig. 8, we see that for the \(z_{0}=H\) case, one still recognizes the typical GSF modes during the linear growth phase, and Fig. 7 confirms that the measured growth rates coincide with the predicted GSF growth rates for the first 200 orbits. After the linear GSF growth comes to a halt, the system slowly continues to evolve, still growing, but at a lower rate.
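The fitted growth rates \(\Gamma_{1}\) and \(\Gamma_{2}\) quoted in the figures correspond to exponential fits over two such time intervals (the early linear phase and the later, slower growth). A minimal sketch of such a fit is given below; the arrays `t` and `v_rms` and the chosen intervals are hypothetical, and the log-linear least-squares approach is our own illustrative choice.

```python
import numpy as np

def fit_growth_rate(t, v_rms, t_start, t_end):
    """Least-squares fit of v_rms ~ exp(Gamma * t) on [t_start, t_end].
    With t in units of 1/Omega, Gamma comes out in units of Omega."""
    mask = (t >= t_start) & (t <= t_end)
    slope, intercept = np.polyfit(t[mask], np.log(v_rms[mask]), 1)
    return slope  # = Gamma

# e.g. one fit for the early (GSF-dominated) interval and one for the
# later, slower growth phase of the same run (intervals are illustrative):
# Gamma1 = fit_growth_rate(t, v_rms, 2 * np.pi * 10,  2 * np.pi * 150)
# Gamma2 = fit_growth_rate(t, v_rms, 2 * np.pi * 400, 2 * np.pi * 1500)
```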
At this stage, the developing pattern is no longer dominated by vertical motion, but forms small loops in the \(R\)-\(z\) plane as can be seen in Fig. 8. The loops are initially located in the bands of constant angular momentum created by the linear phase of the GSF and over time create more extended radial regions of constant angular momentum (see Movie 1a). We found that these eddies are not driven by convection, as we measured no corresponding down-gradient transport of entropy, but instead the opposite: they mix entropy downward to the midplane. Note that they also appear in the \(\tau^{*}=10^{-5}\) simulation, where a short cooling time prevents any convection. The eddies can also not be driven by vertical shear, as they sit in bands of constant angular momentum, where there is no vertical shear. In a way the GSF has produced a state in which it cannot operate anymore, which defines the end of its growth.

The situation at the midplane is also very interesting. Initially, vertical GSF-like modes dominate, and later radial COS modes take over, as can be seen in the snapshots (Fig. 8) as well as in the growth rates (Fig. 7). Initially, the largest velocities are the vertical motions (blue curve) and later, from about 1000 orbits onward, radial oscillations (red curve) are stronger. In fact, COS modes grow right from the start, and we have a nice superposition of COS and GSF modes, which in their linear stage do not affect each other. After about 1700 orbits, the radial oscillations of the epicycles are so strong that the vertical shear between an outward moving and an inward moving band becomes unstable to vertical shear modes. Briefly, the vertical velocities dominate over the radial velocities, damping the radial velocities, which then start to grow again. This can best be seen in a movie created from this simulation.

Lyra (2014) and Teed and Latter (2021) already discuss various phases of growth for unstratified COS simulations, and the second phase was attributed to a Kelvin-Helmholtz instability (KHI) acting as a parasitic effect on the channel modes (radial epicyclic COS oscillations). In our stratified simulations, we argue that saturation of COS modes occurs when the channel modes, which conserve angular momentum, reach an amplitude at which the oscillating vertical shear between inward and outward moving sheets of gas becomes unstable as a variant of the GSF itself. More precisely, as soon as the GSF growth time, based on the amplitude of the oscillation of vertical shear, is shorter than the oscillation period, we get a small outburst of GSF that removes part of the vertical shear and thus damps the radial COS modes. After that the COS grows again, eventually leading to the next GSF eruption (see Movie 1b). Note that this GSF is not a product of the baroclinic state of the disk, but a secondary instability once the COS modes have an appropriate amplitude.

Figure 7: Evolution of r.m.s. and maximal velocities for \(q=-1,p=-1.5,\tau^{\star}=1\) in the disk atmosphere (\(z=[0.5H,1.5H]\)) in units of the speed of sound and as function of time in units of orbits. We show the r.m.s. velocities (in the \(R\)-\(z\) plane) as a solid black line. The largest radial velocity is plotted with a red line and the largest vertical velocity with a blue line. The yellow and magenta dashed dotted lines are the fitted growth rates \(\Gamma_{1}\) and \(\Gamma_{2}\) given in plot, along with the analytic growth rate range for the simulated domain.

#### 2.2.3 Models: \(\tau^{\star}=10\)
For even longer cooling times, we still find the evolution of GSF modes in the atmosphere and clear COS modes in the midplane. In the latter case, the radial modes dominate throughout the run. One also finds the intermittent outbreak of GSF modes, once COS reaches a certain amplitude (see Fig. 12).

#### 2.2.4 Models: \(q=-0.5\)

We did the same exercise with simulations using a shallower temperature gradient of \(q=-0.5\) and our measurements of linear growth rates can be found in Figure 4. For the \(z_{0}=H\) runs we found good agreement between the predicted GSF growth rates and the measured values, but not as good as in the \(q=-1\) case. We explain this discrepancy by numerical dissipation, which damps the growth of modes. Therefore we added runs with double the resolution (512 cells per scale height) and measured growth rates much closer to the analytic prediction. For the models in the midplane \(z_{0}=0\), the measured growth rates are much lower than predicted, especially for longer cooling times. The missing crosses in Figure 4 indicate simulations for which we did not find a linear growth phase, despite growing velocities, and thus could not measure a proper growth rate.

Figure 8: Simulation snapshots for \(q=-1,p=-1.5,\tau^{\star}=1\). An early snapshot showing the linear development of GSF modes (a) and a later snapshot showing the non-linear pattern of circulation regions (b), both in the atmosphere (\(z=[0.5H,1.5H]\)). We plot the deviation of local temperature \(\Delta T=T^{\prime}/T_{0}\) and vertical velocities in the \(R\)-\(z\) plane. The further development of this simulation can be found in Fig. 9.

One possible continuation of these studies would be to increase the resolution, which might yield stronger growth rates and extended linear periods of growth, but might require a switch to a much less dissipative scheme. Note that by increasing the resolution, we also end up with a smaller time step, which means that our sub-sonic motions do not benefit as much from the increased resolution as they would in an incompressible scheme. An incompressible scheme like Snoopy (Lesur & Longaretti, 2005) or possibly a low-Mach number scheme (Almgren et al., 2006) should be able to clarify this in the future. Nevertheless, based on our numerical experiments, we argue that the predicted growth rates from our dispersion relation can actually be recovered in non-linear simulations. We can identify GSF modes and COS modes in the linear evolution of vertically vs. radially dominant velocity perturbations. Even more importantly, we can confirm that GSF does not die out at cooling times longer than the critical one, but simply takes a little longer to grow. But clearly we have shown that one cannot construct a stable atmosphere for a protoplanetary disk with a radial temperature gradient and a non-vanishing thermal relaxation, i.e. any realistic protoplanetary disk.

So far we identified GSF and COS modes in our setups by checking growth rates and dominant directions of velocity. Dominant vertical motions should correspond to GSF modes and dominant radial oscillations to COS modes. But this only holds for the first hundred orbits of the linear growth phase. For instance, in the model depicted in Fig. 8 and 9 with \(q=-1\) around \(z_{0}=H\), we find the end of the linear growth at about 600 orbits (see Fig. 7), after which radial and vertical velocities are similar in amplitude and the growth rates are still positive at a level of \(\Gamma=4\times 10^{-4}\).
The question is whether this continued growth, which is distinguished by the development of little whirls inside the bands of roughly constant angular momentum, is originally created by the VSI and GSF or by the COS. Inspecting the expected growth rates for COS, which are five times stronger than the measured ones, and which can be reproduced in the linear phase of the midplane evolution, makes it hard to explain the discrepancy by the dissipative scheme of the PLUTO simulations.

Figure 9: Simulation snapshots for \(q=-1,p=-1.5,\tau^{\star}=1\) showing the non-linear pattern of circulation regions in the atmosphere (\(z=[0.5H,1.5H]\)), sitting in bands of locally constant angular momentum \(j=\Omega R^{2}\). We plot the deviation of local temperature \(\Delta T=T^{\prime}/T_{0}\); absolute velocities in the \(R\)-\(z\) plane indicate pseudo-streamlines, the darkest regions correspond to the largest velocities and the white regions to zero velocity.

It could also be that this is actually already a non-linear growth phase or at least a new linear growth phase, in which the background state has significantly changed and needs a new analysis. Before one starts such an attempt, we can analyze the driving force of this swirl amplification. We first measured the net entropy transport along the streamlines of the swirl in order to interpret the swirl as a convection cell that converts buoyancy into motion. But this analysis of the flow does not support the idea that the swirls are convection cells driven by entropy transport. In fact, we found entropy to be mostly mixed downward to the midplane, as expected.

Figure 10: Evolution of r.m.s. and maximal velocities for \(q=-1,p=-1.5,\tau^{*}=1\) in the disk midplane (\(z=[-0.5H,0.5H]\)) in units of the speed of sound and as function of time in units of orbits. We show the r.m.s. velocities (in the \(R\)-\(z\) plane) as a solid black line. The largest radial velocity is plotted with a red line and the largest vertical velocity with a blue line. The yellow and magenta dashed dotted lines are the fitted growth rates \(\Gamma_{1}\) and \(\Gamma_{2}\) given in plot, along with the analytic growth rate range for the simulated domain.

Figure 11: Simulation snapshots for \(q=-1,p=-1.5,\tau^{*}=1\) (c), (d) and (e) for the midplane (\(z=[-0.5H,0.5H]\)). Continuation of Fig. 8. See also Fig. 7.

So to see if these modes depend on the vertical shear or the entropy gradient, we choose to either switch off GSF or suppress COS and see what happens in such a case. We have already argued that in a real disk this is not possible, so we apply a slightly artificial disk simulation in the following section.

### Separating the modes

In a real disk, it is impossible to separate GSF and COS for larger cooling times, because for all possible configurations of our atmospheres we found the growth rates of GSF to dominate. Furthermore, for cooling times suitable for COS, the growth rates of GSF and COS scale similarly with respect to the disk structure, as shown in Equations (11) and (9). The cause for both instabilities is the magnitude of the baroclinic term in the radial-vertical structure of the disk, i.e.
the vertical gradient of angular momentum being proportional to the \(\phi\) component of the cross product of density and pressure gradients: \[\kappa_{z}^{2}=-\frac{1}{\rho^{2}}\left(\nabla\rho\times\nabla P\right)_{\phi}. \tag{16}\]

Figure 12: Simulation snapshots for \(q=-1,p=-1.5,\tau^{*}=10\), (a) for the atmosphere (\(z=[0.5H,1.5H]\)) and (b) for the midplane (\(z=[-0.5H,0.5H]\)). We plot the deviation of local specific entropy \(K=P\rho^{-\gamma}\) with \(\Delta K=K^{\prime}/K_{0}\) in units of the background entropy. Lighter indicates more entropy, darker less entropy. Absolute velocities in the \(R\)-\(z\) plane indicate pseudo-streamlines, the darkest regions correspond to the largest velocities and the white regions to zero velocity, overall normalized to the velocities at the given time.

However, we can use a trick in our numerical simulations. We can modify the applied gravitational forces in a way such that they are no longer conservative (derived from a potential), but instead they either balance the radial buoyancy to remove vertical shear, or they introduce vertical shear for disks without a radial temperature gradient. To simplify things, we first define vertical gravity in the \(z\ll R\) limit \[g_{z}=-MG\frac{z}{R^{3}}=-\Omega^{2}z, \tag{17}\] instead of full gravity with the spherical radius \(r^{2}=z^{2}+R^{2}\). The associated density structure is then the classical Gaussian shape \[\rho=\rho_{0}e^{-\frac{z^{2}}{2H^{2}}}. \tag{18}\] The full radial component of gravitational acceleration in this \(z\ll R\) approximation would be \[g_{R}=-\frac{MG}{R^{2}}, \tag{19}\] but in order to balance the thermal wind equation (see Eq. 16) and enforce \(\kappa_{z}^{2}=0\), even when \(q\neq 0\), we use a radial acceleration that changes slightly with height \[g_{R}=-\frac{MG}{R^{2}}\left(1-\frac{3+q}{2}\frac{z^{2}}{R^{2}}\right). \tag{20}\] This corresponds to a tiny \(1\%\) modification of gravity at \(z=H=0.1R\) for \(q=-1\). The resulting equilibrium rotation profile for the initial condition is then \[\Omega^{2}=\frac{MG}{R^{3}}\sqrt{1+(p+q)\frac{H^{2}}{R^{2}}}. \tag{21}\] In such a setup, the disk will allow for the convective modes (COS) but not for the vertical GSF shear modes.

If, on the other hand, we wish to suppress the convective modes, we can set \(q=0\) for the actual density and temperature structure of the disk. In contrast to the above strategy, we modify the radial gravity to produce a \(\kappa_{z}^{2}\) appropriate for a hypothetical temperature gradient \(q^{*}\): \[g_{R}=-\frac{MG}{R^{2}}\left(1-\frac{3-q^{*}}{2}\frac{z^{2}}{R^{2}}\right), \tag{22}\] which implies that \[\Omega^{2}=\frac{MG}{R^{3}}\sqrt{1+q^{*}\frac{z^{2}}{2H^{2}}+p\frac{H^{2}}{R^{2}}}, \tag{23}\] and radial buoyancy, and therefore COS, is eliminated but not the GSF.

We perform these simulations in cylindrical (\(R,z\)) coordinates for the same parameters \(q=-1\), \(\tau^{*}=1\) and \(z_{0}=H\) as for the spherical model, but use an even smaller simulation domain to properly resolve the instabilities without a burdensome numerical cost. We also desire to minimize the effect of using non-conservative gravity, because, as in M.C. Escher's famous infinite staircase, it could be possible to circulate in a closed streamline in our simulation domain and continually release potential energy. Using a simulation with instantaneous cooling but suppressed GSF modes, we tested that this effect does not lead to an artificial instability in our simulations.
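For reference, the three gravity prescriptions can be summarized in a few lines. This is a sketch of Eqs. (17), (19), (20) and (22) with our own function names, not code taken from the simulations.

```python
GM = 1.0  # dimensionless units, G = M = 1

def g_z(R, z):
    """Vertical gravity in the z << R limit, Eq. (17)."""
    return -GM * z / R**3

def g_R_full(R, z):
    """Radial gravity in the z << R limit, Eq. (19): conservative case."""
    return -GM / R**2

def g_R_no_gsf(R, z, q):
    """Eq. (20): height-dependent correction that enforces
    kappa_z^2 = 0 (no vertical shear) for an actual gradient q,
    so that only COS modes can grow."""
    return -GM / R**2 * (1.0 - 0.5 * (3.0 + q) * z**2 / R**2)

def g_R_no_cos(R, z, qstar):
    """Eq. (22): for a disk with q = 0 (no radial buoyancy, hence no
    COS), impose the vertical shear of a hypothetical gradient qstar,
    so that only GSF modes can grow."""
    return -GM / R**2 * (1.0 - 0.5 * (3.0 - qstar) * z**2 / R**2)
```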
The radial and vertical domain centered around a point at \(z_{0}=H\) is now only \(0.2H\) wide, thus \(0.99<R<1.01\) and \(0.09<z<0.11\). With 256 cells in both directions we have a resolution of 1280 cells per scale height. True resolution studies for convergence should eventually also consider micro physics, like molecular viscosity and thermal conductivity, or realistic radiative processes appropriate in optically thick and thin regimes. Without these processes the range of unstable wavenumbers is unlimited and with higher resolution more unstable modes are possible.

We conduct three different simulations for the same setup in terms of disk parameters \(q,p,h,\tau^{*}\), but either use a "full" gravity model, with the correct conservative gravitational potential, a "No GSF" model, which suppresses vertical shear, or a "No COS" model, which has no radial temperature gradient, but vertical shear due to the modified gravity. They all start from a \(1\%\) perturbation in density, but no perturbation in pressure.

In Fig. 13, we compare the three simulations after 100 orbits. Both the full model and the "No COS" model show the prominent vertical GSF modes. The "No GSF" model shows the radial convective oscillations of COS. The absolute values of the velocities can be read from the time evolution of velocities in the three models in Fig. 14. We also produced a movie of the "No GSF" simulation (see Movie 2), which displays the radial oscillations, which slowly move upward away from the midplane, while slowly growing in amplitude. Both models allowing for GSF (full and "No COS") reach the end of the linear growth phase after 200 orbits, at which time the r.m.s. velocity is about \(3\times 10^{-4}c_{s}\), whereas the pure COS simulation ("No GSF") grows for 1500 orbits to saturate at a level of \(v_{\rm r.m.s.}=2\times 10^{-3}c_{s}\). At this amplitude, the COS channel modes create parasitic GSF modes (see Fig. 15), the same as shown in the spherical simulations close to the midplane in the previous section. Note that the measured growth rates are about 2 times smaller than the analytic prediction, which may be related to the resolution, to the radial limitation of the computational domain, or possibly the artificial modification of the gravitational potential. In the full simulations in the previous section (see Fig. 10), the measured COS growth rates matched the analytically derived estimate. The velocity amplitudes of the GSF runs continue to grow over time in a second growth phase, albeit at rates even lower than those expected for the COS modes. The "No COS" model stalls its growth at an amplitude of \(v_{\rm r.m.s.}=10^{-2}c_{s}\), but the full model continues to grow.

Figure 13: Simulation snapshots at \(t=100\) orbits for \(q=-1,p=-1.5,\tau^{*}=1\) at \(z_{0}=H\). In (a): the full model, (b): suppressed COS modes and (c): suppressed GSF modes.

### Diagonal COS modes

Our simulations with \(q=-0.5\) did not show a clear linear growth of any modes, even though there should have been some unstable diagonal / slanted modes, albeit growing over rather long time scales. So do they exist and are they only damped by the numerical scheme? To test the hypothesis that in radially stably stratified regions one can still have diagonal convective modes, we chose a radial temperature profile of \(q=-4\), which according to our equations should provide fast enough growth for COS modes. For a normal disk structure, this would be unstable to radial convection, so we choose an ad hoc density gradient of \(p=4.4\).
Thus, as density increases radially, like at the transition between an inner cavity and the disk, the radial pressure gradient always points outward for all heights considered, whereas the entropy will always decrease outward. It is not necessary to discuss if and where such a stratification would occur in a disk; it is only important that we show that for this case the predictions from the linear analysis can be tested. From this test we can draw conclusions about the validity of the dispersion relation for inclined modes in general. We would have preferred to do such a test for the \(q=-0.5\) models, but with our numerical scheme and resolution, we were not able to clearly identify the diagonal modes in these runs even for resolutions of 1024 cells per scale height. Possibly some other numerical hydrodynamic scheme will be able to do so eventually.

Our model uses the cylindrical setup, so we can test the full model and compare it with the model of suppressed GSF ("No GSF") using the modified gravity from the previous section. Our model has the dimensions in radial direction \(0.95<R<1.05\) and vertical direction \(-0.05<z<0.05\), using 512 cells each in both directions and the usual closed boundary condition. Disk aspect ratio \(h=0.1\) and adiabatic index \(\gamma=1.4\) are the same as in all the other models, and the cooling time is \(\tau^{*}=1\). We perturb all three velocity components with random values in the range of \(\pm 10^{-5}\) of the local speed of sound to allow for a fair comparison of the "No GSF" and full models, which would respond differently to a perturbation in density.

In Fig. 16 we show a snapshot at \(t=300\) during the linear growth phase. The COS modes appear as diagonal perturbations, which actually propagate orthogonal to their wave vector. The details of the evolution of these modes can best be seen in a movie of this simulation (see Movie 3). The growth rates are on the expected order of magnitude. The modes saturate after about 600 orbits, at which time the flow becomes chaotic. To show these diagonal modes was a matter of principle. If we do not suppress vertical shear, then as for all other parameters, the GSF will grow faster and eventually create bands of constant angular momentum with embedded eddies, at least for the axisymmetric setups considered here (see Fig. 16). We will discuss the cause for the eddies some more in the following section.

## 3 Horizontal Convection

In basically all runs outside the midplane we found the formation of in-plane eddies, for short as well as long cooling times, as well as in the simulations of the previous section (see Fig. 15), as long as the disk was baroclinic. Only the "No GSF" run showed no such eddies forming.

Figure 14: Evolution of r.m.s. and maximal velocities for \(q=-1,p=-1.5,\tau^{*}=1\) for a small section of the atmosphere (\(z=[0.9H,1.1H]\)) in units of the speed of sound and as function of time in units of orbits. In (a): full model, (b): No COS: convective modes are suppressed and (c): No GSF: suppressed GSF modes. We show the r.m.s. velocities (in the \(R\)-\(z\) plane) as a solid black line. The largest radial velocity is indicated by the red line and the largest vertical velocity by a blue line. The yellow and magenta lines are the fitted growth rates \(\Gamma_{1}\) and \(\Gamma_{2}\) given in plot, along with the analytic growth rate range for the simulated domain.

The reason for these in-plane whirls is two-fold.
First, the vertical modes of GSF eventually create regions of constant angular momentum; second, in these regions the baroclinic term is no longer balanced by vertical shear (thermal wind), but produces vorticity. In the fluid dynamics literature this effect is called horizontal convection (HC), which is not driven by a horizontal super-adiabatic stratification, but directly by the baroclinic term. It occurs in systems with negligible rotation, which are baroclinic because of differential heating, for instance the sea-breeze effect (Holton & Hakim, 2012). In the linear stage of our disk simulations \(\kappa_{z}^{2}\) is balanced by the baroclinic term, but once the vertical shear vanishes we have \(\kappa_{z}^{2}=0\). For the "thermal wind equation" we explicitly set radial and vertical velocities to constant zero, \(v_{R}=v_{z}=0\); but now we can take the curl of the momentum equations and, introducing the \(\phi\) component of vorticity \(\omega_{\phi}=\partial_{R}v_{z}-\partial_{z}v_{R}\), we find: \[\partial_{t}\omega_{\phi}=\frac{\left(\nabla\rho\times\nabla P\right)_{\phi}}{\rho^{2}}+\kappa_{z}^{2}. \tag{24}\] Thus in the initial state with vertical shear there is no horizontal convection possible. But once vertical shear and the radial angular momentum gradient are locally removed, vorticity can be created, as the global rotation of the disk can be ignored. In that case we have a typical case of horizontal convection, the driving mechanism behind the sea-breeze in geophysics (Holton & Hakim, 2012).

Figure 15: Simulation snapshots after the end of the individual linear growth phase for \(q=-1,p=-1.5,\tau^{*}=1\) at \(z_{0}=H\). Each row starting from the top: (a) the full model, (b) suppressed COS modes, and (c) suppressed GSF modes.

In global simulations one can observe the same effect, but additionally one finds that the shear between the vertical modes undergoes a Kelvin-Helmholtz instability (see Melon Fuksman et al. A&A, submitted). But those eddies rotate counterclockwise, following the radial shear of vertical motions between the bands of constant angular momentum. Yet inside the bands of constant angular momentum the eddies we observe in our simulations are all rotating clockwise, as forced by the sign of the baroclinic term. It seems that our local simulations saturate before the KHI can be triggered.

Like the COS and GSF, HC feeds on the baroclinicity of the disk, yet cannot be described by plane waves. For the COS, we consider oscillations that are amplified if the cooling time and the oscillation period are similar. For the horizontal convection in a disk, there already have to be closed loops, i.e. closed streamlines of constant angular momentum, so the flow in the whirl can move freely. If we now perform an integral of \(P\mathrm{d}V\) along this streamline, we can determine the work set free in the whirl, which in terms of density and pressure along the path \(s\) is given by the integral \[W=\oint_{s}P\frac{\partial V}{\partial s}\,\mathrm{d}s=-\oint_{s}\frac{P}{\rho^{2}}\frac{\partial\rho}{\partial s}\,\mathrm{d}s. \tag{25}\] If we adopt the initial density and temperature structure for our disks, we can easily evaluate this integral numerically for arbitrary loop shapes and sizes. Using Stokes' theorem, we can replace the line integral by a surface integral, where a clockwise circulation produces a positive area (and an anti-clockwise circulation a negative area) \[W=\int_{S}\frac{\nabla\rho\times\nabla P}{\rho^{2}}\cdot d\mathbf{S}=-\int_{S}\kappa_{z}^{2}\,dS. \tag{26}\]
Thus in the northern hemisphere, where \(\kappa_{z}^{2}<0\) for the initial equilibrium structure, clockwise circulations are amplified, whereas in the southern hemisphere counter-clockwise whirls are amplified. For a barotropic system, the integral of \(P\mathrm{d}V\) for a closed loop is always zero and thus no energy can be released, even though there may be changes in density and pressure along the path. But in a disk with a prescribed density structure of \(p=-1.5\) and an independent temperature structure with \(q=-1\), eddies will release energy when they rotate clockwise in the atmosphere above the midplane or counterclockwise in the atmosphere below the midplane. The energy released by the eddy scales with its size and with the velocity of the flow. If there was no thermal relaxation, then entropy would be conserved along the fluid line and the work integral would also always be zero. But for thermal relaxation towards said initial configuration, the atmosphere has a chance to maintain its baroclinicity.

Figure 16: Simulation snapshot during the linear growth phase for \(q=-4,p=+4.4,\tau^{*}=1\) at \(z_{0}=H\). When GSF is suppressed, one can clearly see the diagonal / slanted convection modes. In the full model GSF modes dominate.

This is now already a difference with respect to the COS. The COS is bound by the epicyclic frequency and thus will operate poorly for larger cooling rates. But horizontal convection has no such restriction. The shorter the cooling time, the faster the gas can flow; otherwise the cooling time will limit the velocity of the expanding gas. The eddies, on the other hand, sit in a baroclinic structure and thus they can be amplified for arbitrarily short or long cooling times. Of course for long cooling times the velocity of the eddies will be limited by the cooling rate. Checking Fig. 15, we can clearly identify the eddies for the full model. Evaluating the integral \(W\) along a streamline inside a whirl indeed results in a net release of thermal energy.

But why does the pure COS model "No GSF" not show these modes? The integral of \(P\mathrm{d}V\) would still suggest the release of energy; however, in modifying the implementation of gravity we also have to integrate the release of potential energy in our non-conservative setup. And as gravity was modified to remove the vertical shear, the integral of gravitational forces exactly cancels the release of thermal energy and the baroclinic eddies vanish. The "No COS" simulation, on the other hand, has a barotropic background and thus the integral of \(P\mathrm{d}V\) vanishes. But we still see the same whirls, at least initially, as in the full simulation. In this case, the modified gravity used to generate vertical shear allows for the continuous release of potential energy, as alluded to in the aforementioned Escher staircase analogy. We see that while it may still be valid to study the linear phase of GSF and COS in the modified gravity regime, the non-linear phase will contain some non-physical results.

So at least in the full version of our axisymmetric simulations, we can clearly identify these baroclinically driven eddies, which are responsible for increasing the r.m.s. velocity after the saturation of the GSF modes by an order of magnitude. All whirls rotate clockwise, as the counterclockwise whirls would convert motion into heat like a heat engine. In the southern hemisphere, below the midplane, of course the counterclockwise eddies will be the ones amplified.
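As stated above, the work integral of Eq. (25) can be evaluated numerically for arbitrary loops in the prescribed background structure. The following is a minimal sketch for a circular loop, with hypothetical function handles for the background density and pressure; the orientation that yields \(W>0\) above the midplane is the clockwise one discussed in the text.

```python
import numpy as np

def work_integral(R_c, z_c, radius, rho_fn, P_fn, n=2000, clockwise=True):
    """Evaluate W = -oint (P / rho^2) drho (Eq. 25) along a circular
    loop of the given radius centred on (R_c, z_c) in the (R, z) plane.
    rho_fn(R, z) and P_fn(R, z) are assumed callables returning the
    prescribed background density and pressure."""
    phi = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    if clockwise:            # orientation convention: R to the right, z up
        phi = -phi
    R = R_c + radius * np.cos(phi)
    z = z_c + radius * np.sin(phi)
    rho, P = rho_fn(R, z), P_fn(R, z)
    # centred difference of rho along the closed path (d rho = drho/ds * ds)
    drho = 0.5 * (np.roll(rho, -1) - np.roll(rho, 1))
    return -np.sum(P / rho**2 * drho)
```

A positive \(W\) for a loop signals that thermal energy is released and the corresponding eddy is amplified, consistent with the sign argument of Eq. (26).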
As mentioned before, these whirls are only so efficient because they sit in a band of constant angular momentum, which itself would be unstable in 3D simulations. Thus the possibility of having some baroclinic driven eddies will have to be studied in dedicated 3D simulations in the future. But it is clear that any developing non-linear flow in 3D has the chance to tap into the baroclinicity of the disk; it does not have to be axisymmetric to release energy. One will have to hunt for these modes in future simulations. A similarity also exists between the baroclinic driven eddies and the vortex amplification in the subcritical baroclinic instability (SBI) (Petersen et al. 2007a,b; Lesur & Papaloizou 2010), in the sense that both instabilities need a pre-existing vortex. For SBI, this is a vortex in the \(R\)-\(\phi\) plane of the disk, and for the "horizontal convection" it is in the \(R\)-\(z\) plane of the disk. But whereas the "horizontal convection" operates on a background state that is already baroclinic, so that amplification happens for arbitrarily small velocities, an SBI vortex needs a certain rotation velocity to generate a \(P(\rho)\) structure that can deliver work \(W\) because of the delay of cooling and heating during the rotation, i.e. gas moving outward is warmer and thus lower in density than the inward moving gas. For vanishing cooling times, \(P\) and \(\rho\) are symmetric with respect to inward or outward motion and no work can be released (\(W=0\)). The optimal cooling time once again is given by the rotation frequency of the SBI vortices (Raettig et al. 2013), very much like for the COS modes. Baroclinic driven vortices (or eddies), on the other hand, will thrive at the fastest cooling rates, because then the work integral remains positive even for the largest velocities. The faster the velocities of the eddies (in two or three dimensions), the more energy can be released per unit time. In that sense, the eddies may be a robust feature in disks once they are triggered, yet it remains to be seen whether they occur and play a role in three-dimensional simulations of disks. ## 4 Conclusion In Paper I, we studied the linear stability of vertically isothermal gas disks with a radial temperature gradient \(q\) for finite thermal relaxation timescales \(\tau\). For \(q\neq 0\) and \(\tau<\infty\), we always found linearly unstable solutions in our stability analysis. In the present paper, the predicted growth rates could be confirmed in nonlinear simulations with reasonable agreement, considering the dissipation inherent in our numerical scheme. Protoplanetary disks around young stars possess some temperature gradient and a finite thermal relaxation, so they can adapt their temperature to irradiation and possible internal processes. In light of our findings, this implies that there cannot be a stable rotation profile or likewise a stable density structure for these disks. The instabilities arise from the baroclinic structure of the disks, which is responsible for vertical shear (aka thermal wind) and convectively unstable stratification in selected directions. The ability of baroclinic stratifications to generate vertical shear was already considered for the GSF and VSI instabilities, albeit in the regime of sufficiently small cooling times. We have now tested the predictions for the growth of GSF modes for arbitrary cooling times.
The stability criterion of VSI and GSF are identical, yet GSF growth rates for long cooling times scale proportionally to the square of the temperature gradient \(q^{2}\) and inversely with cooling time, whereas VSI and GSF for short cooling times scales linearly with \(q\), which we confirm in our numerical simulations. In the long cooling time regime, there are always COS modes with growth rates at least smaller by a factor of two with respect to GSF. In the midplane we observe that both modes grow simultaneously, with the GSF having a faster start but also earlier saturation. In the long run, the COS takes over and eventually dominates the dynamics. In the atmosphere, the COS modes are hardly visible in our simulations, and only by suppressing the GSF could we study the evolution of COS modes, confirming their predicted growth rates for \(z=H\). For normal \(q=-0.5\) we found GSF modes in the atmosphere, but the identification of COS modes was unsuccessful, for which we blame the numerical dissipation. Yet, for an artificially boosted temperature gradient \(q=-4\) counterbalanced by an inverse density gradient \(p=+4.4\) we could show that, even if the radial structure is convectively stable, there will be the predicted diagonal / slanted convection modes. We can thus confirm that the radial density gradient in the midplane of the disk has neither a direct influence on the growth rates of GSF nor on those of COS. Specifically, the sign of the radial entropy gradient does not affect the onset of COS. Besides the confirmation of the growth rates of unstable modes as derived in Paper I we also identified a new mode of instability. Saturation in our local axisymmetric modes occurs when bands of constant angular momentum form. In these bands neither GSF nor COS can operate, as there is no vertical shear and the epicyclic frequency vanishes. Also the stable direction of buoyancy \(N_{+}^{2}\approx N_{z}^{2}\) dominates over the unstable direction \(N_{+}^{2}+N_{-}^{2}>0\) and thus convection would be quenched into thin sheets and not explain the circular eddies we find. Still we observe the formation and amplification of eddies in these bands that are all rotating in the direction defined by the baroclinicity. As also discussed in (Melon Fuksman et al. A&A, submitted) it is the baroclinic term that drives the eddies. The process is therefore described best as horizontal convection or "Sea-breeze" effect (Holton and Hakim, 2012). The stronger the baroclinic term is enforced via thermal relaxation, the more energy can be pumped into the system. But these preliminary results are possibly restricted to our axisymmetric setup and thus it is an open question whether they also exist in full three-dimensional simulations, or whether at least similar nonlinear flow features in three-dimensional simulations can also tap into the baroclinic energy reservoir. For the cases studied in this paper, i.e. vertically isothermal, Newtonian cooling with a fixed cooling time for all wave numbers, no viscosity, and strict axisymmetry, we find that GSF always grows faster than the COS modes, but at least sometimes it saturates earlier. A situation in which additional effects as for instance the sedimentation of dust leads to an additional stabilization of the vertical stratification (Lin, 2019) could create a situation in which COS may dominate over GSF modes, as already discussed in Shibahashi (1980) and Tassoul (2000). 
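For orientation only, the two scaling regimes quoted at the beginning of this summary can be put side by side in a short schematic snippet; all prefactors are set to one here and do not reproduce the exact expressions derived in Paper I.

```python
import numpy as np

# Schematic comparison of the quoted growth-rate scalings (prefactors set to 1;
# see Paper I for the exact expressions, which also involve the disk aspect ratio).
Omega, q = 1.0, -1.0
tau = np.logspace(-3, 2, 400)                       # cooling time in units of 1/Omega

sigma_short = np.full_like(tau, abs(q) * Omega)     # short cooling times: growth ~ |q|
sigma_long = q**2 * Omega / (tau * Omega)           # long cooling times: growth ~ q^2 / tau

# With realistic prefactors, the crossing of the two branches marks the critical
# cooling time tau_c separating the rapid-cooling (VSI-like) regime from the
# slow-cooling regime in which GSF and COS compete.
crossing = tau[np.argmin(np.abs(sigma_short - sigma_long))]
```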
Three-dimensional simulations will be necessary to see if we can also reproduce the linear growth rates if axisymmetry is not artificially constrained. We can expect that saturation will not occur when bands of constant angular momentum form as they themselves are unstable to non-axisymmetric modes (Rayleigh criterion). Then we will see if a similar baroclinic driving of turbulence as we see it in the present simulations does also occur in the nonlinear state of fully three-dimensional turbulence similar to the SBI as reported in (Petersen et al., 2007, 2008; Lyra and Klahr, 2011; Raettig, 2012), which were all either vertically integrated or vertically unstratified models and thus needed cooling times on the order of the orbital period. It will be interesting to measure how much the emerging three-dimensional turbulence can draw energy from the baroclinic state of the disk even in the short cooling time regime. With sufficient resolution, it will be possible to perform simulations in the regime of short cooling times \(\tau<\tau_{c}\) where VSI and GSF will dominate, but also in the long cooling time regime \(\tau>\tau_{c}\) with COS and GSF of similar strength. A third regime for very long cooling times (\(\tau>10\Omega^{-1}\)) may then show the transition from thermal instabilities to ZVI (Barranco et al., 2018). Based on our numerical experiments we confirm that all disks around young stars are hydrodynamically unstable. Protoplanetary disks all have a temperature gradient for most of their radius, generated by stellar irradiation and thus are all baroclinic and prone to GSF and COS. But the growth rates of GSF and COS and their interaction with other instabilities have to be considered now. Ideal MRI would easily outgrow the hydrodynamic instabilities by far, as we showed for the case of SBI (Lyra and Klahr, 2011) and studies investigating to what extent non-ideal MHD regime will allow for hydrodynamic instabilities of the VSI nature have just been started recently (Latter and Kunz, 2022). A full picture of MHD and pure HD effects in disks is an ambitious goal, yet now we have the tools and know the necessary resolution, at least from the pure hydrodynamic perspective. ## Acknowledgments The authors wish to thank Natascha Manger, Orkan Umurhan, and Wladimir Lyra for providing feedback on early versions of this manuscript. H.K. and J.D.M.F. are supported by the German Science Foundation (DFG) under the priority program SPP 1992: "Exoplanet Diversity" under contract KL 1469/16-1/2. HB acknowledges the support of the NASA Theoretical and Computational Astrophysics Networks (TCAN) award 80NSSC19K0639. Simulations were performed on the ISAAC and VERA clusters of the MPIA and the COBRA, HYDRA and RAVEN clusters of the Max-Planck-Society, both hosted at the Max-Planck Computing and Data Facility in Garching (Germany). H.K. also acknowledges additional support from the DFG via the Heidelberg Cluster of Excellence STRUCTURERES in the framework of Germany's Excellence Strategy (grant EXC-2181/1 - 390900948). ## Appendix A Movies We produced a set of movies from our simulations, as we find it very insightful to "see" how the instabilities develop. All movies show four panels, yet the shown information can vary, depending on what we have to highlight. The first panel shows the local relative deviation from the background temperature \(T^{\prime}=\frac{T-T_{0}}{T_{0}}\) (see Figure 17 and 18). The range of the color scheme expands as the amplitude increases with time. 
The second plot represents either the radial velocity (see Figure 17) or the mass flux in the \(R,\theta\) plane (see Figure 18), i.e. \(\rho v^{\prime}\propto\rho\sqrt{v_{R}^{2}+v_{z}^{2}}\), scaled by the speed of sound \(c=h=0.1\) and density \(\rho_{0}=1\) in the midplane at \(R=1\), to emphasize the velocities in regions of higher density, especially in the case where the domain spans many scale heights in the vertical direction. The lighter the blue, the higher the flow velocities. Thus the white streaks follow the streamlines in the simulation, indicating the dominant directions of the gas flow. The third plot shows either the vertical velocity (see Figure 17) or the distribution of specific angular momentum \(j=\Omega R^{2}\) (see Figure 18). The fourth plot shows the evolution of the r.m.s. velocity (black) as well as the largest radial (red) and vertical velocities (blue), in a region around the center of the simulation, which is radially and vertically half as wide as the respective simulation. In total we present four movies. Movies 1a and 1b are for the \(q=-1\) and \(\tau^{*}=1\) cases, 1a for the upper layer and 1b for the midplane, as discussed in Section 2.1. In Movie 1a, which corresponds to Figures 7, 8 and 9, one can observe how initially the typical vertical GSF modes grow and, after saturation, how eddies form in the bands of constant angular momentum; these are the sign of the nonlinear symmetric instability, tapping directly into the baroclinic state of the disk via the \(PdV\) term. Figure 17: Snapshot from Movie 1a at \(t=100\). Here we show radial and vertical velocity, together with the temperature deviation from the background and the evolution of the r.m.s. velocity as well as the velocity maxima. Movie 1b covers a simulation placed in the midplane. Here first the GSF starts growing, then later the COS takes over. This is the same simulation shown in Figures 10 and 11. Movie 2 is a version of the simulation shown in Movie 1a, but now in cylindrical coordinates and with the GSF suppressed, as discussed in Section 2.2; thus it is the same simulation as depicted in Figure 13. One can observe the amplification and vertical drift of the COS modes, which also go nonlinear once the vertical shear in the radial oscillation allows for a parasitic version of the GSF. Movie 3 (see Section 2.3 for details) also suppresses the GSF and is located around a radially stably stratified midplane with \(q=-4\) and \(p=4.4\). This movie shows the development of the diagonal convective modes, aka slanted convection.
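For readers reproducing the movie panels from their own snapshots, the plotted quantities can be assembled with a few lines; the function and array names below are ours and are not tied to any particular code's output format.

```python
import numpy as np

# Quantities shown in the movie panels (c = h = 0.1 and rho_0 = 1 in the midplane at R = 1).
c_s, rho_0 = 0.1, 1.0

def temperature_deviation(T, T_0):
    """First panel: relative deviation from the background temperature."""
    return (T - T_0) / T_0

def scaled_mass_flux(rho, v_R, v_z):
    """Second panel (Fig. 18): rho * |v'| scaled by the midplane density and sound speed."""
    return rho * np.sqrt(v_R**2 + v_z**2) / (rho_0 * c_s)

def specific_angular_momentum(Omega, R):
    """Third panel (Fig. 18): j = Omega * R^2."""
    return Omega * R**2
```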
2306.11164
ETL for the integration of remote sensing data
Modern in-orbit satellites and other available remote sensing tools have generated a huge availability of public data waiting to be exploited in different formats hosted on different servers. In this context, ETL formalism becomes relevant for the integration and analysis of the combined information from all these sources. Throughout this work, we present the theoretical and practical foundations to build a modular analysis infrastructure that allows the creation of ETLs to download, transform and integrate data coming from different instruments in different formats. Part of this work is already implemented in a Python library which is intended to be integrated into already available workflow management tools based on acyclic-directed graphs which also have different adapters to impact the combined data in different warehouses.
Paula V. Romero Jure, Juan Bautista Cabral, Sergio Masuelli
2023-06-19T21:10:38Z
http://arxiv.org/abs/2306.11164v1
# ETL for the integration of remote sensing data ###### Abstract Modern in-orbit satellites and other available remote sensing tools have generated a huge availability of public data waiting to be exploited in different formats hosted on different servers. In this context, ETL formalism becomes relevant for the integration and analysis of the combined information from all these sources. Throughout this work, we present the theoretical and practical foundations to build a modular analysis infrastructure that allows the creation of ETLs to download, transform and integrate data coming from different instruments in different formats. Part of this work is already implemented in a Python library which is intended to be integrated into already available workflow management tools based on acyclic-directed graphs which also have different adapters to impact the combined data in different warehouses. Keywords:ETL Satellite Imagery Data Processing. ## 1 Introduction The Extraction, Transformation, and Loading (ETL), is the formalism for extracting data from various sources, transforming it into a useful format, and loading it into a target repository, such as a data warehouse. The term gained popularity throughout the industry around the 1970s rather than being formally defined in a document. However, previous works have settled the bases for the formalism. One of the first works is [6], which widely describes the process and its relation with Data Warehouse. Furthermore, [22] defines ETL activities and provides formal foundations for its conceptual representation. ETL serves as a practical theoretical framework for data integration by simplifying the extraction of data from different sources, their transformation into a consistent and compatible format, and their loading into a centralized data warehouse to facilitate subsequent analysis. In this context, remote sensing instruments on board artificial satellites orbiting the Earth are generating huge amounts of data every day, which is considered to be a Big Data problem [19][5]. The data is transmitted to Earth, stored in a data warehouse system and usually provided to the user in some scientific file format, such as a Network Common Data Form (NetCDF) [15], Hierichal Data Format [9] or GeoTIFF[7]. The database where the files are stored and the format depends on the agency or organization responsible for each satellite. There exist several situations where someone needs to analyze Earth Observation data by combining measurements from multiple instruments onboard different satellites. That would be an appropriate problem for ETL formalism because we would be dealing with big data stored in different sources and we need to transform it into a product and load the latter in some database. An event where different (satellite) sensors observe the same location roughly at the same time is called a collocation [12]. Several works have required implementing the collocation finding procedure, for example, [23] generated a dataset with combined data from MODIS and the Cloud Profiling Radar (CPR) onboard CloudSat, to study cloud types. [12] have studied collocations between the Microwave Humidity Sounder (MHS) on-board NOAA-18 and the CPR. In practice, collocating Earth Observation data usually comes with many problems due to the different sources where the data is stored and the compatibility of data formats. 
In this context, we have decided to use the ETL formalism to integrate remote sensing data from multiple sources by designing extractors, transformers and loaders that access information from platforms provided by different missions. Although there are precedents of application [19], in our work we opted for a modular mechanism so that different users can customize their processing and analysis pipelines. Although there are several programs suitable for performing collocations, particularly Geographic Information System (GIS) software, we have decided to implement all this infrastructure in Python, given its popularity and ecosystem in scientific computing [13]. Besides, even though there exist some Python libraries that implement data processing methods for Earth Observation data such as _Satpy_[8], we have not found any that implement extraction methods and integrate them with the processing stage, most of them assume that the data is available in the local system. This paper is organized as follows: In Section 2 we present the ETL formalism and its relation to remote sensing data, then. In Section 3 we present and explain our design of a general ETL modular pipeline intended to combine data from instruments on board different artificial satellites in orbit around the Earth. In Section 3 we present an implementation of the design as a Python package. In Section 4 we present the results in the former and in Section 5, conclusions and future work to be done. ## 2 ETL formalism The acronym ETL (Extract, Transform, Load) emerged in the context of data warehousing around the 1970s. And it comprises the following stages: Extracting data from the original sources, quality assuring and cleaning data, conforming the labels and measures in the data to achieve consistency across the original sources, and delivering data in a physical form [6]. This stages can be represented generally as a kind of diagram. For example [22] have developed a graphical notation useful to "capture the semantics of the entities involved in the ETL process". We have adopted this notation to represent the design proposed. In the following paragraph, we will review some basic concepts that are needed to explain our proposed design. Figure 1 shows all the elements that could be present in the diagram: The _Attributes_, which are the minimum unit of information, represented with an oval shape. The _Concepts_, squares, are the entities of the source databases and are defined by a name and a finite set of attributes. The hexagons represent the _Transformations_, which are the parts of code that execute a task. Next, the _ETL_Constrains_ are a finite set of attributes, on which the constraint is imposed and a single transformation that implements the enforcement of the constraint. Finally, the _Notes_ contains comments. It is important to note that all notation is UML[4] based but some forms do not have the same meaning. In this work, we will focus on transformation operations that transform the input data. ## 3 Design Our design is based on [22] and is graphically represented in Figure 1(a). For simplicity we have restricted our analysis to two sources/instruments/databases, one containing the data transmitted by satellite "A" and the other, by satellite "B". Usually, each database contains different products with several levels of processing, but all of them share the format with some common attributes and metadata. 
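To make this modular design concrete, the following is a minimal sketch of how the Extractor/Transformer/Loader roles can be expressed as Python abstractions; the class and method names are ours for illustration and are not the actual Stratopy API.

```python
from abc import ABC, abstractmethod
from typing import Any, Sequence

class Extractor(ABC):
    """Fetches a raw product (a Concept with its Attributes) from a remote data source."""
    @abstractmethod
    def fetch(self, query: dict) -> Any: ...

class Transformer(ABC):
    """Maps one or more Concepts to a new Concept (format change, reprojection, collocation)."""
    @abstractmethod
    def transform(self, *concepts: Any) -> Any: ...

class Loader(ABC):
    """Persists the final Concept into a warehouse or local storage."""
    @abstractmethod
    def load(self, concept: Any) -> None: ...

class Pipeline:
    """Chains extractors, transformers and a loader into a single ETL run."""
    def __init__(self, extractors: Sequence[Extractor],
                 transformers: Sequence[Transformer], loader: Loader):
        self.extractors, self.transformers, self.loader = extractors, transformers, loader

    def run(self, queries: Sequence[dict]) -> None:
        concepts = [e.fetch(q) for e, q in zip(self.extractors, queries)]
        for t in self.transformers:    # e.g. the chain c, f, t described in the following
            concepts = [t.transform(*concepts)]
        self.loader.load(concepts[0])
```

Each concrete subclass (an SFTP or AWS extractor, a reprojection transformer, a database loader) only has to implement one method, which is what lets users customize their own processing and analysis pipelines.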
As stated in [12], to have a meaningful collocation, the pixels from both images must have a physical overlap, which means that they need to meet a spatial and a temporal criterion within some threshold of error. Figure 1: Graphical notation for explaining an ETL pipeline. Figure courtesy of [22]. First, two files that may meet the time overlap criterion are selected and downloaded. For example: An Extractor retrieves an Image \(A\) that was stored in format \(A\) from Data source \(A\), so the file can be named "ImageA.extA", where "extA" means the extension for file format \(A\). The more common scientific file formats for Earth Observation data are NetCDF (.nc)[15], a version of HDF (.hdf),.h5)[9], and GeoTIFF (.tiff)[7]. Each Image is for our formalism a _Concept_ characterised by the _Attributes_: **Time:**: The time at which each pixel of the image was acquired. It is usually provided in UTC or in some format of absolute time with a defined origin. **Geoloc:**: The geolocation of each pixel of the image, the point on Earth measured by the sensor. Each pixel is characterised by its spatial coordinates in some projection related to the type of orbit of the satellite. **Parameters:**: Every pixel of the image is characterized by n parameters. A parameter is a measurement taken by the satellite instrument or some quantity derived from it. A _Transformer_\(c\) transforms the files into a common format so it will be easier to work with them and perform the collocations later. Formally \[c:extA,extB\to extC\] Then, the _Transformer_\(f\) gets the location of every pixel from image A as input and converts its coordinates from projection A to Projection B. \[f:coordA_{projA}\to coordA_{projB}\] Next, pixels that meet the spatial overlap criterion within some threshold of error are selected and can be collocated. _Transformer_\(t\) takes care of that task, taking the coordinates of geolocation, both in some projection, as input and retrieving a new product as output. \[t:ParametersA,ParametersB\to PixelA\&B\] The output is a _Concept_ called "Pixels A&B", which format is the common format, and contains the Parameters from A and the Parameters from B, that were attributes of the pixels A and B and have not been altered or transformed. The entire process can be represented as \[(c\circ f\circ t):ImgA,ImgB\to PixelA\&B\] Finally, a _Loader_ loads the final product into a Database. ## 4 Results: An implementation We have applied the discussed design in a Python Package that, a priori, aims to be used to collocate [12] data from a radiometer aboard a geostationary satellite with data from a radar aboard a polar-orbiting satellite. For the source "A" of data we chose the ABI (Advanced Baseline Imager) on board geostationary GOES-16, with a central longitude of -76\({}^{\circ}\), which allows it to take images of the whole American continent, with a temporal frequency ranging from 5 to 15 minutes [2]. The ABI is a multispectral radiometer that sense the Earth in 16 bands ranging from the visible to the NIR part of the electromagnetic spectrum [18]. An image of the whole continent is square and has a spatial resolution of 0.5 km to 2 km and a side size of 5424 to 16272 pixels, depending on the band. See table 4 for details about this. Every pixel of an image is geolocated and each file name contains information about the acquisition time of the measurements. The ABI data is stored in _NetCDF_ format hosted in Amazon Web Server (AWS) [15]. 
\begin{tabular}{c|r|r} Bands & Resolution (Km) & Image size (pixels) \\ \hline 2 & 0.5 & 16272 \\ 1, 3, 5 & 1 & 10848 \\ 4, 6-16 & 2 & 5424 \\ \end{tabular} As source "B" we choose the CPR (Cloud Profiler Radar) on board the polar satellite CloudSat. Its orbit is polar sun-synchronous, with a temporal frequency of 16 days. The CPR was designed to generate vertical profiles of clouds every 1.1 km along-track and the spacial resolution of each data point is 1.3 km across-track and 1.7 km along-track [20]. Every data point is associated with the geolocation and time of the measurement. One of the most interesting things about this mission is a product which provides the type of cloud among and other variables for each vertical profile [17]. The data acquired by CPR is stored in the CloudSat Data Processing Center, which is a SFTP server [1], in HDF-EOS format[9]. With the aforementioned descriptions, and based on the theoretical model and nomenclatures presented in the previous section, we have constructed a flowchart (Figure 2) that specifies what the code does. Figure 2b tells the same story as Figure 2a but in different terms, regarding the implementation. The idea of the pipeline consists in extracting by means of a SFTP service from database A an image of an orbital passage of the CloudSat satellite (_concept_), which contains in each pixel the _attributes_ date, time, geolocation (latitude, longitude), height in meters and type of cloud for each height. Once the user has selected a time range in the CloudSat extracted data, this range information is fed to the AWS extractor which retrieves a suitable multiband image from GOES16. This image is also a _Concept_ and in this case, each pixel has the _attributes_ date, time, geolocation in geostationary projection (central longitude and height of the satellite) and radiance. To know which CPR pixels or data points correspond with which ABI pixels, the Transformer \(f\) makes the coordinate change: \[f:(lat,lon)\to geos(h,lon_{c},R_{e},R_{p})\] where \(geos\) is the projection used by GOES16 to georeference each pixel (see [21], section 5.1.2.2 for more information). This transformation depends on GOES16 height \(h\), central longitude \(lon_{c}\), equatorial radius \(R_{e}\) and polar radius \(R_{p}\). This information is used to collocate pixels from the multi-band image with pixels from the CloudSat pass and then retrieve a product where every collocated pixel contains information on the radiance, the height of the atmosphere and the cloud types found in them. Finally, the module Loader would take care of loading this final product into storage. This module is not defined yet, but we plan to implement it as an extension of some workflow orchestration program such as Apache Airflow [11] or Dagster [14]; All these technologies are based on the creation of tasks on an acyclic-directed graph (DAG) that allows fragments of the pipeline to be executed automatically in parallel or sequentially as required. For your convenience, a prototype of this pipeline can be found in the Stratopy package [16]. Please note that the project is under active development and the current state can be explored in the project repository [https://github.com/paula-rj/StratoPy/tree/dev](https://github.com/paula-rj/StratoPy/tree/dev). ## 5 Conclusion and further work The ETL formalism was very useful to achieve an orderly design of workflows. Figure 2: Structure diagrams of the proposed pipeline built with Stratopy. 
On the left side, there is a diagram containing the conceptual parts, and on the right side an implementation with the components provided by the project. The modular structure of the concepts and attributes makes it simple to extend the extractors and transformers so that they can extract and transform data from any of the currently active and most widely used Earth Observation satellites, such as GOES16/17/18 [10], Himawari [3], etc. Moreover, Python already offers a large ecosystem of packages for data analysis in general, and for remote sensing in particular, such as SatPy [5] for manipulating and transforming data from remote-sensing Earth-observing satellite instruments; Stratopy supplies the missing piece that orchestrates these transformations and analyses. Future work, part of which has already started, consists of orchestrating the processes already implemented within some DAG framework such as Apache Airflow or Dagster.
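As a concrete complement to the transformers \(f\) and \(t\) described in Section 4, the sketch below implements (i) the standard forward navigation from geodetic coordinates to geostationary fixed-grid scan angles and (ii) a brute-force nearest-neighbour collocation. The ellipsoid constants and satellite height are typical published values and would normally be read from the projection metadata of the NetCDF file; the distance tolerance, the \(-76^{\circ}\) central longitude quoted in the text, and the function names are illustrative only, and for realistic image sizes a spatial index (e.g. a k-d tree) would replace the brute-force distance matrix.

```python
import numpy as np

# Transformer f: geodetic (lat, lon) -> geostationary scan angles (x, y), following the
# standard GOES-R fixed-grid navigation equations. Constants are typical values and in a
# real pipeline should be taken from the product metadata.
R_EQ, R_POL = 6378137.0, 6356752.31414      # ellipsoid semi-major / semi-minor axes [m]
H_SAT = 42164160.0                           # satellite distance from the Earth's centre [m]

def latlon_to_scan_angles(lat_deg, lon_deg, lon_origin_deg=-76.0):
    lat, lon, lam0 = np.deg2rad(lat_deg), np.deg2rad(lon_deg), np.deg2rad(lon_origin_deg)
    e2 = 1.0 - (R_POL / R_EQ) ** 2                           # squared eccentricity
    lat_c = np.arctan((R_POL**2 / R_EQ**2) * np.tan(lat))    # geocentric latitude
    r_c = R_POL / np.sqrt(1.0 - e2 * np.cos(lat_c) ** 2)
    sx = H_SAT - r_c * np.cos(lat_c) * np.cos(lon - lam0)
    sy = -r_c * np.cos(lat_c) * np.sin(lon - lam0)
    sz = r_c * np.sin(lat_c)
    x = np.arcsin(-sy / np.sqrt(sx**2 + sy**2 + sz**2))      # E/W scan angle [rad]
    y = np.arctan(sz / sx)                                   # N/S scan angle [rad]
    return x, y

# Transformer t: for each CloudSat data point, find the nearest ABI pixel (in scan-angle
# space) within a tolerance and join the parameters of both instruments ("Pixels A&B").
def collocate(params_abi, xy_abi, params_cpr, xy_cpr, max_dist=56e-6):
    d2 = ((xy_abi[:, None, :] - xy_cpr[None, :, :]) ** 2).sum(axis=-1)
    nearest = d2.argmin(axis=0)
    keep = np.sqrt(d2.min(axis=0)) <= max_dist   # tolerance in radians (~ one 2 km ABI pixel)
    return np.hstack([params_abi[nearest[keep]], params_cpr[keep]])
```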
2303.05917
International Vaccine Allocation: An Optimization Framework
As observed during the COVID-19 pandemic, high-income countries, such as the U.S., may exhibit vaccine nationalism during a pandemic: stockpiling doses of vaccine for their own citizens and being reluctant to distribute doses of the vaccine to lower-income countries. While many cite moral objections to vaccine nationalism, vaccine inequity during a pandemic could possibly worsen the global effects of the pandemic, including in the high-income countries themselves, through the evolution of new variants of the virus. This paper uses the COVID-19 pandemic as a case study to identify scenarios under which it might be in a high-income nation's own interest to donate vaccine doses to another country before its own population has been fully vaccinated. We develop an extended SEIR (susceptible-exposed-infectious-recovered) epidemiological model embedded in an optimization framework and examine scenarios involving a single donor and multiple recipient (nondonor) geographic areas. We find that policies other than donor-first can delay the emergence of a more-contagious variant compared to donor-first, sometimes reducing donor-country deaths in addition to total deaths. Thus, vaccine distribution is not a zero-sum game between donor and nondonor countries: an optimization approach can achieve a dramatic reduction in total deaths with only a small increase in donor-country deaths. The iterative linear programming approximation approach we develop can help confirm those instances when a priority policy is optimal and, when not optimal, can identify superior policies. This optimization framework can be used to guide equitable vaccine distribution in future pandemics.
Abraham Holleran, Susan E. Martonosi, Michael Veatch
2023-03-08T22:08:47Z
http://arxiv.org/abs/2303.05917v3
# International Vaccine Allocation: An Optimization Framework # International Vaccine Allocation: An Optimization Framework Abraham Holleran Department of Mathematics and Computer Science, Gordon College. Susan Martonosi Department of Mathematics, Harvey Mudd College Michael Veatch Department of Mathematics, Harvey Mudd College **Abstract.** The global SARS-CoV-2 (COVID-19) pandemic highlighted the challenge of equitable vaccine distribution between high- and low-income countries. High-income countries, such as the United States, were among the first to acquire the rapidly developed vaccines against COVID-19. However, many such high-income countries were reluctant or slow to distribute extra doses of the vaccine to lower-income countries via the COVID-19 Vaccines Global Access (COVAX) collaboration [18]. In addition to moral objections to such vaccine nationalism, vaccine inequity during a pandemic could contribute to the evolution of new variants of the virus and possibly increase total deaths, including in the high-income countries. This paper uses the COVID-19 pandemic as a case study to identify scenarios under which it might be in a high-income nation's own interest to donate vaccine doses to another country before its own population has been fully vaccinated. Using an epidemiological model embedded in an optimization framework, we identify realistic scenarios under which a donor country prefers to donate vaccines before distributing them locally in order to minimize local deaths. We demonstrate that a nondonor-first vaccination policy can, under some circumstances, dramatically delay the emergence of more-contagious variants. Moreover, we find that vaccine distribution is not a zero-sum game between donor and nondonor countries: weighting the objective function even slightly in favor of minimizing total deaths can achieve dramatic reduction in total deaths with only a small increase in donor-country deaths. The insights yielded by this framework can be used to guide equitable vaccine distribution in future pandemics. **Keywords: COVID-19, vaccine inequity, SEIR, optimization.** ## 1 Introduction In December 2019, a novel variant of the coronavirus, since named severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), emerged in China, triggering a global pandemic of the coronavirus disease 2019 (COVID-19) that caused severe economic repercussions and disrupted supply chains for several years [7]. In December 2020, the first vaccines against COVID-19 became available in global markets, bringing hope that the pandemic might be nearing its end [74, 30]. The rapid development of these vaccines was funded, in part, by subsidies and contracts provided by governments of several high-income nations, including the United States [12, 11]. High-income nations such as the U.S. secured large stockpiles of COVID-19 vaccines while lower-income nations, particularly those in Africa, relied on the goodwill of these nations to provide access to doses of vaccine [27]. The World Health Organization (WHO) aimed for all countries in the world to vaccinate 70% of their populations by the middle of 2022. However, many countries, particularly in Africa, were unable to meet these targets, despite the ample supply of vaccines globally [82]. 
The COVID-19 Vaccine Delivery Partnership estimates that although 58% of the world's population had received the initial dose(s) of COVID-19 vaccine by April 2022, this was inequitably distributed, with high income countries enjoying a 73% vaccination rate, and low-income countries achieving only an 11% vaccination rate [19]. Emanuel _et al._ outline a Fair Priority Model for global allocation of vaccines that consists of three phases of prioritization: 1) reducing premature deaths by prioritizing countries where each dose of vaccine would achieve the highest reduction in Standard Expected Years of Life Lost (SEYLL); 2) reducing economic and social costs by prioritizing countries where each dose of vaccine would achieve the highest reduction in poverty; and 3) reducing community spread, by prioritizing countries with the highest transmission rates [24]. In addition to moral objections [36, 5, 28], there are other reasons why so-called vaccine nationalism is problematic during a pandemic. First, as was the case with COVID-19, rising cases in one geographic area can trigger surges throughout the world. Failing to distribute vaccines uniformly to the world's population can serve to prolong a pandemic and its economic disruptions, even in countries able to attain high vaccination rates [82]. Moreover, low vaccination rates and limited health care infrastructure can contribute to the emergence of variants, and such variants can be more contagious and resistant to available vaccines [34, 35]. Given the additional ramifications of inequitable vaccine distribution cited above, it is possible that vaccine nationalism could undermine a wealthy nation's own best interests [5]. This paper looks ahead to the next global pandemic and examines the circumstances under which a nation with large vaccine production might prioritize donating some of their supply to other nations, even before fully vaccinating their own populations, in order to reduce local deaths. We use data from the COVID-19 pandemic as an exemplar of this generalizable approach. We embed an epidemiological disease transmission model within an optimization framework to determine the optimal allocation of a wealthy nation's vaccine supply to its own population and those of other geographic regions. We compare four policies: donor-first, nondoor-first, optimized, and a "fairness" policy in which a limited percentage of available daily vaccine doses may be retained by the donor country. We consider two objective function types: self-interest, in which only donor country deaths are minimized, and altruistic, in which total deaths are also considered in a weighted objective function. Our model identifies realistic scenarios under which a donor country prefers to donate vaccines before distributing them locally in order to minimize local deaths. We demonstrate that a nondonor-first vaccination policy can, under some circumstances, dramatically delay the emergence of more-contagious variants. Moreover, we find that vaccine distribution is not a zero-sum game between donor and nondonor countries: weighting the objective function even slightly in favor of minimizing total deaths can achieve dramatic reduction in total deaths with only a small increase in donor-country deaths. Although this paper focuses its disease transmission model and parameter estimation on characteristics of COVID-19, the same approach can be applied to other communicable diseases and guide future pandemic policy. 
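One simple way to express such a weighted objective, sketched here only to fix ideas (the precise formulation used in the optimization model may differ), is:

```python
def weighted_objective(donor_deaths, total_deaths, w=1.0):
    """Illustrative objective: w = 1 recovers the purely self-interested case
    (donor deaths only); any w < 1 mixes in total deaths, the 'altruistic' case."""
    return w * donor_deaths + (1.0 - w) * total_deaths
```

Setting \(w=1\) gives the self-interest objective, while even a slightly smaller \(w\) lets the optimization trade a small increase in donor-country deaths for a large reduction in total deaths, which is the phenomenon described above.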
In the next section, we summarize the literature on COVID-19 transmission and vaccine allocation policies. Section 3 describes the epidemiological framework we use to model disease transmission during a pandemic. In Section 4 we embed the epidemiological framework into an optimization model to identify optimal vaccine allocations. Section 5 describes the data and approaches used to estimate the model parameters, and Section 6 presents the results of our approach. We conclude and present ideas for future work in Section 7. ## 2 Literature Review The COVID-19 pandemic has triggered a wave of research related to prediction of disease trajectory, estimation of disease characteristics, and operational decision-making regarding interventions to control the disease. Gupta _et al._ survey pre-COVID-19 pandemic and epidemic research from the fields of management science and operations management as it relates to the COVID-19 pandemic [31]. Choi also surveys the field and outlines a suggested research agenda that includes optimizing for social welfare in addition to traditional operational metrics [17]. The survey of Jordan _et al._ focuses on work that uses optimization and control methodology for prediction and policy analysis related to COVID-19 [38]. They find that much of the literature from 2020-2021 focuses on predicting the course of the pandemic, and that a gap in the literature exists in the areas of decision support and mitigation, which is one purpose of our paper. Kaplan addresses the challenges and approaches used to support rapid public health decision-making during the early phase of the COVID-19 pandemic [40]. ### SEIR Compartment Models for COVID-19 A common approach to epidemic modeling is the use of variations of SEIR compartmental models that estimate how many people in a population are susceptible (S), exposed (E), infected (I) or recovered (R) from the disease. An SEIR model embedded in an optimization framework is the focus of this paper, so we focus our attention on SEIR approaches in the literature. In addition to the SEIR approaches described below, we direct the reader to other approaches in the literature, including agent-based models ([68, 52, 2, 75]), statistical methods ([41, 51, 78]), and other techniques ([22, 44, 53]). #### 2.1.1 Using SEIR To Characterize and Predict COVID-19 SEIR models can be used for understanding disease characteristics and predicting future trajectories of a disease. Wang _et al._ present an SEIR model applied early in the COVID-19 pandemic on data from Italy, Spain, Germany, and France to estimate key characteristics of the virus, including its transmission rate and basic reproduction number [79]. Algarni _et al._ develop a five-compartment model that includes a vaccinated class and validate the model on COVID-19 pandemic data from Saudi Arabia [1]. Perakis _et al._ incorporate a changepoint detection Martingale process into an SEIR model to predict waves of COVID-19 resulting from changes in policy and societal behaviors [62]. Bagger _et al._ use an agent-based model for individual behavior coupled with an SEIR model for disease transmission to examine characteristics of social networks that lead to lower likelihood of COVID-19 transmission [6]. Parro _et al._ develop an SIRD (susceptible-infected-removed-dead) model and use it to predict COVID-19 dynamics in Brazil [61]. Schwarzendahl _et al._ incorporate virus variants into an SEIR framework via gene mutation [70]. 
Notably, they argue that the average infection rate is expected to grow linearly with time or with the number of cases. Under this assumption, they show that the disease dynamics may exhibit multiple waves, explosive growth, or extinction. Our paper also incorporates mutation and assumes that the evolution of new variants will have a strictly increasing effect on the infection rate of the virus. #### 2.1.2 Using SEIR To Simulate the Effects of COVID-19 Interventions SEIR models can also be used to simulate the impact of interventions against the disease. For example, Chen and Kong use a modified SEIR-D model to evaluate the effectiveness of hospital admission policies at reducing transmission [15]. They model the effect on disease transmission of hospital capacity constraints and the use of Chinese Fangcang shelter hospitals to isolate all COVID-19 cases. Kumar _et al._ simulate the effectiveness of non-pharmacological interventions against COVID-19 using an SEIR model that incorporates varying degrees of vulnerability within the population [47]. Yu and Hua estimate the impacts of isolation and quarantine against COVID-19 [83]. Kemp _et al._ extend the SEIR model to consider the effects of both nonpharamecological interventions and vaccination rates on COVID-19 spread [43]. Sainz-Pardo and Valero model the impacts of contact tracing and widespread testing on COVID-19 spread within an SEIR model [67]. Qian and Ukkusuri embed a spatial SEIR model over mobility dynamics of an urban transportation system to model travel-related contagion [63]. Kumar _et al._ examine the influence that social media can have on the spread of COVID-19 within the context of an SEIR compartment model [48]. Two papers consider the effectiveness of COVID-19 interventions in the context of population heterogeneity. Volpert, Vitaly, and Sharma examine the effectiveness of vaccination within a heterogeneous population of high transmission and low transmission subpopulations that arise due to characteristics such as age, religious practices, professional experiences, and cultural norms [77]. They find that the effectiveness of vaccination depends on vaccine uptake in each group: achieving a high vaccination rate within the high transmission subpopulation leads to lower overall population rates of infection, but a high vaccination rate only within the low transmission subpopulation serves only to protect the low transmission population against infection. Dolbeault and Turinici also consider the impacts of high- and low-transmission subpopulations in the context of lockdown policies in France [21]. Our work incorporates the interaction between groups, but in our context the groups are countries or regions, and the interaction occurs through the emergence of a variant. We are thus able to examine contexts in which donating vaccines to low-income countries might reduce COVID-19 deaths in the donor country. #### 2.1.3 Embedding SEIR Models within Optimization Frameworks As we do in this paper, SEIR and other epidemiological models can be embedded within an optimization framework that identifies effective policies according to a stated objective. Shahmanzari _et al._ model and compare dynamic and static mitigation strategies to contain COVID-19 disease spread in the context of mutation and vaccination [71]. They develop a stochastic multiobjective dynamic program that weighs lives lost against economic impacts to determine the timing and severity of government interventions that will be Pareto-optimal. 
They also consider how the Pareto-efficiency of both dynamic and static policies are influenced by the development of vaccines and the occurrence of virus mutations, which are modeled as random shocks affecting the infection rate. Mitcham and Keisler embed an SEIR model for COVID-19 transmission into a multi-attribute utility decision-making framework that identifies pandemic mitigation strategies which robustly trade-off lives saved, personal liberties, and economic considerations [54]. Bicher _et al._ develop a metaheuristic optimization model for prioritizing subgroups of a population to receive the COVID-19 vaccine [10]. The metaheuristic takes as inputs the results of any epidemiological model for disease transmission, including SEIR, and proposes a prioritization policy that specifies the timing and quantity of vaccines to distribute to each subpopulation, evaluates the efficacy of the policy on the epidemiology model by simulation, and then iteratively improves the policy. Gillis _et al._ combine a genetic algorithm with an age-stratified SEIR model, applied to data from Nova Scotia, Canada, to identify effective public health responses under various budgetary assumptions [29]. Salgotra _et al._ intertwine an SEIR model with multi-objective optimization models to examine the tradeoffs between economic costs and health impacts inherent in policies to control COVID-19 transmission [69]. The approach we use in this paper is based on the DELPHI-V-OPT model, described in [50] and [9]. It embeds a discretized SEIR model within an optimization framework for vaccine allocation, which is solved by iteratively solving a linear programming approximation. ### COVID-19 Vaccine Allocation Work related to the allocation of COVID-19 vaccines generally has a localized focus. The work of Bicher _et al._ examines how to prioritize vaccine recipients within a nation based on factors such as age and vulnerability [10]. Pan _et al._ examine vaccine allocation to public and private hospitals responsible for distributing the COVID-19 vaccine. They model the role of information-sharing and subsidies in incentivizing private sector participation in vaccine distribution and maximizing vaccine uptake [60]. Tavana _et al._ develop a mixed integer linear programming model for operational and tactical decisions surrounding COVID-19 vaccine distribution in developing countries that accounts for vulnerable subpopulations and distribution complexities such as the availability of cold-chain infrastructure [73]. Their model determines which types of vaccines should be procured, given storage and distributional considerations, and where distribution centers should be located, under the assumption that the country already has a mechanism for obtaining the vaccine. Van Oorschot _et al._ examine the distribution of COVID-19 diagnostic tests, comparing COVID-19 transmission in Norway under an isolation policy to that under one-sided donation during times of surplus, one-sided receipt during times of shortage, and two-sided donation of excess and receipt of shortage [76]. However, they do not consider how the disease transmission dynamics in partner countries affect disease transmission in the country of consideration. In the case of the COVID-19 pandemic, transmission has occurred even among geographically distant countries, and variants have rapidly spread from one geographic area to another. 
The work of Duijzer _et al._ predates the COVID-19 pandemic and examines allocation of influenza vaccine stockpile to multiple subpopulations, in both non-interacting and interacting scenarios, using an SIR model [23]. For the non-interacting scenario, reflecting geographically distant populations, they find that heavily vaccinating certain subpopulations while leaving other subpopulations unvaccinated maximizes total health benefit but contributes to health disparities. As interaction increases between the subpopulations, Duijzer _et al._ find that disparities in the optimal vaccine allocation policy persist but diminish. Rotesi _et al._ examine when it is in a donor country's best interest to donate vaccines [65]. They formulate an SIR epidemiological model that incorporates travel between different countries, and they simulate the impact of different vaccine donation policies. They demonstrate that it is beneficial to donate vaccines when the donor and recipient countries are close to the herd immunity limit. Because COVID-19 cases dramatically increase just below the herd immunity limit, donating vaccines to prevent the reintroduction of COVID-19 from outside countries is more beneficial near that limit. Our paper examines a similar question. Like Rotesi _et al._ we use an epidemiological model to represent disease transmission dynamics. While we do not model travel directly, we do model the emergence of more contagious variants that appear in the donor country after a time lag. Additionally, we directly optimize the vaccine distribution policy. To our knowledge, our paper is the first work that examines the question of optimal vaccine-sharing between countries in the context of geographic interaction and rapid mutation. ### Data Estimation Challenges Data estimation is a particular challenge in COVID-19 modeling, due to asymptomic carriers. Several papers tackle data estimation challenges directly. For example, Rubio-Herroro and Wang couple a mixed integer bilevel nonlinear programming problem with a regression model to estimate hidden counts of COVID-19 susceptible, infectious, recovered, and deceased individuals for which data are absent [66]. Gallo _et al._ note that asymptomatic carriers exacerbate the problem of parameter uncertainty in COVID-19 models [26]. Small perturbations in the estimation of measured parameters can dramatically alter the estimated values of hidden parameters, which in turn can dramatically impact the predictions of the model. They propose a Bayesian inference approach to iteratively estimate model parameters, and they use multiple randomizations of this approach to assess the sensitivity of model results to parameter uncertainty. Similarly, Koenen _et al._ observe that while deaths and ICU admissions are well-predicted by epidemiological models of COVID-19 under a wide range of disease transmission assumptions, estimates for the number of infected and immune individuals are highly sensitive to model assumptions. Thus, they argue, sensitivity analysis of disease transmission parameters is essential [45]. Nikolopoulous _et al._ argue that the absence of accurate epidemiological data during the COVID-19 pandemic calls into question the validity of analytics models used to evaluate intervention effectiveness, and that social media and other survey data are needed to supplement available public health data [57]. 
The same authors develop a portfolio of COVID-19 transmission models that incorporate social network data and simulate interventions such as lockdowns; additionally they use these supplemental data sources to forecast excess demand for goods and services [58]. These papers illustrate the importance of validating model output against observed phenomena, which we do in this paper. SEIR Model We first present a Susceptible, Exposed, Infectious, and Recovered (SEIR) model with additional states for vaccinated individuals, making it a "SEIR-V" model. The model can be used at a global scale with a small number of geographic areas \(a\in\mathcal{A}\). The areas interact through virus mutation: a more contagious variant emerges after a given amount of infections and then spreads to the other areas after a fixed time lag. The model is also aggregate in that age groups or risk groups are not considered. The model is optimistic in that there is no transmission between areas due to travel, but pessimistic in that there is complete mixing within each area. ### SEIR Model with Vaccination The states of the model are Susceptible (\(S\)), Exposed (\(E\)), Infectious (\(I\)), Dead (\(D\)), and Recovered (\(R\)), plus \(S^{V}\), \(E^{V}\), and \(I^{V}\) if also vaccinated (Figure 1). We do not track hospitalizations; instead, the proportion of hospitalized patients that will recover are already counted in \(R\) and the rest in \(D\). Let \(S_{a}(t)\) be the number of people in state \(S\) in area \(a\) at time \(t\), etc. Also let \(W_{a}(t)\) be the number in state \(S\) in area \(a\) at time \(t\) who are willing to be vaccinated. All quantities that depend on area have the subscript \(a\); however, it will be suppressed whenever possible. The parameters of the model are listed in Tables 1 and 2. More discussion of their values is given in Section 5. We make the following assumptions about dynamics. * _Vaccinations:_ Vaccines are equally effective for all individuals. Vaccinated individuals have lower rates of becoming infected, lower mortality, and are less contagious. * _Vaccine willingness:_ A proportion \(\rho\) of the population is willing to be vaccinated. We assume that "willingness" is independent of risk of infection (e.g., age or behavior), so that individuals in state \(S\) who are willing to be vaccinated move to state \(E\) at the same rate as unwilling individuals. When there are no more willing in state \(S\), vaccinations are stopped. Figure 1: State diagram for the single-area SEIR-V model * _Infectious time:_ An infected person enters state \(D\) or \(R\) when they are deceased, their contagious period ends, they self-isolate or are hospitalized due to symptoms, or they receive a positive test result and are quarantined or self-isolate. Thus, the rate out of infectious states depends on testing. Letting \(r_{0}\) be the baseline rate out of the infectious state when there is no testing, and \(\Delta r_{a}\) be the increase in rate out of the infectious state due to testing in area \(a\), we define the overall rate out of the infectious state in area \(a\) to be \(r_{a}^{d}=r_{0}+\Delta r_{a}\). * _Mutation:_ The infection rate of the new variant is larger than the previous variant [70]. The timing of when the variant appears and spreads to other areas is addressed in the next section. * _Behavior:_ The infection rate also changes due to social distancing behavior. 
We assume that the rate is linearly decreasing in the "effective" number of infectious individuals, i.e., people are more cautious when there is a surge in cases. This feedback loop limits the size of surges. * _Reinfection:_ Individuals cannot be infected twice. This assumption is reasonable because the time horizon is assumed to be short enough that recovered individuals do not lose their immunity and re-enter the susceptible class. * _Logistics:_ We exclude supply chain considerations and assume that vaccines may be immediately reallocated from one country to the other. * _Time dependence:_ All parameters are assumed constant over time except for the amount of vaccine available and the infection rate, which changes due to mutation. \begin{table} \begin{tabular}{|p{56.9pt}|p{56.9pt}|p{56.9pt}|p{56.9pt}|} \hline Parameter & Base Value & Sensitivity Analysis & Description \\ \hline \(N_{a}\) & 100,000 & \(-\) & Initial population \\ \hline \(\rho_{a}\) & 0.78 & [0.5, 1.0] & Proportion of population willing to be vaccinated \\ \hline \(\rho_{a}^{V}\) & 0 & \(-\) & Initial proportion of population vaccinated \\ \hline \(\rho_{a}^{I}\) & \(3.6\times 10^{-3}\) & \([2.6\times 10^{-3},4.9\times 10^{-3}]\) & Initial new cases per day as a proportion of the population \\ \hline \(V_{a}(t)\) & Policy-dependent & \(-\) & Rate of vaccinations available at time \(t\) (people/day) \\ \hline \(r_{0}\) & 1/3.9 & \([1/5.1,1/2.5]\) & Rate out of the infectious state without testing (proportion/day) \\ \hline \(\Delta r_{a}\) & 0.035 & [0.026, 0.060] & Contribution of testing to the rate out of the infectious state in the donor area (proportion/day) \\ \hline \end{tabular} \end{table} Table 1: Parameters of SEIR-V that depend on the area We propose a simple model of how the infection rate changes over time due to a new variant. The rate depends on the current variant(s) of the virus and the area (age distribution, population density, and \begin{table} \begin{tabular}{|p{56.9pt}|p{56.9pt}|p{56.9pt}|p{56.9pt}|} \hline Parameter & Base Value & Sensitivity Analysis & Description \\ \hline \(r^{I}\) & \(1/5\) & \(-\) & Rate of out of the exposed state (proportion/day) \\ \hline \(p^{D}\) & \(0.014\) & \(-\) & Unvaccinated mortality rate \\ \hline \(p^{D}_{V}\) & \(0.0079\) & \(-\) & Vaccinated mortality rate \\ \hline \(a_{0}\) & \(0.6\) & \(-\) & Initial infection rate (proportion/day). \\ \hline \(\Delta a\) & \(0.6\) & [0.3, 0.9] & Change in infection rate for a new variant (proportion/day). \\ \hline \(v^{u}\) & \(0.03\) & [0.025, 0.10] & Upper limit on proportion of population infectious, due to behavioral changes. \\ \hline \(p^{e}\) & \(0.6\) & [\(0.5,0.8\)] & Transmission rate from a vaccinated person as a proportion of rate for an unvaccinated person; \(1-p^{e}\) is the vaccine effectiveness against transmitting the virus \\ \hline \(p^{r}\) & \(0.6\) & [\(0.5,0.8\)] & Infection rate for a vaccinated person as a proportion of rate for an unvaccinated person \\ \hline \(n\) & \(45,\!000\) & [60,000, 90,000] & Person-days in the infectious state before new variant appears. Only nondonor areas and unvaccinated individuals are counted. 
\\ \hline \(L\) & \(15\) & \(-\) & Lag for the variant to reach other areas (days) \\ \hline \(T_{D}\) & \(25\) & \(-\) & Time for a variant to dominate, i.e., represent half the new cases in an area (days) \\ \hline \(p\) & \(0.01\) & \(-\) & Proportion of people in state \(I\) and \(I^{V}\) that have the new variant when it is introduced in an area \\ \hline \(k\) & \(k=\ln[(1\!-\!p)/p]/T_{D}\). & \(-\) & Rate parameter for when the new variant dominates \\ \hline \end{tabular} \end{table} Table 2: Other parameters of SEIR-V behavior such as masking and distancing). For now, we assume the behavior is constant over time. Below we will add a dependence on the level of infection. Initially, a constant mix of variants is assumed to be in all areas, with constant infection rate \[\alpha_{a}(t)=a_{0}\gamma_{a}. \tag{1}\] The infection rate of the new variant is larger by \(\Delta a\). We assume that a variant appears after a certain number \(n\) of unvaccinated infectious person-days are accumulated over nondonor areas. Thus, each person in state \(I\) in a nondonor area contributes the same mutation risk, while vaccinated people in state \(I^{V}\) do not contribute to mutation. This assumption is based on the idea that nondonor areas are largely low-income countries with less immunization against other diseases and more vulnerability to long infections with high viral load. Sensitivity to this assumption is checked in Section 6.2. For a given scenario and \(n\), let \(t_{n}\) be the time when the new variant appears, \(m\) the area where it appears, and \(T_{D}\) the delay until it becomes dominant (half of infections). The infection rate for the mixture of the two variants in area \(m\) is \[a(t)=a_{0}+\frac{\Delta a}{1+e^{-k(t-(t_{n}+T_{D}))}}. \tag{2}\] See Figure 2. The rate parameter is \(k=\ln[(1-p)/p]/T_{D}\). The new variant is assumed to reach other areas with a lag of \(L\) days. Including the behavior factor, the time-varying infection rate is \[\alpha_{m}(t) =a(t)\gamma_{m},\quad t\geq t_{n} \tag{3}\] \[\alpha_{a}(t) =a_{m}(\,\max\{t-L,\,0\}\,)\gamma_{a}\text{ for }a\neq m.\] Let \(W(t)\) be the number of people in state \(S\) at time \(t\) who are willing to be vaccinated. Vaccinations will stop when there are no more people in state \(S\) willing to be vaccinated, \(W(t)=0\). Thus, the rate of vaccinations _administered_ is \(V^{*}(t)=V(t)\) for \(t\) before \(W(t)=0\) and \(V^{*}(t)=0\) once \(W(t)=0\). Even if everyone is willing to be vaccinated (\(\rho=1\)), \(V^{*}\) may be needed to keep \(S(t)\) nonnegative. We will show how to compute \(W(t)\) when we solve SEIR-V in Section 3.3. The system of differential equations for the SEIR-V model is \[\frac{\mathrm{d}S}{\mathrm{d}t} =-V^{*}(t)-\alpha(t)\frac{S(t)}{N}\mathcal{V}(t) \tag{4}\] \[\frac{\mathrm{d}S^{V}}{\mathrm{d}t} =V^{*}(t)-p^{r}\alpha(t)\frac{S^{V}(t)}{N}\mathcal{V}(t)\] \[\frac{\mathrm{d}E}{\mathrm{d}t} =\alpha(t)\frac{S(t)}{N}\mathcal{V}(t)-r^{I}E(t)\] \[\frac{\mathrm{d}E^{V}}{\mathrm{d}t} =p^{r}\alpha(t)\frac{S^{V}(t)}{N}\mathcal{V}(t)-r^{I}E^{V}(t)\] \[\frac{\mathrm{d}I}{\mathrm{d}t} =r^{I}E(t)-r^{d}I(t)\] \[\frac{\mathrm{d}I^{V}}{\mathrm{d}t} =r^{I}E^{V}(t)-r^{d}I^{V}(t)\] \[\frac{\mathrm{d}D}{\mathrm{d}t} =r^{d}p^{D}I(t)+r^{d}p^{D}_{V}I^{V}(t)\] \[\frac{\mathrm{d}R}{\mathrm{d}t} =r^{d}(1-p^{D})I(t)+r^{d}(1-p^{D}_{V})I^{V}(t).\] In the differential equation for \(S\), the rate moving from \(S\) to \(E\) is a multiple of \(\mathcal{V}(t)\), rather than the usual \(I(t)\). 
First, \(\mathcal{V}(t)\) accounts for the equivalent number of infectious, non-isolated people, \(I(t)+p^{e}I^{V}(t)\). Second, similar to the approach of [76], it is multiplied by a behavior factor that decreases linearly with this equivalent number infectious, according to \[\mathcal{V}(t)=\left(1-\frac{I(t)+p^{e}I^{V}(t)}{Nv^{u}}\right)[I(t)+p^{e}I^{ V}(t)]. \tag{5}\] The parameter \(v^{u}\leq 1\) is the proportion \([I(t)+p^{e}I^{V}(t)]/N\) of equivalent infections at which \(\mathcal{V}(t)\) drops to zero; it is an upper limit on the proportion of equivalent infections. We call (5) the model with behavior dynamics. We will also refer to the model without behavior dynamics, where \[\mathcal{V}(t)=I(t)+p^{e}I^{V}(t). \tag{6}\] The equation for \(S^{V}\) is similar to that for \(S\), but vaccinated individuals are infected at a smaller rate because of the multiplier \(p^{r}\). From \(E\) (or \(E^{V}\)), the rate into \(I\) (or \(I^{V}\)) is \(r^{I}\) and the rate out of \(I\) (or \(I^{V}\)) is \(r^{d}\). The units of these rates are per day, so that in steady state the time spent in the infectious state is \(1/r^{d}\) days. The total rate out of state \(I\) (or \(I^{V}\)) is split with proportion \(p^{D}\) (or \(p^{D}_{V}\)) dying and the rest recovering. ### Parameter Regimes: Herd Immunity This section presents herd immunity conditions for our model that will be useful in interpreting the numerical results. Herd immunity is defined as stability to a small injection of infections, from an initial state with no infections. We can remove the states \(E,E^{V}\) for the purpose of stability analysis and assume that \(I(t)\) and \(I^{V}(t)\) are infinitesimal, so that \(\alpha\), \(S(t)\) and \(S^{V}(t)\) are constant over the time scale of stability analysis. Because they are infinitesimal, we can also ignore the behavior dynamics in (5). Let \(\dot{I}\), etc. denote derivatives. From (4) and (5), \[\dot{I} =\alpha\frac{S}{N}(I+p^{e}I^{V})-r^{d}I\] \[\dot{I}^{V} =p^{r}\alpha\frac{S^{V}}{N}(I+p^{e}I^{V})-r^{d}I^{V}.\] If no vaccinations occur, \(S^{V}=0\) and the stability condition for the unvaccinated group is \(\dot{I}<0\), or \[\frac{S}{N}<\frac{r^{d}}{\alpha}=\frac{1}{R_{0}}, \tag{7}\] where \(R_{0}\) is the unvaccinated basic reproduction number. Since \(\alpha_{a}(t)\) varies after the variant emerges, so does \(R_{0}\). If all susceptible individuals are vaccinated, then \(S=0\), and the stability condition is \(\dot{I}^{V}<0\), or \[\frac{S^{V}}{N}<\frac{r^{d}}{p^{r}p^{e}\alpha}=\frac{1}{R_{0}^{V}}, \tag{8}\] where \(R_{0}^{V}\) is the vaccinated basic reproduction number. This is weaker than (7), since \(p^{r},p^{e}<1\). Also define the _critical proportion_\(1-S/N\); this is the proportion that must _not_ be susceptible in order to reach herd immunity. In general, the vaccinated and unvaccinated risk groups interact and we can only give a sufficient condition, or bound. Suppose \(I\) and \(I^{V}\) have the initial ratio \(\psi(0)=I(0)/I^{V}(0)\). A sufficient condition for stability, starting from \(\psi(0)\), is \(\dot{I}+\dot{I}^{V}<0\) for all \(\psi(t)\), \(t\geq 0\). For a given \(\psi\), this condition is \[\frac{\dot{I}+\dot{I}^{V}}{I^{V}}=\alpha\frac{S+p^{r}S^{V}}{N}(\psi+p^{e})-r^{ d}(\psi+1)<0,\] or \[\frac{S+p^{r}S^{V}}{N}<\frac{r^{d}(\psi+1)}{\alpha(\psi+p^{e})}.\] Since \(1<(\psi+1)/(\psi+p^{e})\), a stronger sufficient condition is \[\frac{S+p^{r}S^{V}}{N}<\frac{r^{d}}{\alpha}. \tag{9}\] We expect \(\psi\gg 1\), so the bound (9) should be fairly tight. 
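To make these thresholds concrete, the following minimal Python sketch evaluates the herd-immunity conditions (7) and (8) for the donor area; the parameter values are the base values from Tables 1 and 2, and the function and variable names are our own illustration rather than code accompanying the paper.

```python
# Minimal sketch (our own illustration, not the authors' code): evaluate the
# herd-immunity thresholds (7)-(8) for the donor area under base parameters.

def critical_proportion(R0):
    """Proportion that must NOT be susceptible, 1 - 1/R0 (0 if R0 <= 1)."""
    return max(0.0, 1.0 - 1.0 / R0)

r0, dr = 1 / 3.9, 0.035          # rate out of the infectious state (Table 1)
rd = r0 + dr                     # donor area: r^d = r_0 + Delta r
a0, da, gamma = 0.6, 0.6, 1.0    # infection rates and behaviour multiplier
pe, pr = 0.6, 0.6                # vaccine effectiveness parameters (Table 2)

for label, alpha in [("initial strain", a0 * gamma),
                     ("variant dominant", (a0 + da) * gamma)]:
    R0_unvax = alpha / rd                 # condition (7): S/N < 1/R0
    R0_vax = pr * pe * alpha / rd         # condition (8): S^V/N < 1/R0^V
    print(f"{label}: R0 = {R0_unvax:.2f}, "
          f"critical proportion (unvaccinated) = {critical_proportion(R0_unvax):.0%}, "
          f"R0^V = {R0_vax:.2f}, "
          f"critical proportion (all vaccinated) = {critical_proportion(R0_vax):.0%}")
```

The joint bound (9) can be checked in the same way by replacing \(S/N\) with \((S+p^{r}S^{V})/N\).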
Under (9), for small \(I\) and \(I^{V}\) we conjecture that there is a stable ratio \(\psi\) that is approached over time. To be stable, the ratio must satisfy \[\frac{\dot{I}^{V}/I^{V}}{\dot{I}/I}=\frac{p^{r}\alpha\frac{S^{V}}{N}(\psi+p^{e}) -r^{d}}{\alpha\frac{S}{N}(1+p^{e}/\psi)-r^{d}}=1\] with solution \[\psi=\frac{S}{S^{V}p^{r}}.\] Because \(I\) and \(I^{V}\) are negligible, \(S\), \(S^{V}\), and hence \(\psi\) are constant over time. ### Simulating SEIR-V with Vaccination Limits To solve SEIR-V numerically over a time horizon of \(T\) days, the differential equations are replaced by difference equations with a time step of one day. The model parameters from Tables 1 and 2 and equations (2), (3), and (5) are used. The state variables are initialized using the initial proportion vaccinated and initial cases per day. To estimate the exposed states, we use the steady state mean time in these states, \(1/r^{I}\). Multiplying by the new cases per day, \[E(0)+E^{V}(0)=\frac{1}{r^{I}}\rho^{I}N.\] Similarly, for the infectious states \[I(0)+I^{V}(0)=\frac{1}{r^{d}}\rho^{I}N.\] To allocate between vaccinated and unvaccinated states, use the initial proportion vaccinated and assume cases are only \(p^{r}\) as prevalent among vaccinated individuals. Then the initial conditions are \[E(0) =\left(\frac{1-\rho^{V}}{p^{r}\rho^{V}+1-\rho^{V}}\right)\frac{1} {r^{I}}\rho^{I}N \tag{10}\] \[E^{V}(0) =\left(\frac{p^{r}\rho^{V}}{p^{r}\rho^{V}+1-\rho^{V}}\right)\frac {1}{r^{I}}\rho^{I}N\] \[I(0) =\left(\frac{1-\rho^{V}}{p^{r}\rho^{V}+1-\rho^{V}}\right)\frac{1} {r^{d}}\rho^{I}N\] \[I^{V}(0) =\left(\frac{p^{r}\rho^{V}}{p^{r}\rho^{V}+1-\rho^{V}}\right)\frac {1}{r^{d}}\rho^{I}N\] \[S^{V}(0) =\rho^{V}N-E^{V}(0)-I^{V}(0)\] \[S(0) =N-E(0)-E^{V}(0)-I(0)-I^{V}(0)-S^{V}(0).\] Note that these initial values sum to \(N\), so that \(D(0)=R(0)=0\). The vaccination schedule \(V(t)\) might need to be modified if \(W(t)\) reaches 0. In this case, vaccinations to that area stop. Initially, \[W(0)=\rho N-S^{V}(0)-E^{V}(0)-I^{V}(0)-\rho E(0)-\rho I(0). \tag{11}\] Here \(\rho N\) is the number willing to be vaccinated and we assume that those initially in states \(E\) and \(I\) are representative of the population. To keep \(S(t+1)\geq 0\), the one-day number moving from \(S\) to \(E\) is limited to \[\Delta E(t)=\min\{S(t),\alpha(t)\frac{S(t)}{N}\mathcal{V}(t)\}. \tag{12}\] To keep \(W(t+1)\geq 0\), vaccinations on day \(t\) are limited to \[V^{-}(t)=\min\{W(t)-\frac{W(t)}{S(t)}\Delta E(t),V(t)\}. \tag{13}\] The first expression is the number in state \(S\) willing to vaccinate before the vaccinations occur. Now we reallocate the unused vaccinations to other areas. After reallocation, the vaccination rate in area 1 is \[V_{1}^{*}(t)=V_{1}^{-}(t)+\min\{W_{1}(t)-\frac{W_{1}(t)}{S_{1}(t)}\Delta E_{1 }(t),V_{2}(t)-V_{2}^{-}(t)\} \tag{14}\] and similarly for area 2. For more than two areas, a priority order can be chosen, with reallocations made to the highest priority areas. Subtracting the actual vaccinations from the first expression in (13) (and suppressing the area subscript again), \[W(t+1)=W(t)-\frac{W(t)}{S(t)}\Delta E(t)-V^{*}(t). \tag{15}\] Note that if \(S(t)=0\), then \(\Delta E(t)=0\) and this term should be interpreted as 0 in (13-15). 
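As an illustration of the initialization (10)-(11) and of the one-day limiting steps (12)-(13), here is a minimal Python sketch; it is our own reconstruction from the equations above, with names of our choosing, and not the authors' implementation.

```python
# Our own sketch of the initial conditions (10)-(11) and the one-day limits
# (12)-(13); reconstructed from the text, not the authors' code.

def initial_state(N, rho, rho_V, rho_I, r_I, r_d, p_r):
    """Initial compartments for one area, following equations (10) and (11)."""
    w_unvax = (1 - rho_V) / (p_r * rho_V + 1 - rho_V)
    w_vax = p_r * rho_V / (p_r * rho_V + 1 - rho_V)
    E, EV = w_unvax * rho_I * N / r_I, w_vax * rho_I * N / r_I
    I, IV = w_unvax * rho_I * N / r_d, w_vax * rho_I * N / r_d
    SV = rho_V * N - EV - IV
    S = N - E - EV - I - IV - SV
    W = rho * N - SV - EV - IV - rho * E - rho * I      # equation (11)
    return dict(S=S, SV=SV, E=E, EV=EV, I=I, IV=IV, D=0.0, R=0.0, W=W)

def one_day_limits(S, W, alpha, calV, N, V_sched):
    """New exposures and administered vaccinations for one day, (12)-(13)."""
    dE = min(S, alpha * S / N * calV) if S > 0 else 0.0   # equation (12)
    willing_left = W - (W / S) * dE if S > 0 else 0.0
    V_minus = min(willing_left, V_sched)                  # equation (13)
    return dE, V_minus

# Example with the base values from Table 1 (donor area, no initial vaccination):
state = initial_state(N=100_000, rho=0.78, rho_V=0.0, rho_I=3.6e-3,
                      r_I=1 / 5, r_d=1 / 3.9 + 0.035, p_r=0.6)
print({k: round(v, 1) for k, v in state.items()})
```

The full day update then applies the difference equations given next, together with the reallocation step (14).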
Using these quantities and (5), the difference equations are \[S(t+1) =S(t)-\Delta E(t)-V^{*}(t) t=0,\ldots,T-1\] \[S^{V}(t+1) =S^{V}(t)+V^{*}(t)-p^{r}\alpha(t)\frac{S^{V}(t)}{N}\mathcal{V}(t) t=0,\ldots,T-1\] \[E(t+1) =E(t)+\Delta E(t)-r^{I}E(t) t=0,\ldots,T-1\] \[E^{V}(t+1) =E^{V}(t)+p^{r}\alpha(t)\frac{S^{V}(t)}{N}\mathcal{V}(t)-r^{I}E^{ V}(t) t=0,\ldots,T-1\] \[I(t+1) =I(t)+r^{I}E(t)-r^{d}I(t) t=0,\ldots,T-1\] \[I^{V}(t+1) =I^{V}(t)+r^{I}E^{V}(t)-r^{d}I^{V}(t) t=0,\ldots,T-1\] \[D(t+1) =D(t)+r^{d}p^{D}I(t)+r^{d}p^{D}_{V}I^{V}(t) t=0,\ldots,T-1\] \[R(t+1) =R(t)+r^{d}(1-p^{D})I(t)+r^{d}(1-p^{D}_{V})I^{V}(t) t=0,\ldots,T-1\] These equations must be solved iteratively over \(t\) for all areas to find \(t_{n}\) and the area \(m\) where the variant emerges. The variant appears in day \(t^{*}\), which is the smallest integer for which \[\sum_{a\in\mathcal{A}}\sum_{t=0}^{t^{*}}I_{a}(t)\geq n. \tag{16}\] The variant emerges in the nondonor area \(m\) with the largest number of unvaccinated infection-days \(\sum_{t=0}^{t^{*}}I_{a}(t)\). It will be useful to interpolate between days, setting \[t_{n}=t^{*}-1+\frac{\sum_{a\in\mathcal{A}}\sum_{t=0}^{t^{*}}I_{a}(t)-n}{\sum_ {a\in\mathcal{A}}I_{a}(t^{*})}. \tag{17}\] The following sequence of calculations is used to simulate SEIR-V. 1. Compute the initial states and \(W\) using (10) and (11). Set the infection rates to the constant (1). 2. For \(t=0,\ldots,T-1\), a) Solve the difference equations along with (5) and (12-15) for all areas at the current \(t\). b) Check (16); if it is satisfied, the variant has appeared: compute \(t_{n}\) and the variant area \(m\). Use the time-varying infection rates (2) and (3) for the remaining \(t\). Optimization Framework In this section we formulate a model that allocates the vaccines available each day to areas in order to minimize deaths. Let \(\mathcal{D}\) be the set of donor areas. 
The optimization problem, called SEIR-OPT, is \[\min\;\sum_{a\in\mathcal{D}}D_{a}(T)+\nu\sum_{a\notin\mathcal{D}}D_ {a}(T) \tag{18}\] \[\text{s.t.}\;\;\sum_{a\in\mathcal{A}}V_{a}(t) \leq B(t) t=0,\ldots,T-1\] (19) \[W_{a}(t+1) = W_{a}(t)-\alpha_{a}(t)\frac{W_{a}(t)}{N_{a}}\mathcal{V}_{a}(t)-V_ {a}(t) t=0,\ldots,T-1,\;\;\;a\in A\] (20) \[S_{a}(t+1) = S_{a}(t)-V_{a}(t)-\alpha_{a}(t)\frac{S_{a}(t)}{N_{a}}\mathcal{V} _{a}(t) t=0,\ldots,T-1,\;\;\;a\in A\] (21) \[S_{a}^{V}(t+1) = S_{a}^{V}(t)+V_{a}(t)-p^{r}\alpha_{a}(t)\frac{S_{a}^{V}(t)}{N_{a }}\mathcal{V}_{a}(t) t=0,\ldots,T-1,\;\;\;a\in A\] (22) \[E_{a}(t+1) = E_{a}(t)+\alpha_{a}(t)\frac{S_{a}(t)}{N_{a}}\mathcal{V}_{a}(t)- r^{I}E_{a}(t) t=0,\ldots,T-1,\;\;\;a\in A\] (23) \[E_{a}^{V}(t+1) = E_{a}^{V}(t)+p^{r}\alpha_{a}(t)\frac{S_{a}^{V}(t)}{N_{a}} \mathcal{V}_{a}(t)-r^{I}E_{a}^{V}(t) t=0,\ldots,T-1,\;\;\;a\in A\] (24) \[I_{a}(t+1) = I_{a}(t)+r^{I}E_{a}(t)-r^{d}I_{a}(t) t=0,\ldots,T-1,\;\;\;a\in A\] (25) \[I_{a}^{V}(t+1) = I_{a}^{V}(t)+r^{I}E_{a}^{V}(t)-r^{d}I_{a}^{V}(t) t=0,\ldots,T-1,\;\;\;a\in A\] (26) \[D_{a}(t+1) = D_{a}(t)+r_{a}^{d}p^{D}I_{a}(t)+r_{a}^{d}p_{D}^{D}I_{a}^{V}(t) t=0,\ldots,T-1,\;\;\;a\in A\] (27) \[R_{a}(t+1) = R_{a}(t)+r_{a}^{d}(1-p^{D})I_{a}(t)+r_{a}^{d}(1-p_{V}^{D})I_{a}^ {V}(t) t=0,\ldots,T-1,\;\;\;a\in A\] (28) \[\mathcal{V}_{a}(t) = \left(1-\frac{I_{a}(t)+p^{e}I_{a}^{V}(t)}{Nv^{u}}\right)[I_{a}(t) +p^{e}I_{a}^{V}(t)] t=0,\ldots,T-1,\;\;\;a\in A\] (29) \[W_{a}(t),S_{a}(t),E_{a}(t),I_{a}(t),S_{a}^{V}(t),E_{a}^{V}(t),I_ {a}^{V}(t),V_{a}(t)\geq 0 t=0,\ldots,T,\;\;\;\;\;\;\;\;\;\;\;a\in A.\] The variables in SEIR-OPT are all of the state variables, \(V\), \(W\), and \(\mathcal{V}\) at times \(t=1,\ldots,T\). Their values at \(t=0\) are the initial conditions, computed in (10) and (11). Also, \(\alpha\) is a variable because the time at which the variant appears, \(t_{n}\), depends on the state variable \(I\); see (2), (3), and (16)-(17). Nondonor deaths are given a weight \(0\leq\nu\leq 1\) in the objective. We consider deaths in the donor areas (self-interest) by setting \(\nu=0\) or all areas (altruism) by setting \(\nu=1\). Constraint (19) limits vaccinations to the budget of \(B(t)\) doses available for day \(t\), (20) computes \(W\), (21-28) are the difference equations, and (29) computes \(\mathcal{V}\). The difference equations differ from those in Section 3.3 because \(V\) does not need to be corrected to be feasible, so \(V^{*}\) and \(\Delta E\) are not used. The constraint for \(W\) is obtained from (15) by replacing \(\Delta E(t)\) with the second term in (12) and \(V^{*}(t)\) with \(V_{a}(t)\), which is the actual vaccinations. One cannot directly solve SEIR-OPT due to the complex dependence on \(t_{n}\). Instead, we iteratively solve a "Lagrangian" problem. As a surrogate for \(t_{n}\), we use the unvaccinated, nondonor infectious-days as the "Lagrangian" term. The new objective is \[\min\ \sum_{a\in\mathcal{D}}D_{a}(T)+\nu\sum_{a\notin\mathcal{D}}D_{a}(T)+ \lambda\sum_{a\notin\mathcal{D}}\sum_{t=1}^{T}I_{a}(t). \tag{30}\] We fix \(t_{n}\): instead of depending on \(I\) in (17), it is a constant. Thus, the connection between nondonor vaccinations and donor deaths is replaced by the Lagrangian term. If we had an explicit equation for \(t_{n}\) and solved a Lagrangian relaxation with \(\lambda t_{n}\) in the objective, then, for some value of \(\lambda\), the optimal solution of the relaxation is optimal for the original problem. 
This motivates using (30) and searching for the value of \(\lambda\) that minimizes the original objective at the optimal \(t_{n}\). To solve this Lagrangian problem, a value for \(t_{n}\) is needed. For the first iteration we use the value from an initial policy, then update it each iteration using the policy found in the previous iteration. We also use this \(t_{n}\) to refine the objective (30): Since nondonor vaccinations can only delay the variant if they occur before \(t_{n}\), we use the previous \(t_{n}\), plus some allowance for it to change, as the upper limit of the summation over \(t\). Another difficulty with (30), especially when not all days are in the summation over \(t\), is that an optimal policy may have "slack," discarding vaccinations that could be used in nondonor areas after \(t_{n}\). To prevent this, we include a small incentive \(-10^{-9}(T-t)\) for each vaccination at time \(t\). Even with constant \(t_{n}\), the constraints contain cubic terms, such as \(W_{a}(t)\mathcal{V}_{a}(t)\), in (20-24). These terms result in nonconvex constraints, making the Lagrangian problem very difficult to solve. If the behavior dynamics in (29) are removed, using (6) instead, the constraints are nonconvex quadratics. Bertsimas, et. al [8] solve a similar problem by solving a sequence of linear approximations. At each iteration, given the current vaccine allocation \(V\), the difference equations are solved to get the current infectious population estimates \(\hat{I}_{a}(t)\), \(\hat{I}_{a}^{V}(t)\), and \(\hat{\mathcal{V}}_{a}(t)\) from (29) or (6), as well as \(t_{n}\) and \(\alpha_{a}(t)\). Then we replace the variables \(\mathcal{V}\) in SEIR-OPT with the constants \(\hat{\mathcal{V}}\). To keep the linearization error from being too large, we add regularization constraints to prevent \(\mathcal{V}\) from differing too much from \(\hat{\mathcal{V}}\): \[\mid G_{a}(t)[I_{a}(t)+p^{e}I_{a}^{V}(t)]-\hat{\mathcal{V}}_{a}(t)\mid\leq \epsilon\quad t=1,\ldots,T-1,\quad a\in A. \tag{31}\] Without behavior dynamics, \(G_{a}(t)=1\) to match (6). With behavior, \[G_{a}(t)=1-\frac{\hat{I}_{a}(t)+p^{e}\hat{I}_{a}^{V}(t)}{N_{a}v^{u}}\] to match (29). Here, \(\epsilon\) is an exploration tolerance that is updated at each iteration by a multiplicative factor To summarize, the approximation is a linear program (LP) that differs from SEIR-OPT in the objective (30), constant \(t_{n}\), constant \(\mathcal{V}\), and added constraints (31). This LP can be solved to find the best solution \(V^{\rm new}\) that gives infection dynamics \(\mathcal{V}\) that are close to \(\hat{\mathcal{V}}\), i.e., close to those of the current solution. The current solution \(V\) is updated and the process repeated until the objective function value converges. Because the Lagrangian problem is not convex, this algorithm is not guaranteed to converge to a global minimum. Section 6.1 tests its convergence numerically. To start the algorithm, an initial policy \(V\) is chosen and the difference equations solved to find \(\hat{\mathcal{V}}_{a}(t)\), \(t_{n}\), and \(\alpha_{a}(t)\). We call this process "Simulate". Using more than one initial policy allows us to check whether the algorithm converges to the same solution. For two areas, the _donor-first_ policy allocates all vaccine to the donor area. Simulating this policy reallocates unused vaccine to the nondonor. Similarly, the _nondonor-first_ policy allocates all vaccine to the nondonor area, then reallocates unused vaccine to the donor. 
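A minimal sketch of how the two initial policies can be encoded is given below; this is our own illustration, and the reallocation of unused doses is then handled inside the simulation via (14).

```python
# Our own sketch of the donor-first and nondonor-first initial policies used
# to start the algorithm; unused doses are reallocated by the simulation (14).
import numpy as np

def initial_policy(budget, areas, first):
    """Give the whole daily budget B(t) to area `first`, none to the others."""
    V = {a: np.zeros(len(budget)) for a in areas}
    V[first] = np.asarray(budget, dtype=float).copy()
    return V

T = 180
B = np.full(T, 1500.0)                      # daily vaccine budget (Table 3)
donor_first = initial_policy(B, ["donor", "nondonor"], "donor")
nondonor_first = initial_policy(B, ["donor", "nondonor"], "nondonor")
```

Either schedule is then passed to "Simulate" to obtain \(\hat{\mathcal{V}}_{a}(t)\), \(t_{n}\), and \(\alpha_{a}(t)\) for the first LP.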
For more than two areas, a priority order can be chosen. Algorithm 1 has an outer loop that updates \(\lambda\) and an inner loop that updates \(\mathcal{V}\) and \(\alpha\). Each iteration of the inner loop solves the LP to find \(V\), then simulates \(V\). Let \(z(\lambda)\) be the value of the objective (18) when the Lagrangian problem is solved. (Because this solution is approximate, we simulate its \(V\), obtaining a valid solution to the difference equations and use that in (18)). The search over \(\lambda\) assumes that \(z(\lambda)\) is unimin, so that a golden ratio search can be used to find the minimum. However, to choose the search interval, we do a "phase 1" search: If at the first two iterations \(z(\lambda)\) is decreasing, then \(\lambda\) is increased (multiplied by some \(\phi>1\)) until \(z(\lambda)\) increases. If \(z(\lambda)\) is initially increasing, then \(\lambda\) is decreased (divided by \(\phi\)) until \(z(\lambda)\) increases. "Phase 2" is the golden ratio search. Let \(z_{\rm NLP}^{(j)}\) be the Lagrangian objective (30), evaluated by solving the \(j\)th LP for a given \(\lambda\) and then simulating this \(V\). Also let \(z_{\rm inner}^{(j)}\) be the corresponding SEIR-OPT objective (18). (Simulation is used to remove the linear approximation, which is why the notation nonlinear program (NLP) is used.) Successive LPs may not be improving, so the algorithm stores the best \(z_{\rm NLP}^{(j)}\) for the current \(\lambda\) in \(z_{\rm NLP}^{\rm min}\). Similarly, it may not terminate at the LP with the best SEIR-OPT objective (18), so it stores the best \(z_{\rm inner}^{(j)}\) in \(z^{\rm opt}\) and the corresponding best simulated policy in \(V^{\rm opt}\). The parameters used in the algorithm are listed in Table 3. \begin{table} \begin{tabular}{|p{56.9pt}|p{56.9pt}|p{56.9pt}|p{56.9pt}|} \hline Parameter & Base Value & Sensitivity Analysis & Description \\ \hline \(\mathcal{D}\) & 1 & \(-\) & Donor area(s) \\ \hline \(B(t)\) & 1500 & \([1000,2000]\) & Vaccine doses available on day \(t\) \\ \hline \(\nu\) & 0 & \(0<\nu\leq 1\) & Objective function weight on nondonor deaths \\ \hline \end{tabular} \end{table} Table 3: Parameters for optimization model ## 5 Parameter Estimation To test and evaluate SEIR-OPT, we construct a baseline scenario with two areas, one representing donor countries and the other representing low-income recipient (nondonor) countries. We also examine its results on a wide range of parameter settings. This section describes our choice of a base value for each parameter and a range of values used in sensitivity analysis. ### Parameters that Depend on the Area (Table 1) Table 1 lists parameters of SEIR-V that vary by geographic area. These parameters are estimated as follows. * \(N_{a}\) **(initial population)**: For simplicity, we assume an equal initial population of 100,000 in both areas. * \(\rho_{a}\) **(proportion of the population willing to be vaccinated)**: As of summer 2022, 78% of the total population of the U.S. had received at least one dose of the COVID-19 vaccine, with booster shots and new vaccination rates tapering off [55]. As this is a population that had ample access to freely available vaccines we can assume that the value 0.78 represents the proportion of the population willing to be vaccinated and use it as a base value of \(\rho_{a}\) for both areas. 
A major study conducted in 2020, prior to the availability of a COVID-19 vaccine, indicated a higher vaccine willingness among low and middle income countries (LMICs) than among high income countries (HICs): 80.3% of those surveyed in LMICs, compared to 65% in HICs, expressed willingness to take a COVID-19 vaccine if one became available [3]. However, vaccine willingness varies significantly by country. Based on this study, for sensitivity analysis, we consider a low value of \(\rho_{a}=0.50\), and a high value of \(\rho_{a}=1.00\). * \(\rho_{a}^{V}\) **(initial proportion of the population vaccinated)**: We consider the early stage of vaccine rollout, in late 2020 and early 2021, when neither area had yet distributed any vaccine: \(\rho_{a}^{V}=0\). * \(\rho_{a}^{I}\) **(initial rate of new infections)**: To estimate the initial rate of new infections in each area, we first estimate the prevalence of COVID-19 in late 2020, and then divide this prevalence by the average duration of infectiousness. Due to the significance of asymptomatic cases in COVID-19 transmission, which limit direct estimation of prevalence, a variety of approaches have been used to estimate COVID-19 prevalence in England [56], the U.S. [16, 39], and Canada [32] at different periods of the pandemic. One study estimates a prevalence in the U.S. of 1.4% on December 31, 2020 [16]. Then, \(\rho_{a}^{I}\) can be estimated as \(0.014r_{a}^{d}=3.6\times 10^{-3}\) for the baseline, with a range of \([2.6\times 10^{-3},4.9\times 10^{-3}]\). * \(V_{a}(t)\) **(Rate of vaccinations available at time \(t\), people/day)**: In the optimization model, SEIR-OPT, \(V_{a}(t)\) is a decision variable reflecting how the donor country chooses to allocate its vaccine supply to each area. For the simulation SEIR-V model, \(V_{a}(t)\) is an input parameter that reflects the vaccine allocation in each area. * \(r_{0},\Delta r_{a},r_{a}^{d}\)**, (Rate out of infectious state, proportion/day)**: The rate out of the infectious state is given by \(r_{a}^{d}=r_{0}+\Delta r_{a}\), where \(r_{0}\) is the baseline rate absent testing, and \(\Delta r_{a}\) is the change in that rate due to testing. Absent testing, we estimate a base value of 3.9 days in the infectious state with a range from 2.5 to 5.1 days, or \(r_{0}=\frac{1}{3.9}\) (range \([\frac{1}{5.1},\frac{1}{2.5}]\)). This reflects contributions from both symptomatic infections and asymptomatic infections. Amongst symptomatic infections, we estimate a one to three day period (base value of two days) of infectiousness before isolation [13, 4]. Additionally, using an estimate of 23.5% (range: \([0.17,0.30]\)) for the percentage of infections that are asymptomatic [64] and an assumed 10-day period of infectiousness for asymptomatic cases [14], asymptomatic cases increase the average duration of infectiousness to 3.9 days. For the donor country, we assume a random daily testing rate of 5% of those in infectious states, as in [4], 100% test effectiveness in those states, 0% effectiveness in the exposed states, and immediate test results. Under these conditions, the average time spent in the infectious state decreases to 3.4 days, with a range of 2.2 to 4.5 days. Thus, we use a base value of \(\Delta r_{a}=\frac{1}{3.4}-\frac{1}{3.9}=0.035\) (range: \([0.026,0.060]\)) for the donor country. We use \(\Delta r_{a}=0\) for the nondonor country. 
* \(\gamma_{a}\) **(behavioral infection multiplier)**: \(\gamma_{a}\) reflects the adjustment to the disease transmission rate that is affected by the geographical and behavioral characteristics of the area \(a\), such as population density and cultural norms around social distancing. The model of Volpert _et al._ uses both a high-transmission infection rate and a low-transmission infection rate to model behavioral differences in two subpopulations of a same geographic area and find estimated behavioral infection multipliers ranging from 3.8 to 9.7 [77]. However, across different geographic areas, each of which is comprised of subpopulations of varying transmission characterstics, we would not expect to see as stark disparity as they found. Therefore, in our tests, we use \(\gamma_{a}=1\) for both the donor and nondonor countries in the baseline scenario, and use \(\gamma_{nondonor}=2\) for sensitivity analysis. ### Other Parameters of SEIR-V (Table 2) Table 2 lists parameters reflecting the epidemiological characteristics of COVID-19. These parameters are estimated as follows. * \(r^{I}\) **(rate of out of the exposed state, proportion/day)**: Lauer _et al._ estimate the incubation period of COVID-19 to be approximately 5 days [49], giving a rate from the exposed state E into the infectious state I of \(r^{I}=\frac{1}{5}\). * December 25, 2021 in 25 U.S. jurisdictions. In that same period and in the same jurisdictions, the CDC reports 94,640 and 22,567 COVID-19-associated deaths of unvaccinated and vaccinated people, respectively [37]. Thus, we estimate an unvaccinated infectious mortality rate of \(p^{D}=0.014\), and a vaccinated infectious mortality rate of \(p^{D}_{V}=0.0079\). * \(a_{0},\Delta a\) **(infection rates)**: For the baseline infection rate of the initial strain of the virus, before considering differences between areas, we assume that each infectious person's interactions will lead to 0.6 potential exposures per day: \(a_{0}=0.6\). \(\Delta a\) is the increase in infection rate of the new variant above \(a_{0}\) (see equation (2)). When the Delta variant of COVID-19 emerged, it was found to have roughly double the infectiousness as the Alpha variant [33, 42]. Thus, we let \(\Delta_{a}=0.6\) (range: \([0.3,0.9]\)) to model a variant that has twice the infection rate as the initial strain. * \(v^{u}\) **(upper limit on proportion infectious):** During the COVID-19 pandemic, social distancing and masking measures typically followed rising case rates. To reflect this, we reduce the effective number of infectious individuals, \(\mathcal{V}\), by multiplying by a factor that decreases linearly in the number infectious; see equation (5). The parameter \(v^{u}\) is the proportion infectious in the population at which transmission stops, making it an upper limit on this proportion. Rather than estimate it directly, we tune \(v_{u}\) and \(a_{0}+\Delta a\) so that in our baseline scenario a target percentage of the donor country population will be infected with the new variant. It is estimated that about 30% of Americans had been infected with COVID-19 prior to the emergence of the omicron variant, and that nearly 60% of the population had been infected a few months later [72]. Thus, we tune them so that approximately 30-40% of the donor country population will be infected with the new variant at \(v^{u}=0.03\) (range: \([0.025,0.10]\)). * \(p^{e},p^{r}\) **(vaccine effectiveness):** Using the case counts from the supplementary material of Eyre et al. 
[25], we see that the positivity rate of contacts of partially vaccinated index cases is 0.60 times the positivity rate of contacts of unvaccinated index cases. Thus, we use \(p^{e}=0.6\) (range: \([0.5,0.8]\)). Data from Eyre et al. [25] show that vaccinated contacts were 0.62 as likely to test positive than unvaccinated contacts. Thus, we use \(p^{r}=0.6\) (range: \([0.5,0.8]\)). * \(n\) **(person-days of infection before appearance of new variant):** We select values of \(n\) so that the variant emerges approximately one-third of the way through the 180-day simulation. In our baseline scenario, this value is \(n=45,000\). As described in Section 6, the _large nondonor initial cases_ scenario uses \(n=60,000\), and the _global variant_ scenario uses either the baseline value or \(n=90,000\). * \(L\) **(time lag for a new variant to spread between areas)**: Based on estimates of emergence dates of the omicron variant in the Netherlands, the U.S., Canada, and Scotland, after the first case was detected in South Africa, we estimate \(L=15\) days [81, 80, 20]. * \(T_{D}\)**, \(p\)**, \(k\) (time for variant to dominate)**: When omicron first emerged in South Africa in late 2021, it spread rapidly around the world and quickly became dominant. Based on reports from South Africa, Scotland, Canada, the U.S., and Korea, we estimate the time from emergence in an area to dominance to be \(T_{D}=25\) days [81, 80, 20, 46]. "Emergence" is defined as when the variant represents \(1\%\) of those currently infectious, which means setting \(p=0.01\). Given \(T_{D}\) and \(p\), the infection rate for the mixture of the two variants is given in equation (2) and involves a decay rate parameter, \(k=\ln[(1-p)/p]/T_{D}\). ### Parameters for Optimization Model (Table 3) Table 3 lists parameters used within the optimization model. These parameters are set as follows. * \(\mathcal{D},B(t)\), **and \(\nu\) (scenario parameters)**: For our simulations, \(\mathcal{D}\) contains a single donor area. To complete vaccinations in a reasonable time for the populations considered, we use a vaccination budget \(B(t)=1500\) (range: \([1000,2000]\)) doses per day. \(\nu\) measures the relative importance of nondonor area deaths in the objective function. We use \(\nu=0\), (self-interest, where the optimization model is considering only the donor area's own deaths) as a baseline. We also report results with \(\nu=1\) (altruism, where total deaths in both donor and nondonor areas are optimized), and intermediate values of \(\nu\). * \(\lambda,\phi,\epsilon_{0},\beta,\delta_{LP}\), **and \(\delta\) (optimization algorithm parameters)**: The parameters influencing the performance of the optimization were set by testing its speed and convergence for different parameter values; see Section 6.1. For most runs, the initial Lagrange multiplier is \(\lambda=0.01\) with an exploration multiplier of \(\phi=4\). The exploration tolerance \(\epsilon\) in regularization constraints (31) is initially set to \(\epsilon_{0}=500\) and scaled at each iteration by \(\beta=0.9\). The last two parameters control accuracy. The LP termination tolerance is \(\delta_{LP}=0.001\) and the Lagrangian termination tolerance is \(\delta=0.1\). However, for many scenarios, optimization was not used, as explained in Section 6.1. ## 6 Results All numerical tests used Gurobi 9.5 on a laptop with a 1.70GHz processor, 16 GB of RAM, and four physical cores. We report results from simulation and optimization. 
For optimization, the algorithm parameters \(\lambda\), \(\phi\), \(\epsilon_{0}\), \(\beta\), \(\delta_{LP}\), and \(\delta\) from Table 3 are used except as noted. The following scenarios are referenced, all of which have two areas (donor and nondonor): * _Baseline:_ This scenario uses the base values in Tables 1-3, a duration of \(T=180\) days, and the self-interest objective \(\nu=0\). However, nondonor initial cases are reduced to \(20\%\) of donor initial cases, i.e., \(\rho^{I}=7.2\times 10^{-4}\) for nondonor. * _Large nondonor initial cases:_ From the baseline scenario, nondonor initial cases are reset to \(100\%\) of donor initial cases and infectious days before the variant is \(n=60{,}000\). _Global variant:_ The baseline scenario, but the variant emerges after \(n=45,000\) or \(n=90,000\) infectious days in all areas (instead of just nondonor areas). * _No variant:_ The baseline scenario, but without the emergence of the variant. * _No vaccine:_ The baseline scenario, but without vaccinations. There is no vaccination policy to optimize. Some shorter scenarios are also used for testing convergence. ### Convergence of the Linear Approximation To build confidence in the approximate solution methods, first we compared the exact Lagrangian problem (30) to the iterative LP approximation. The comparison assumes no behavior dynamics so that the Lagrangian constraints are quadratic, not cubic. Even in this case, Gurobi could only solve the Lagrangian QCP for shorter scenarios, up to about 20 days with two areas. A 20-day, two-area scenario was constructed, adjusting several parameters so that the variant would have an impact within 20 days. For the altruism objective (\(\nu=1\)), both methods find the nondonor-first policy. The iterative LP results are insensitive to whether the initial policy was donor-first or nondonor-first. With this objective, the Lagrangian term \(\lambda\) is not needed; nonetheless, the results are insensitive to small values of \(\lambda\). For the self-interest objective (\(\nu=0\)), both methods find the donor-first policy. Although in general \(\lambda>0\) is needed to incentivise nondonor vaccinations for the self-interest objective, this example does not need it because the donor-first policy is optimal. Any small value of \(\lambda\) gives the same LP solution. However, QCP solution time is sensitive to \(\lambda\): for \(\lambda=0.01\) and smaller, it does not converge in 10 minutes (gap of 3.8% from the best bound), while for \(\lambda=0.1\), it converges in 2.6 seconds (gap of 0.9%), but to a different policy because \(\lambda\) is too large. The number of LPs needed for the iterative method to converge depends on the exploration tolerance. With a large tolerance (\(\epsilon=100\)) and convergence parameter \(\beta=0.8\), convergence is immediate. For the self-interest objective and a nondonor-first initial policy, the search "distance" is farthest because the donor-first policy is optimal. In this case, a smaller tolerance (\(\epsilon=10\)) converges in seven iterations to the donor-first policy. The regularization constraints are not needed in this scenario: when they are relaxed by choosing a large \(\epsilon\), it still converges to the same policy. Next we tested convergence and stability of the iterative LP inner loop on the 180-day baseline scenario. The nondonor-first policy appears optimal. The iterative LP converges to this policy from either initial policy, for \(\lambda\geq 0.01\). 
From the donor-first initial policy (which is most distant), it converges in three iterations for a large exploration tolerance (\(\epsilon=1000\) and \(\beta=0.8\)). For the "large nondonor initial cases" scenario, the donor-first policy appears optimal. For \(\lambda\leq 0.001\), the iterative LP converges to this policy from either initial policy. From the nondonor-first initial policy (which is most distant), it converges in two iterations for a large exploration tolerance (\(\epsilon=500\) and \(\beta=0.8\)). On these test cases, then, the iterative LP method appears to find an optimal policy. It also shows rapid convergence and stability, in that it converges to the same policy for different initial policies. Finally, we tested convergence over \(\lambda\) in the outer loop. Call a policy \(\lambda\)-optimal if it is optimal for (30). For the baseline scenario, where nondonor-first appears optimal, there are only two \(\lambda\)-optimal policies: donor-first for about \(\lambda\leq 0.004\) and nondonor-first for larger \(\lambda\). When initialized at \(\lambda=0.001\), using a multiplier \(\phi=2\), the algorithm went past this theshhold and terminated at the optimal policy. However, when initialized at smaller \(\lambda\), it stopped increasing \(\lambda\) at \(0.002\), and did not find the optimal policy. The difficulty is that the objective (18) is a step function, dropping \(4\%\) at \(\lambda=0.004\) but otherwise constant. We tried a 90-day scenario with different parameter values and found that the \(\lambda\)-optimal policy changed gradually with \(\lambda\) as demonstrated by Figure 3. The objective is self-interest and the smallest donor deaths occurs at the smallest \(\lambda\). The corresponding policy, donor-first, is optimal. When initialized at larger \(\lambda\), the algorithm decreases \(\lambda\) until the optimal policy is reached. ### Emergence of the Variant for Different Policies and Scenarios Results for the baseline scenario under the donor-first and nondonor-first policy are shown in Figures 4 and 5. Cumulative vaccinations, cases, and deaths, as well as the current susceptible and infectious (\(I(t)+I^{v}(t)\) Figure 3: Deaths and time of variant vs. \(\lambda\), 90-day scenario populations are shown by area. The two vertical lines indicate when the variant arrives in the nondonor and then the donor area. For the donor-first policy, the donor area reaches its vaccination limit of 78% at day 45. After the variant reaches the donor area (day 57), infections rebound, forming a second wave. The waves can be understood from the herd immunity critical proportions in Table 4. Before the variant, herd immunity in a fully unvaccinated donor area would require a critical (i.e., protected) proportion of 51%, whereas a fully vaccinated donor area would require a critical proportion of 0%. Thus, the actual critical proportion prior to variant emergence is within this range. This is reached before roughly day 20, when cases are 12% and susceptible vaccinated are 30%. However, by day 80 the variant is dominant in the donor area and the 22% case rate has not reached herd immunity, even if everyone was vaccinated (41% is required). Deaths are summarized in Table 5. The nondonor-first policy reduces donor deaths 3.7%, total deaths 27%, and delays the variant 57 days compared to the donor-first policy. 
The nondonor-first policy has fewer donor deaths because, even without vaccination, the first wave is moderated by behavior (fewer contacts) and the variant arrives much later, after both areas are vaccinated. Comparing the nondonor area in Figures 4 and 5, the first surge is much smaller and the second surge later under the nondonor-first policy, resulting in the variant arriving much later. However, if \(n\) is changed in either direction (infectious days before the variant appears), the nondonor-first policy does not delay the variant as much and is no longer preferred. Table 5 also shows the results for other scenarios. Without the vaccine, deaths are much higher. Without the variant, the donor-first policy does better for donor deaths and total deaths. The global variant scenario counts infections in both areas, so the nondonor-first policy does not delay the variant; this is also true when \(n\) is doubled to 90,000. For the "large nondonor initial cases" scenario, we tested intermediate policies that give 50 or 75% of vaccinations to the donor. These policies are worse than donor-first and nondonor-first, particularly for total deaths. algorithm did not find any new policies, it increased our confidence that these policies are optimal. The results are shown in Table 6. Starting with the "large nondonor initial cases" scenario and increasing \(\nu\) to \(0.15\) or larger changes the optimal policy from donor-first to nondonor-first. This change occurs when just \begin{table} \begin{tabular}{|l|l|c c|c|} \hline \multirow{2}{*}{Scenario} & \multicolumn{2}{c|}{Deaths} & \multicolumn{2}{c|}{Time of} \\ & Policy & Donor & Total & variant (days) \\ \hline \hline No vaccine & n/a & 965 & 1929 & 42.4 \\ \hline Baseline & donor-first & 468 & 1105 & 42.4 \\ & nondonor-first & 450 & 808 & 99.5 \\ \hline Global variant & donor-first & 517 & 1183 & 22.5 \\ & nondonor-first & 583 & 1368 & 21.6 \\ \hline Global variant, & donor-first & 467 & 1103 & 42.8 \\ infection days until & nondonor-first & 602 & 1121 & 42.7 \\ variant \(n=90,000\) & & & & \\ \hline No variant & donor-first & 274 & 687 & none \\ & nondonor-first & 440 & 701 & none \\ \hline Large nondonor & donor-first & 478 & 1151 & 38.9 \\ initial cases, & donor 75\% & 495 & 1148 & 41.0 \\ infection days until & donor 50\% & 531 & 1140 & 44.2 \\ variant \(n=60,000\) & nondonor-first & 498 & 978 & 72.6 \\ \hline \end{tabular} \end{table} Table 5: Effect of policy, vaccination, and scenario Figure 4: Baseline scenario, donor-first policy 15% of nondonor deaths are considered, moving part-way to altruism; this metric is reported as _weighted deaths_. The policy change increases donor deaths in the model by 20 (4%) compared to the self-interest policy, but decreases total deaths by 173 (15%) and nearly doubles the time until the variant emerges. Thus, vaccine distribution is not a zero-sum game between donor and nondonor countries. The last scenario in Table 6 was challenging for the algorithm to find the optimal policy. Using parameters that worked on other scenarios, the algorithm converged to more complex policies that appear to be locally optimal. For example, using \(\lambda=0.001\), \(\epsilon=500\), and \(\beta=0.8\) gave a policy that starts vaccinating the donor area, then switches to the nondonor area, then switches back for several days to finish donor vaccinations. Results for this policy are shown in the table. 
### Sensitivity Analysis This section examines the sensitivity of the best policy and predicted deaths to the more important and uncertain model parameters. Starting with the baseline scenario, parameters were varied one at a time, or in pairs of related parameters. Parameter ranges are discussed in Section 5 and listed in Tables 1-3. Based on the results in Section 6.3, we expect either the donor-first or nondonor-first policy to be optimal, Figure 5: Baseline scenario, nondonor-first policy so only those policies were checked. We first observe in Table 7 that the donor-first policy is better for 11 of the 17 parameter changes. The sensitivity of the policy to parameter changes is not surprising, given that the policies only differ by 18 deaths in the baseline scenario. The donor-first policy is also better than nondonor-first in most other scenarios we tested (not reported here), where multiple parameters were changed. In several instances, the nondonor-first policy is better because the health risk in the donor country in the first wave is smaller, or the risk in the second wave is larger (low initial infection rate, large change in infection rate for variant, reduced contacts through a smaller \(v^{u}\)). In other instances, the nondonor-first policy appears to be better because both areas can be vaccinated more quickly, before the second surge (small proportion willing to be vaccinated, large vaccination budget). When the proportion willing to be vaccinated is 100%, the donor-first policy is better because the second wave is much smaller: herd immunity in the donor country is reached when everyone is vaccinated and cases reach 33%; see Table 4. In several instances where the donor-first policy is better, the health risk to the donor country in the first wave is larger or the risk in the second wave is smaller (high initial infection rate, longer time in the infectious state, smaller infection rate for variant, increased contacts through a larger \(v^{u}\), reduced vaccine effectiveness through larger \(p^{e}\) and \(p^{r}\)). When vaccination takes longer (100% willing to be vaccinated, smaller vaccination budget), the donor-first policy is also better. When the nondonor infection rate is doubled (\(\gamma=2\)), the critical proportion in the nondonor area, vaccinated and before the variant, changes from 0% to 41%. Under a nondonor-first policy, the variant emerges much earlier than in the baseline, making the donor-first policy preferable. Interestingly, the policy is non-monotone in \(r_{0}\) and vaccine effectiveness. Reducing the time in the infectious state prevents the variant from appearing in the 180 day timeframe, even for the donor-first policy, so it is preferred. Increasing vaccine effectiveness (smaller \(p^{e}\), \(p^{r}\)) increases the benefit of the donor-first policy in the first wave enough that it is preferred. 
\begin{table} \begin{tabular}{|l|l|l|c c|c|c|} \hline & Nondonor & Optimal policy & Deaths & Time of & Parameters \\ Scenario & weight \(\nu\) & (local min) & Donor (Wtd) & Total & variant (days) & \(\lambda\), \(\epsilon\), \(\beta\), \(dT\) \\ \hline \hline Baseline & 0 (self-int) & nondonor-first & 450 & 808 & 99.5 & 0.01, 1000, 0.8, 180 \\ \hline Lg nondonor initial cases & 0 (self-int) & donor-first & 478 & 1151 & 38.9 & 0.001, 500, 0.8, 4 \\ \hline Lg nondonor initial cases & 0.15 & nondonor-first & 498 (570) & 978 & 72.6 & 0.001, 250, 0.8, 180 \\ \hline Lg nondonor initial cases & & (dnr/non/dnr) & 520 (615) & 1157 & 38.9 & 0.001, 500, 0.8, 180 \\ \hline \end{tabular} \end{table} Table 6: Optimal and local minimum policies for different scenarios ## 7 Future Work and Conclusions We have developed an SEIR-embedded optimization framework that provides qualitative insights into conditions under which a high-income country might realize health benefits by donating vaccine doses to a low-income country before its population is fully vaccinated. Using the COVID-19 pandemic as a case study, we find that there exist realistic scenarios under which a nondonor-first (altrustic) vaccination policy reduces not only total deaths but deaths within the donor country. Moreover, a nondonor-first vaccination policy can dramatically postpone the emergence of a variant that could be more contagious and transmissible than the original virus. This is particularly the case under an assumption of self-regulating behavior dynamics wherein a population takes actions (such as social distancing or masking) to reduce transmission during periods of high infections. Additionally, even when a donor-first policy minimizes donor-country deaths, a small weight in favor of altruism can achieve dramatic reduction in total deaths and dramatically delay variant emergence with only a small increase in donor-country deaths. Thus, vaccine distribution is not a zero-sum game between donor and nondonor countries. The choice of donor-first or nondonor-first is sensitive to population behavioral characteristics, such as willingness to vaccinate and behavioral responses to local infection rates, and to characteristics of the variant. The nondonor-first policy outperforms the donor-first policy when the proportion willing to vaccinate is low (equivalently, the vaccination budget is high), the initial infection rate in the donor country is low, and the donor country imposes behavioral changes such as social distancing and masking at a low threshold of proportion infectious. In these instances, we interpret that the donor country is willing to donate vaccines when the local health impact of the vaccine is limited. Additionally, nondonor-first outperforms donor-first when the new variant will be highly contagious, suggesting that the donor country is also willing to donate vaccines when delaying emergence of a variant is especially valuable. Our model considers a short timeline, so incentives for stockpiling such as time-varying vaccine willingness or the need to administer additional vaccine doses in response to waning immunity are not relevant and thus not reflected. We leave this to future work. While our model is capable of accommodating multiple geographic areas, we have focused our analysis on two areas for computational reasons. Future work could examine whether having multiple recipient areas with different characteristics and populations yields qualitatively different results. 
We conclude by noting that some have claimed that COVID-19 vaccination campaigns in low-income regions, such as Africa, may not be in those regions' best interests either. John Johnson, vaccination adviser for Doctors Without Borders, asks: "Is this the most important thing to try to carry out in countries where there are much bigger problems with malaria, with polio, with measles, with cholera, with meningitis, with malnutrition? Is this what we want to spend our resources on in those countries? Because at this point, it's not for those people: It's to try to prevent new variants" [59]. Thus, while vaccine inequity perpetuates disparate health outcomes globally, an emphasis on vaccine donation can itself be fraught with donor self-interest. Our model permits a flexible weighting of multiple objectives to minimize COVID-19 deaths in the donor country and in total, so that such questions of equity can be thoroughly examined.
2306.06418
List distinguishing index of graphs
We say that an edge colouring breaks an automorphism if some edge is mapped to an edge of a different colour. We say that the colouring is distinguishing if it breaks every non-identity automorphism. We show that such a colouring can be chosen from any set of lists associated with the edges of a graph G, whenever the size of each list is at least $\Delta-1$, where $\Delta$ is the maximum degree of G, apart from a few exceptions. This holds for both finite and infinite graphs. The bound is optimal for every $\Delta\ge 3$, and it is the same as in the non-list version.
Jakub Kwaśny, Marcin Stawiski
2023-06-10T12:04:06Z
http://arxiv.org/abs/2306.06418v1
# List distinguishing index of graphs ###### Abstract We say that an edge colouring _breaks_ an automorphism if some edge is mapped to an edge of a different colour. We say that the colouring is _distinguishing_ if it breaks every non-identity automorphism. We show that such colouring can be chosen from any set of lists associated to the edges of a graph \(G\), whenever the size of each list is at least \(\Delta-1\), where \(\Delta\) is the maximum degree of \(G\), apart from a few exceptions. This holds both for finite and infinite graphs. The bound is optimal for every \(\Delta\geq 3\), and it is the same as in the non-list version. **Keywords**: infinite graphs, distinguishing index, list colourings, asymmetric colouring ## 1 Introduction In 1977, Babai [1] introduced a concept of _distinguishing_ vertex colourings, which are those preserved only by the identity automorphism. The minimum number of colours in a distinguishing vertex colouring of a graph \(G\) is called the _distinguishing number_ of \(G\), and it is denoted by \(D(G)\). The analogous parameter for edge colourings, introduced in 2015 by Pilsniak and Kalinowski [12], is called the _distinguishing index_ of \(G\) and denoted by \(D^{\prime}(G)\). These concepts lie on the borderland between graph theory and abstract algebra, as they naturally generalize to an arbitrary group action [5]. Automorphism breaking also plays an important role in the quasipolynomial time algorithm of Babai [2] for the graph isomorphism problem. In this paper, we study the list version of distinguishing edge colourings. For each edge \(e\in E(G)\), let \(L(e)\) be a set of colours available for that edge. We are asking for the minimum cardinal number \(k\) such that for any set of lists of cardinality \(k\) we can find a distinguishing edge colouring \(c\) such that \(c(e)\in L(e)\) for every edge \(e\in E(G)\). We denote this minimum \(k\) as \(D^{\prime}_{l}(G)\), and we call this parameter the list distinguishing index of \(G\). Clearly, \(D^{\prime}_{l}(G)\geq D^{\prime}(G)\) for any graph \(G\). List colourings were introduced in 1980 by Erdos, Rubin, and Taylor [6] for the problem of proper vertex colourings. There are known classes of graphs with arbitrary large difference between the chromatic number and the required size of lists. However, for proper edge colourings there is a famous List Colouring Conjecture [4, 10] which states that any graph \(G\) has a proper edge colouring from any lists of size \(\chi^{\prime}(G)\). The list variant of vertex distinguishing colourings was first studied by Ferrara, Flesch and Gethner [7] in 2011, and they conjectured that the list-distinguishing number is the same as the distinguishing number for every finite graph. There are a few partial results towards this conjecture: for finite trees [9], finite interval graphs [11], graphs with dihedral automorphism group [7], Cartesian product of two finite cliques [8], and Kneser graphs [3]. Motivated by the conjecture of Ferrara, Flesch and Gethner, and by the List Colouring Conjecture, we propose the following. **Conjecture 1**.: _Let \(G\) be a connected, infinite or finite graph. Then \(D^{\prime}_{l}(G)=D^{\prime}(G)\)._ In the paper, we aim to provide a general upper bound for connected graphs, both finite and infinite. These types of bounds are known for the distinguishing index. For finite graphs, Pilsniak [13] in 2017 proved the following. 
**Theorem 2** ([13]).: _Let \(G\) be a connected, finite graph that is neither a symmetric nor a bisymmetric tree. If the maximum degree of \(G\) is at least \(3\), then \(D^{\prime}(G)\leq\Delta(G)-1\) unless \(G\) is \(K_{4}\) or \(K_{3,3}\)._ Later, Pilsniak and Stawiski [14] proved the same claim for infinite graphs. **Theorem 3** ([14]).: _Let \(G\) be a connected, infinite graph with finite maximum degree \(\Delta\geq 3\). Then \(D^{\prime}(G)\leq\Delta-1\)._ We show that these two bounds also hold for the list version of the problem. Since the above two results are optimal, so is ours. In particular, it follows that \(D^{\prime}_{l}(G)=D^{\prime}(G)\) for every subcubic connected graph. The proof is divided into two parts. The first, major part contains a proof for graph with cycles, and then we separately check trees. In formulating the theorems, we exclude the same exceptional graphs as Pilsniak [13], so we describe them shortly in the last section. ## 2 Graphs with a cycle From now on, we only consider edge colourings. In the proofs below, we skip the case where all the lists are identical, as this case follows from Theorems 2 and 3. However, we note that our approach would allow this case to be included, at the expense of complexity of the proofs. **Theorem 4**.: _Let \(G\) be a connected graph with maximum degree \(\Delta\geq 3\) which is not a tree and not isomorphic to \(K_{3,3}\), nor \(K_{4}\). Then \(D^{\prime}_{l}(G)\leq\Delta-1\)._ Proof.: Let \(G=(V,E)\) be a connected graph and \(\Delta=\Delta(G)\) be its maximum degree. Assume that \(G\) is not a tree and \(G\not\in\{K_{3,3},K_{4}\}\). Let \(L=\{L(e)\}_{e\in E}\) be a set of lists, each of size \(\Delta-1\). Denote \(L(u)=\bigcup_{uv\in E}L_{uv}\) for any \(u\in V\). First, consider the case when \(\Delta\) is infinite. Since \(G\) is connected, then \(G\) must have exactly \(\Delta\) edges. Hence, we can pick a different colour for each edge to obtain a distinguishing colouring with \(\Delta=\Delta-1\) colours. For the rest of the proof, we shall assume that \(\Delta\) is finite. For each colour \(i\in\bigcup_{e\in E}L(e)\), we consider a subgraph \(H_{i}\) induced by all the edges \(e\), such that \(i\in L(e)\). If \(H_{i}=G\), then we call such a subgraph trivial (we shall also sometimes say that the colour \(i\) is trivial). If every \(H_{i}\) is trivial, then we have a standard non-list colouring, which exists by Theorems 2 and 3 (we use the assumptions that \(G\) is not a tree, so it is not a symmetric nor a bisymmetric tree, and that \(G\not\in\{K_{3,3},K_{4}\}\)). Therefore, we can assume that not every \(H_{i}\) is trivial. We shall describe a greedy algorithm which iteratively chooses the colours of the edges of \(G\) from the respective lists. The algorithm starts by colouring some starting subgraph \(G_{0}\). All the edges of \(G_{0}\) are coloured at this step, and this colouring is distinguishing for \(G_{0}\). We shall guarantee in the further course that \(G_{0}\) is coloured uniquely, which will cause \(G_{0}\) to be fixed. Then, the algorithm processes the remaining vertices, one by one, and fixes each new vertex it has reached, i.e. any vertex that is incident to a coloured edge. I. The starting subgraph We consider the following cases to select a suitable starting subgraph. This choice also affects the later colouring strategy, when we must avoid the colour pattern used on the starting subgraph. **Case 1.** There exists a colour \(p\) such that \(H_{p}\) is non-trivial and it contains a cycle. 
We shall call this colour pink. Let \(C\) be an induced cycle in \(H_{p}\). Since \(H_{p}\) is non-trivial, it must contain a vertex \(v\), in the same connected component of \(H_{p}\) as \(C\), which has an incident edge \(vw\) outside \(H_{p}\) (note that \(w\) may be in \(H_{p}\)). By the choice of \(v\), there exists a shortest path \(R\) from \(v\) to \(C\) ending in a vertex \(u\) of \(C\) (and \(u\) must be the only common vertex of \(R\) and \(C\)). In particular, it may be the case that \(v\) lies on \(C\); then \(R\) is trivial and \(u=v\). Denote by \(u^{+}\) a neighbour of \(u\) on \(C\). We define our starting subgraph \(G_{0}\) as the subgraph induced by all the edges incident to the vertices of \(C\) and \(R\). We now specify a distinguishing colouring of the starting subgraph. We colour all the edges of \(C\) and \(R\) except \(uu^{+}\) pink (this is possible since \(C\) and \(R\) are contained in \(H_{p}\), so these edges have the colour pink on their lists) and assign \(uu^{+}\) a colour other than pink; we shall call this colour blue. Let us consider all possible extensions of the current colouring to \(G\) and all possible automorphisms of these coloured graphs that stabilise \(C\cup R\). If none of these automorphisms acts non-trivially on it, then we only need to choose the colours for the edges not in \(C\) nor \(R\). For each vertex in \(C\cup R\), we assign different colours other than pink to these edges. This can be done since each such vertex except \(v\) has at most \(\Delta-2\) neighbours outside \(C\cup R\), and the lists have size \(\Delta-1\). The vertex \(v\) may have one more neighbour outside \(C\cup R\), but it also has one incident edge with \(\Delta-1\) colours different from pink. If, on the other hand, there exists such an automorphism, it interchanges \(v\) and \(u^{+}\), and we must break it at this moment. Since \(uu^{+}\) is an edge, either \(u=v\) or \(v\) has a neighbour on \(C\) different from its successor on \(R\). This means that \(v\) must have two neighbours in \(G_{0}\). In this case, we would like to choose the colours on the edges incident to \(v\) and \(u^{+}\) such that these two vertices receive different palettes. But in this case, \(v\) has at most \(\Delta-2\) neighbours outside \(C\cup R\) and \(L(vw)\) does not contain the colour pink, so we have two possibilities for the last edge \(vw\) we colour, which result in two different palettes of \(v\). For the other vertices on \(C\cup R\), including \(u^{+}\), we do not have such freedom, but we can still succeed. Therefore, we first choose the colours for the edges incident to vertices other than \(v\) (following the rule that for each vertex we choose different colours other than pink on the incident edges), and then to \(v\) such that the palettes of \(u^{+}\) and \(v\) are different. This way, we break all the automorphisms of \(G_{0}\). **Case 2.** For every colour \(p\), the graph \(H_{p}\) is either trivial or \(H_{p}\) is a forest. Consider any induced cycle \(C\) in \(G\). If any edge of \(C\) contained only trivial colours on its list, then all the lists in \(G\) would be identical, and we have already assumed that this is not the case. Therefore, each edge of \(C\) has a colour \(p\) in its list such that \(H_{p}\) is a forest. For any non-trivial colour \(p\) on the lists of \(C\), we can consider the longest path \(P\) contained both in \(C\) and \(H_{p}\). 
Each such path \(P\) is contained in a maximal path, a maximal ray, or a double ray in \(H_{p}\), which we denote by \(R\). If it is possible, that \(R\) is not entirely contained in \(C\), then we choose \(p\), \(C\) and \(P\) accordingly (in other words, first we consider only the colours \(p\) that have the longest \(P\)'s, and then we choose, if there is one, the one with \(R\neq P\)). We define our starting subgraph \(G_{0}\) as the subgraph induced by all the edges incident to the vertices of \(R\) and \(C\). Denote by \(u\) and \(v\) the end-vertices of \(P\). Let \(R^{\prime}\) be a maximal subpath or a subray of \(R\) ending with \(u\) or \(v\) (without loss of generality, let it be \(u\)). If \(R^{\prime}\neq P\) then we call the edge of \(R^{\prime}-P\) incident with the cycle \(C\)_the gadget_ of \(P\). We start with colouring all the edges of \(R^{\prime}\) pink. The colouring of the rest of the edges of \(G_{0}\) depends on the number of edges in \(C-P\). If \(C-P\) contains at least two edges, then we choose different colours for the edges \(uu^{-}\) and \(vv^{+}\), where \(u^{-}\) and \(v^{+}\) are the neighbours of \(u\) and \(v\), respectively, in \(C-P\). These colours are different from pink by the maximality of \(P\). We shall refer to these colours as blue and green, respectively. Next, for each vertex of \(R^{\prime}\), we choose different colours other than pink for the edges outside \(R^{\prime}\) (this is possible for the same reason as in Case 1). Then, we perform the following scheme, which we write down separately as it will be used again later. _Cycle colouring scheme._ We take two passes on the cycle, each time considering the vertices \(u_{1},\ldots,u_{k}\) consecutively. First, we choose the colours for the edges of the cycle. If the current edge has the colour pink on its list, then we choose pink, unless this is the last edge we are colouring and this would result in exactly two pink paths of length \(|P|\) and only two non-pink edges on the cycle. In this case, we choose a colour other than pink. If the current edge does not have the colour pink, then we choose any colour, unless the previous \(|P|\) edges are pink, in which case we disallow blue or green, whichever would create the pink path of length \(|P|\) surrounded by blue and green. Subsequently, we do a second pass and colour all the other edges adjacent to the vertices of the cycle. Take a vertex \(u_{i}\). We consider a few cases: * If \(u_{i}u_{i+1}\) is pink, then we choose different colours other than pink for all the uncoloured edges incident to \(u_{i}\). It is possible, since there are at most \(\Delta-2\) such edges, and each of them has \(\Delta-2\) colours other than pink on its list. * If \(u_{i}u_{i+1}\) has a colour other than pink, and the colour pink does not appear on all the lists of the uncoloured incident edges, then we forbid both pink and the colour of \(u_{i}u_{i+1}\) on the incident edges and again choose different colours. Moreover, if \(u_{i}\) is an end-vertex of a pink path of length \(|P|\) on the cycle, then we forbid also blue or green (whichever does not appear on the other side of this path) on all the currently coloured edges. To argue that we can succeed, we observe that if blue or green is present on the list of \(u_{i}u_{i+1}\), then this colour cannot appear on any of the lists of the incident edges outside \(C\). This is because we would choose this colour to be called pink at the beginning, as it would yield a gadget. 
Furthermore, either there is no pink at all on the lists of the edges incident to \(u_{i}\) (so we have only two forbidden colours) or there is one list with pink and one without it (which gives us an additional colour to choose from). * If \(u_{i}u_{i+1}\) has a colour other than pink, and all the incident edges have pink on their lists, then again we choose different colours other than pink and the colour of \(u_{i}u_{i+1}\) on the incident edges. Moreover, if \(u_{i}\) is an end-vertex of a pink path of length \(|P|\) on the cycle, and we are forced to use blue or green (whichever does not appear on the other side of this path) somewhere on an edge incident to \(u_{i}\), then we put this colour on \(u_{i}u_{i+1}\). This may create a copy of a pink path of length \(|P|\) surrounded by blue and green, but it will cause no problem due to an absence of a gadget (and \(P\) must have a gadget, since the path just created could have one). Note that we have used the rule that if an edge on the cycle has a colour other than pink, then there is no pink on its list. There might be one exception to this rule, but it does not concern us because this exception does not occur at the end of the pink path of length \(|P|\), but rather of \(|P|-1\). If \(C-P\) contains only one edge \(uv\) (it must contain at least one, since \(H_{p}\) is a forest) and \(u\) and \(v\) have degree at least three in \(G\), then we choose different colours for the edges incident to \(u\) and \(v\) so that the palettes of \(u\) and \(v\) are different. We shall refer to the colour of \(uv\) as blue. If \(R^{\prime}\neq P\), then one of \(u\) and \(v\) is incident to two pink edges and the other to only one, so they are already distinguished. Otherwise, by the maximality of \(R\), none of the edges incident to \(u\) and \(v\) outside \(C\) has pink in its list, so there are at least one and at most \(\Delta-2\) neighbours of each of these vertices outside \(C\) and we can choose two different palettes. Then we choose the colours of the remaining edges, again like in the second pass of the Cycle colouring scheme. If \(C-P\) contains only one edge \(uv\) and \(d(u)=d(v)=2\), then we recolour the edge \(uu^{+}\), where \(u^{+}\) is a neighbour of \(u\) on \(C\) other than \(v\), with a new colour different from pink. We shall refer to this colour as blue. We choose a colour other than pink and blue for the edge \(uv\) and call it green. Then we choose the colours of the remaining edges, like in the second pass of the Cycle colouring scheme. Depending on what the starting subgraph \(G_{0}\) looks like and on the chosen colouring, we shall avoid the specific patterns during the remaining part of the algorithm. This will guarantee that \(G_{0}\) is stabilised and, given the colouring of \(G_{0}\), also fixed. Note that, in fact, there are only two types of starting subgraph: either an induced cycle with all incident edges, or an induced cycle with an attached path or ray, with all incident edges. In both cases, all the edges in \(G_{0}\) not contained in the cycle, path, or ray are assigned a colour other than pink. Let \(k\) be the length of the cycle. We shall use the name _gadget_ not only for the edge defined in Case 2, but also for the analogous edge in Case 1 (i.e. the one on the non-trivial path \(R\), incident to a vertex of \(C\)). Moreover, we shall refer to the pink path on the cycle in \(G_{0}\) as \(P\), regardless of whether it was formed in Case 1 or Case 2. We will also reuse the Cycle colouring scheme during the next part. 
In Case 2, the scheme started from some specific pre-coloured cycle, but we have never used the fact, what this initial colouring looked like. The main property of this scheme is that it will never produce another pink path of length \(|P|\) surrounded by green and blue, with or without a gadget (depending on the existence of a gadget in \(G_{0}\)). Therefore, we shall use it, starting with some other initial colourings. II. The iterative procedure We shall now iteratively extend the set of _reached_ vertices, i.e. the ones with a coloured incident edge, starting from \(G_{0}\). We shall execute the procedure until there are no uncoloured edges left. Let \(A\) be the set of the automorphisms which stabilise \(G_{0}\) and preserve the partial colouring we defined so-far. After each execution of the procedure, we shall guarantee that the following conditions are satisfied: 1. Each reached vertex is fixed pointwise with respect to \(A\). 2. If a vertex \(v\notin V(G_{0})\) has a pink incident edge, and it is the only coloured edge incident to \(v\), then this edge is not contained in any cycle of length \(k\). Note that these conditions are satisfied for the initial colouring of \(G_{0}\). The procedure starts by taking a reached vertex \(v\) with the smallest distance from \(G_{0}\), which has an uncoloured edge. We shall call the already coloured edges of \(v\) as back edges, the uncoloured edges to the reached vertices as horizontal edges and the remaining ones as forward edges. If none of the forward edges of \(v\) appear in any induced cycle of length \(k\), then we simply colour each forward edge of \(v\) with a different colour, avoiding pink if possible, and then each horizontal edge with an arbitrary colour other than pink. This is possible since there are at most \(\Delta-1\) of such edges, and it fixes pointwise each newly coloured vertices, so the conditions (A1) and (A2) are fulfilled. If there is an induced cycle of length \(k\) containing a forward edge of \(v\), then we first check the following conditions: 1. Each forward edge of \(v\) appears on a cycle of length \(k\). 2. All the lists of the forward edges of \(v\) are the same, and each of them contains pink. 3. There are \(\Delta-1\) forward edges. If any of these condition is not satisfied, then we can colour the forward edges with different colours either without using pink (C2 or C3) or we can use pink on the edge which does not appear in such cycle (C1). If, however, all these conditions hold, our further actions shall depend on the structure of \(G_{0}\). Let \(C^{\prime}\) be a cycle of length \(k\) that contains a forward edge of \(v\). If \(C^{\prime}\) contains also the unique back edge of \(v\), then this edge is not pink by (A2) and \(C^{\prime}\) has two fixed vertices by (A1). Therefore, we just need to realise the cycle colouring scheme from Case 2 and then \(C^{\prime}\) will be fixed pointwise as long as \(G_{0}\) is stabilized. There is only one exception: if \(G_{0}\) is a cycle with all edges except one coloured pink, and the cycle colouring scheme produced an identical copy of \(G_{0}\), then we change the colour of the blue edge to any other (including pink). Assume now that \(C^{\prime}\) is a cycle of length \(k\) that contains two forward edges of \(v\). We must ensure that the colouring of \(C^{\prime}\) will be different from the one in \(G_{0}\), otherwise \(G_{0}\) will not be stabilized. 
Therefore, we will again colour the whole \(C^{\prime}\) with all incident edges at once, along with the edges incident to \(v\). We colour the forward edges of \(v\) which are not in \(C^{\prime}\) with different colours other than pink and blue. Then we colour one of the edges on \(C^{\prime}\) incident to \(v\) pink, and the other one with any colour other than pink and blue. This last choice may be impossible if \(\Delta=3\) and the lists of both forward edges consist of exactly pink and blue. In this case, we colour both edges pink and continue to choose pink in both directions on \(C^{\prime}\), until possible. Then on each side, we colour one next edge (it may be the same one edge) so that at least one of them is not blue. Afterwards, we continue like for the other values of \(\Delta\), depending on the structure of \(G_{0}\). * If \(G_{0}\) has no gadget, or \(G_{0}\) has a gadget but the back edge of \(v\) is not pink, then, we just execute the cycle colouring scheme on \(C^{\prime}\). Note, that the cycle colouring scheme does not produce gadgets, so the back edge of \(v\) would be the only candidate for one. * If \(G_{0}\) has a gadget and the back edge \(e\) of \(v\) is pink, then by the assumption of the procedure, the edge \(e\) is not contained in any cycle of length \(k\). Hence, if we follow the cycle colouring scheme, then the only gadget created in this step can be \(e\). But the gadget in \(G_{0}\) was always incident to a blue edge, and there is no blue edge incident to \(v\), therefore we are safe to execute the cycle colouring scheme on \(C^{\prime}\). By the colouring of the two edges on \(C^{\prime}\) incident to \(v\), we broke all the automorphisms of \(C^{\prime}\), given that \(v\) was fixed. This and the cycle colouring scheme guarantee that all the reached vertices are fixed pointwise, so (A1) is satisfied. Moreover, we used the colour pink only on the cycle \(C^{\prime}\) or on some forward edge of \(v\) which does not belong to any cycle of length \(k\). This gives us (A2). We are left to show that we did not create a second copy of \(G_{0}\) throughout the iterative procedure. Assume otherwise, and denote by \(C^{\prime\prime}\) the cycle isomorphic to the cycle in \(G_{0}\). There must be a pink edge \(xy\) contained in a pink path \(P^{\prime\prime}\) of length \(|P|\) on \(C^{\prime\prime}\), surrounded by blue and green edges or one blue edge, and the edge \(xy\) does not belong to \(G_{0}\). Let us assume that \(xy\) is the edge incident to a blue edge on \(C^{\prime\prime}\). Consider the step of the procedure when this edge was coloured. In the procedure, we used the colour pink for an edge in a cycle of length \(k\) only when we coloured a cycle \(C^{\prime}\). We used the cycle colouring scheme, where the only possibility to create a pink path of length \(|P|\) surrounded by blue and green edges was if \(P\) had a gadget. But we ensured that the only pink edges incident to \(C^{\prime}\) lie on \(C^{\prime}\) itself, except for the currently processed vertex \(v\) which has no incident blue edges, and therefore we could not have created a gadget of \(P^{\prime\prime}\). We could not have created a cycle of length \(k\) with all pink edges except one blue, either, as any pink path of length \(k-1\) would be contained in \(C^{\prime}\), and this cycle is induced. This contradiction allows us to conclude that \(G_{0}\) is fixed after the procedure, hence also the whole graph \(G\). 
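For small finite graphs, the condition underlying this argument, namely that every non-identity automorphism maps some edge to an edge of a different colour, can be checked directly by enumerating automorphisms. The following Python sketch is an illustration only (the helper name and the data format are ours); it uses NetworkX to test whether a given edge colouring is distinguishing.

```python
import networkx as nx
from networkx.algorithms.isomorphism import GraphMatcher

def is_distinguishing(G, colouring):
    """Return True if the edge colouring breaks every non-identity automorphism of G.

    `colouring` maps frozenset({u, v}) -> colour for every edge uv of G.
    """
    for phi in GraphMatcher(G, G).isomorphisms_iter():   # all automorphisms of G
        if all(phi[v] == v for v in G):                   # skip the identity
            continue
        preserved = all(
            colouring[frozenset((u, v))] == colouring[frozenset((phi[u], phi[v]))]
            for u, v in G.edges()
        )
        if preserved:                                     # some non-identity map survives
            return False
    return True

# Example: the path P4 has a single non-identity automorphism (the reflection),
# and this two-colouring already breaks it.
P4 = nx.path_graph(4)
c = {frozenset((0, 1)): "red", frozenset((1, 2)): "red", frozenset((2, 3)): "blue"}
print(is_distinguishing(P4, c))   # True
```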
## 3 Trees **Theorem 5**.: _Let \(G\) be a tree with maximum degree \(\Delta\geq 3\). Then either \(G\) is a symmetric tree, \(G\) is a bisymmetric tree, or \(D^{\prime}_{l}(G)\leq\Delta-1\)._ Proof.: Like in the proof of Theorem 4 we can assume that \(\Delta\) is finite. We shall choose one vertex \(r\) and refer to it as the root. We shall use the standard notation and for any vertex \(u\), we shall call the incident edge on the unique path from \(u\) to \(r\) as the _back edge_, and all other edges incident to \(u\) as _forward edges_. We call a colouring of \((G,r)\) the _standard colouring_ if every vertex except \(r\) has all the forward edges coloured with distinct colours. We claim that any standard colouring which fixes \(N[r]\) (a closed neighbourhood of \(r\)) is a distinguishing colouring of \(G\). To see this, consider any vertex \(u\) outside \(N[r]\) (as the elements of \(N[r]\) are already fixed). Then there is a unique path from \(r\) to \(u\) through a neighbour \(v\) of \(r\). Consider the last vertex \(w\) on that path, starting from \(r\), which is fixed. If \(w\neq u\), then some automorphism maps one forward edge of \(w\) to another. But this is impossible, since these two edges, by the assumption, have different colours. This means that \(w=u\), and \(u\) must be fixed. The remaining of the proof will consist of a few cases where we shall find a suitable root vertex \(r\) and a standard colouring of \((G,r)\) with the above property. Note that having the edges incident to \(r\) coloured, it is straightforward to find a standard colouring of the graph, e.g. by considering the vertices of \(G\) one by one, from those closest to \(r\). We shall usually be doing a variation of such procedure, as we shall need some additional properties. **Case 1.** There is no vertex of degree at least two, with all incident edges sharing the same list. We choose an arbitrary vertex \(r\) and we colour all its incident edges with different colours. Let pink be one of these colours. We colour the second end-vertex \(v\) of this pink edge so that \(r\) and \(v\) have distinct palettes. This is possible, since the edges incident to \(v\), by our assumption, have different lists. Finally, we colour the remaining edges of \(G\) with a standard colouring without using pink. This is again possible by our assumption. In the following cases, we shall assume that there is a vertex which is not a leaf, such that all its incident edges share the same palette. **Case 2.**\(G\) contains a vertex \(v\) such that \(1<d(v)<\Delta\). We take such a vertex as a root \(r\) and colour all its incident edges with different colours. Then, we colour the remaining edges to get a standard colouring, with the condition that each vertex of degree \(d(r)\), apart from \(r\), has a distinct palette than that of \(r\). **Case 3.**\(G\) is the regular tree of degree \(\Delta\). Let \(r\) be an arbitrary vertex of degree at least two, with all incident edges sharing the same list. We start by colouring all the edges incident to \(r\) with the same colour, say pink. We shall ensure that \(r\) is the only vertex with all incident pink edges. Then, we iteratively fix the remaining vertices. During each iteration, we fix possibly only one vertex, but we choose a colour for multiple edges. Let \(v\) be a vertex which is not yet fixed, and is the closest to \(r\) among all such vertices. Let \(i\) be the smallest natural number such that all the currently coloured edges are contained in \(B(r,i)\) (i.e. 
the ball of radius \(i\) centred at \(r\)). We choose a vertex \(w\) which is a descendant of \(v\) in \(B(r,i)\setminus B(r,i-1)\). If there is such a vertex \(w\) that has a forward edge with pink on its list, then we pick this vertex and colour that edge pink. Otherwise, we choose any such \(w\) and pick an arbitrary colour (say red) for any of its forward edges. Then, we colour all the remaining uncoloured edges in \(B(r,i)\) with arbitrary colours, such that: * if \(w\) has a red forward edge, then the colour red is not used on the forward edges of the vertices in \(B(r,i)\setminus B(r,i-1)\), and * if \(w\) has a pink forward edge, then we do not use the colour pink, and * if \(w\) has a red forward edge, then each vertex in \(B(r,i)\) except \(r\) has at most one pink forward edge. After these steps, \(w\) is the only vertex in a distance \(d(r,w)\) from \(r\) with a pink (or red) forward edge. Therefore, \(w\) is fixed, and so are all the vertices between \(r\) and \(w\) (including \(v\)). Since \(\Delta\geq 3\), we did not create vertices with all incident pink edges, apart from \(r\). Repeating these steps, we fix all the vertices of \(G\). **Case 4.**\(G\) is not regular, and the degree of every vertex of \(G\) is in \(\{1,\Delta\}\). We consider three subcases: **Case 4a.**\(G\) is finite. Then \(G\) contains either a central vertex or a central edge. If \(G\) has a central vertex \(r\), then, as \(G\) is not a symmetric tree, \(G-r\) must contain two rooted subtrees which are not isomorphic. We colour the edges incident to \(r\) with distinct colours, except possibly two edges to two non-isomorphic subtrees. For the remaining edges, we use a standard colouring. If \(G\) has a central edge \(xy\), we choose an arbitrary colour for that edge. Since \(G\) is not a bisymmetric tree, among all the rooted subtrees of \(G-e\), there must be two which are non-isomorphic. The roots of these subtrees are either the neighbours of the same end-vertex of the central edge, say \(x\), or of two different end-vertices of the central edge. In both cases, we can colour the remaining edges incident to \(y\) with different colours, and the same with \(x\) (possibly using the same colour on the edges to the non-isomorphic subtrees), so that the palettes of \(x\) and \(y\) are different. Then we can continue with a standard colouring. **Case 4b.**\(G\) contains a ray but not a double ray. Let \(r\) be any non-leaf vertex on the unique ray of \(G\). All but one subtree of \(r\) must be finite, since otherwise \(G\) would have a double ray. We colour all the edges from \(r\) to its finite subtrees with different colours, and we choose any colour, say pink, for the last edge incident to \(r\). Then, we complete this colouring to a standard colouring, with the additional condition that any forward edge to an infinite subtree has a colour other than pink. For any considered vertex, there will be at most one such forward edge, so this is possible, and it guarantees that \(r\) is fixed. **Case 4c.**\(G\) contains a double ray. Since \(G\) has a leaf, there exists a vertex \(r\) that lies on a double ray and has a finite subtree (and also two infinite ones). We try to choose different colours on the edges incident to \(r\), and if it is impossible, then we repeat the colour on the edges to two non-isomorphic subtrees. Note that there is still an edge from \(r\) to an infinite subtree with a different colour than the one to the finite subtree. 
Then, we continue with a standard colouring, with the additional condition that if for some vertex \(r^{\prime}\) we are forced to use the same palette as \(r\), and there is an automorphism mapping \(r^{\prime}\) to \(r\), then we use on the forward edge to the finite subtree a different colour than \(r\) has. Note that the back edge of \(r^{\prime}\) leads to the subtree containing \(r\), hence, to an infinite one. Therefore, the finite subtree, the existence of which is guaranteed by the automorphism, must be attached to one of the forward edges. ## 4 Exceptional graphs For completeness, we append this short section about the locally finite graphs not covered by Theorems 4 and 5. We state the following theorems without proofs, as they are straightforward analogues of the proofs for the non-list distinguishing index, see [12]. **Theorem 6**.: _Let \(G\) be the cycle of length \(n\). Then \(D^{\prime}_{l}(G)=D^{\prime}(G)=3\) if \(n=3,4,5\), or \(D^{\prime}_{l}(G)=D^{\prime}(G)=2\) otherwise. Moreover, for \(n=3,4,5\) the only lists of length 2 which do not yield a distinguishing colouring are the identical ones._ **Theorem 7**.: _Let \(G\) be the double ray, a symmetric tree, a bisymmetric tree, \(K_{4}\), or \(K_{3,3}\). Then \(D^{\prime}_{l}(G)=D^{\prime}(G)=\Delta(G)\). Moreover, the only lists of length \(\Delta-1\) which do not yield a distinguishing colouring are the identical ones, except for bisymmetric trees, where the central edge may have an arbitrary list (and the remaining ones must be identical)._
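The small cases in Theorem 6 can also be confirmed by brute force. The sketch below is again only an illustration with our own helper names; it enumerates all list colourings of a short cycle and reports whether some choice from the lists breaks every non-identity automorphism. For \(C_{5}\) it returns False for identical lists of size 2 and True as soon as a single list differs, as the theorem states.

```python
from itertools import product
import networkx as nx
from networkx.algorithms.isomorphism import GraphMatcher

def automorphisms(G):
    """All non-identity automorphisms of G, as node -> node dictionaries."""
    return [phi for phi in GraphMatcher(G, G).isomorphisms_iter()
            if any(phi[v] != v for v in G)]

def admits_distinguishing_choice(G, lists):
    """Can some colouring c with c(e) in lists[e] break every automorphism?"""
    edges = [frozenset(e) for e in G.edges()]
    autos = automorphisms(G)
    for choice in product(*(lists[e] for e in edges)):
        c = dict(zip(edges, choice))
        if all(any(c[frozenset((u, v))] != c[frozenset((phi[u], phi[v]))]
                   for u, v in G.edges()) for phi in autos):
            return True
    return False

C5 = nx.cycle_graph(5)
identical = {frozenset(e): ("a", "b") for e in C5.edges()}
varied = dict(identical)
varied[frozenset((0, 1))] = ("a", "c")               # one list differs from the rest
print(admits_distinguishing_choice(C5, identical))   # False: D'_l(C5) = 3
print(admits_distinguishing_choice(C5, varied))      # True, as in Theorem 6
```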
2310.03286
Bridging HPC and Quantum Systems using Scientific Workflows
Quantum Computers offer an intriguing challenge in modern Computer Science. With the inevitable physical limitations to Moore's Law, quantum hardware provides avenues to solve grander problems faster by utilizing Quantum Mechanical properties at subatomic scales. These futuristic devices will likely never replace traditional HPC, but rather work alongside them to perform complex tasks, utilizing the best of decades of HPC and quantum computing research. We leverage the capabilities of scientific workflows to make traditional HPC and Quantum Computers work together. To demonstrate this capability, we implemented three algorithms: Grover's Search Algorithm, Shor's Factoring Algorithm, and a 4-node Traveling Salesman Algorithm. The algorithms' implementation and generated inputs are sent from ORNL HPC to IBMQ, the algorithms run on IBMQ, and the results return. The entire process is automated as a workflow by encoding it into the Parsl parallel scripting and workflow platform.
Samuel T. Bieberich, Ketan C. Maheshwari, Sean R. Wilkinson, Prasanna Date, In-Saeng Suh, Rafael Ferreira da Silva
2023-10-05T03:28:32Z
http://arxiv.org/abs/2310.03286v1
# Bridging HPC and Quantum Systems using Scientific Workflows ###### Abstract Quantum computing offers intriguing potential to solve certain kinds of problems with unprecedented speed. Quantum computers are unlikely to replace classical computers in the future, but may work in tandem with them to perform complex tasks by utilizing their complementary strengths. Indeed, most quantum computers today are made available to users via cloud-based Application Programming Interfaces (APIs) which must be called remotely from classical computers. Unfortunately, this usage model presents obstacles for seamless application execution connecting quantum computers with classical High Performance Computing (HPC) systems. Workflow management systems can help overcome these obstacles. In this work, we apply the scientific workflows paradigm to bridge the gap between quantum and classical computing - specifically, between the quantum and HPC systems available through the Oak Ridge Leadership Computing Facility (OLCF). We provide three fully automated foundational examples for demonstration: the Traveling Salesman Problem, Grover's Search Algorithm, and Shor's Factoring Algorithm. We employ workflows to generate inputs from OLCF's HPC systems and transfer them to IBM Quantum systems in the cloud, where the quantum calculations produce results which return to OLCF for post processing. This workflows-based approach provides additional benefits including _(a)_ end-to-end programmatic automation of the entire process, _(b)_ an out-of-the-box tool for interfacing with HPC schedulers and quantum middleware, and _(c)_ concurrency of independent tasks such as running the same algorithm over a simulator and a real quantum device simultaneously. Although the current technological limitations of quantum computers prevent the use of these algorithms to solve real-life problems at scale, the workflows-based approach nevertheless unites these two powerful computing paradigms in a way that shows immense promise for the future. ## 1 Introduction We propose a new approach to the problem of combining quantum computing with traditional HPC. Our workflow generates inputs on the HPC side, sends them to IBM Quantum (IBMQ) systems in the cloud, collects the results, and brings them back to the HPC site. The code and other implementation artefacts are publicly available on Github [5]. On the HPC side, we use Crusher, a precursor to the upcoming Frontier supercomputer, and Andes, a commodity cluster at OLCF, for the experiments presented in this paper. Crusher [6] is OLCF's moderate-security system that contains identical hardware and similar software to the Frontier system (the first exascale HPC system). It is used as an early-access testbed for the Center for Accelerated Application Readiness (CAAR) and Exascale Computing Project (ECP) teams as well as OLCF staff and the vendor partners. The system has 2 cabinets, the first with 128 compute nodes and the second with 64 compute nodes, for a total of 192 compute nodes. Each compute node is equipped with a 64-core AMD EPYC 7A53 "Optimized 3rd Gen EPYC" CPU and four AMD MI250X GPUs, each with 2 Graphics Compute Dies (GCDs) for a total of 8 GCDs per node with access to 64 GiB of HBM2E, 512 GB of DDR4 memory, and a connection to a 250 PB GPFS scratch filesystem. The rest of the paper is organized as follows. Section 2 describes the algorithm, implementation, and workflow-bridging schema for the Traveling Salesman Problem. Sections 3 and 4 describe Grover's search and Shor's factoring algorithms, respectively. 
Both implementations follow the same workflow-bridging schema as described in section 2. Section 5 describes the related work from both research and industry. Finally, section 6 presents our conclusions and future work. ## 2 Traveling Salesman Problem The Traveling Salesman Problem (TSP) is a well known fundamental optimization problem with significant practical importance. Classified as an NP-Hard problem, the TSP is not solvable in polynomial time, meaning as more nodes are added, the problem gets exponentially harder for classical computers [7]. The TSP asks for the fastest way to visit a number of cities \(N\) (also referred to as nodes), given the distances between them, and make it back, while traveling the shortest distance. This results in \((N-1)!\) different routes that may be taken. While there are many current algorithms to solve TSP implementations with relatively few nodes, even the largest supercomputers are unable to find the best distance with hundreds of nodes in polynomial time. Figure 1: Overall workflow schema between traditional HPC and Quantum Systems. Figure 2 shows an example of a randomized map for four cities generated using NetworkX [8]. The TSP is not hard to compute with a calculator at four nodes, much less a supercomputer; however, the limited access to powerful quantum computers led us to create the circuit for only four nodes. Each time our code was implemented on HPC, one of these NetworkX maps was imported to our local laptops, and when the quantum jobs were completed, we checked the answers. Roughly rectangular maps such as the one pictured in Figure 2 often lead to two paths with very similar distances. This error is accepted in current TSP algorithms on classical computers (albeit with many more nodes), and thus we assumed that TSPs with that particular shape could have two sufficiently correct paths. ### Algorithm The process for designing the TSP 4-node circuit utilizes phase estimation. Phase estimation is a method that allows users to read information about an operation from qubits in superposition [9]. We initially attempted to use Quantum Approximate Optimization Algorithms (QAOA), but these algorithms did not offer the exact answer we were pursuing given our limited access to quantum qubits [10, 11]. It is worth noting that these algorithms could be better for very large TSP problems [10]. The algorithm we used utilized matrices that would be brute-forced by classical computers and converted to phases. These phases are often represented on the Bloch Sphere, a well-known 3D representation of how qubits physically work, and are synonymous with rotations around an axis of the sphere. After getting unitary matrices for each of the four nodes, these can be converted to high-level gates in a quantum algorithm, primarily composed of Controlled-Not (CNOT), Rotation, and SWAP gates. A concurrent step involves determining the Eigenstates. As aforementioned, there are \((N-1)!\) paths in a TSP, where \(N\) is the number of nodes being tested. Each path can be mapped to a unique Eigenstate, which for the rest of the program must be represented in binary, so both the classical and quantum computers can read it. The paths are converted to binary Eigenstates via the function \(i(j)\), which defines the TSP. \[|\psi\rangle=\otimes_{j}|i(j)-1\rangle \tag{1}\] For example, in the path 1-2-3-4-1, if the path from node 2 to node 3 is taken, then \(i(3)=2\); thus \(j\) is the number for the node you are traveling to. 
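As a concrete illustration of this encoding, the short Python sketch below (our own helper with a hypothetical function name, not the code used in the experiments) maps a four-node tour to its binary Eigenstate.

```python
def path_to_eigenstate(path):
    """Hypothetical helper: encode a 4-node TSP tour as the binary Eigenstate of Eq. (1).

    `path` lists the nodes in visiting order, e.g. [1, 2, 3, 4] for the tour 1-2-3-4-1.
    For each destination node j, i(j) is the node travelled from, and the factor
    |i(j)-1> is written as a 2-bit string (4 nodes -> values 0..3 -> 8 bits in total).
    """
    n = len(path)
    i_of = {}                                    # j -> i(j)
    for k in range(n):
        frm, to = path[k], path[(k + 1) % n]     # edge travelled: frm -> to
        i_of[to] = frm
    return "".join(format(i_of[j] - 1, "02b") for j in range(1, n + 1))

print(path_to_eigenstate([1, 2, 3, 4]))   # '11000110', the Eigenstate quoted for 1-2-3-4-1
```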
After taking the tensor product for each value 1 through 4, the Eigenstate is completed. Figure 2: Example NetworkX map for a four-node TSP. To further optimize the program, it can be proven that the paths 1-2-3-4-1 and 1-4-3-2-1 are the same, thus the number of Eigenstates can be reduced to \((N-1)!/2\), or 3 for a four-node TSP. ### Implementation Figure 3 shows the high-level gates for the 4 node TSP circuit. Using the Phase Estimation method, there are two registers, the Unit and Eigenstate (shortened to Eigen in Figure 3) registers. The quantum part of the algorithm can be split into four parts. The Unit register is initialized with Hadamard Gates, putting each qubit into a superpositioned state. The Eigen register, as expected, is initialized based on the Eigenstate the circuit is testing. Because there are three Eigenstates, there are three circuits in this algorithm that are tested altogether. Also, because the Eigenstates in binary are eight digits long, the register is composed of 8 qubits. Going from index 0 to 7 of the binary Eigenstate, an X gate is applied to the initialization step of the Eigen register at the corresponding qubit index. X gates are fundamentally identical to Boolean NOT gates, and flip the initial states of the qubits from 0 to 1, offering an "input" to the TSP circuit. The second step of the process is the actual Phase Estimation. As explained above, the Phase Estimation part of the program can be decomposed into matrices which, in conjunction with the QFT, convert the Eigenstates to eigenvalues that can be read by a computer. Phase Estimation circuits are composed of Controlled-Unitary (CU) gates, which accept a control value from the unit register; if the control value is measured as a 1, the unitary runs [9]. Figure 3: Circuit for 4 node TSP with Eigenstate 11000110, or path 1-2-3-4-1. There are the same number of CU gates as there are qubits in the unit register. It is worth mentioning that the unit register, unlike the Eigen register, can be increased or decreased in size. The code from Qiskit's Alpha Textbook used 6 qubits, and through testing, we determined that adding more qubits only marginally increased accuracy, and decreasing the number of qubits sacrificed accuracy (see Section 2.4). The third step of the process is the Inverse Quantum Fourier Transform. The \(QFT^{-1}\) finishes the conversion from Eigenstates to eigenvalues, and prepares the unit register to be measured. Lastly, the results are measured to a classical register. ### HPC-Quantum Workflow In this section, we describe the overall workflow for the TSP. The workflow scheme remains the same for the other algorithms; however, it was arguably most complex for the TSP. After completing the code and basic sanity tests on local hardware, we uploaded the Jupyter Notebook code to the HPC system, enabling rapid prototyping. For each algorithm, we needed to save our IBMQ accounts with our unique API tokens. After this code ran, we were able to use the "load account" function from the Qiskit library to load our credentials, rather than leaving the long API token in the file. We did this due to security concerns, as we did not want to have to change our API token very often, as then each Jupyter notebook would need to be adjusted for our testing with the other algorithms running concurrently. After loading the credentials, the rest of the code was split into five main steps: * The code to create random coordinates for each of the four nodes and then graphing them into the TSP format. 
A file is created that sends a picture of the node map to the local computer. * A Python function reads the coordinates from the map and finds the distances between each node. Another, overarching function converts these distances into a matrix, then a Python list, so that it can interface correctly with the Controlled-U gate creation function. * The rest of the circuit is built via Qiskit, including the inverse QFT, which, to conserve gates and allow for scalability, was produced automatically by the Qiskit library in the TSP program. * The three circuits are sent together, as a list, to IBMQ. Each time, _ibmq_kolkata_ was the least busy processor, so each test (besides the simulations) was run on it. We ran the circuits four times, each taking three to six hours in the queue and approximately 2 and a half minutes to run, with 4,000 shots (default value). * The outputs are read via two variables: the most frequent counts are determined, and functions find which path is the shortest and verify that it is right. The results are printed in the terminal. IBMQ creates a histogram on their web portal, allowing for a more readable graph than the Qiskit histograms that are readable from the terminal. The entire aforementioned workflow was automated using Parsl [12], a popular workflow management tool. In addition to automating the workflow, Parsl allows Python functions to run concurrently to increase the speed of implementation. We were able to complete the workflow with several steps running concurrently. For instance, the NetworkX map process takes several seconds, so we organized it to run concurrently with the quantum circuit initialization. We were able to run the circuit on the IBMQ QASM simulator and the ibmq_kolkata quantum computer _simultaneously_, allowing predictive results from the simulator to be generated while the circuit waits in the queue for the real quantum computer. We refer readers to our Github codebase for further implementation details [5]. ### Results The results are printed as binary strings that are 6 digits long (one for each qubit). Due to the nature of the Phase Estimation, the largest numbers represent the shortest paths that may be taken. For example, in one test we ran, the largest value was 35, which was returned from circuit one, associated with Eigenstate 11000110, or 1-2-3-4. This means that the shortest path through the TSP is 1-2-3-4-1. In terms of measured results, we tested the circuit first through HPC and IBMQ's ibmq_kolkata 27-qubit quantum computer, then via local simulation and IBMQ's ibmq_qasm_simulator, which can operate on up to 32 qubits. The real quantum computer, as exhibited in Figure 4, features a significant amount of noise, rendering the results inconclusive. The correct result for the TSP generated with this run should be 101000, which has a greater frequency than many of the other measured values; however, it is not the highest, and almost every measured value is represented far too often. Due to the size of the circuit, too much noise was likely introduced, resulting in significant error. On the other hand, the ibmq_qasm_simulator proved to offer much more accurate results. With the same path, 1-2-3-4-1, the true result of 101000 was consistently returned. The reason this result was so much more accurate has to do with the composition of the QASM simulator. Figure 4: 1-2-3-4 path TSP circuit results from ibmq_kolkata. Correct answer highlighted. The QASM simulator runs on 
a classical computer via the Cloud, and while it does model noise, it supports all of the gates in the circuit we wrote, meaning the compiler step does not need to split the CU gates into thousands of simpler gates, rather it only has hundreds. It ignores the calibration issues that modern quantum computers need, and assumes that all gates are connected, decreasing gate count by several magnitudes. Full results, along with results for the Grover's and Shor's algorithms code, are publicly available online [5]. ## 3 Grover's Search Algorithm Grover's algorithm finds an item in an unsorted list of length \(N\) using only \(O(\sqrt{N})\) operations, as opposed to \(O(N)\) operations on average for a classical computer [13]. In a nutshell, the algorithm corresponds to a bar graph with one bar representing each index of the computational list [14]. The oracle in the Grover's algorithm finds the value being searched for and flips it from a value of 1 to -1. Then, it finds the average of all of the values in the list, and flips each index over said value. This way, the index at which the value is at has a significantly larger magnitude than all other indices, thus making it easy to identify. ### Implementation Quantum programs in IBMQ's Qiskit programming language are composed of quantum circuits. These circuits are composed of a variety of quantum gates, similar in concept to Boolean Logic gates in Digital Computers. These circuits are read from left to right, and are composed like musical staves, with one qubit represented by each horizontal line. Figure 5 shows an example of the most basic part of Grover's Algorithm for the integer value 15. (Each section is separated by barriers for formatting purposes.) The first section uses Hadamard (H) gates to put each qubit into superposition. The second section is a manually created Controlled-controlled-controlled-Z gate (CCCZ), with controls on qubits 0-2 and a Z gate applied to q3. This section of the circuit is the Oracle, and changes depending on the value input. Oracles are often referred to as "Black Boxes", and are created in most cases by the processor. The rest of the program is designed to help you discover what the oracle is. If the value were 0, each qubit would have X (NOT) gates applied on either side of the CCCZ gate. The third section is the Amplification function. This is the part of the algorithm where each value is flipped over the average of all indices [15]. Lastly, the fourth and final section of the circuit measures each qubit. Each of the four measurement gates convert the qubits from their quantum register to a classical register, which a regular computer can then read as a 1 or 0. Since there are four qubits, the integer values 0-15 can be returned. Grover's Algorithm is split into three steps when explained, step one being Initialization, then a Grover Operation, then Measurement. The second and third sections of our circuit are combined into one statement, the Grover Operator. Grover's Algorithm is most accurate when the Grover Operator is repeated \(\sqrt{N}\) times, where N is the number of qubits. Since our implementation of Grover's Algorithm has 4 qubits, the Grover Operator must be completed twice before measurements are made. ### Results We designed the quantum circuit and the rest of the program in Jupyter Notebooks with the IBMQ's Qiskit [16]. 
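A minimal Qiskit sketch of such a circuit is shown below. It is our own reconstruction for illustration, following the description above (the function name, the qubit-to-bit convention, and the choice of two Grover iterations are assumptions rather than the authors' exact notebook); the resulting circuit can then be submitted to a simulator or an IBMQ backend.

```python
from qiskit import QuantumCircuit

def grover_circuit(marked, n=4, iterations=2):
    """Hypothetical helper: n-qubit Grover circuit flagging the integer `marked`."""
    qc = QuantumCircuit(n, n)
    qc.h(range(n))                                # uniform superposition
    for _ in range(iterations):                   # Grover operator, repeated
        # Oracle: phase-flip |marked>, i.e. a CCCZ dressed with X gates on the 0-bits
        zeros = [q for q in range(n) if not (marked >> q) & 1]
        if zeros:
            qc.x(zeros)
        qc.h(n - 1)
        qc.mcx(list(range(n - 1)), n - 1)         # multi-controlled X between H's acts as CCCZ
        qc.h(n - 1)
        if zeros:
            qc.x(zeros)
        # Diffuser: reflect all amplitudes about their average
        qc.h(range(n))
        qc.x(range(n))
        qc.h(n - 1)
        qc.mcx(list(range(n - 1)), n - 1)
        qc.h(n - 1)
        qc.x(range(n))
        qc.h(range(n))
    qc.measure(range(n), range(n))                # read out to the classical register
    return qc

qc = grover_circuit(15)   # oracle for the value 15, as in Figure 5
# qc can now be run on a simulator or sent to an IBMQ backend.
```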
To run the program via a workflow encompassing HPC and quantum cloud computers, we first uploaded the completed program on the HPC side by adding it to a GitHub repository and then pulling the code to a terminal. The program randomly chose a number between 0 and 15. Once this number was chosen, it was printed and saved as a variable (to compare with results), then a loop created the unique Grover Oracle. Once this was complete, the circuit was sent to IBM's ibmq-belem 5 qubit quantum computer. After 1024 shots of the circuit, the results are brought back to the HPC side, which makes a histogram displaying the results. On average, 92% of the shots returned the number input. ## 4 Shor's Factoring Algorithm More than any other quantum algorithm, Peter Shor's factoring algorithm has created the most buzz for physicists and computer scientists. Current encryption techniques, such as the prevalent RSA encryption in everything from governmental to financial resources, are composed of keys made of the factors of RSA-2048, a 2048 bit number. These factors are still unknown, and almost impossible to find, because they are both prime numbers. This makes RSA-2048 a semi-prime integer, the hardest to factor, as it is divisible by nothing but those two numbers. Shor's algorithm is composed of a series of steps, starting with a classical computer, then transferring a circuit to a quantum computer, and finally reading the results on a classical computer to determine if the circuit needs to be run on the quantum device again with a different guessed value. The process is based on an Figure 5: IBMQ Matplotlib circuit for Grover’s Algorithm, with one implementation of the Grover Operator. Value being searched for is 15. algorithm that has long been theorized, but is very difficult to implement at a large scale on regular computers: the Period Finding Problem. For a given number \(N\) that we wish to factor and a randomly selected number \(a\) (\(1<a<N\)), the period finding problem states that there exists a number \(r\) such that \(a^{r}\mod N=1\). This leads to the greatest common divisor of \(a^{r/2}\pm 1\) and \(N\) being one of the prime factors of \(N\). The steps incurred in the Shor's algorithm are: 1. Pick a random number \(a\) between 1 and \(N\), where \(N\) is the number being factored. 2. Compute the greatest common divisor (GCD) of \(a\) and \(N\). 3. If the GCD of \(a\) and \(N\) is not equal to 1, then \(a\) is one of the factors as required. The other factor can be computed by dividing \(N\) by \(a\). 4. Else, run the quantum period finding subroutine on a quantum computer with \(N\) and \(a\) as the inputs. 5. Determine the period \(r\) by interpreting the results from the quantum period finding subroutine on a classial computer. 6. If the \(r\) is 1, redo steps 1-5 with a different value for \(a\). 7. If \(r\) is odd, restart the process with a different value for \(a\). 8. If \(r\) is neither 1 nor odd, compute the GCD of \(a^{r/2}\pm 1\) and \(N\). 9. The GCD should be one of the factors of \(N\) as required. Divide \(N\) by the GCD returns the second factor as well. Using these steps, RSA-2048 and other large semi-primes could be factored in the future, though thousands of coherent qubits and an equally large quantum volume would be needed to implement this program for such a large number. Presently, quantum computers can factor semi-primes up to 21 only [13]. 
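The classical control flow around the quantum subroutine can be summarized in a short sketch. The Python below is our own illustration: `find_period` stands in for the quantum period-finding circuit submitted to IBMQ, and a classical brute-force version is used here only to make the sketch runnable.

```python
import math
import random

def shor_classical_driver(N, find_period):
    """Classical wrapper around the period-finding subroutine (steps 1-9 above)."""
    while True:
        a = random.randrange(2, N)                  # step 1: random a with 1 < a < N
        g = math.gcd(a, N)                          # step 2
        if g != 1:                                  # step 3: lucky guess, done
            return g, N // g
        r = find_period(a, N)                       # steps 4-5: quantum part
        if r == 1 or r % 2 == 1:                    # steps 6-7: retry with a new a
            continue
        half = pow(a, r // 2, N)
        f = math.gcd(half + 1, N)                   # step 8: try a^{r/2} + 1 ...
        if f in (1, N):
            f = math.gcd(half - 1, N)               # ... then a^{r/2} - 1
        if f not in (1, N):
            return f, N // f                        # step 9

def brute_force_period(a, N):
    """Classical stand-in for testing: smallest r with a^r = 1 (mod N)."""
    r, x = 1, a % N
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

print(shor_classical_driver(15, brute_force_period))   # e.g. (3, 5) or (5, 3)
```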
### Implementation The circuit for Shor's algorithm that we used utilizes two qubit registers, one that encompasses qubits 0-2, called the work register, and the second encompassing qubits 3-6, called the control register. Only the work register is measured in the end. Like the Grover's algorithm circuit, Shor's circuit can be broken into four distinct sections: Initialization, Modular Exponentiation, Inverse Quantum Fourier Transform (\(QFT^{-1}\)), and Measurement, as shown in Figure 6. In the Initialization step, all three qubits in the work register are put into superposition, and a NOT gate is applied to the final qubit in the control register. The Modular Exponentiation stage uses \(U^{2^{j}}\) gates to perform a Quantum Phase Estimation on the three work register qubits, resulting in the work register ending in a state \(a^{x}\bmod N\). The Inverse QFT takes these values and creates interference between these states, converting the current circuit value to a Fourier basis [17]. \[QFT|x\rangle=\frac{1}{\sqrt{N}}\sum_{y=0}^{N-1}e^{2\pi ixy/N}|y\rangle \tag{2}\] Lastly, the measurement stage, like in Grover's algorithm, measures the results and sends them to a classical register. Note that the measurement gates only apply to qubits 0, 1, and 2. ### Results The larger a circuit in Qiskit is, the more computational time it takes to run and thus the more noise the delicate qubits running the operations are exposed to. Figure 8: Results for Shor's algorithm with a = 13, on IBM's QASM simulator (local) backend. Figure 6: IBMQ circuit for Shor's algorithm, with a = 7, on 7 qubits. Figure 7: Results for Shor's algorithm with a = 13, on ibm_nairobi. 
This helps to mitigate or eliminate the incredibly technical elements that must take place to set up and read the results from quantum algorithms, which could slow research such as ours. Expanding on this theme, [21] detailed the running of a similar "The Total Weighted Tardiness Problem", which is NP-hard, defines a series of tasks with due dates which must be completed on a machine, with the goal to determine in which order to complete the tasks to minimize tardiness. Quantum annealing devices like D-Wave's are well-adapted to these problems, as they do not require the running of every combination of sequences to obtain the lowest result. The D-Wave quantum computer has been used to solve a wide range of problems across various complexity classes such as training machine learning models [22, 23, 24], protein folding [25], graph partitioning [26] and portfolio optimization [27]. Combined with our work, optimal workflow paths may be determined when a multitude of jobs are queued. On cloud resources like IBMQ, this may help reduce wait times, while offering a practical application for secondary quantum systems. Researchers at ORNL and Alpine Quantum Technologies recently published a work outlining the feasibility of HPC-Quantum Computing hybrid processes in computing centers [28]. The outcomes reinforce that the hardware and software for HPC and quantum computers are compatible with current technology, and that the most significant limitation is in the quantum hardware, which suffers from scaling and qubit coherence issues. Lastly, in the industry, a few companies are already committed to making changes in the HPC world to implement a hybrid HPC and quantum computing environment. One of the leaders in this field is Finland's IQM, one of Europe's leading quantum computing firms. In their two part series "The Way Forward: Bringing HPC and Quantum Computing Together", they offer a three step process to combine current HPC hardware and quantum computing resources [29]. The first step involves identifying ways that quantum algorithms can optimize HPC workflows, such as in quantum simulations, weather tracking and prediction software, or optimization problems. The second and third steps look at mid and long term goals, for example, designing the systems needed based off of the research goals and finally implementing a workflow between the supercomputers and quantum computer. They propose that the best way to ensure the smooth flow of the process is to have HPC devices and quantum computers in the same location, helping with troubleshooting and creating further interest in the quantum capabilities the center has access to. ## 6 Conclusions and Future Work We demonstrate the practicality of combining HPC and remote quantum resources for certain applications that require both resources. We use the Parsl workflow manager as we found it most conveniently adapted to the python platform. However, a variety of workflow platforms are available and we believe the same results may be achieved with any modern scientific workflow management system. In other words, our work is agnostic to specific solutions used and a validation for the workflows paradigm in general. Quantum devices are evolving and will likely act as auxiliary processors alongside the CPU and GPU. New low-level APIs to use such devices might be developed in the future. Currently, though, workflow systems offer a familiar and promising approach to combining the two. 
In the future, we would like to test the same or similar algorithms on Quantinuum and Rigetti devices. The algorithms for both Shor's and the TSP required more credits than we were allotted for the project. Thus, we switched to IBMQ's resources, of which we had a significantly larger quota. If we had full access to Quantinuum or Rigetti, it could offer insight into running workflows with changes in language. While IBMQ's quantum computers all take code in Qiskit, Quantinuum uses QASM and Rigetti uses pyQuil. Offering a workflow that could convert a quantum circuit from any one of these languages to the others would allow users much greater access to quantum machines, from Quantinuum's H1-1 to Rigetti's Aspen QPU. As research in Quantum Error Correction has accelerated in recent years, we may expect further optimization in new quantum processors, allowing larger circuits to run with less error and creating results more on par with the simulators used in this project. _Acknowledgments._ We acknowledge the use of IBM Quantum services for this work. The views expressed are those of the authors, and do not reflect the official policy or position of IBM or the IBM Quantum team. This research used resources of the Oak Ridge Leadership Computing Facility, which is a DOE Office of Science User Facility supported under Contract DE-AC05-00OR22725.
2305.07256
Relation between circular photon orbits and the stability of wormholes with the thin shell of a barotropic fluid
We cut a general, static, spherically symmetric spacetime and paste its copy to make a wormhole with a thin shell of any barotropic fluid in general relativity. We show that the stability of the thin-shell wormhole is characterized by a set of circular photon orbits called an (anti)photon sphere in the original spacetime if a momentum flux passing through a throat is prohibited. Our result will be useful to classify the stability of the thin shell on the throat against linearized spherically symmetric perturbations.
Naoki Tsukamoto, Takafumi Kokubu
2023-05-12T05:18:09Z
http://arxiv.org/abs/2305.07256v2
Relation between circular photon orbits and the stability of wormholes with the thin shell of a barotropic fluid ###### Abstract We cut a general, static, spherically symmetric spacetime which satisfies a generalized Birkhoff's theorem and paste its copy to make a wormhole with a thin shell of any barotropic fluid in general relativity, and we investigate the stability of the thin shell on the throat against linearized spherically symmetric perturbations. We show that the stability of the thin-shell wormhole satisfying a transparency condition, which prohibits momentum flux from passing through the throat, is characterized by circular photon orbits, called the (anti)photon sphere, in the original spacetime. ## I Introduction Recently, the LIGO and VIRGO Collaborations have detected gravitational waves from compact objects [1], and the Event Horizon Telescope Collaboration has detected the shadows of black hole candidates at the centers of the giant elliptical galaxy M87 [2] and of the Milky Way [3]. The study of compact objects with strong gravitational fields, in both theoretical and observational aspects, will be important for understanding our universe. Static, spherically symmetric compact objects such as black holes and wormholes can have unstable (stable) circular photon orbits, called a photon (antiphoton) sphere [4; 5]. The photon sphere plays important roles in several phenomena in a strong gravitational field: dim images near compact objects [6; 7; 8], the image of a star collapsing into a black hole [9; 10], the photon absorption cross section [11], quasinormal modes [12; 13], centrifugal force and gyroscopic precession [14; 15; 16; 17], and Bondi's sonic horizon of a radial fluid [18; 19; 20; 21; 22; 23] are all related to the photon sphere. Recently, features of circular photon orbits such as their radius [24], number [25; 26], and stability [25; 27] have been investigated. A stable circular photon orbit might cause an instability of compact objects through the slow decay of linear waves [28; 29; 30]. A wormhole is a hypothetical spacetime structure with non-trivial topology which is permitted in general relativity [31; 32]. A wormhole connects two regions of one universe, or two universes, by its throat. Wormhole solutions must be stable if they are to exist in nature. In Refs. [33; 34], the Schwarzschild spacetime was cut and two copies were pasted together with a thin shell [35; 36] to construct a wormhole solution by the Darmois-Israel matching [36; 37], and the stability of the thin-shell wormhole was studied. Following Refs. [33; 34], the stability of thin-shell wormholes constructed from general static, spherically symmetric spacetimes [38; 39], plane symmetric spacetimes [40], asymmetric spacetimes [39; 41; 42], cylindrical spacetimes [43], higher-dimensional spacetimes [44], and lower-dimensional spacetimes [45] was investigated. We note that the stability of wormholes depends on the gravitational theory [46]. Recently, the stability of traversable thin-shell wormholes has been investigated in detail in Refs. [47; 48]. In 2000, Barcelo and Visser [49] cut a class of static, spherically symmetric spacetimes at a radius and pasted two copies of the spacetime to make a thin-shell wormhole. They pointed out that the location of the static throat filled with a pure tension \(\sigma=p\), where \(\sigma\) and \(p\) are the surface energy density and the surface pressure of the thin shell, is equal to the radius of the photon sphere associated with the original spacetime. 
Recently, Koga [50] showed that the throat of a general static pure-tensional thin-shell wormhole with only \(Z_{2}\) symmetry in \(\Lambda\)-vacuum is located on a photon surface [4] and that the stability of the thin shell corresponds to the stability of the photon surface. In this paper, we consider a general, static, \(Z_{2}\)-symmetric, spherically symmetric wormhole in general relativity with a thin shell filled with any barotropic fluid \(p=p(\sigma)\) which satisfies a transparency condition prohibiting momentum flux from passing through the throat [38; 39; 51; 52], and we show a relation between photon (antiphoton) spheres and the instability (stability) of the wormhole. The stability of the thin-shell wormhole is characterized by photon and antiphoton spheres, and a thin shell on an antiphoton (photon) sphere is stable (unstable) under linearized spherically symmetric perturbations. We use as general a spherically symmetric metric as possible to show the relationship between (anti)photon spheres and the stability of the thin shell. However, we should keep in mind that the stability analysis of the thin-shell wormhole will be valid only if the original spacetime satisfies a generalized Birkhoff's theorem [53; 54; 38]. In cases where the theorem cannot be applied, spherically symmetric perturbations of the thin shell can affect the metrics outside of the thin shell and emit gravitational waves. This paper is organized as follows. In Sec. II, we review the photon sphere and antiphoton sphere. In Sec. III, we make a general, static, spherically symmetric wormhole with a thin shell filled with any barotropic fluid and investigate its stability. We show the relation between the antiphoton (photon) sphere and the stability (instability) of a transparent thin-shell wormhole in Sec. IV, and we discuss and conclude our results in Sec. V. In this paper, we use units in which the light speed and Newton's constant are unity. ## II Photon sphere and antiphoton sphere We consider a general, static, spherically symmetric spacetime with a line element \[ds^{2}=-A(r)dt^{2}+B(r)dr^{2}+C(r)\left(d\theta^{2}+\sin^{2}\theta d\phi^{2}\right), \tag{1}\] where \(A(r)\), \(B(r)\), and \(C(r)\) are functions of a radial coordinate \(r\). We assume that \(A(r)\), \(B(r)\), and \(C(r)\) are positive and finite in a range \(r\geq a\), where \(a\) is a constant. There are time-translational and axial Killing vectors \(t^{\mu}\partial_{\mu}=\partial_{t}\) and \(\phi^{\mu}\partial_{\mu}=\partial_{\phi}\) because of the stationarity and axisymmetry of the spacetime, respectively. From spherical symmetry, we can assume \(\theta=\pi/2\) without loss of generality. The motion of a light ray is described by \[-A(r)dt^{2}+B(r)dr^{2}+C(r)d\phi^{2}=0 \tag{2}\] and it can be rewritten as \[\left(\frac{dr}{d\lambda}\right)^{2}+\mathcal{V}(r)=0, \tag{3}\] where \(\lambda\) is an affine parameter on the trajectory of the ray and \(\mathcal{V}(r)\) is the effective potential for the motion of the ray, defined by \[\mathcal{V}(r)\equiv\frac{1}{B}\left(\frac{L^{2}}{C}-\frac{E^{2}}{A}\right), \tag{4}\] where \[E\equiv-g_{\mu\nu}t^{\mu}\frac{dx^{\nu}}{d\lambda}=A\frac{dt}{d\lambda} \tag{5}\] and \[L\equiv g_{\mu\nu}\phi^{\mu}\frac{dx^{\nu}}{d\lambda}=C\frac{d\phi}{d\lambda} \tag{6}\] are the conserved energy and angular momentum of the light ray, respectively. 
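As an illustrative aside (this worked example is not part of the original paper), it may help to keep the Schwarzschild case in mind while reading the general criterion derived below. For \(A(r)=1-2M/r\), \(B(r)=A(r)^{-1}\), and \(C(r)=r^{2}\), where \(M\) denotes the mass, the effective potential (4) reduces to \[\mathcal{V}(r)=\left(1-\frac{2M}{r}\right)\frac{L^{2}}{r^{2}}-E^{2},\] which has a single maximum at \(r=3M\). The circular-orbit conditions \(\mathcal{V}=\mathcal{V}^{\prime}=0\) used below therefore give \(r_{\rm m}=3M\) and \(L_{\rm m}^{2}=27M^{2}E^{2}\), and, since the orbit sits at a maximum of the potential, it is the familiar unstable photon sphere of the Schwarzschild black hole.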
The first and second derivatives of \(\mathcal{V}\) with respect to the radial coordinate \(r\) are given by \[\mathcal{V}^{\prime}=-\frac{B^{\prime}}{B}\mathcal{V}-\frac{1}{B}\left(\frac{C^{\prime}}{C^{2}}L^{2}-\frac{A^{\prime}}{A^{2}}E^{2}\right) \tag{7}\] and \[\mathcal{V}^{\prime\prime}=-\left(\frac{B^{\prime}}{B}\right)^{\prime}\mathcal{V}-\frac{B^{\prime}}{B}\mathcal{V}^{\prime}+\frac{B^{\prime}}{B^{2}}\left(\frac{C^{\prime}}{C^{2}}L^{2}-\frac{A^{\prime}}{A^{2}}E^{2}\right)+\frac{2}{B}\left(\frac{C^{\prime 2}}{C^{3}}L^{2}-\frac{A^{\prime 2}}{A^{3}}E^{2}\right)-\frac{1}{B}\left(\frac{C^{\prime\prime}}{C^{2}}L^{2}-\frac{A^{\prime\prime}}{A^{2}}E^{2}\right), \tag{8}\] respectively, where a prime denotes differentiation with respect to the radial coordinate \(r\). A circular orbit of the ray with radius \(r=r_{\rm m}\) must satisfy the condition \[\mathcal{V}_{\rm m}=\mathcal{V}^{\prime}_{\rm m}=0, \tag{9}\] which gives \[L^{2}=L_{\rm m}^{2}\equiv\frac{C_{\rm m}}{A_{\rm m}}E^{2} \tag{10}\] and \[D_{\rm m}=0, \tag{11}\] where \[D(r)\equiv\frac{A^{\prime}}{A}-\frac{C^{\prime}}{C}. \tag{12}\] Here and hereafter, a function with a subscript \(m\) denotes the function evaluated on the circular orbit at \(r=r_{\rm m}\). We obtain \[\mathcal{V}^{\prime\prime}_{\rm m}=\frac{E^{2}}{A_{\rm m}B_{\rm m}}F_{\rm m}, \tag{13}\] where \[F(r)\equiv\frac{A^{\prime\prime}}{A}-\frac{C^{\prime\prime}}{C}, \tag{14}\] and the circular orbit of the light ray is stable if \(F_{\rm m}>0\) and unstable if \(F_{\rm m}<0\). The unstable (stable) circular orbit is called a photon sphere (antiphoton sphere). ## III Thin-shell wormhole We construct a thin-shell wormhole by cutting and pasting a general, static, spherically symmetric spacetime [36; 37]. We assume that the original spacetime satisfies a generalized Birkhoff's theorem [53; 54; 38] and that spherically symmetric perturbations to the thin shell do not affect the metrics outside of the thin shell. We take two copies of a manifold, \(\Omega_{\pm}\equiv\{r_{\pm}>a\}\), with boundaries given by timelike hypersurfaces \(\Sigma_{\pm}\equiv\{r_{\pm}=a\}\). We identify the hypersurfaces \(\Sigma\equiv\Sigma_{+}=\Sigma_{-}\) to obtain a manifold \(\mathcal{M}\) by gluing the manifolds \(\Omega_{\pm}\) at a throat located at \(\Sigma\). Note that the wormhole has \(Z_{2}\) symmetry with respect to the throat. The hypersurface \(\Sigma\), filled with matter described by a Dirac distribution, is called the thin shell. Coordinates in \(\Omega_{\pm}\) are denoted by \(x^{\mu}\), but the coordinates may not join continuously at the hypersurface \(\Sigma\). We denote by \(y^{i}\) the coordinates on the hypersurface \(\Sigma\) and assume that the same coordinates \(y^{i}\) can be taken on both sides of \(\Sigma\). We permit \(a=a(\tau)\), where \(\tau\) is its proper time, since we are interested in the dynamics of the thin shell. We consider that the hypersurface \(\Sigma\) is pierced orthogonally by a congruence of geodesics. The geodesics are parametrized by the proper distance \(l\), and we set \(l=0\) when the geodesics intersect the hypersurface and \(l<0\) (\(l>0\)) when they are in \(\Omega_{-}\) (\(\Omega_{+}\)). A displacement from the hypersurface \(\Sigma\) is given by \(dx^{\mu}=n^{\mu}dl\), where \(n^{\mu}\) is the unit normal to the hypersurface. 
A metric tensor in \(\mathcal{M}\) is given by \(g_{\mu\nu}=\Theta(-l)g_{-\mu\nu}+\Theta(l)g_{+\mu\nu}\), where \(\Theta(l)\) is the Heaviside distribution which is \(0\) if \(l<0\), which is \(1\) if \(l>0\), and which is indeterminate if \(l=0\) and where \(g_{-\mu\nu}\) and \(g_{+\mu\nu}\) are metric tensors in \(\Omega_{-}\) and \(\Omega_{+}\), respectively. The connection \(\Gamma^{\mu}_{\nu\rho}\) is given by \[\Gamma^{\mu}_{\nu\rho}=\Theta(-l)\Gamma^{\mu}_{-\nu\rho}+\Theta(l)\Gamma^{\mu }_{+\nu\rho}, \tag{3.1}\] where \(\Gamma^{\mu}_{-\nu\rho}\) and \(\Gamma^{\mu}_{+\nu\rho}\) are the connections in \(\Omega_{-}\) and \(\Omega_{+}\), respectively. The extrinsic curvature \(K_{ij}\) of the timelike hypersurface \(\Sigma\) is given by \[K_{ij}\equiv e^{\mu}_{i}e^{\nu}_{j}\nabla_{\nu}n_{\mu}=(n_{\mu,\nu}-\Gamma^{ \rho}_{\mu\nu}n_{\rho})e^{\mu}_{i}e^{\nu}_{j}, \tag{3.2}\] where \(e^{\mu}_{i}\) are basis vectors \[e^{\mu}_{i}\equiv\frac{\partial x^{\mu}}{\partial y^{i}} \tag{3.3}\] and \(\nabla_{\nu}\) is a covariant differentiation in \(\mathcal{M}\). The induced metric \(h_{ij}\equiv g_{\mu\nu}e^{\mu}_{i}e^{\nu}_{j}\) on the hypersurface \(\Sigma\) can be written by \[ds^{2}_{\Sigma} = h_{ij}dy^{i}dy^{j} \tag{3.4}\] \[= -d\tau^{2}+C(a)\left(d\theta^{2}+\sin^{2}\theta d\phi^{2}\right).\] The induced metrics in \(\Omega_{-}\) and \(\Omega_{+}\) are the same each other. The four velocity \(u^{\mu}\) of the thin shell at \(t=T(\tau)\) and \(r=a(\tau)\) is given by \[u^{\mu}\partial_{\mu}=\dot{T}\partial_{t}+\dot{a}\partial_{r}, \tag{3.5}\] where the dot is a differentiation with respective to \(\tau\) and \[\dot{T}=\sqrt{\frac{1+B\dot{a}^{2}}{A}}. \tag{3.6}\] We obtain \[\ddot{T}=\frac{2\ddot{a}\dot{a}AB+\dot{a}^{3}(AB^{\prime}-A^{\prime}B)-\dot{a }A^{\prime}}{2A^{2}\dot{T}}. \tag{3.7}\] The unit normals \(n_{\mu\pm}\) to the hypersurface in \(\Omega_{-}\) and \(\Omega_{+}\) are obtained as \[n_{\mu\pm}dx^{\mu}=\pm\left(-\sqrt{AB}\dot{a}dt+\sqrt{AB}\dot{T}dr\right) \tag{3.8}\] and the basis vectors \(e^{\mu}_{i}\) are given by \[e^{\mu}_{\mu}\partial_{\mu} = \dot{T}\partial_{t}+\dot{a}\partial_{r}, \tag{3.9}\] \[e^{\mu}_{\theta}\partial_{\mu} = \partial_{\theta},\] (3.10) \[e^{\mu}_{\phi}\partial_{\mu} = \partial_{\phi}. \tag{3.11}\] The extrinsic curvatures of the hypersurfaces in \(\Omega_{\pm}\) are given by \[K^{\tau}_{\tau\pm} = \frac{\pm 1}{\sqrt{\dot{a}^{2}+\frac{1}{B}}}\left(\ddot{a}+\frac{ \dot{a}^{2}(AB)^{\prime}}{2AB}+\frac{A^{\prime}}{2AB}\right), \tag{3.12}\] \[K^{\theta}_{\theta\pm} = K^{\phi}_{\phi\pm}=\frac{\pm C^{\prime}}{2C}\sqrt{\dot{a}^{2}+ \frac{1}{B}}, \tag{3.13}\] and the traces are \[K_{\pm}\equiv\frac{\pm 1}{\sqrt{\dot{a}^{2}+\frac{1}{B}}}\left(\ddot{a}+\frac{ \dot{a}^{2}(AB)^{\prime}}{2AB}+\frac{A^{\prime}}{2AB}\right)\pm\frac{C^{\prime }}{C}\sqrt{\dot{a}^{2}+\frac{1}{B}}. \tag{3.14}\] The Einstein equations, which the thin shell should satisfy, are given by \[S^{i}_{j}=-\frac{1}{8\pi}\left(\left[K^{i}_{j}\right]-\left[K\right]\delta^{i }_{j}\right), \tag{3.15}\] where \(S^{i}_{j}\) is a surface stress-energy tensor for the thin shell \[S^{i}_{j}=(\sigma+p)U^{i}U_{j}+p\delta^{i}_{j}, \tag{3.16}\] where we define \(U_{i}dy^{i}\equiv u_{\mu}e^{\mu}_{i}dy^{i}=d\tau\) and where \(\sigma\) and \(p\) are the surface energy density and the surface pressure of the thin shell, respectively and we obtain \(S^{\tau}_{\tau}=-\sigma\) and \(S^{\theta}_{\theta}=S^{\phi}_{\phi}=p\). 
Here, \(\left[T\right]\) is defined by \[\left[T\right]\equiv\left.T_{+}\right|_{\Sigma}-\left.T_{-}\right|_{\Sigma}, \tag{3.17}\] where \(T_{+}\) and \(T_{-}\) are any tensorial function \(T\) in \(\Omega_{+}\) and \(\Omega_{-}\), respectively. From \((\tau,\tau)\) and \((\theta,\theta)\) components of the Einstein equations (3.15), we obtain the surface energy density \(\sigma\) and the surface pressure \(p\) \[\sigma=-\frac{1}{4\pi}\frac{C^{\prime}}{C}\sqrt{\dot{a}^{2}+\frac{1}{B}} \tag{3.18}\] and \[p=\frac{1}{8\pi}\frac{1}{\sqrt{\dot{a}^{2}+\frac{1}{B}}}\left(2\ddot{a}+\frac{ \dot{a}^{2}\left(ABC\right)^{\prime}+\left(AC\right)^{\prime}}{ABC}\right), \tag{3.19}\] respectively. From Eqs. (3.18) and (3.19), we obtain \[\frac{d(\sigma\mathcal{A})}{d\tau}+p\frac{d\mathcal{A}}{d\tau}=-\frac{\dot{a}}{2 }\sqrt{\dot{a}^{2}+\frac{1}{B}}C^{\prime}\left(\frac{2C^{\prime\prime}}{C^{ \prime}}-\frac{\left(ABC\right)^{\prime}}{ABC}\right), \tag{3.20}\] where \(\mathcal{A}\equiv 4\pi C(a)\) is the area of the throat. Equation (3.20) is rewritten in \[C\sigma^{\prime}+C^{\prime}(\sigma+p)=\frac{C}{2}\left(\frac{2C^{\prime\prime}}{C^{ \prime}}-\frac{\left(ABC\right)^{\prime}}{ABC}\right)\sigma, \tag{3.21}\] where \(\sigma^{\prime}\equiv\dot{\sigma}/\dot{a}\). If we assume a barotropic fluid with \(p=p(\sigma)\), from Eq. (3.21), we obtain the surface density \(\sigma=\sigma(a)\). From Eq. (3.18), the equation of motion for the thin shell is given by \[\dot{a}^{2}+V(a)=0, \tag{3.22}\] where \(V(a)\) is an effective potential defined by \[V(a)\equiv\frac{1}{B}-\left(\frac{4\pi\sigma C}{C^{\prime}}\right)^{2}. \tag{3.23}\] The derivative of \(V\) with respective to \(r\) is given by \[V^{\prime}=-\frac{B^{\prime}}{B^{2}}-\frac{32\pi^{2}\sigma C}{C^{\prime}} \left(\left(1-\frac{CC^{\prime\prime}}{C^{\prime 2}}\right)\sigma+\frac{C \sigma^{\prime}}{C^{\prime}}\right) \tag{3.24}\] and, from Eq. (3.21), it can be rewritten as \[V^{\prime}=-\frac{B^{\prime}}{B^{2}}+\frac{16\pi^{2}\sigma C}{C^{\prime}} \left(2p+\frac{(ABC)^{\prime}}{ABC^{\prime}}\sigma\right). \tag{3.25}\] The second derivative of \(V\) is obtained as \[V^{\prime\prime}\] \[=-\frac{B^{\prime\prime}B-2B^{\prime 2}}{B^{3}}\] \[+16\pi^{2}\left\{\left(\frac{C}{C^{\prime}}\sigma^{\prime}+ \left(1-\frac{CC^{\prime\prime}}{C^{\prime 2}}\right)\sigma\right)\left(2p+\frac{(ABC) ^{\prime}}{ABC^{\prime}}\sigma\right)\right.\] \[\left.+\frac{C}{C^{\prime}}\sigma\left(2p^{\prime}+\frac{(ABC)^{ \prime}}{ABC^{\prime}}\sigma^{\prime}\right.\right.\] \[\left.\left.+\frac{((AB)^{\prime\prime}AB-(AB)^{\prime 2})C^{ \prime}-AB(AB)^{\prime}C^{\prime\prime}}{(ABC^{\prime})^{2}}C\sigma\right)\right\}\] and, from Eq. (3.21), it becomes \[V^{\prime\prime}=\] \[-\frac{B^{\prime\prime}B-2B^{\prime 2}}{B^{3}}-8\pi^{2}\left\{ \left(2p+\frac{(ABC)^{\prime}}{ABC^{\prime}}\sigma\right)^{2}\right.\] \[\left.+\sigma\left(2p+\left(2+\frac{(ABC)^{\prime}}{ABC^{\prime}} -\frac{2CC^{\prime\prime}}{C^{\prime 2}}\right)\sigma\right)\left(\frac{(ABC)^{ \prime}}{ABC^{\prime}}+2\beta^{2}\right)\right.\] \[\left.+\frac{2C^{2}\left(\left((AB)^{\prime 2}-AB(AB)^{\prime \prime}\right)C^{\prime}+AB(AB)^{\prime}C^{\prime\prime}\right)\sigma^{2}}{( AB)^{2}C^{\prime 3}}\right\},\] where \(\beta^{2}\equiv dp/d\sigma=p^{\prime}/\sigma^{\prime}\). 
We consider that a static thin shell at \(a=a_{0}\), where \(a_{0}\) is a positive constant, with the surface energy density \[\sigma_{0}=-\frac{C_{0}^{\prime}}{4\pi\sqrt{B_{0}C_{0}}} \tag{3.28}\] and the surface pressure \[p_{0}=\frac{\left(A_{0}C_{0}\right)^{\prime}}{8\pi\sqrt{B_{0}A_{0}C_{0}}}, \tag{3.29}\] where functions with a subscript \(0\) denote the functions at \(a=a_{0}\). From the definition, \(V_{0}=V_{0}^{\prime}=0\) is satisfied. Therefore, the effective potential can be expanded around \(a=a_{0}\) as \[V(a)=\frac{V_{0}^{\prime\prime}}{2}(a-a_{0})^{2}+O\left(\left(a-a_{0}\right)^{ 3}\right), \tag{3.30}\] where \(V_{0}^{\prime\prime}\) is given by \[V_{0}^{\prime\prime}=\frac{1}{B_{0}}\left(2G_{0}\beta_{0}^{2}+ \frac{A_{0}^{\prime\prime}}{A_{0}}+G_{0}\right.\] \[\left.-\frac{A_{0}^{\prime 2}}{A_{0}^{2}}-\frac{A_{0}^{\prime}B_{0}^{ \prime}}{2A_{0}B_{0}}-\frac{A_{0}^{\prime}C_{0}^{\prime}}{A_{0}C_{0}}-\frac{B _{0}^{\prime}C_{0}^{\prime}}{B_{0}C_{0}}\right), \tag{3.31}\] where we define \(G(r)\) as \[G(r)\equiv\frac{C^{\prime\prime}}{C}-\frac{B^{\prime}C^{\prime}}{2BC}-\frac{C ^{\prime 2}}{C^{2}}. \tag{3.32}\] When \(V_{0}^{\prime\prime}>0\) (\(V_{0}^{\prime\prime}<0\)), the thin shell is stable (unstable). The stability of the thin shell depends on \(\beta_{0}^{2}\) generally. However, when \(G_{0}=0\) holds, its (in)stability does not depend on \(\beta_{0}^{2}\). ## IV Transparent condition We consider that a wormhole with a transparency condition [38; 39; 51; 52], \[[G_{\mu\nu}u^{\mu}n^{\nu}]=[T_{\mu\nu}u^{\mu}n^{\nu}]=0, \tag{4.1}\] where \(G_{\mu\nu}\) and \(T_{\mu\nu}\) are the Einstein tensor and stress-energy tensor, respectively. The transparency condition prohibits its momentum flux passing through the thin shell and it gives \[\frac{2C^{\prime\prime}}{C^{\prime}}-\frac{(ABC)^{\prime}}{ABC}=0. \tag{4.2}\] We note that a cosmological constant \(\Lambda\) does not affect the transparency condition (4.1) or (4.2) since \([\Lambda g_{\mu\nu}u^{\mu}n^{\nu}]=0\). Under the transparency condition, Eq. (3.20) becomes \[\frac{d(\sigma{\cal A})}{d\tau}+p\frac{d{\cal A}}{d\tau}=0. \tag{4.3}\] We take a radial coordinate \(r\) satisfying \(A(r)=1/B(r)\) without loss of generality. 1 Footnote 1: Under the coordinates, Eqs. (3.31) and (3.32) become \[V_{0}^{\prime\prime}=A_{0}\left(2G_{0}\beta_{0}^{2}+\frac{A_{0}^{\prime\prime}}{ A_{0}}-\frac{A_{0}^{\prime 2}}{2A_{0}^{2}}+G_{0}\right) \tag{4.4}\] and \[G_{0}=\frac{C_{0}^{\prime\prime}}{C_{0}}+\frac{A_{0}^{\prime}C_{0}^{\prime}}{2A_{ 0}C_{0}}-\frac{C_{0}^{\prime 2}}{C_{0}^{2}}, \tag{4.5}\] respectively. Equations (4.4) and (4.5) are equal to Eq. (33) obtained by Eiroa in Ref. [38]. Under the dynamics, the transparency condition is expressed by \[\frac{C^{\prime\prime}}{C}=\frac{C^{\prime 2}}{2C^{2}} \tag{4.6}\] and it is solved as \[C=c(r+b)^{2}, \tag{10}\] where \(c\) is a positive constant and \(b\) is a constant [38]. From Eqs. (12)-(14), (32), and (11), we obtain \[V_{0}^{\prime\prime}=A_{0}\left(\frac{C_{0}^{\prime}}{C_{0}}D_{0}\beta_{0}^{2}+ F_{0}-\frac{A_{0}^{\prime}}{2A_{0}}D_{0}\right). \tag{11}\] The thin shell is stable if \[\beta_{0}^{2}>\frac{C_{0}}{C_{0}^{\prime}D_{0}}\left(-F_{0}+\frac{A_{0}^{ \prime}}{2A_{0}}D_{0}\right) \tag{12}\] for \(D_{0}>0\) and if \[\beta_{0}^{2}<\frac{C_{0}}{C_{0}^{\prime}D_{0}}\left(-F_{0}+\frac{A_{0}^{ \prime}}{2A_{0}}D_{0}\right) \tag{13}\] for \(D_{0}<0\). Note that an antiphoton sphere or a photon sphere is at \(D_{\rm m}=0\). 
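As a concrete check (an illustrative aside, not contained in the original text), consider a Schwarzschild original spacetime with \(A=1/B=1-2M/r\) and \(C=r^{2}\), where \(M\) is the mass. Since \(AB=1\), the transparency condition reduces to \[\frac{2C^{\prime\prime}}{C^{\prime}}-\frac{(ABC)^{\prime}}{ABC}=\frac{2}{r}-\frac{2}{r}=0,\] so it holds identically, and \(C=r^{2}\) is of the required form \(c(r+b)^{2}\) with \(c=1\) and \(b=0\). Moreover, \[D(r)=\frac{2M/r^{2}}{1-2M/r}-\frac{2}{r}\] vanishes only at \(r=3M\), the photon sphere, where \(F(3M)=-2/(3M^{2})<0\); a throat placed exactly there is therefore unstable for any value of \(\beta_{0}^{2}\), consistent with the general statement below.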
Thus, we realize that the existence of an antiphoton sphere or a photon sphere in the original spacetime strongly affects the stability of the transparent thin-shell wormhole spacetime. If \(D_{\rm m0}=0\) holds, \(V_{\rm m0}^{\prime\prime}\) of the transparent wormhole is obtained as \[V_{\rm m0}^{\prime\prime}=A_{\rm m0}F_{\rm m0}. \tag{14}\] Here, functions with the subscript \(m0\) denote the functions at \(r=a_{0}=r_{\rm m}\). Therefore, a thin shell on an antiphoton sphere (photon sphere) is stable (unstable) independently of the value of \(\beta_{\rm m0}^{2}\). ## V Discussion and conclusion We have cut a general, static, spherically symmetric spacetime and joined two copies to make a thin-shell wormhole spacetime satisfying a transparency condition, with a thin shell filled with any barotropic fluid, in general relativity. We have shown that the stability and instability of the throat strongly depend on the existence of the photon sphere and antiphoton sphere of the original spacetime, under the assumption that the original spacetime satisfies a generalized Birkhoff's theorem [53; 54; 38]. Stable wormholes without a thin shell in general relativity have not been found so far [55; 56; 57; 58]. 2 We note that there are few studies on the (in)stability of wormholes with an antiphoton sphere on the throat, while the instability of static, spherically symmetric wormholes with a photon sphere on the throat has been reported often. Footnote 2: Recently, Bronnikov _et al._ have reported a candidate for stable wormholes with some matter sources [59] and Azad _et al._ have discussed that the rotation of a wormhole may stabilize it [60]. Moreover, in a general context, the possibility that static, spherically symmetric, and \(Z_{2}\)-symmetric wormholes can have an antiphoton sphere on the throat has often been overlooked. For example, Bronnikov and Baleevskikh have shown from the symmetry that a general static, spherically symmetric, and \(Z_{2}\)-symmetric wormhole has circular photon orbits on the throat, and they have concluded that the wormhole has a photon sphere on the throat [61]. On the other hand, in Ref. [62], Shaikh _et al._ have pointed out that such wormholes can have an antiphoton sphere on the throat, and Tsukamoto has shown that a Damour-Solodukhin wormhole [63] and a Bronnikov-Kim wormhole [64; 65] can have the antiphoton sphere on the throat in Refs. [66; 67]. In this paper, we have concentrated on the thin-shell wormhole with \(Z_{2}\) symmetry with respect to the throat, but our result offers a lesson for finding stable wormholes without a thin shell: it implies that an antiphoton sphere on or near the throat might stabilize the wormhole. Thus, the investigation of wormholes with an antiphoton sphere on or near the throat could be a good strategy to find stable wormholes without a thin shell. ## Acknowledgements This work was partially supported by JSPS KAKENHI Grant No. JP20H05853 (TK) from the Japan Society for the Promotion of Science.
2306.01377
A systematic literature review on the code smells datasets and validation mechanisms
The accuracy reported for code smell-detecting tools varies depending on the dataset used to evaluate the tools. Our survey of 45 existing datasets reveals that the adequacy of a dataset for detecting smells highly depends on relevant properties such as the size, severity level, project types, number of each type of smell, number of smells, and the ratio of smelly to non-smelly samples in the dataset. Most existing datasets support God Class, Long Method, and Feature Envy while six smells in Fowler and Beck's catalog are not supported by any datasets. We conclude that existing datasets suffer from imbalanced samples, lack of supporting severity level, and restriction to Java language.
Morteza Zakeri-Nasrabadi, Saeed Parsa, Ehsan Esmaili, Fabio Palomba
2023-06-02T08:57:31Z
http://arxiv.org/abs/2306.01377v1
# A systematic literature review on the code smells datasets and validation mechanisms ###### Abstract. The accuracy reported for code smell-detecting tools varies depending on the dataset used to evaluate the tools. Our survey of 45 existing datasets reveals that the adequacy of a dataset for detecting smells highly depends on relevant properties such as the size, severity level, project types, number of each type of smell, number of smells, and the ratio of smelly to non-smelly samples in the dataset. Most existing datasets support God Class, Long Method, and Feature Envy while six smells in Fowler and Beck's catalog are not supported by any datasets. We conclude that existing datasets suffer from imbalanced samples, lack of supporting severity level, and restriction to Java language. Software and its engineering Software notations and tools; Software maintenance tools; Software smell, code smell dataset, code smell prediction, source code metrics, systematic literature review ## 1. Introduction Code smells postulate the refactoring opportunities in the source code of software systems during the development and maintenance phases [(1)]. Therefore, decent refactoring mainly depends on identifying existing smells in the target source code accurately and correctly. On the other hand, automatic smell detection requires a large dataset with annotated samples for each smell since most smell detection methods are based on statistical and learning-based approaches [(2)]. The performance of a code smell detection tool is vastly affected by the dataset used to create and evaluate that tool [(3)]. It has been shown that the results of some code smell detection tools are not accurate due to the method used in constructing and exploiting their datasets [(3)], [(4)]. For this reason, it is essential to identify the state-of-the-art datasets used for code smell detection, their capabilities, and their limitations. A code smell dataset has two potential applications. First, it can be used as a data source to construct automatic software smell detection tools [(5)]. Second, it can be used as a ground truth or golden reference to evaluate a code smell detection tool [(6)]. The reliability of the dataset affects the performance of both applications. Identifying code smells is inherently a subjective process [(7)], requiring human experts' intervention. Code smell detection in industrial environments and tool evaluation in academic research heavily depend on human factors [(8)]. Without any standard and quality dataset, it is difficult to create code smell detection tools that are fully automated, and their results are not affected by human judgments. Several studies have systematically reviewed the software smells, concepts, and detection tools [(9)], [(10)]-[(11)], while they have rarely focused on the details of the datasets and the validation techniques employed by researchers and practitioners. Di Nucci et al. [(3)] initially discussed the impact of the dataset on smell detection results with machine learning. They found out that the high performance reported in Fontana's work [(13)] is mainly related to the specific dataset employed rather than to the capabilities of ML techniques for code smell detection. It confirms the essence of the quality dataset in software smell detection [(13)]. However, they have not proposed any alternative dataset to mitigate these problems. Azeem et al. [(2)] state that only a few studies have used a large dataset to build and evaluate code smell detection tools. 
They do not discuss the type of smells and structures used in various datasets. We observed that the code smell datasets have common properties with different data [(6)], [(13)]. The most apparent properties are the size of a dataset and the type of smells supported by the dataset. This paper aims to identify a standard scheme and architecture that code smells datasets can be fairly compared and ranked according to them. We investigate the current state of the available datasets for code smell detection using a systematic literature review (SLR). The anatomy of the code smell datasets is discussed from different viewpoints enabling us to identify the opportunities and pitfalls of the research in this area. Our systematic literature review is conducted to answer the following research questions: * RQ1. _How many code smell datasets have been proposed by the software engineering community?_ * RQ2. _What are the common aspects of the code smells dataset anatomies?_ * RQ3. _What are the code smell dataset creation techniques and validation mechanisms?_ * RQ4. _Which software tools are mostly leveraged to automatically create code smells datasets?_ * RQ5. _Which programming languages, code smells, and code metrics are covered by existing datasets?_ * RQ6. _Which open-source or close-source software projects are widely used as data sources to create code smells datasets?_ * RQ7. _What are the publicly available code smell datasets?_ * RQ8. _How is the quality of the existing code smell datasets regarding different evaluation metrics?_ * RQ9. _What are the most comprehensive and adequate code smell datasets?_ * RQ10. _What are the limitations of the existing code smell datasets?_ A comprehensive search was performed about the code smell datasets on five digital libraries indexing the relevant publications of the field to find the answer to each research question. Our search has resulted in finding 2696 articles, of which 45 articles are of high quality and present new datasets. We compare the datasets according to different aspects, including size, supported smells, programming languages, and construction approaches. Our SLR indicates that while the field is growing and the changes are very dynamic, the existing code smell datasets suffer from many challenges. Most importantly, the small size, few types of code smell, high false-positive rate, and the lack of standard structure. We highlight the potential solutions to be considered in future research. To the best of our knowledge, this is the first systematic review of code smell datasets that helps researchers and practitioners to find the most appropriate datasets and improve them. The remainder of the paper is organized as follows. Section 2 reviews the related SLRs on software smells and describes their difference from our proposed study. Section 3 outlines the research methodology of our SLR. The results of our findings on the primary studies are discussed in Section 4. Section 5 proposes a catalog with an in-depth review of the most notable code smell datasets. Section 6 discusses the challenge and opportunities in the area of code smell datasets. The threats to the validity of our SLR are described in Section 7. Finally, Section 8 concludes this paper and points out directions for future works. ## 2. Related Research We found relatively few SLRs on code smells [12, 14, 9, 10, 11, 2]. Most of them have been published in recent years, which denotes the importance and growing research in the field. 
However, none of them have studied the code datasets in detail. To the best of our knowledge, this paper is the first systematic literature review dedicated to code smell datasets in advance. Caram et al. [11] provided a systematic mapping study to determine which methods, practices, techniques, and models are used when applying machine learning for code smell detection and prediction. The authors have identified 26 primary studies that used learning-based techniques for code smell identification. Bloaters [15]_i.e_, long method, large class, primitive obsession, long parameter list, and data clumps have been studied in 35% of the papers. Genetic algorithms as the most commonly applied technique were used by 22.22% of the primary studies. A high level of redundancy has been reported regarding the smells addressed by each learning-based technique. In other words, most smells were detected by more than one algorithm. Feature envy was detected by 63% of the proposed techniques as the most common smell type detected automatically, while five out of the 25 analyzed smells have not been detected by any machine learning techniques. Regarding the F1 score the best average performance has been reported for the decision tree classifier in detecting middle-man and shotgun surgery smells, while random forest has an outstanding performance for the long parameter list. As a result, no machine learning technique is superior for all types of smells. The authors have found a lack of comparable results due to the heterogeneity of the data sources in the reported experiments. Azeem et al. [2] proposed a systematic literature review on the use of learning-based techniques for code smell detection. The authors targeted four specific aspects related to how previous research conducted experimentations on code smell prediction models, including (i) which code smells have been considered, (ii) How the machine learning setups have been adopted, (iii) which types of evaluations strategies have been exploited, and (iv) what are performance claimed for the proposed machine learning models. Their analyses highlighted a number of limitations of existing studies as well as open issues that need to be addressed by future research. They also provided a list of actions that must be performed to make the research field more concrete, mainly prioritizing smells, configuring machine learning models, and preparing manually validated code smell datasets. In this paper, we list all available code smell datasets and describe their construction and validation mechanisms. Trindade et al. [16] presented a systematic literature review to compile oracles for bad smells. The primary motivation of their study is the fact that bad smells have been widely studied, and many studies rely on bad smell oracles obtained by tools that have not yet been adequately proven to be precise. The oracles of bad smells available online have the following main characteristics. They involve a maximum of 29 software systems, varying from 86 to 17167 classes. Most of them rely on results provided by tools. They usually verify Large Class smell instances and provide their results in a spreadsheet. Their study indicates that researchers must address gaps related to code smell oracles. For example, just a few oracles accurately identify the methods where the bad smell is located. In addition, most oracles are indeed defined by employing tools, which is a serious threat to their validity. We expand the Trindade et al. 
[16] study to additional aspects of code smell datasets, mainly their construction and validation mechanisms. Al-Shaaby et al. [17] systematically reviewed and analyzed the machine-learning approaches applied to code smell detection from different aspects, including smell types, learning algorithms, smell datasets, and software tools. They also reported a comparison between machine learning models used for code smell detection in terms of prediction accuracy and performance. Seventeen primary studies were selected and discussed in their SLR to answer the five research questions. The authors have concluded that the application of machine learning techniques to detect code smells is still a new area and needs further investigation. Therefore, more research efforts are required to facilitate the employment of machine learning techniques addressing the code smell prediction issues. We believe that many of these issues originate from the datasets, not the machine learning techniques used to predict. Sobrinho et al. (2017) conducted an extensive literature review on a huge body of knowledge from 1990 to 2017. They found that some smells are much more studied in the literature than others, and some of them are intrinsically interrelated (which). They give a perspective on how the research has been driven across time (when). They analyzed aims, findings, and respective experimental settings and observed that the variability of these elements might be responsible for some contradictory results on bad smells (what). Moreover, while bad smells of different types are generally studied together, only a tiny fraction of the studies have investigated the relationships between different smells (co-studies). The authors also mentioned that researchers have various interest levels in the subject, some of them publishing sporadically and others continuously (who). Their results show that the communities studying code clones or duplications and other types of bad smells are largely separated. The authors observed that some conferences and journals are more likely to disseminate knowledge on code clones while others follow a balanced distribution among all smells (where). We answer similar questions about the code smell datasets in this paper to shed light on the reasons behind the findings by Sobrinho et al. (2017). ## 3. Research Methodology We adapted the guidelines proposed for conducting SLR in software engineering (Sobrinho et al., 2017; Sobrinho et al., 2018; Sobrinho et al., 2019) to identify, analyze, and assess the published literature about code smell datasets considering our research questions. Figure 1 shows the overall process of searching and selecting relevant publications. At first, we defined a search string containing relevant keywords to our SLR. Then, search the resultant query in the top five digital libraries indexing computer science literature massively, shown in Figure 1. Afterward, the article selection process is performed, and finally, a set of 45 articles are selected for a detailed review. We discuss each step in detail in the subsequent sections. ### Constructing search string Our search string is figured around three essential concepts, "source code," "smell," and "dataset," which appear in our research questions. The variation of these three keywords, combined with the Boolean operators, constitute our search string. 
We performed five steps recommended by Kitchenham and Charters (Sobrinho et al., 2017) to find all relevant search terms and construct the required query string: **Step 1:** We used the research questions in Section 1 to derive the main terms by identifying PICOC criteria (Sobrinho et al., 2018), specifically the population, intervention, outcome, and context. **Step 2:** We extracted and added related terms to the main terms as well as alternative spellings and synonyms of the main terms to our search string. Figure 1. SLR process. **Step 3**: We retrieved and verified the keywords in the related works incrementally and iteratively. Indeed, after performing an initial search, we checked the keywords in the most relevant articles to ensure adding any existing synonyms, spelling forms, and related words to our main terms used in the literature. **Step 4**: We used the "OR" operator for concatenating the alternative spellings, synonyms, and related terms. Moreover, the "AND" operator was used for combining the main terms. **Step 5**: We integrated the search string into a summarized form whenever it was required, according to the search engine's capabilities and limitations. The results of each step are described as follows: **Results for step 1**: As for the first step, the population, intervention, and outcome, were identified to find and organized the main terms. Our population is the code smells and anti-patterns appearing in the production code of software systems. The interventions are techniques, algorithms, tools, and datasets developed for code smells detection, identification, and prediction. The outcomes are different aspects of code smell datasets and databases introduced in either academic or industrial contexts, including the dataset labeling approach, structure, source of data, availability, and quality assessment. For instance, RQ6 contains the main terms related to our PICOC criteria when decomposed as follows: "_Which open-source or close-source_ [_software projects_] **(outcomes)**_are widely used as data sources to create_ [_code smells_] **(population) [_datasets_] **(intervention)**?** " **Results for step 2**: The extracted synonyms, alternative spellings, and related terms of each main term are: * Code smell: "bad smell" OR'smelly code" OR 'anti-pattern" OR 'anti pattern" OR "antipattern" OR "design smell" * Detection: "detect" OR 'identify" OR 'identification" OR 'predict" OR 'prediction" OR'recognize" * Software: "program" OR'metric" * Dataset: "data set" OR "data-set" OR "benchmark" OR 'oracle" OR'machine learning" OR 'classification" OR "regression" It is worth mentioning that anti-patterns and smells are near concepts such that software engineering researchers and practitioners often use them interchangeably [9]. We observed that some code smell datasets contain both the code smells and anti-patterns samples. Therefore, we decide to add the term "antipattern" to retrieve all related datasets in the field. **Results for step 3**: The following keywords were extracted and added to our search terms after investigating the related research in step 3: * Code smell: "design flaw" * Detection: "refactor" OR 'analysis" OR "empirical study" * Software: "program" OR'metric" OR'maintenance" * Dataset: "supervised learning" OR "unsupervised learning" OR "heuristic" OR "statistic" **Results for step 4**: The combination of the extracted search terms with the help of the Boolean operators results in the final search string shown in the first row of Table 1. 
**Results for step 5**: We optimized the query built upon our search string for each digital library due to specific formats and limitations, such as the query length imposed by their search engines. More precisely, we could not use the above search string with the ScienceDirect search engine because the engine has a limitation of accepting search strings including up to 8 logical operators. Therefore, we had to look for the most relevant keywords in our search string and remove the additional ones. Apparently, by restricting the logical operators the search results were applied to a broader number of articles and we had to look for the most relevant keywords in our search string and remove the additional ones. The search string used with the ScienceDirect library is shown in the second row of Table 1. ### Resources to be searched Choosing the proper resources to search for relevant literature plays a significant role in an SLR. We selected the five well-known digital libraries which mainly index computer science publications as resources to search for all the available literature related to our research questions: (1) IEEE Xplore digital library (_[https://ieeexplore.ieee.org_](https://ieeexplore.ieee.org_)), (2) ACM digital library (_[https://dlacm.org_](https://dlacm.org_)), (3) SpringerLink (_[https://link.springer.com_](https://link.springer.com_)), (4) Scopus (_[https://www.scopus.com_](https://www.scopus.com_)), (5) ScienceDirect (_[http://www.sciencedirect.com_](http://www.sciencedirect.com_)). \begin{table} \begin{tabular}{l l} \hline Repository & Search string \\ \hline \multirow{3}{*}{IEEE Xplore, ACM, SpringerLink, and Scopus} & ((“code smell” OR "bad smell" OR "smelly code" OR "anti-pattern" OR "anti pattern" OR "antipattern" OR "design smell" OR "design flaw") **AND** (“detection" OR "detect" OR "identify" OR "identification" OR "predict" OR "precdiction" OR "recognize" OR "empirical study" OR "analysis" OR "refactor") **AND** ("program" OR "software" OR \\ & "metric" OR "maintenance") **AND** ("dataset" OR "data set" OR "data-set" OR "benchmark" OR "oracle" OR "machine learning" OR "supervised learning" OR "unsupervised learning" OR "classification" OR "regression" OR "heuristic" \\ \hline \multirow{2}{*}{ScienceDirect} & (“code smell” OR "antipattern" OR "design smell") **AND** (“detect") **AND** ("code" OR "design") **AND** (“dataset" OR "data set" OR "machine learning") \\ \hline \end{tabular} \end{table} Table 1: Search string used to query each digital library. The five digital libraries contain a large portion of publications in software engineering and machine learning, including journal articles, conference proceedings, book chapters, and books. Therefore, we ensure that all related works published in code smells and the dataset area are found and retrieved. ### Article selection process The article selection consists of five steps, shown in Figure 1. In the initial search of the repositories for our search string, no restrictions were placed on the publisher, publication type, year of publication, or other items. As a result, a set of 2696 resources were found and retrieved in our initial search, indicating a pretty large number of publications in this area. We used five exclusion criteria (ECs) in the second step to filter out the initial set of articles and remove irrelevant studies. A Python script [21] was developed to automatically apply exclusion criteria where possible. Thereafter, the third author (M.Sc. 
in software engineering) checked the exclusion criteria on the remaining articles, considering each resource's title, abstract, keywords, and, where required, the manuscript text. The results were double-checked by the first author (a Ph.D. candidate in software engineering). In case of disagreement in selecting or excluding an article, the second and fourth authors were asked to make the final decision based on the responses provided by the first and third authors. This led to discarding 2212 resources, leaving 484 papers to proceed to the next steps. The applied exclusion criteria are as follows: * EC1: _Duplicated resources_: Some articles were indexed by more than one digital library. We kept one copy of each duplicated paper. * EC2: _Non-English resources_: We removed manuscripts written in languages other than English. We observed that none of them provided a new public code smell dataset. * EC3: _Non-primary studies_: All secondary and tertiary studies were removed. We discussed them as related work to our SLR in Section 2. * EC4: _Full-text availability_: Articles whose full text was not available or that were shorter than two pages were removed. * EC5: _Irrelevant papers_: Articles focused on other software engineering topics rather than code smells. Our search string terms such as "code smell" and "dataset" only appeared in the related works section of these articles, or each term appeared in a different article section. For instance, the word "dataset" did not point to a code smells dataset in the paper. In the third step, two inclusion criteria were applied to find the relevant papers appropriate for a detailed study. We selected the articles that dedicated a section to describing their "code smell dataset." In addition, we chose those articles that implicitly explained their "code smell dataset" in the article's main text. To this aim, the contents of the papers obtained in the second step were jointly reviewed by the first and third authors, respectively a Ph.D. candidate and an M.Sc. graduate in software engineering. During the review, we extracted a set of information from each paper, including the dataset creation and validation process, studied projects, code smells, and the number of smelly and non-smelly samples, and saved it in a Microsoft Excel file. The completed Excel file was further used in our manual analysis to answer the research questions described in Section 1. Each selected article was also investigated and verified by the second and fourth authors of the paper, who have a Ph.D. in computer science and are experts in software engineering, to ensure the relevancy and usefulness of the selected papers and the obtained information. As shown in Figure 1, 42 articles were picked systematically at the end of this step. In the fourth step, backward and forward snowballing [22] was carefully performed on the candidate papers by the first and third authors. The snowballing process resulted in 31 new articles. It is worth noting that we included all types of resources, _i.e._, journals, conferences, workshops, and technical reports, to achieve a collection of relevant papers that is as comprehensive as possible. Finally, in the last step, the quality assessment was carefully performed on 73 papers by answering the questions about the quality of each paper and assessing the score computed for each article regarding these questions. The details of the study quality assessment are described in Section 3.4. 
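Section 3.3 mentions a Python script [21] that applies the exclusion criteria automatically where possible. The following is a minimal illustrative sketch of such a pre-filter; it is not the authors' actual script, the record fields are assumptions about what a digital-library export might contain, and EC5 (relevance) would still require manual review.

```python
# Illustrative pre-filter for EC1-EC4 (not the authors' script [21]).
# Record fields ("title", "doi", "language", "pages", "type",
# "full_text_available") are hypothetical and depend on the export format.
from dataclasses import dataclass

@dataclass
class Record:
    title: str
    doi: str
    language: str
    pages: int
    type: str                 # e.g. "primary", "secondary", "tertiary"
    full_text_available: bool

def apply_exclusion_criteria(records):
    kept, seen = [], set()
    for r in records:
        key = r.doi or r.title.strip().lower()
        if key in seen:                        # EC1: duplicates across libraries
            continue
        seen.add(key)
        if r.language.lower() != "english":    # EC2: non-English resources
            continue
        if r.type != "primary":                # EC3: secondary/tertiary studies
            continue
        if not r.full_text_available or r.pages < 2:   # EC4: full text / length
            continue
        kept.append(r)
    return kept
```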
### Quality assessment We used the following checklist to assess the credibility and thoroughness of the selected publications in the study quality assessment step. * Q1: _Is the dataset related to code smells?_ * Q2: _Is the dataset new?_ * Q2.1: _If not, do the authors add any new features to the dataset?_ * Q3: _Do the authors provide helpful information for their datasets?_ * Q3.1: _Do the authors mention the dataset construction and validation mechanisms?_ * Q3.2: _Do the authors mention the tool(s) used to create or validate the dataset?_ * Q3.3: _Do the authors mention the source projects of their dataset?_ * Q3.4: _Do the authors mention the number of each code smell in the dataset?_ * Q3.5: _Do the authors mention the amount of effort used to prepare the dataset?_ * Q4: _Is the dataset created by a novel approach, or does it have a considerable advantage?_ The answer to each question in our checklist was marked with "Yes," "No," or "Partially" for each primary study in the candidate list of studies. The subquestions were evaluated to determine the partial answers. The term "Partially" was used when some of the subquestions in the quality assessment checklist were answered with "Yes" and the others were answered with "No"; in such cases, the main question was marked as "Partially". We then scored the answers based on the following rules: "Yes" = 1, "No" = 0, and "Partially" = 0.5. For each candidate primary study, its quality score was computed by summing up the scores of the answers to all four questions. The scoring process was performed by the first and third authors, who jointly evaluated each article in the set of 73 candidate studies and used consensus to determine the final score. We categorized the quality level into High (score = 4), Medium (2 \(\leq\) score < 4), and Low (score < 2). The articles whose scores belonged to the high and medium levels were selected for in-depth analysis as our final primary studies. Articles that used other researchers' datasets or their own previous datasets, utilized datasets unrelated to code smells, or did not provide helpful information about their datasets received low scores and were removed from our repository. In the same way, articles whose datasets were created with an approach similar to those already in our collection received a low score and were eliminated; for instance, two articles [(23)], [(24)] are similar to S22, S4, and S5. Our search ended up with a total of 45 papers that highly contributed to the area of code smell datasets and validation mechanisms. We could retrieve only 25 datasets from the internet, indicating a relatively low number of publicly available code smell datasets. We analyzed all datasets manually or with simple Python scripts to extract the required information. Objective information extracted from the primary studies was also checked by three independent M.Sc. students in software engineering, in addition to the authors, to ensure the correctness of the results reported in Section 4. The Microsoft Excel file containing the data extracted during the article selection process is publicly available on Zenodo in reference [(25)]. ## 4. Findings and Results We describe our findings from the objective investigation of the primary studies found by the research methodology described in Section 3. The section begins with an overview of the reviewed articles to answer RQ1. 
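Before turning to the findings, the quality-scoring rule from Section 3.4 can be written compactly. The sketch below is illustrative only: it encodes the stated scoring and thresholding rules, but it is not the authors' actual procedure, and the dictionary-based answer encoding is an assumption.

```python
# Illustrative sketch of the Section 3.4 scoring rule: "Yes" = 1,
# "Partially" = 0.5, "No" = 0, summed over Q1-Q4, then mapped to
# High (= 4), Medium (2 <= score < 4), or Low (< 2).
ANSWER_SCORE = {"Yes": 1.0, "Partially": 0.5, "No": 0.0}

def quality_score(answers):
    """answers: mapping like {"Q1": "Yes", "Q2": "Partially", ...} for Q1-Q4."""
    return sum(ANSWER_SCORE[answers[q]] for q in ("Q1", "Q2", "Q3", "Q4"))

def quality_level(score):
    if score == 4:
        return "High"
    if 2 <= score < 4:
        return "Medium"
    return "Low"

# Example: a study answering Yes, Yes, Partially, Yes scores 3.5 -> "Medium".
example = {"Q1": "Yes", "Q2": "Yes", "Q3": "Partially", "Q4": "Yes"}
print(quality_level(quality_score(example)))
```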
### Overview This section aims to answer our first research question, _how many code smell datasets have been proposed by the software engineering community?_ To answer RQ1, we investigate the frequency and diversity of the proposed code smell datasets in existing primary studies. Table 2 shows the list of publications contributing to the code smells datasets obtained by our proposed resource selection process. We observe that 26 of 45 selected resources belong to conference articles, 18 papers belong to journal articles, and one resource is a book chapter. It concludes that the topic of the code smell dataset is covered by various publication types. Regarding the title of primary studies, only four papers, S3, S8, S10, and S11, directly point out the dataset term in their title. In other words, a few articles are dedicatedly studied code smell datasets. Table 3 shows the initial and final number of retrieved articles from each digital library. Google Scholar is not used as a primary library for searching and extracting resources. However, we found two articles that previous libraries have not indexed in the manual search and snowballing step. It is observed that the IEEE Xplore digital library hosts most (23 out of 45) of the primary studies about code smell datasets. \begin{table} \begin{tabular}{l l l l l} \hline Study & Title & First author & Type & Ref. \\ \hline **S1** & On the diffuseness and the impact on maintainability of code smells: a large-scale empirical investigation & F. Palomba & Journal & [(26)] \\ **S2** & A large-scale empirical study on the lifecycle of code smell co-occurrences & F. Palomba & Journal & [(27)] \\ **S3** & Landfill: an open dataset of code smells with public evaluation & F. Palomba & Conference & [(28)] \\ **S4** & Comparing and experimenting machine learning techniques for code smell detection & F. Fontana & Journal & [(29)] \\ **S5** & Code smell severity classification using machine learning techniques & F. Fontana & Journal & [(30)] \\ **S6** & Code smell prediction employing machine learning meets emerging Java language constructs & H. Grodzicka & Book chapter & [(31)] \\ **S7** & Detecting code smells using machine learning techniques: are we there yet? & D. Di Nucci & Conference & [(3)] \\ **S8** & The technical debt dataset & V. Learduzzi & Conference & [(32)] \\ **S9** & Code smell detection using multi-label classification approach & T. Guggulothu & Journal & [(33)] \\ **S10** & MLCQ: industry-relevant code smell data set & L. Mackayski & Conference & [(34)] \\ **S11** & Using code evolution information to improve the quality of labels in code smell datasets & Y. Wang & Conference & [(35)] \\ **S12** & Comparing heuristic and machine learning approaches for metric-based code smell detection & F. Pecorelli & Conference & [(36)] \\ **S13** & A support vector machine based approach for code smell detection & A. Kaur & Conference & [(37)] \\ **S14** & Experience report evaluating the effectiveness of decision trees for detecting code smells & L. Amorim & Conference & [(38)] \\ **S15** & Context-based code smells prioritization for prefactoring & N. Sae-Lim & Conference & [(39)] \\ **S16** & Bad-smell prediction from software design model using machine learning techniques & N. Maneerat & Conference & [(40)] \\ **S17** & Evaluating the accuracy of machine learning algorithms on detecting code smells for different developers & M. Hozano & Conference & [(41)] \\ **S18** & Competitive coevolutionary code-smells detection & M. 
Bousssaa & Conference & [(42)] \\ **S19** & A machine learning based ensemble method for anti-patterns detection & A. Budgez & Journal & [(43)] \\ **S20** & Finding bad code smells with neural network models & D. Kim & Journal & [(44)] \\ **S21** & Evaluation of machine learning approaches for change-proneness prediction using code smells & K. Kaur & Conference & [(45)] \\ **S22** & SMURF: a SVM-based incremental anti-pattern detection approach & A. Maiga & Conference & [(46)] \\ **S23** & BDTEX: a QQM-based Bayesian approach for the detection of anti-patterns & F. Khomh & Journal & [(47)] \\ **S24** & Reducing subjectivity in code smells detection: experimenting with the long method & S. Bryton & Conference & [(48)] \\ **S25** & Adaptive detection of design flaws & J Kreimer & Journal & [(49)] \\ **S26** & Classification model for code clones based on machine learning & J Yang & Journal & [(50)] \\ **S27** & Can I clone this piece of code here? & X.Wang & Conference & [(51)] \\ \hline \end{tabular} \end{table} Table 2. Articles investigated and reviewed in our SLR. ### Code smell datasets classification It is essential to crafting an abstract model to categorize and compare code smell datasets in a standard and fair scheme. This section answers our second research question, "_what are the common aspects of the code smell dataset anatomies?_" To this aim, the keywords of our search string that appeared in RQs 3 to 8 were extracted to find the features of code smell datasets for which we investigated the primary studies. The investigation of these features in the primary studies indicated that existing code smells datasets could be categorized and compared from five orthogonal aspects, including labeling, structure, data source, availability, and quality. After that, the first three authors manually analyzed the data in the Microsoft Excel file (Zhou et al., 2017) described in Section 3 to specify the categories in each aspect and find the number of studies in each category. Figure 2 shows the classification of the code smell datasets regarding the extracted aspects and categories. Each category includes several subcategories organized in a hierarchical form. The numbers inside the parenthesis in the leaves of the classification tree denote the number of existing datasets in that subcategory. We organized the technical review of code smell datasets introduced by the primary studies according to the proposed classification. The subsequent sections discuss each aspect in detail to provide answers to other research questions mentioned in Section 1. The most important aspect of code smells datasets in our classification is the 'labeling' aspect which describes the oracle creation and validation process used for each dataset. Three oracle creation techniques and three oracle validation mechanisms are observed regarding the dataset labeling process. Figure 2 shows that the oracles of most code smell datasets are created automatically and then validated manually by experts or no validation has been performed. Section 4.3 discusses the labeling process of existing datasets in detail. The second aspect that distinguishes code smell datasets is'structure'. We identified five subcategories related to this aspect including supported languages, smell types, severity levels, features, and instance ratio. Regarding programming languages, 43 out of 45 primary studies have proposed a code smell dataset for Java programs. 
Therefore, the analytical results in our study are limited to the concepts of code smells and metrics in object-oriented programming languages. Section 4.4 investigates the structure of the code smell datasets in detail. The third aspect that differentiates code smell datasets is the 'source of data', describing the software systems used as benchmark projects to create dataset samples. We distinguish between open-source, industrial, and academic projects, as well as their combination, when discussing the data source of code smell datasets. \begin{table} \begin{tabular}{l l l l l} \hline \hline Study & Title & First author & Type & Ref. \\ \hline **S28** & An immune-inspired approach for the detection of software design smells & S. Hassaine & Conference & (S2) \\ **S29** & Tracking design smells: lessons from a study of God classes & S. Vaucher & Conference & (S3) \\ **S30** & A Bayesian approach for the detection of code and design smells & F. Khomh & Conference & (S4) \\ **S31** & Predicting maintainability of open-source software using gene expression programming and bad smells & S. Tarwani & Conference & (S5) \\ **S32** & An exploratory study of the impact of anti-patterns on class change- and fault-proneness & F. Khomh & Journal & (S6) \\ **S33** & DECOR: A method for the specification and detection of code and design smells & N. Moha & Journal & (S7) \\ **S34** & Developer-driven code smell prioritization & F. Pecorelli & Conference & (S8) \\ **S35** & Software code smell prediction model using Shannon, Rényi, and Tsallis entropies & A. Gupta & Journal & (S9) \\ **S36** & Application of machine learning algorithms for code smell prediction using object-oriented software metrics & M. Agnihotri & Journal & (S6) \\ **S37** & Toward a smell-aware bug prediction model & F. Palomba & Journal & (S6) \\ **S38** & Detecting bad smells with machine learning algorithms: an empirical study & D. Cruz & Conference & (S6) \\ **S39** & Beyond technical aspects: how do community smells influence the intensity of code smells? & F. Palomba & Journal & (S6) \\ **S40** & An empirical study of the performance impacts of android code smells & G. Hecht & Conference & (S6) \\ **S41** & Detecting bad smells in source code using change history information & F. Palomba & Conference & (S6) \\ **S42** & An exploratory study of the impact of code smells on software change-proneness & F. Khomh & Conference & (S6) \\ **S43** & Detection of shotgun surgery and message chain code smells using machine learning techniques & T. Guggulothu & Journal & (S6) \\ **S44** & Deep learning based code smell detection & H. Liu & Journal & (S7) \\ **S45** & Detecting code smells using deep learning & A. Das & Conference & (S6) \\ \hline \hline \end{tabular} \end{table} Table 2 (continued). Articles investigated and reviewed in our SLR. \begin{table} \begin{tabular}{l l l l l} \hline \hline Library & Initial number & Final number & Journal/ book chapter & Conference \\ \hline IEEE Xplore & 65 & 23 & 4 & 19 \\ ACM & 821 & 4 & 0 & 4 \\ SpringerLink & 1329 & 8 & 6 & 2 \\ Scopus & 301 & 1 & 1 & 0 \\ ScienceDirect & 180 & 5 & 5 & 0 \\ Google Scholar & 0 & 4 & 3 & 1 \\ \hline Total & 2096 & 45 & 19 (42%) & 26 (58%) \\ \hline \hline \end{tabular} \end{table} Table 3. Number of retrieved resources by each digital library search engine. As shown in Figure 2, the majority of datasets have been built based on open-source software projects.
The details about the software systems used in the code smell datasets are discussed in Section 4.5. The fourth aspect that distinguishes code smell datasets is the 'availability' of a proposed dataset. Availability is an essential factor for researchers and practitioners who want to use or contribute to the datasets. It is observed that more than half of the code smell datasets are outdated or not publicly available, highlighting the need for public datasets in the field. The publicly available code smell datasets are discussed in Section 4.6. Finally, the fifth aspect in our classification is 'quality', describing which evaluation metrics have been used to assess the correctness of the proposed dataset. Section 4.7 compares the quality of existing code smell datasets regarding the reported evaluation metrics and reviews the advantages and disadvantages of each dataset. Figure 2. Classification of code smell datasets. ### Code smell datasets' labeling The third research question, _what are the code smell dataset creation techniques and validation mechanisms_, is answered by analyzing the labeling process of the code smell datasets in our primary studies. According to Figure 2, the labeling process in code smell datasets includes two phases: oracle creation and oracle validation. During oracle creation, a label specifying its smell type is assigned to each program entity (_e.g._, method or class). Entities with no labels are considered non-smelly or smell-free instances. In the oracle validation phase, the assigned labels are checked to ensure their correctness and to fix the false ones. The labeling process strongly affects the reliability and correctness of a code smell dataset, regardless of its structural properties, and thereby the development of code smell detection tools. Because real-world programs contain numerous source code entities, various oracle creation and validation methods have been proposed. #### 4.3.1 Oracle creation approaches Table 4 summarizes the contributions proposed in the primary studies regarding the construction and validation of their code smell datasets. It is observed that researchers have used three approaches to create oracles: manual, automatic (tool-based), and merging. The following are descriptions of each oracle creation approach: **Manual**: The manual approaches use experienced software developers and practitioners (called experts) to recognize code smells, such as those proposed in S10 and S17. Manually creating a large, high-quality dataset can be a time-consuming and expensive process, threatened by the experts' opinions and knowledge [8]. The results of manual labeling are typically validated during the oracle validation process by another group of experts to reduce the bias caused by the first group. **Automatic (tool-based)**: The automatic approaches use existing code smell detection and refactoring tools and do not rely on human experts. Automatic creation of a large code smell dataset, e.g., S8 and S11, requires less effort than the manual approach. However, a fully automated labeling approach invariably leads to many false-positive samples (refer to Section 4.3.2) [69]. Moreover, the types of smells are limited by the existing tools. In other words, automatic approaches cannot be used to identify new types of code smells. Human experts are often involved in the oracle validation phase to identify and remove false-positive samples as much as possible. S1 and S4 have employed human experts to filter out false-positive samples. The tool-based creation approach used by Boussaa et al.
in S18 differs from the other automatically created datasets. Instead of detecting code smells using tools, they created prototypes of artificial code smells. Since the artificially generated smells are different from the actual ones, they added some real smelly instances to their dataset to improve the naturalness of the data and achieve better results. The construction approach of some datasets, such as S13, has not been clearly described. **Merging**: In this technique, existing code smell datasets are combined to form a new dataset with more samples and smell types than the existing ones. During the merge process, some properties such as the order, place, and structure of samples may be changed to enhance representation. Moreover, a new round of validation may be applied to all samples to improve dataset quality. The authors of S7, S9, and S16 have used merging to create new code smell datasets. In S7 and S9, Fontana's dataset files (S4) have been merged to address some problems described in Section 5. In S16, the authors have collected seven datasets from previous works, which offer 27 design model metrics and seven code smells. We believe that merging is a promising approach to obtaining a larger dataset, although it requires proper validation to achieve a high-quality dataset. #### 4.3.2 Oracle validation mechanisms After constructing a code smell dataset, an important step is to validate it. The validation process directly affects the results obtained when evaluation metrics are computed for any code smell detection or prediction approach. The third column of Table 4 indicates the validation mechanism associated with each primary study. We found that the code smell datasets are primarily validated in three ways: manual validation by one or several experts, automatic validation, and a combination of the two. Moreover, some authors have not validated their datasets. Studies without a clear validation mechanism are denoted with the term "No validation" in the third column of Table 4. It is observed that 18 out of 45 datasets provide no validation. Experts are people with significant hands-on experience in detecting, classifying, and fixing code smells in software systems. Most manually validated code smell datasets have been validated by only one expert, threatening the reliability of the available datasets and increasing the bias toward one person's opinions. Our findings show that the experts who evaluated the code smell datasets are the authors of the datasets themselves, trained students, and professional developers. Mello et al. [70] have shown that reviewers' collaboration significantly increases the precision of smell identification. They have also shown that having previous knowledge of the reviewed module does not affect the precision of reviewers with higher professional backgrounds [70]. Therefore, involving more human evaluators, even if they are not highly experienced developers, is recommended when creating new code smell datasets. Automatic validation is used when dataset creators apply an automated mechanism to evaluate a dataset, e.g., using unsupervised learning [66] or the history of changes [35] in the refactoring process and different versions of the software. Both manual and automatic validations may be accompanied by a voting mechanism when three or more validators (experts or algorithms) are involved in making the final decision about smelly and non-smelly samples and the type of smell.
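As an illustration of how such a vote can be combined, the following minimal Python sketch (our own illustration, not taken from any of the primary studies) aggregates the labels assigned by several validators; the label names and the strict-majority rule are assumptions made for demonstration.

```python
from collections import Counter

def majority_vote(labels):
    """Combine the labels given by several validators (experts or tools) for one code entity.

    Returns the winning label when it reaches a strict majority of the votes,
    and None when no agreement is reached.
    """
    winner, votes = Counter(labels).most_common(1)[0]
    return winner if votes > len(labels) / 2 else None

# Hypothetical votes from three validators for the same class:
print(majority_vote(["god_class", "god_class", "non_smelly"]))    # -> god_class
print(majority_vote(["god_class", "long_method", "non_smelly"]))  # -> None (no majority)
```

A dataset following the 'Expert voting' mechanism in Table 4 would, in this spirit, keep a label only when such a vote yields a winner.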
The "voting" term has been added to the validation mechanisms in Table 4 for datasets validated by three or more validators. Hybrid validation had been only used by the authors of S11 [35]. Wang et al. in S11 have invented a hybrid technique based on code evolution information. They have automatically evaluated samples by analyzing two versions of a refactored software. Only smelly samples in the first version that are not smelly in refactored version are considered as the true sample of a code smell. Wang et al. have also performed manual evaluations to ensure the reliability of the results. As a result, regarding the quality of the validation process, S11 is one of the prominent studies. In contrast, many studies, such as S7-S9, S13-S16, and S20-S22, have not performed any validation activity. The validation process should be taken into account even for datasets that are created by merging existing ones since the base datasets are often validated under different conditions and are associated with different qualities. One may argue that the dataset validation mechanisms are primarily related to the dataset construction approaches. However, it is not true in general. For example, tool-based created datasets are often evaluated manually. Code smell dataset contributors need to distinguish between oracle creation and validation processes when introducing a new dataset in this field. The clarification of these concepts and serious attention to the validation process in the future make the empirical results more realistic than the current ones. \begin{table} \begin{tabular}{p{34.1pt} p{142.3pt} p{142.3pt}} \hline \hline Study & Labeling approach & Validation mechanism \\ \hline S1 & **Tool-based**: Using a simple detection tool to extract code smell candidates and then two authors validate them manually. Finally, they performed an open discussion to resolve possible conflicts and consensus on the detected code smells to ensure high recall. & Expert \\ S2 & **Tool-based**: A simple tool that discarded the classes/methods that surely do not contain code smells has been used. Then, two master students (_i.e._, the inspectors) individually analyzed and classified code elements of each system as true positive or false positive for a given smell. All the instances positively classified by both inspectors have been considered as real smells. The inspectors opened a discussion to resolve the disagreement and make a shared decision for the other instance. & Expert \\ S3 & **Manual**: The first author manually detected the smelly instance, and then another author validated the produced oracle to verify the results. & Expert \\ S4 & **Tool-based**: Using several smell detection tools as advisors. The oracle evaluation was performed by three M.Sc. students trained explicitly for the task. The students independently studied the code smell definitions and held a two-hour discussion about their opinions. & Expert \\ S5 & **Tool-based**: Using several smell detection tools as advisors. The labeling process is also supported by graphical code representations, like dependencies, calls, and hierarchy graphs. Neither the values of software metrics nor the number of advisors suggesting an instance were available during the evaluation to avoid biases. Three M.Sc. students performed the labeling process after being trained both theoretically and practically for the task. 
& Expert \\ S6 & **Tool-based**: The authors' tool, hvadetrics, was used for the initial filtering of samples from the dataset, helping the authors to conduct an in-depth analysis of the selected samples based on source code metrics. Code smell labeling was performed by the authors, software engineering students, and developers with approximately one year of professional experience. & Expert \\ S7 & **Merging**: The authors have merged Fontana's dataset files (S4) to reduce the balancing rate and make a dataset with more than one type of smell. & No validation \\ \hline \hline \end{tabular} * S8 (**Tool-based**): The authors have cloned the projects' repositories and iterated on each commit using PyDriller [71]. For each commit, the following actions are performed: the commit information in the Git log is retrieved using PyDriller [71]; the refactorings are classified using RefactoringMiner [72]; the code is analyzed with SonarQube [73] using the default quality model (Sonar way) to collect technical debt information; and code smells and anti-patterns are detected with Ptidej [74]. No validation is performed. * S9 (**Merging**): The authors have merged Fontana's dataset files (S4) to make a multilabel dataset. * S10 (**Manual**): The dataset provides unique and detailed insights related to the professional and academic backgrounds of the reviewers. All of the reviewers involved in the code smell assessment are actively employed in the software development industry. The majority of samples are gathered by developers that are neither students nor researchers. * S11: The authors have used code evolution information to detect false positive instances and improve the quality of labels. * S12 (**Tool-based**): The authors have used DECOR [57] to detect code smells and then manually evaluated the samples. * S13 (**Manual**): The training dataset (TDS) is a set of classes {C_j, j = 1, 2, ..., n} obtained from object-oriented systems, and each C_j is marked as smelly or not. Object-oriented metrics have been computed for each class of the TDS, which are used as attributes A_j for each class of the training dataset. An SVM classifier is used to detect new occurrences of code smells in the dataset. * S14 (**Tool-based**): The information about smells in a class has been derived from the S32 dataset. The CKJM [75] and POM [76] tools were used to calculate 18 and 44 metrics, respectively, for each class. Both tools calculate some of the same metrics, but the authors decided to keep both versions since each tool calculates metrics differently and thus produces different values. It is essential to observe that some code smells are more related to some metrics than others; therefore, the models used for code smell detection give more importance to some metrics than others. \begin{tabular}{l l l} \hline \hline Study & Labeling approach & Validation mechanism \\ \hline S22 & **Manual**: A set of classes \(\mathcal{C}_{k}\) has been derived from an object-oriented system that constitutes the training dataset. Each \(\mathcal{C}_{k}\) is labeled as a smelly class (_e.g._, blob) or not. A classifier has been trained on the dataset and used to identify code smells in new samples. After manually validating newly detected smelly samples, the correct ones were added to the training set.
& Expert \\ & **Manual**: The authors have asked four undergraduate students and three graduate students to identify occurrences of the three anti-patterns in the two programs. They independently combined the students' votes using majority voting. If at least three of the five students/pair considered a class an anti-pattern, they tagged it as a true occurrence, _i.e_, an instance of the anti-pattern. & Expert \\ & **Manual**: In this study, all three authors separately played the role of experts. Each method was independently inspected by each expert. They may only considered a method to be a long method in cases where they had a full match. & Expert \\ & **Tool-based**: The authors' own tool, IYC (it is your code), suggests potential design flaws then the user decides whether a flaw & \\ & **S25** & exists. Then, manually validated samples prepare an initial training set to train a decision tree. The learned model is then used to detect new instances, validate them manually, and add them to the training set. & Expert \\ & **Tool-based**: A clone detection tool detects a set of clones in the source code. The user marks some of these clones as true or false clones according to her/ his judgment and then submits these marked clones to FICA (filter for the individual user on code clone analysis) as a profile. FICA records the marked clones in its own database. FICA ranks other unmarked clones based on the result of machine learning, which predicts the probability that each clone is relevant to the user. The user can adjust the marks on code clones and resubment them to FICA to obtain a better prediction. & Expert \\ & **Tool-based**: First, the authors have used a clone detector to identify several cloning operations performed in the version histories of existing software projects. Second, for each cloning operation acquired in the first step, they have determined the values of the 21 features of the cloning operation and whether it is harmful or harmless to form a training instance. Third, they have constructed the Bayesian network based on the training instances. & No validation \\ & **Tool-based**: The cracles were manually created by analyzing the two systems used in the experiments. Three of the authors & \\ & **S28** & independently re-validated the publicly available data in S30 and S33 to reduce the risk of classification errors. A candidate smell & Expert voting \\ & **was classified as an actual smell when two or three authors classified it as a smelly instance.** & \\ & **Manual**: The authors have asked two undergraduate students and two graduate students to detect occurrences of God class in the two systems. A pair of undergraduate students performed the task together. & Expert voting \\ & **Manual**: The authors have asked two undergraduate students and two graduate students to detect occurrences of the blob in the two programs. The student opinions have been independently combined such that if at least two of the three students/pair considered a class smelly, tagged as a correctly occurrence. & Expert voting \\ & **S31** & **Tool-based**: Eleven code smells have been identified in every class with the help of two tools, Deodoran [81] and Robusta. & No validation \\ & **Tool-based**: The authors have used their previous approach, DECOR (defect detection for correction) [57], to specify and detect anti-patterns. & No validation \\ & **Tool-based**: The authors have automatically applied the detection algorithms (DECOR [57]) on models of systems to detect suspicious classes. 
Detection algorithms may be applied in isolation or batch. & No validation \\ & **Tool-based**: The authors have built an automated mechanism that fetches daily commits from the repositories to a local copy. This allowed them to generate the list of classes modified during the workday. At this point, they performed the actual smell detection. The authors used DECOR [57] to identify instances of the blob, complex class, and spaghetti code and used HST [65], [79] to detect shotgun surgery. Afterward, they manually double-checked the smelly classes given by the automated tools to discard possible false positives. Finally, they sent in enables to the original developers to ask (i) whether s/he actually recognized the presence of a code smell and (ii) if so, rate its criticality using a Likert scale from 1 (very low) to 5 (very high). & Expert \\ & **Tool-based**: The data consisting of six bad smells are extracted for seven official releases of the Apache Abdera project using the Robusta [82] smell detection tool. & No validation \\ & **Tool-based**: Four code smells, feature entry, dispersed coupling, refused parent bequest, and God class, were identified using the Eclipse plugin SpiRIT [83]. & No validation \\ & **Tool-based**: The authors have relied on the smells detected by JCodeOdor [84] because it has been empirically validated, demonstrating good performances in detecting code smells and detecting all the code smells considered in the empirical studies. & No validation \\ & **In addition, JCodeOdor [84] computes the value of the intensity index on the detected code smells.** & \\ & **Tool-based**: The authors have combined the results of five automatic detection tools to create bad smells. They applied & \\ & **Three detection tools for each bad smell and computed an agreement voting between their results. An entity (class or method) is considered smelly if two or more tools detect it. & Automatically \\ & **Tool-based**: To collect smell instances, the authors have selected DECOR [57] because it has been employed in previous investigations on code smells, demonstrating exemplary performance in terms of precision, recall, and scalability. & No validation \\ & **Tool-based**: The authors have detected the three smells in the projects by performing a static analysis using the Paprika tool [85]. They have obtained a list of methods and classes concerned with the three code smells. Then, they manually corrected each smell. & Expert \\ & **Manual**: A Msc. student manually identified instances of the five considered smells in each system's snapshots. Starting from the definition of the five smells reported in the literature, the student manually analyzed each snapshot's source code looking for instances of those smells. Clearly, for smells having an intrinsic historical nature, he analyzed the changes performed by developers on different code components. A second Msc. student validated the produced oracle to verify that all affected code components identified by the first student were correct. & Expert \\ & **Tool-based**: The authors have used their previously proposed tool, DECOR [57], to specify and detect code smells. & No validation \\ & **Tool-based**: The researchers have assigned class instances labels with the help of detection rules proposed in the literature & Automatically \\ & **Cododor [84]. The method instances affected (positive) by the above rules have been compared with the formed clusters to validate the instances. 
If an instance produces the same cluster as its label, it is considered smelly. & Automatically \\ & **Tool-based**: Applying a smell-introducing refactoring to a well-designed application changes its internal structure. As a result, the refactoring leads to a bad or suboptimal design, _i.e._, code smells. The resulting smells can then be resolved by applying another refactoring that does nothing except undo the smell-introducing one. An example of a smell-introducing refactoring is to move a method from a class, \(sc\) (where the method should be placed), to another class, \(tc\). The move method operation, in this case, results in a feature envy smell. The smell is resolved by another move method operation that moves the method from \(tc\) back to \(sc\). & Automatically \\ & **Tool-based**: The proposed smell detection rules in [86, 87] have been applied to each set of related metrics to detect smell instances for each smell. The authors have used the iPlasma tool [88] to generate the eight required metrics. & No validation \\ \hline \hline \end{tabular} **RQ3:**_What are the code smell dataset creation techniques and validation mechanisms?_ **Summary for RQ3:**_Code smell datasets have been created with a manual or tool-based approach and validated manually with human experts, automatically with existing smell detection tools, or a combination of them. The number of studies that use tool-based approaches is increasing. Nearly 69% of the papers have used an automatic approach to create the code smell dataset, of which 77% have leveraged only one tool, threatening the reliability of these datasets. Manual, automatic, and hybrid validations have been used in 49%, 9%, and 2% of the code smell datasets, respectively, while 40% of the datasets have not been validated._ #### 4.3.3 Software tools used to create code smell datasets To answer the fourth research question, _which software tools are mostly leveraged to automatically create code smells datasets_, we extracted all tools mentioned in the primary studies. Various tools have been used to detect code smells and extract source code metrics when preparing code smell datasets. We found 31 different software analysis tools used in the code smell dataset creation process by the researchers of the primary studies. Figure 3 shows the software tools used to detect code smells and label smelly entities (_i.e._, methods, classes, and packages) with their related smell types for the purpose of dataset creation. DECOR [57] has been applied more than other available smell detection tools. The authors of two studies, S1 and S2, have developed their own specific tools to label dataset samples. The tools used by each primary study are shown in Table 5. It should be noted that this table shows the smell detection tools mentioned in the primary studies that used a tool-based method to create their datasets. Indeed, primary studies with manually created datasets are not reported in Table 5. At most, five different tools have been simultaneously used by three primary studies, S4, S5, and S38. Code smell datasets often contain additional information, such as source code metrics, for data samples. Figure 4 shows the reuse rate of the software tools used to compute code metrics corresponding to each sample in code smell datasets. Most tools have appeared only in one study. The POM1 (Primitives, Operators, Metrics) tool [89] has the maximum reuse rate by appearing in three studies.
POM is an extensible tool based on the PADL meta-model, which computes more than 60 metrics, including the well-known set of source code metrics by Chidamber and Kemerer [90]. The software tools that compute source code metrics may produce different results due to different definitions of metrics and calculation algorithms [38]. For example, the authors of S14 have used two different metric calculation tools, CKJM [75] and POM [89], to calculate 62 source code metrics for each class. Some metrics are calculated by both tools, but the authors decided to keep both versions since each tool calculates them differently and thus produces different values. Footnote 1: [https://wiki.ptidej.net/doku.php?id=pom](https://wiki.ptidej.net/doku.php?id=pom) In addition, each existing tool supports only a subset of source code metrics. For example, the CKJM tool [75] only calculates the Chidamber and Kemerer [90] object-oriented metrics, processing the bytecode of compiled Java files to increase the accuracy and performance of metric computation. For this reason, applying multiple tools to compute source code metrics is recommended to increase the diversity and accuracy of the metrics associated with each sample in code smell datasets. It is worth noting that most studies have not embedded source code metrics in their datasets. Source code metrics are used to detect code smells in different techniques, including machine learning, rule-based, and heuristic-based ones. Therefore, a decent code smell dataset is expected to be associated with various source code metrics. Figure 3: Tools used for creating code smell datasets. Figure 4: Tools used for extracting and computing source code metrics. ### Structural aspects of code smell datasets This section answers RQ5 by analyzing the structural properties of the code smell datasets proposed in the primary studies. According to Figure 2 in Section 4.2, 43 out of 45 datasets are based on the Java language. However, their other structural properties, such as the types of smells and the number of samples, are different. We observed that existing code smell datasets do not follow a standard structure and often contain different metadata, making it difficult to fairly compare the datasets. The data and metadata are typically saved into XLS, CSV, SQL, or TXT files with the required information about code smells, metrics, and projects. As an example, Figure 5 illustrates the structure and samples of the code smell dataset proposed in S10 [34]. The dataset contains 15 columns declaring various information about the available samples. It contains nearly 15,000 code samples of smelly and non-smelly instances. Some code smell datasets contain source code metrics, which
The proposed dataset in S42 contains 29 types of code smells, which is the highest among the primary studies. The datasets in S8 and S1 with 23 and 13 types of code smells are in second and third place, respectively. Figure 6 shows the available code smells and the numbers of supporting datasets. Smells with appearance frequency one has been shown as a separate category for better visualization. It is observed that God/ large class, long/ brain method, feature envy, data class, and spaghetti code are among the top five supported code smells. They form 43% of total smells covered in code smell datasets. One possible reason is the simplicity of manual detection and the number of available detection tools for these smells. On the other hand, a relatively large number of smells (nearly 16% of all smells listed in Table 6) are only supported by one dataset, demonstrating the lack of research on a large portion of code smells. \begin{table} \begin{tabular}{l l} \hline \hline Study & Supporting code smells \\ \hline \multirow{2}{*}{S1} & Class data should be private, complex class, feature envy, God class, inappropriate intimacy, lazy class, long method, long parameter list, message \\ & chains, middle man, refused bequest, spaghetti code, speculative generality \\ & Class data should be private, complex class, feature envy, blob, inappropriate intimacy, lazy class, long method, long parameter list, message \\ & chains, middle man, refused bequest, spaghetti code, speculative generality \\ & S3 Divergent change, shotgun surgery, parallel inheritance, blob, feature envy \\ & Gad class, data class, feature envy, long method \\ & Gad class, data class, foot class \\ & Gad class, data class, feature envy, long method \\ & Gad class, data class, foot class \\ & Duplexted code, blob, class data should be private, cyclomatic complexity, down casting, excessive use of literals, feature envy, functional \\ & decomposition, God class, inappropriate intimacy, large class, lazy class/freeloader, orphan variable or constant, refused bequest, spaghetti code, \\ & speculative generality, Swiss army knife, tradition breaker, excessively long identifiers, excessive return of data, long \\ & method, too many parameters/ long parameters list \\ & S9 Feature envy, long method \\ & S10 Blob, data class, feature envy, long method \\ & S11 Data class, God class, brain class, brain method \\ & Gad class, long method, spaghetti code, complex class, class data should be private \\ & S13 Blob, data class, feature envy, long method \\ & S14 Anti-stign, blob, class data should be private, complex class, large class, lazy class, long method, long parameter list, message chains, refused \\ & parent bequest, speculative generality, Swiss army knife \\ & S15 Blob, God class, data class, feature envy, schizophrenia class \\ & Lady class, feature envy, middle man, message chains, long method, long parameter lists, switch statement \\ & S17 Blob, data class, feature envy, long method \\ & S18 Blob, spaghetti code, functional decomposition \\ & S19 God class, feature envy \\ & Large class, lazy class, data class, parallel inheritance hierarchies, God class, feature envy \\ \hline \hline \end{tabular} \end{table} Table 6: Supported smell types by code smell datasets in primary studies. Figure 5: The structure (columns) and samples (rows) of the proposed dataset in S10. Figure 6: Frequency of smells and anti-patterns supported by available code smell datasets. 
As shown in Figure 6, there are many types of code smells apart from those introduced by Fowler and Beck [1] and Lanza and Marinescu [86]. Moreover, some smells and anti-patterns have several alias names. We consider them as one type where possible. The lack of standard taxonomy for code smell types leads to various smell types proposed by datasets. Sometimes, the introduced type cannot be considered a code smell or is named improperly by the authors. For example, the cyclomatic complexity (CC) has been listed as a code smell in the dataset proposed by Lenarduzzi et al. [32]. However, CC is defined as a quality metric indicating the complexity of a program but not a code smell [91]. Creating code smell datasets based on accepted references such as [1], [86] is encouraged. #### 4.4.2 Instance ratio and features Two other important structural aspects of code smell datasets are the number of instances and the number of metrics available in a dataset. Table 7 lists information about the size and metrics of the existing code smell dataset. A '\(-\)' symbol is used where the data are not available. Four datasets are balanced, _i.e._, their number of smelly and non-smelly instances are equal while most existing datasets (40 datasets) do not have any non-smelly instances. Regarding metrics, 21 out of 45 datasets do not propose any additional metrics while the remaining dataset contains metrics at one or more entity levels including, method, class, package, and project. It is observed that there are no associated metrics for a large portion of code smell datasets. However, the code metrics can be extracted from the project's source code. For instance, the column named 'link' in Figure 5 denotes the address of the source codes corresponding to the data sample in each row of the S10 dataset. Figure 7 shows the instance ratio and diversity distribution of smell types in the existing code smell datasets. Only the five frequent smells in Figure 6 have been illustrated in this figure. Moreover, datasets without any reported data about samples' diversity have not been shown. The numbers in the vertical axis are reported in percentages for better comparison. It is observed that the ratio of smelly and non-smelly instances is not equal in most datasets. Similarly, the frequency of smell types is different in most datasets. For example, God class is more frequent than other types of smells. One possible conclusion is that such distribution mostly follows the natural distribution of code smells in software systems. Nevertheless, code smell datasets are expected to support various smell types regardless of their diversity. 
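When such an imbalanced dataset is used to train a detector, the imbalance is usually compensated for, e.g., by weighting each class by its inverse frequency. The sketch below is our own illustration of that idea; the label names and counts are invented for demonstration and do not come from any of the surveyed datasets.

```python
from collections import Counter

def inverse_frequency_weights(labels):
    """Weight each class by n_samples / (n_classes * n_class_samples), so that rare
    smell types contribute as much to training as the abundant non-smelly class."""
    counts = Counter(labels)
    total = len(labels)
    return {label: total / (len(counts) * n) for label, n in counts.items()}

# Invented label distribution mimicking the imbalance discussed above:
labels = ["non_smelly"] * 900 + ["god_class"] * 80 + ["feature_envy"] * 20
print(inverse_frequency_weights(labels))
# -> roughly {'non_smelly': 0.37, 'god_class': 4.17, 'feature_envy': 16.67}
```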
\begin{table} \begin{tabular}{l l l} \hline Study & Number of smelly and non-smelly samples & Number of code metrics \\ \hline S1 & 17,350 smelly instances & 0 \\ S2 & 40,888 smelly instances & 0 \\ S3 & 243 smelly instances & 0 \\ S4 & 4 dataset files, 4 \(\times\) 420 instances & 61 metrics for class-level smells, 82 metrics for method-level smells \\ \hline \end{tabular} \end{table} Table 7: Number of instances and code metrics in the existing code smell datasets. ### Source of data The code smell datasets may build upon various software projects developed by academia, industry, and the open-source community. We extracted the names of all software projects mentioned in the primary studies to answer RQ6, _which open-source or closed-source software projects are widely used as data sources to create code smell datasets?_ Figure 8 shows the frequency of project types and programming languages used in creating code smell datasets.
Most researchers (nearly 89%) have prepared their datasets using open-source projects. However, few researchers (only 11%) have used industrial projects. We also did not find any set of academic projects explicitly developed for studying code smells in the selected datasets. The participation of industrial projects in code smell datasets is expected to increase in future research. Table 8 shows the list of projects used to create code smell datasets. As shown in the word cloud illustration in Figure 9, the top five frequently used projects are Xerces [92], Eclipse [93], Gantt Project [94], Argo UML
\begin{table} \begin{tabular}{l l l l} \hline Study & Source projects & Types & Language \\ \hline & Apache ant, apache cassandra, apache derbry, apache hadoop, apache hhase, apache hhive, apache incubating, apache ivy, apache karaf, apache lucene, apache nutch, apache pig, apache qpid, apache struts, apache & Open-source & Java \\ & wiket, apache acres, apall, autunes, eclipse core, elasticsearch, freemind, hibernate, haslddb, jboss, jedit, jfreechart, jbordraw, jal, jytl, sax & & \\ & Apache ant, apache cassandra, apache derbry, apache hadoop, apache hhase, apache hive, apache incubating, apache hive, apache karaf, apache lucene, apache pitch, apache pig, apache qpid, apache struts, apache & Open-source & Java \\ & apache ivy, apache karaf, apache lucene, apache cv, elasticsearch, freemind, hibernate, haslddb, jboss, jedit, jfreechart, jbordraw, jal, jytl, sax & & \\ & Apache ant, apache tomcf, jedit, android api (framework-opt-telephony) android api (framework-base), android api (framework-suspet), android api (stk), android api (tool-based), apache commons lang, apache cassandra, apache commons code, apache Derby, eclipse core, apache james mime+j, google guava, aardvark, and engine, apache commons io, apache commons logging, mongo db & Open-source & Java \\ & 54 & 7 projects from qualitas corpus & Open-source & Java \\ & 55 & 76 projects from qualitas corpus & Open-source & Java \\ & 56 & 281 github projects & Open-source & Java \\ & 77 & 4 projects from qualitas corpus & Open-source & Java \\ & Accumulombary, atlas, aurora, batik, beam, cocoon, commons bocl, commons beanutils, commons cli, & & \\ & commons codec, commons collections, commons configuration, commons daemon, commons dhep, commons cl, commons dhep, commons object, commons exec, commons jet, commons get, commons get, commons get, commons get, commons get, commons get, commons get, commons get, commons get, commons validator, commons vfs, fcix, hftp & Industrial & Java \\ & 57 & 76 projects from qualitas corpus & Open-source & Java \\ & 510 & 523 projects from GitHub & Industrial & Java \\ & 511 & Tomcat, jurby, netty & Open-source & Java \\ & 512 & Ant, argo um, cassandra, derby, eclipse, elasticsearch, hadoop, haslddb, incubating, nutch, apid, wicket, xeres & Open-source & Java \\ & 513 & Gantt project, xeres & Open-source & Java \\ & 514 & Eclipse, mylyn, argo uml, rhino & Open-source & Java \\ & 515 & Argo uml, jabref, jedit, mucommander & Open-source & Java \\ & 516 & \(-\) & Industrial & Java \\ & 517 & Gantt project & Open-source & Java \\ & 518 & Argouml, xeres, ant-apache, azureus & Open-source & Java \\ & 519 & Android opt telephony, android support, ant, lucene, tomcat, xeres, argo uml, jedit & \\ & & Android-universal-image-loader, bigbluebutton, bukkit, tojure, dropwizard, elasticsearch, junit, libgdx, & \\ & 520 & metrics, netty, nokogiri, okhtr, platform frameworks base, retroft, presto, rjava, spring-boot, spring, & \\ & framework, storm, xing & & \\ & 521 & Mobac, jajuk, googui, openrocket & Open-source & Java \\ & 522 & Argo uml, azureus, xeres & Open-source & Java \\ & 523 & Gentt project, xeres & Open-source & Java \\ & 524 & Apache commons cli & Open-source & Java \\ & 525 & Iyc, weka & Industrial & Java \\ & 526 & Gt, xbah, agh, agh, aghysorg & Open-source & C \\ & 527 & Xproj, jproof (Microsoft projects) & Industrial & C* \\ & 528 & Gantt project, xeres & Open-source & Java \\ & 529 & Eclipse jdt, xeres & Open-source & Java \\ & 530 & Gantt project, xeres & Open-source & Java \\ & 531 & Jds, jches, 
astroflission, ordrumbox & Open-source & Java \\ S32 & Eclipse, mylyn, argo uml, rhino & Open-source & Java \\ S33 & Argo uml, azureus, gantt project, log4j, lucene, nutch, pmd, quickuml, eclipse, xerces (two versions) & Open-source & Java \\ S34 & Apache mahout, apache cassandra, apache lucene, apache cayenne, apache pig, apache jackrabbit & Open-source & Java \\ S35 & Apache abdera & Open-source & Java \\ S36 & Jbordraw & Open-source & Java \\ S37 & Apache ant, apache camel, apache forrest, apache ivy, jedit, apache velocity, apache tomcat, apache lucene, apache poi, apache synapse & Open-source & Java \\ S38 & 20 systems & Open-source & Java \\ S39 & Apache mahout, apache cassandra, apache lucene, apache cayenne, apache pig, apache jackrabbit & Open-source & Java \\ \hline \hline \end{tabular} \end{table} Table 8: List of projects used to create code smell datasets. Figure 8: Benchmark project types and programming languages used in code smell datasets. Figure 9: Word cloud of benchmark projects used to construct code smell datasets. ### Code smell datasets' availability Table 9 shows all publicly available code smell datasets and their download links. We have provided a complete reference for researchers and practitioners who aim to work on developing new code smell detection tools or datasets. It is observed that only 25 of 45 (56%) articles have proposed a public dataset, and the remaining 20 datasets are not publicly available to download. The dataset links proposed in S3, S39, S40, and S41 were not accessible at the time of writing this SLR. It seems that these datasets are no longer supported by their authors or are no longer available for public usage. Hence, we marked the status of the S3, S39, S40, and S41 datasets as "obsolete" in Table 9. We conclude that the data used in a large portion of code smell detection research is not available, indicating that researchers are mostly not interested in publishing code smell datasets. It is also observed that the public code smell datasets have been published on different web repositories rather than well-known ones, such as Zenodo [99], Kaggle [100], and Figshare [101], which are specific to scientific datasets. This makes finding, indexing, updating, and versioning the datasets difficult. Currently, S7, S12, S34, and S37 are found in Figshare [101], S8, S9, S19, and S4+ are found in GitHub [102], and S10, S20, and S4+ are found in Zenodo [99].
**RQ7:** _What are the publicly available code smell datasets?_

**Summary for RQ7:** _Only 25 out of 45 (about 56%) of the code smell datasets are publicly available to be used by the research community. Therefore, most primary studies have created their own dataset, which is neither complete nor accurate. The dataset links of four primary studies are no longer publicly accessible. The available code smell datasets have been published on different websites, and there is no dedicated host for publishing code smell datasets. As a result, code smell dataset maintenance, versioning, and metadata indexing are poorly supported by the community._

\begin{table}
\begin{tabular}{l l l l l l}
\hline \hline
Study & Status & Projects & Supported smells & Key properties & Hosted at \\
\hline
S1 & available & 30 & 14 & 17,350 code smell instances; .csv files & authors' website \\
S2 & available & 30 & 14 & 40,888 code smell instances; .csv files & file-sharing service \\
S3 & obsolete & 20 & 5 & 243 code smell instances & Landfill (sesa.unisa.it) \\
S4 & available & 74 & 6 & 1,986 smelly and non-smelly instances; .csv files & authors' website \\
S5 & available & 76 & 4 & 63 class-level and 8 method-level metrics; .csv files & authors' website \\
S6 & available & 281 & 1 & Long method instances; PMD, iPlasma, Marinescu, and Designite output; .csv files & authors' website \\
S7 & available & 74 & 4 & 4 datasets of 840 instances each (140 smelly, 700 non-smelly); 61 class and 82 method metrics; .csv files & Figshare \\
S8 & available & 30 & 23 & 37,553 code smell instances; 184,027 technical debt items & GitHub \\
S9 & available & 74 & 2 & 445 multi-label smelly and non-smelly instances; 6 metrics; .csv files & GitHub \\
S10 & available & 523 & 4 & 4,391 samples; 3,291 code smell instances & Zenodo \\
S12 & available & 13 & 5 & -- & Figshare \\
S19 & available & 8 & 2 & 262 smelly samples; 11 metrics; .txt files & GitHub \\
S20 & available & -- & -- & -- & Zenodo \\
S22 & available & 3 & 4 & 3,162 instances; 50 metrics; .csv and .arff files & Ptidej website \\
S23 & available & 2 & 3 & 777 instances; 147 smell instances; DECOR output; metrics & Ptidej website \\
S30 & available & 2 & 1 & -- & Ptidej website \\
S32 & available & 4 & 12 & -- & Ptidej website \\
S33 & available & 11 & 4 & -- & Ptidej website \\
S34 & available & 9 & 4 & -- & Figshare \\
S37 & available & 11 & 6 & -- & Figshare \\
S38 & available & 20 & 4 & -- & Figshare \\
S39 & obsolete & 9 & 5 & -- & Ptidej website \\
S40 & obsolete & 2 & 3 & 60 code smell instances & authors' website \\
S41 & obsolete & 8 & 5 & -- & Ptidej website \\
S42 & available & 2 & 29 & -- & Ptidej website \\
S43 & available & 20 & 4 & -- & Ptidej website \\
S44 & available & 10 & 4 & -- & Ptidej website \\
\hline \hline
\end{tabular}
\end{table}
Table 9: Publicly available code smell datasets: their status, number of benchmark projects, number of supported smell types, key properties, and hosting location.

### Code smell datasets' quality, advantages, and disadvantages

This section discusses the answer to RQ8, _how is the quality of the existing code smell datasets regarding different evaluation metrics?_ We observed that the quality of the datasets has been assessed with different evaluation metrics, including accuracy, sensitivity, precision, and F1-score. The dataset validation mechanisms described in Section 4.3.2 aim at increasing code smell dataset quality. Theoretically, it is assumed that the labels of all samples in a dataset are true after validation is performed. Hence, the dataset is used as ground truth to evaluate code smell detection tools based on the evaluation metrics. Indeed, the primary purpose of the evaluation metrics is not to evaluate the quality of the dataset but to measure the performance of tools that identify code smells using the dataset. However, in practice, code smell datasets are not completely accurate due to the large number of instances, specifically the ones created automatically. In other words, the validation process most likely does not remove all false positive and false negative instances.
For this reason, evaluation metrics are used with a secondary goal of evaluating datasets, _i.e._, reporting the number of false positives and false negatives in a dataset [53], [61]. Code smell analysis in software systems is often formulated as a binary classification problem [29], [60], [103], and evaluation metrics are computed according to the confusion matrix [104]. The confusion matrix [104] terminology used in the primary studies for evaluating the results of code smell detection tools and the quality of code smell datasets is defined as follows:

* _True positive (TP)_. TP refers to the entities, _e.g._, methods or classes, that are smelly and considered smelly by a prepared dataset or tool.
* _True negative (TN)_. TN refers to the entities that are not smelly and also not considered smelly by a prepared dataset or tool.
* _False positive (FP)_. FP refers to the entities that are not smelly but considered smelly by a prepared dataset or tool.
* _False negative (FN)_. FN refers to smelly entities that are not considered smelly by a prepared dataset or tool.

It should be noted that the discussed evaluation metrics are not specific to code smell detection with machine learning. We observed that the primary studies used one or more of these metrics in their evaluations. Table 10 shows the evaluation metrics of the primary studies that reported at least one of the defined metrics. The F1 values are computed for all studies that have reported the Precision and Recall metrics. "NR" means that a specified metric has not been reported for that study. As discussed, the assumption about datasets is that they are ground truth and fully accurate. Therefore, the quality metric values in the primary studies have mostly been reported for tools considering such ground-truth datasets. Studies that reported evaluation metrics for their proposed tools, not their datasets, are marked with a "*" symbol. It is observed that 28 out of 45 studies reported at least one evaluation metric for their dataset or tools. Only two studies, S29 and S37, have reported the evaluation metrics for their dataset, with F1 scores of 87% and 80%, respectively. The labels of a dataset created by using only one smell detection tool are as accurate as the tool itself. Therefore, we have reported the corresponding tool performance metrics for such datasets, marked with a "+" symbol. Assessment of code smell datasets based on evaluation metrics indicates the presence of false labels in most of the datasets that are automatically labeled by code smell detection tools. Still, datasets created and validated manually can be considered highly precise compared to automatically created datasets.
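To make the relationship between the confusion-matrix counts and the reported metrics concrete, the following minimal sketch (in Python, which several SLR analysis scripts also use) derives accuracy, precision, recall/sensitivity, and F1 from TP, TN, FP, and FN. The function name and the example counts are illustrative only; they are not taken from any primary study.

```python
def evaluation_metrics(tp: int, tn: int, fp: int, fn: int) -> dict:
    """Compute the metrics of Table 10 from confusion-matrix counts.

    tp/tn/fp/fn follow the definitions above: entities (classes or methods)
    marked smelly/non-smelly by a dataset or tool, compared against the ground truth.
    """
    total = tp + tn + fp + fn
    accuracy = (tp + tn) / total if total else 0.0
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0  # also called sensitivity
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return {"accuracy": accuracy, "precision": precision, "recall": recall, "f1": f1}


# Hypothetical example: 77 smelly entities labeled correctly, 23 false positives,
# no false negatives (the numbers are made up for illustration).
print(evaluation_metrics(tp=77, tn=900, fp=23, fn=0))
```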
\begin{table}
\begin{tabular}{l l l l l}
\hline \hline
Study & Accuracy & Precision & Recall & F1 \\
\hline
S4\({}^{*}\) & 98.16 & NR & NR & 98.61 \\
S5\({}^{*}\) & 84.50 & NR & NR & NR \\
S6\({}^{*}\) & 97.79 & NR & NR & 98.33 \\
S7\({}^{*}\) & 76.00 & NR & NR & 10.00 \\
S9\({}^{*}\) & 97.50 & NR & NR & 97.60 \\
S12\({}^{*}\) & NR & 21.80 & 52.40 & 29.20 \\
S13\({}^{*}\) & NR & 80.30 & 76.20 & 78.20 \\
S14\({}^{*}\) & NR & 78.10 & 71.10 & 74.40 \\
S17\({}^{*}\) & 6.110 & NR & NR & NR \\
S18\({}^{*}\) & NR & 87.00 & 84.50 & 85.75 \\
S19\({}^{*}\) & NR & 42.50 & 66.50 & 51.90 \\
S20\({}^{*}\) & 99.09 & NR & NR & NR \\
S22\({}^{*}\) & NR & 83.27 & 80.57 & 81.90 \\
S26\({}^{*}\) & 70.00 & NR & NR & NR \\
S28\({}^{*}\) & NR & 73.24 & 100 & 84.85 \\
S29\({}^{*}\) & NR & 77.00 & 100 & 87.00 \\
S32\({}^{*}\) & NR & 69.50 & 93.00 & 79.50 \\
S33\({}^{*}\) & NR & 69.50 & 93.00 & 79.50 \\
S34\({}^{*}\) & NR & 69.50 & 93.00 & 79.50 \\
S35\({}^{*}\) & NR & 69.50 & 93.00 & 79.50 \\
S41\({}^{*}\) & NR & 76.20 & 76.60 & 76.20 \\
S42\({}^{*}\) & NR & 69.50 & 93.00 & 79.50 \\
S43\({}^{*}\) & 99.80 & 99.80 & 99.80 & 99.80 \\
S44\({}^{*}\) & NR & 43.53 & 85.50 & 57.69 \\
S45\({}^{*}\) & 97.39 & NR & NR & NR \\
\hline \hline
\end{tabular}
\end{table}
Table 10: Evaluation metrics reported in primary studies.

The quality of the code smell datasets can also be evaluated qualitatively, regarding their advantages and disadvantages. Table 11 compares the advantages and disadvantages of existing code smell datasets. The main advantage and disadvantage of the manual oracle creation approaches are the high reliability and the small size of the datasets, respectively, while for automatic or tool-based methods the opposite holds. Dataset validation is considered an advantage that supports quality. Moreover, only seven out of 45 primary studies support code smell severity in their proposed datasets, making them superior to the other datasets in this respect.
\begin{table}
\begin{tabular}{l l l}
\hline \hline
Study & Advantages & Disadvantages \\
\hline
S1 & (1) Numerous samples, (2) Various smell types, (3) High recall & (1) No metrics, (2) Many duplicate instances, (3) Only smelly instances \\
S2 & (1) Numerous samples, (2) Various smell types, (3) High recall & (1) No metrics, (2) Many duplicate instances, (3) Only smelly instances \\
S3 & (1) Two experts & (1) No metrics, (2) Small dataset, (3) Only smelly instances \\
S4 & (1) Multiple tools and experts, (2) High number of metrics & (1) Old projects, (2) Balanced dataset, (3) One kind of smell in each dataset \\
S5 & (1) Multiple tools and experts, (2) High number of metrics, (3) 4 levels of severity & (1) Old projects, (2) Balanced dataset, (3) One kind of smell in each dataset \\
S6 & (1) Multiple experts & (1) Balanced dataset \\
S7 & (1) Imbalanced dataset, (2) Multiple types of smells, (3) High number of metrics & (1) Small size (few samples), (2) No validation \\
S8 & (1) Various smell types, (2) Numerous samples, (3) 5 levels of severity, (4) 30 different software metrics & (1) Only smelly instances, (2) No validation \\
S9 & (1) Multilabel dataset & (1) Small size (few samples), (2) No validation \\
S10 & (1) Multiple expert developers, (2) 4 levels of severity & (1) No metrics, (2) Small dataset, (3) Low recall (only one detection tool) \\
S11 & (1) Removing false positive instances automatically & (1) Balanced, (2) Low recall (only one detection tool has been used) \\
S12 & (1) High precision & (1) No detailed information about the oracle creation \\
S13 & (1) Two metric computation tools & (1) No validation \\
S14 & (1) Multilabel dataset, (2) Various smell types, (3) High number of metrics, (4) Two metric computation tools & (1) Low recall (only one detection tool has been used), (2) Only smelly instances, (3) No validation \\
S15 & (1) Severity scores & (1) No validation \\
S16 & (1) Merging different datasets & (1) No information about smell detection or instance quantities, (2) No validation \\
S17 & (1) 40 expert developers, (2) High precision & (1) Small size (few samples and smell types) \\
S18 & (1) "Artificial" code smell examples, (2) Reducing the manual effort effectively & (1) No information about smell detection, (2) No validation \\
S19 & (1) Using a weighted vote over the reported answers, (2) Using a couple of smell detection tools (high recall) & (1) Small dataset, (2) Only smelly instances \\
S20 & (1) High recall & (1) No validation \\
S21 & (1) Various types of smells & (1) No validation \\
S22 & (1) Interactive labeling & (1) No detailed information about the dataset \\
S23 & (1) Seven experts (students) & (1) No metrics, (2) Small size (few samples and smell types) \\
S24 & (1) Several levels of severity, (2) High precision & (1) One kind of smell, (2) Small size (few samples and smell types) \\
S25 & (1) Interactive labeling, (2) Manual validation & (1) Small size (few samples and smell types) \\
S26 & (1) Interactive labeling & (1) One kind of smell, (2) Small size (few samples), (3) Only smelly instances \\
S27 & (1) 21 different software metrics & (1) One kind of smell, (2) No validation, (3) Only smelly instances \\
S28 & (1) Merging existing datasets & (1) Small size (few samples) \\
S29 & (1) Multiple experts (students) & (1) One kind of smell, (2) Small dataset, (3) Only smelly instances \\
S30 & (1) Multiple experts (students) & (1) One kind of smell, (2) Small dataset \\
S31 & (1) Various smell types, (2) 21 different software metrics & (1) No validation \\
S32 & (1) Various smell types, (2) Multilabel & (1) No metrics, (2) No validation, (3) Low recall (only one detection tool) \\
S33 & (1) Well-defined process & (1) No metrics, (2) No validation, (3) Low recall (only one detection tool) \\
S34 & (1) Several levels of severity, (2) Validated and ranked by the developers of the projects, (3) High reliability & -- \\
S35 & (1) Different versions of the same classes & (1) No validation, (2) Small size (few samples) \\
S36 & -- & (1) No validation, (2) No information about dataset size \\
S37 & (1) Several levels of severity & (1) Small size (few samples) \\
S38 & (1) Lightweight and relatively reliable validation & -- \\
S39 & (1) Several levels of severity & (1) No validation \\
S40 & (1) Smells in Android-based programs & (1) Small size (few samples) \\
S41 & (1) Two independent experts (students) & (1) No direct information about dataset size, (2) No metrics \\
S42 & (1) Smells as features (multi-label), (2) High number of smells & (1) No validation \\
S43 & (1) Automatic and relatively reliable validation & (1) No metrics \\
S44 & (1) Automatic and relatively reliable validation & (1) No direct information about dataset size \\
S45 & -- & (1) No validation, (2) No direct information about dataset size \\
\hline \hline
\end{tabular}
\end{table}
Table 11: Advantages and disadvantages of existing code smell datasets.

**RQ8:** _How is the quality of the existing code smell datasets regarding different evaluation metrics?_

**Summary for RQ8:** _The primary metrics used in evaluating the code smell datasets are accuracy, sensitivity, precision, and F1 score. F1 scores of 87% and 80% have been reported for the datasets in S29 and S37, respectively, while no metrics have been directly provided for the other datasets. The accuracy of datasets created by code smell detection tools is the same as the accuracy of the tool. We conclude that there is no fully accurate code smell dataset based on which code smell detection tools can be compared fairly. Moreover, no dataset is superior to the other ones in all code smell dataset aspects._

## 5. Notable Code Smell Datasets

This section answers RQ9, _what are the most comprehensive and adequate code smell datasets?_ To this aim, we present an in-depth review of the most notable code smell detection datasets listed in Table 9. The datasets discussed in this section are either the most cited by other researchers in the field [29], [30], propose a completely new labeling approach [13], [55], have a distinguished advantage [26], [27], [32], or address the problems of previous datasets [3], [34], [41]. Our review explains how code smells have been detected and to which extent the proposed datasets are valid. We end with specific guidelines, distilled from the work of researchers in the field, that facilitate the creation and validation of code smell datasets.

In the case of the manually created datasets, Fontana et al. [29] have developed a dataset containing 420 samples of four code smells from 76 Java projects in the Qualitas Corpus [98] by manual labeling. They asked a team of three M.Sc. students to identify the God class, long method, feature envy, and data class smells in the selected projects and then label them with the corresponding smell type. Later they added a severity level for each kind of smell, including four ordinal levels [30]. The smelly and non-smelly samples have been balanced with a proportion of 1/2 to be used in the machine learning task.
However, studies show that realistic code smell datasets are highly imbalanced by nature due to the low occurrence of most code smells [4]. Therefore, Fontana's dataset seems unrealistic, and it also contains very few samples, such that it is not suitable for learning-based smell detection techniques. Di Nucci et al. [3] have criticized Fontana's dataset [29] concerning the size, types, and ratio of smelly and non-smelly samples. They created a dataset containing more than one type of smell and more samples by merging Fontana's datasets and reported that the performance of code smell detection models is up to 90% lower than the one reported in [29]. Their study highlights the importance of code smell datasets concerning the number of samples and features, the types of smells, and the ratio of smelly and non-smelly code samples. Moreover, it indicates that automatic smell detection is not a trivial task, and achieving high performance is difficult. Madeyski and Lewowski [34] have stated that the projects in the Qualitas Corpus [98] are old since they have been primarily developed with Java 5. Indeed, the features added in newer versions of the Java programming language are not used in these projects. These features may lead to software smells that do not exist in the current datasets. They have introduced a dataset with 2,175 samples of 4 code smells, including God class, data class, feature envy, and long method, in 4 severity levels. The samples have been selected from industrial projects and labeled by 26 experienced software developers. Hozano et al. [41] have created a dataset containing 600 samples of four code smells using 40 experienced software developers with at least three years of experience. Both datasets suffer from a low number of samples and smell types. Some researchers have applied available tools and plugins to automatically create code smell datasets and expand different aspects of their datasets. Lenarduzzi et al. [32] have analyzed 33 Apache projects with the SonarQube [73] and Ptidej [74] tools and extracted 23 types of smells in 5 severity levels. They have also extracted 30 source code metrics corresponding to each sample in the dataset by using SonarQube [73]. Tarwani et al. [55] have analyzed 1,089 Java classes in 4 projects with JDeodorant [81] and Robusta [82] and recognized 11 smell types. Using an IntelliJ IDEA plugin, they extracted a set of source code metrics corresponding to each sample. Using various tools in automated approaches leads to more code smells being detected and increased accuracy. However, these tools typically have a low agreement and produce many false positives.

Footnote 1: [https://www.sonarqube.org](https://www.sonarqube.org)

Footnote 2: [https://marketplace.eclipse.org/content/robusta-eclipse-plugin](https://marketplace.eclipse.org/content/robusta-eclipse-plugin)

Wang et al. [13] have proposed an approach to automatically detect code smells and remove false-positive samples. The authors have used RefDiff [105] to determine the refactored version of different entities (classes and methods) in the code as a so-called contrastive version. The original version of the entities has been considered as the baseline. The authors have used the iPlasma tool [88] to identify code smells in both the original and contrastive versions. The smells found in the original version and refactored in the contrastive version have been labeled as smelly samples with their type.
The entities that are not refactored and not detected as smelly have been considered non-smelly samples and added to the dataset. The smells detected by iPlasma [88] in the original version and not refactored in the contrastive version are false positives and are discarded. However, due to the low number of code smells, the approach proposed by Wang et al. may result in a high false-negative rate and aggravate data imbalance. To alleviate these problems, some researchers have employed a hybrid tool-based method in which false positives are eliminated by human experts. Palomba et al. [26] have created one of the largest code smell datasets using a tool-based approach in which smells are initially identified by a tool, and then false-positive samples are removed manually. Their smell detection tool uses a set of rules with strict thresholds to minimize the false positive rate. However, such a dataset may still suffer from a high false-negative rate. They have labeled 17,350 samples of 13 code smell types from 395 versions of 30 different Java open-source projects. Unfortunately, the source code of 113 versions could not be found due to outdated links. Pecorelli et al. [4] have used this dataset to assess the role of data balancing in machine-learning-based code smell detection methods. In more recent work, Palomba et al. [27] have extended their previous dataset by increasing the number of samples to 40,888.

Employing automatic approaches must not lead to completely replacing the human effort required for creating reliable datasets with automatic methods. The results of automatic approaches must be deeply analyzed to discover how various code smells can be determined accurately. So far, the details provided in many primary studies about creating and validating datasets are insufficient and must be revisited in future code smell studies, particularly those specific to smell datasets. Manually creating and validating code smell datasets is a laborious task. None of the smells are indeed trivial to find, and most of them have a subjective interpretation. The rules used to detect the smells are often proxies for them. For example, in the case of the most popular smell in the datasets, _i.e._, God class, the detection strategy is based on the length of a source code file relative to all other source code files in the project. This strategy principally says nothing about clustering all functionality in one class. None of the papers really discuss the validity of this proxy. Other smells, such as the message chain or refused bequest, need far more complex analysis and understanding of the system to determine whether a snippet of code is affected by the smell. It means that any form of automation or manual analysis that relies on such proxies is almost always inaccurate. Using smell detection and dataset refinement in a cycle can help overcome such issues. Experts in the study by Fontana et al. [29] reached a set of guidelines determining the most relevant aspects for each code smell to guide the way labels are assigned. Lanza and Marinescu [86] observed that smelly code often exhibits (_i_) low cohesion and high coupling, (_ii_) high complexity, and (_iii_) extensive access to the data of foreign classes. These observations largely overlap with the guidelines declared by Fontana et al. [29]. Table 12 summarizes guidelines used to recognize and validate the most frequent smells in code smell datasets.
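As a concrete illustration of how such guidelines can be expressed as metric/threshold rules, the following is a minimal Python sketch in the spirit of the Lanza and Marinescu observations above. The metric names (wmc, atfd, tcc), the threshold values, and the class names are illustrative assumptions, not the actual rules or data of any tool or study reviewed here.

```python
# Hypothetical thresholds for a God class rule: high complexity AND extensive access
# to foreign data AND low cohesion. The values below are assumptions for illustration.
GOD_CLASS_THRESHOLDS = {"wmc": 47, "atfd": 5, "tcc": 0.33}


def is_god_class(metrics: dict) -> bool:
    """Logical composition of metric/threshold predicates for the God class smell."""
    return (
        metrics["wmc"] >= GOD_CLASS_THRESHOLDS["wmc"]        # high weighted method count
        and metrics["atfd"] > GOD_CLASS_THRESHOLDS["atfd"]   # accesses many foreign attributes
        and metrics["tcc"] < GOD_CLASS_THRESHOLDS["tcc"]     # low tight class cohesion
    )


# Candidate classes as produced by a metrics extractor (values are made up).
candidates = {
    "org.example.ReportManager": {"wmc": 81, "atfd": 12, "tcc": 0.10},
    "org.example.Invoice":       {"wmc": 12, "atfd": 1,  "tcc": 0.70},
}
smelly = [name for name, m in candidates.items() if is_god_class(m)]
print(smelly)  # -> ['org.example.ReportManager']
```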
We conclude that a set of rules on which there is consensus can be extracted by manually analyzing smelly code. The code smell detection tool used by Palomba et al. [61] relies on detection strategies similar to those defined by Lanza and Marinescu [86]. Each detection strategy is a logical composition of predicates, and each predicate is based on an operator that compares a metric with a threshold. Such detection strategies should be compared with those obtained by analyzing detection models, _e.g._, the results of interpreting machine learning models, and then refined to achieve reliable smell detection guidelines for manual validation tasks. Finally, technical documentation of the various aspects of code smell datasets is necessary to provide useful guidelines when constructing new datasets.

**RQ9:** _What are the most comprehensive and adequate code smell datasets?_

**Summary for RQ9:** _The dataset proposed by Palomba et al. [26] in S1 and the one proposed by Madeyski et al. [34] in S10, namely MLCQ, can be considered the most comprehensive available code smell datasets according to the different aspects, mainly the size and quality of data samples. It is observed that publicly available code smell datasets have not been well documented with respect to their structure, construction, and validation process._

## 6 Challenges, Implications, and Opportunities

Code smell datasets and detection tools face many challenges in reaching the completeness and reliability required by software engineers in the industry. In the previous sections, we discussed some of these challenges. This section investigates the answer to our last research question and focuses on the limitations of existing code smell datasets and the challenges of creating new ones. We describe six orthogonal dimensions of the most significant challenges and implications in current code smell datasets, which we observed while reviewing the primary studies. The possible solutions and opportunities are discussed to shed light on the direction of future research in this field. Each aspect of the code smell dataset, shown in Figure 2, can be improved by future works. This section ends with our suggestion of an "ideal" dataset for code smells.

\begin{table}
\begin{tabular}{l p{227.6pt}}
\hline \hline
Code smell/ anti-pattern & Detection guidelines \\
\hline
God/ large class, blob & * God/ large classes and blobs are large, * God/ large classes expose a large number of methods, * God/ large classes and blobs usually contain brain methods, * God/ large classes and blobs tend to access many attributes from many other classes, * God/ large classes and blobs tend to centralize the intelligence of the system. \\
\hline
Long/ brain method & * Long/ brain methods contain many lines of code, * Long/ brain methods tend to have many parameters or a long parameter list, * Long/ brain methods access many attributes and a large portion of the attributes declared in the enclosing class, * The number of variables accessed through an accessor is high in long/ brain methods, * Long/ brain methods tend to be complex.
\\
\hline
Feature envy & * Feature envy methods access many foreign attributes. \\
\hline \hline
\end{tabular}
\end{table}
Table 12: Guidelines used to recognize and validate the most frequent smells in code smell datasets.

### Imbalanced smell types

Existing datasets support only a limited and similar set of code smell types. Figure 10 shows which code smells from Fowler and Beck's catalog [1] are covered by the current code smell datasets found in the primary studies, along with their frequency. It is observed that code smells such as God/large class, long method, feature envy, and data class are supported by most datasets. However, six of the 22 code smells in Fowler and Beck's catalog are not supported by any dataset. These include data clumps, primitive obsession, alternative classes with different interfaces, incomplete library classes, temporary fields, and comments. Although Palomba's dataset [26] includes the comments code smell, they did not study this smell in their article. The lack of datasets for the above smells and the ones shown in Figure 6 in Section 4.4.1, which are only considered by one or two studies, prevents the development of comprehensive and reliable code smell detection tools. The available code smell detection or prediction tools only identify limited types of code smells with acceptable accuracy. It implies that only specific types of smells are expected to be found and fixed during the software maintenance phase. It is important to note that not all smells require the same detection effort. For instance, as we will discuss in Section 6.3, the long method is often easier to detect than the divergent change or message chain code smell. Indeed, the latter smells in this example require more samples than the former (the long method) to be detected accurately. As a result, software developers have to perform manual investigations or use several time-consuming and error-prone tools to find code smells. The dataset by Khomh et al. [56] contains 29 code smell types, which is the highest number of smell types covered by the current code smell datasets. It implies that merging existing code smell datasets into one dataset, while searching for samples of infrequent smells, is necessary to improve the supported types of code smells in a dataset.

### Imbalanced smelly and non-smelly samples

It has been reported that the number of smelly samples is significantly smaller than the number of non-smelly samples in real-world codebases [4]. The imbalanced data mainly affects the performance of learning-based approaches in detecting code smells. Some researchers manually balanced smelly and non-smelly samples and built a balanced dataset to address this challenge [29], [30]. However, manual balancing sacrifices the approach's generalizability and generates unrealistic results [3]. Pecorelli et al.
[4] have examined different resampling algorithms, including ClassBalancer, Resample, and SMOTE [106], and concluded that none of the resampling strategies could solve the imbalance problem in smell detection. It implies that synthetically generated samples that only work at the feature level (_e.g._, source code metrics) do not improve the effectiveness of learning-based smell prediction tools. A better way to generate synthetic smelly samples that are more realistic than resampled feature vectors is to convert non-smelly code snippets into smelly ones with program transformation techniques. For instance, a long method can be created by merging several single-responsibility methods into one method using the inline method refactoring. The resulting feature vectors, created by extracting the source code metrics from the refactored code, are more realistic than vectors generated by the resampling algorithms. Another potential solution to address the problem of imbalanced data is to switch the learning paradigm from supervised learning to semi-supervised and unsupervised learning mechanisms. Specifically, anomaly detection techniques are suitable if we define the problem of code smell detection as the problem of finding code snippets with anomalous or abnormal features. Such code snippets can be detected as outliers using an anomaly detection model such as isolation forest [107], local outlier factor (LOF) [108], or a deep auto-encoder [109], [110].

Figure 10: Code smells in Fowler and Beck's catalog [1] covered by the current code smell datasets.

In semi-supervised approaches, a large number of code snippets can be approximately labeled with their smells and then used in the final learning algorithm. Large-scale code smell datasets can be created with the described approaches and combined with the existing ones.

### Different detection efforts and smell occurrences

Manual detection of code smells is time-consuming and requires extensive knowledge and experience in software development. Subjects with more professional backgrounds tend to reach higher precision regardless of their familiarity with the code they review (Kang et al., 2017). The problem is exacerbated by the fact that different code smells require different detection efforts. For instance, detecting long method instances in a program can be performed by only looking at the method body. However, to detect the message chain code smell, the entire program call graph should be investigated, requiring more effort than detecting the long method. In addition, some types of code smells rarely occur in the code, while they can have very harmful effects on software quality, and their detection also matters (Kumar et al., 2017). According to Tsantalis et al. (2017), out of the 35,000 refactorings applied in the period 2011-2017, 50% belong to extract method, 25% correspond to extract class, and 16% correspond to move method. These refactoring operations respectively correspond to the long method, God class, and feature envy smells, which also frequently appear in code smell datasets. The most frequent code smells in the Xerces-J v2.7.0 project fixed by Mkaouer et al. (2017) are blob/God class, spaghetti code, feature envy, and data class, which are the same as those reported in Figure 10 of our SLR. As a result, a positive correlation is observed between the frequency with which datasets support a smell and its distribution in the projects.
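A minimal sketch of how this correlation can be checked is given below, relating how often each smell occurs across projects to how many datasets support it. The smell names and all counts are illustrative placeholders, not figures measured in this SLR or in the primary studies.

```python
# Correlation between hypothetical smell occurrence counts and the number of
# datasets that support each smell (all numbers are made up for illustration).
from scipy.stats import spearmanr

smells = ["long method", "god class", "feature envy", "data class", "message chain"]
occurrences_in_projects = [1750, 900, 560, 410, 35]   # hypothetical occurrence counts
datasets_supporting = [31, 34, 28, 22, 4]             # hypothetical dataset counts

rho, p_value = spearmanr(occurrences_in_projects, datasets_supporting)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f})")  # a positive rho supports the observation
```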
It should be noted that a low occurrence rate of a code smell should not lead to ignoring its impact on software quality. Indeed, code smell datasets are expected to support such less common smells. For these reasons, the manually created datasets mainly suffer from both a low number of samples and a low number of supported smell types, specifically for those smells that rarely occur. Researchers have used automatic smell detection tools to address this problem. However, such automatically created datasets suffer from serious reliability issues, including high false positive and false negative rates, equal severities, limited smell types, and algorithmic bias. Moreover, dataset creation methods should allow new smell types to be added dynamically and effectively, which is not provided by current automatic approaches. Using multiple code smell detection tools, program version history, and code clone information reduces the false positive rate of automatically generated code smell datasets. Nevertheless, manual oracles are still required to achieve reliable code smell detection tools, especially for new code smells. Studies show that even developers with little professional background can perform collaborative identification with high precision (Kang et al., 2017). The information produced by developer actions in response to code smells, _e.g._, refactoring and remodularization in public code repositories such as GitHub (Kumar et al., 2017), speeds up the manual labeling process. For example, comments on merging pull requests or the last comments on the issues which led to closing the issue often point out the change that was applied to the code. Advanced NLP techniques can be used to detect those comments which denote the identification or refactoring of code smells and to add the code with the corresponding smell to a dataset.

### Different smell importance

Not all code smells are equally dangerous to the quality of the system (Kumar et al., 2017). For instance, the message chain negatively affects the testing and fault localization activities, while the data clumps do not have such destructive effects (Kumar et al., 2017). This fact implies that it is necessary to create datasets containing information about the smells' impact on quality attributes. According to a recent survey by Lacerda et al. (2017), limited empirical evidence about the impact of code smells on software quality attributes has been provided by the research community. Determining the effects of code smells on different quality attributes such as reusability, understandability, and modularity requires both the frequency of code smells and the value of quality attributes. Computing accurate values for some quality attributes such as testability (Kumar et al., 2017) and Coverageability (Kumar et al., 2017) is not straightforward since they require dynamic analysis, which is a time- and resource-consuming process. Adding quality attribute information to the code smell dataset facilitates the measurement of smell importance for different software systems. The relationship between code smells and quality attributes can be discovered by performing a correlation analysis or regression analysis. For instance, a regression model may be used to map a vector containing the frequency of different code smells to the value of the QMOOD quality attributes (Kumar et al., 2017).
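The following is a minimal sketch of this regression idea, assuming synthetically generated data: per-system smell frequencies are mapped to a quality score standing in for a QMOOD attribute. The smell names, the data, and the choice of a random forest regressor are illustrative assumptions, not the method of any primary study.

```python
# Regression from per-system code smell frequencies to a quality attribute value.
# The data below is randomly generated for illustration; a real study would use
# measured smell frequencies and measured quality values.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
smell_names = ["god_class", "long_method", "feature_envy", "data_class", "message_chain"]
X = rng.poisson(lam=5, size=(200, len(smell_names)))                # smell frequencies per system
y = 10.0 - 0.8 * X[:, 1] - 0.5 * X[:, 0] + rng.normal(0, 1, 200)    # synthetic quality score

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
print(f"R^2 on the training data: {model.score(X, y):.2f}")
# model.feature_importances_ is then used for the ranking step described next.
```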
A feature importance analysis (Kumar et al., 2017) is then applied to rank the smells by their importance in predicting the value of a specific quality attribute. A prerequisite of such a meta-analysis is to focus on creating code smell datasets for the more critical types of smells. Code smells that are not covered by the existing smell datasets should be prioritized over the other smells, such as long method and God class, when creating new datasets. Future code smell datasets are expected to provide information about the importance of smell types for different aspects of code quality as golden references. This information enables the development of software tools that can identify the smells affecting specific tasks such as fault localization or fault prediction (Kumar et al., 2017).

### Different smell severity

Similar to the difference in the criticality of smell types, instances of the same type also have different intensities (Kumar et al., 2017). For example, two Java methods with 100 and 1,000 lines of code are both considered long methods (Kumar et al., 2017), while the resulting technical debt imposed by them is very unlikely to be equal. Typically, smell detection tools use thresholds on different features, _e.g._, related source code metrics, that denote a minimum value or lower bound for that feature when selecting code smells. The distance between the specific value of a feature and the corresponding threshold for a given code snippet can be used to determine the smell severity. Software engineers can effectively use the severity level of smells to prioritize refactoring activities and reduce software maintenance costs and technical debt. Unfortunately, most available datasets do not provide code smell severity levels or provide only two or three levels. As described in Table 11, only 7 out of 45 primary studies support code smell severity in their proposed datasets. It implies that code smell datasets are rarely suitable for an accurate estimation of technical debt and maintenance costs. The current code smell datasets should be improved to include information about the severity level of their smells to support smell prioritization. One solution to determine the severity of smells in a code smell dataset is to design and share online questionnaires with software developer communities and ask different developers to designate the severity of each smell. In the long run, this approach leads to a reliable code smell dataset that supports severity levels. Another solution is to find and analyze the number of refactored smells in existing codebases during the software development lifecycle (SDLC). The refactored smells can be considered as smells with high priority from the developers' viewpoints, and vice versa.

### Diversity in application domains, programming languages, and paradigms

The last but not least point about the challenges in code smell datasets is related to three factors: the application domain, the programming language, and the programming paradigm. Hall et al. [111] have stated that smells' impacts on systems depend on the application domain and development context. Our SLR reveals that the primary studies are limited to the concept of code smells and source code metrics in programming languages such as Java and C#, which are based on the object-oriented paradigm.
There are vast opportunities to create code smell datasets for other programming languages, specifically multi-paradigm languages such as C++, Python, and Go, and even for the new generations of the Java programming language [125] due to their newly added features. Although the current code smell datasets contain different software projects, they do not reflect any information regarding the application domain and development contexts. Fernandes et al. [126] have reported that code smell detection tools present different levels of accuracy in different contexts. Critical software systems, including business-critical, mission-critical, and safety-critical systems, have rarely contributed to the existing code smell datasets. In contrast, smells in such systems most likely have a severe impact on system behavior and performance. It implies that the available code smell datasets are not dependable for critical systems. Therefore, the next generation of code smell datasets is supposed to consider factors related to the application domain and development context, mainly the system criticality level [127], deployment and run-time environments, and stakeholders. For instance, when voting between the results of tools, the application domain can be used to weigh each tool according to the application [126].

To figure out an "ideal" dataset for code smells, we refer back to our proposed classification in Section 4.2. An ideal dataset contains fully validated labels, supports multiple programming languages, and covers various types of smells with a large number of samples and features. The ratio of samples in the dataset must approach their ratio in reality. The minimum required features are smell importance, severity levels, sample sequences (to trace smell co-occurrence), application domain, and source code metrics. Additional metadata such as traceability links (to the source code of each sample), review activities, and versions are helpful. The dataset is expected to be updated regularly under a data version control system that enables tracking different versions and preserves the true samples of the previous datasets/versions. Finally, the source of the data, _i.e._, the benchmark projects, must be as diverse as possible.

**RQ10:** _What are the limitations of the existing code smell datasets?_

**Summary for RQ10:** _Existing code smell datasets contain limited types of smells with few smelly samples and small sizes. In practice, the number of smelly samples is much smaller than that of non-smelly ones, which leads to imbalanced datasets. Future research must develop new code smell datasets that support the severity and importance of smells, various application domains, programming languages, and paradigms. Using online questionnaires and generating synthetic smelly instances by program transformation are recommended to assist the code smell dataset creation process._

## 7. Threats to Validity

Several issues may threaten the construction, internal, and external validity of this paper. The main construction validity threat is the suitability of the research questions and the categorization scheme used for answering these questions. To mitigate this threat, we mainly focused on questions that researchers and practitioners face when developing and evaluating a new smell detection tool or comparing the existing ones. The taxonomies used to classify the common aspects of code smell datasets were extracted from the taxonomies that appeared in the primary studies.
A few factors, such as quantitative results of code smell detection obtained from each dataset, may still be missing in our research. The internal validity of our paper may be threatened by an incomplete set of articles selected as primary studies. Search engines, search terms, and inclusion/exclusion criteria were carefully defined to ensure that our review is comprehensive and the result is repeatable. Another problem we faced during the article selection process was missing some relevant papers that we expected to find in our initial search. Indeed, when feeding our search string to the search engines, especially the IEEE Xplore search engine, we noticed that some important papers containing our keywords were not found despite being indexed in their libraries. For example, our search string contains some of the keywords in the paper describing a smell detection tool, DECOR [57]. However, when feeding our search string to IEEE Xplore, we noticed that it could not find the paper. We observed that similar problems have been reported by Landman et al. [128] when dealing with the IEEE Xplore search engine. Therefore, the problem was not specific to our search string. Fortunately, we could find such missed papers during the snowballing process. To mitigate the risk of losing any code smell dataset, we did not perform a strict quality assessment on the evaluation results of the primary studies, due to the low number of relevant publications. However, we filtered out the papers that did not introduce any new code smell dataset. In addition to digital search libraries, we looked for code smell datasets in specific web repositories, including Zenodo [99], Kaggle [100], Figshare [101], and GitHub [113], to mitigate the threat of an incomplete search. Another threat to the internal validity is the manual analysis performed on the primary studies to extract the required information for analyzing and comparing existing code smell datasets. The initial analysis was performed independently by the first and third authors, who are experienced in the software refactoring field. Tiny Python scripts were developed to help analyze the public datasets by extracting primary statistics from the available versions. Afterward, the objective information extracted from the primary studies was reviewed by three independent M.Sc. students in software engineering with a background in software smells and refactoring, and their corrections were applied accordingly. Finally, the second and fourth authors also reviewed the results to ensure the correctness of the collected results. We maintained the information extracted from the primary studies in a publicly available Excel worksheet [25] to facilitate the analysis process and the merging of results. The external validity of our SLR is threatened by factors that affect the generality of the reported results. We mainly relied on the statistics and results mentioned by the primary studies. Few code smell datasets appeared in more than one paper; we referred to them only if a selected primary study had not reported the required data. In some cases, the required data, _e.g._, the size of the dataset or the instance ratio, were not reported in any paper, and we could not find any relevant data. Indeed, our work is a meta-analysis of the datasets; we do not aim at producing additional knowledge other than that shared by the original authors. Some code smells and anti-patterns have different names and slightly different definitions regarding the terminology [9].
We merged them into one type of smell based on the catalog of Fowler and Beck [1] and the recent SLR by Sharma and Spinellis [9] to mitigate the sparsity of the studied smell types.

## 8 Conclusion

High-quality and large code smell datasets are required to construct decent automatic smell detection tools and to evaluate them. It is difficult to find and employ an appropriate code smell dataset to create new smell detection tools or evaluate the existing ones. A systematic literature review of the existing code smell datasets is proposed in this paper to answer ten research questions. Our research questions cover a wide range of information about code smell datasets, including their structure and formats, size, supported languages, supported smells, analysis tools, labeling approach, quality, and limitations. Code smell datasets are classified and studied in a standard model based on five different aspects in alignment with the research questions. A total of 45 code smell datasets are recognized and reviewed, of which only 25 datasets are publicly available. Code smell datasets are created and evaluated manually, automatically, and semi-automatically. Existing code smell datasets suffer from limitations in the number of samples, supported smell types, and diversity of projects. Most datasets cover the God class, long method, feature envy, and data class smells. At the same time, there is no dataset for six of the smells discussed by Fowler and Beck [1]. The main reason is that the frequency of code smells in real-world projects and the effort required to find them are different. We observe that 43 out of 45 primary studies contain smelly instances for the Java programming language, indicating the lack of datasets for other programming languages and paradigms. Reliable and large code smell datasets are mandatory artifacts in developing code smell detection and program refactoring tools, specifically for programs designed in the Software 2.0 paradigm [129], including learning-based software systems. With the current datasets, creating accurate learning-based tools is almost impossible. There are several opportunities for future work on code smell datasets. Generating and integrating code smell datasets automatically by composing different tools, or synthesizing smelly samples by program transformation techniques [130] (_e.g._, inverse refactoring [131]), can be considered new solutions and research lines in this area. One straightforward option to begin with is to merge the current datasets into a single database to increase the size and types of smelly samples. Another is to leverage techniques such as code similarity detection and transfer learning to build datasets for other programming languages with relatively low effort. Providing code smell datasets for new programming languages encourages researchers and practitioners to propose new and accurate smell detection tools with reliable evaluations.

## Compliance with Ethical Standards

This study has received no funding from any organization.

## Conflict of Interest

All of the authors declare that they have no conflict of interest.

## Ethical approval

This article does not contain any studies with human participants or animals performed by any of the authors.
2310.13095
Homotopical operadic calculus in positive characteristic
Algebraic operads provide a powerful tool to understand the homotopy theory of the types of (co)algebras they encode. So far, the principal results and methods that this theory provides were only available in characteristic zero. The reason is that operads carry an action of all the symmetric groups, whose representation theory becomes much more involved in positive characteristic. The goal of this paper is to extend these results and methods to a positive characteristic setting. We solve the main problems that appear in this new setting by using the notion of a quasi-planar cooperad as the building block of the theory.
Brice Le Grignou, Victor Roca i Lucio
2023-10-19T18:51:35Z
http://arxiv.org/abs/2310.13095v2
# Homotopical operadic calculus in positive characteristic

###### Abstract

Algebraic operads provide a powerful tool to understand the homotopy theory of the types of (co)algebras they encode. So far, the principal results and methods that this theory provides were only available in characteristic zero. The reason is that operads carry an action of all the symmetric groups, whose representation theory becomes much more involved in positive characteristic. The goal of this paper is to extend these results and methods to a positive characteristic setting. We solve the main problems that appear in this new setting by using the notion of a quasi-planar cooperad as the building block of the theory.

Key words and phrases: Homotopical operadic calculus, algebraic operads, Koszul duality, bar-cobar adjunctions, positive characteristic

2020 Mathematics Subject Classification: 18M70, 18N40, 18N55, 18N60

###### Contents

* 1 Differential graded modules, \(\mathbb{S}\)-modules, operads and cooperads
* 2 Homotopy theory of operads and quasi-planar cooperads
* 3 Algebras, coalgebras, and Bar-Cobar adjunctions
* 4 Model structure on coalgebras over a cooperad
* 5 A Quillen equivalence, \(\infty\)-morphisms and homotopy transfer theorems for algebras
* 6 Model structure on complete algebras over a cooperad
* 7 A Quillen equivalence, \(\infty\)-morphisms and homotopy transfer theorems for coalgebras
* 8 Linear duality
* A Adjoint lifting theorems, right and left transferred structures

## Introduction

Operads are algebraic objects which encode other types of algebraic structures: Lie algebras, associative algebras, Batalin-Vilkovisky algebras, Gerstenhaber algebras, \(\mathcal{A}_{\infty}\)-algebras, \(\mathcal{L}_{\infty}\)-algebras, and many more. The theory of algebraic operads provides us with a large set of methods which allow us to study the homotopy theory of differential graded (dg) \(\mathcal{P}\)-algebras if they are encoded by a dg operad \(\mathcal{P}\). We will call this set of methods _homotopical operadic calculus_. We refer to [12] for a detailed account of the theory of algebraic operads.

Let us explain these methods. Over a characteristic zero field, the category of dg \(\mathcal{P}\)-algebras always admits a model category structure where weak equivalences are given by quasi-isomorphisms and fibrations are given by degree-wise epimorphisms. This model structure can be obtained by transfer along the free-forgetful adjunction and presents the homotopy theory of dg \(\mathcal{P}\)-algebras. Cofibrant objects in this category do not admit an easy description, and the homotopy category is in general quite hard to understand. One way to get an easier description of this homotopy category is via bar-cobar adjunctions. To any operad \(\mathcal{P}\) one can always associate a cooperad \(\mathcal{C}\) which is Koszul dual in some sense. There are in fact two canonical choices for \(\mathcal{C}\): in certain cases, a minimal choice given by classical Koszul duality, or in general the one given by the operadic bar construction on \(\mathcal{P}\). For any choice of Koszul dual cooperad \(\mathcal{C}\), this duality is instantiated in the data of a twisting morphism \(\alpha:\mathcal{C}\longrightarrow\mathcal{P}\). And for any such twisting morphism \(\alpha\), there exists a bar-cobar adjunction

\[\Omega_{\alpha}\,:\,\{\text{conilpotent dg }\mathcal{C}\text{-coalgebras}\}\;\rightleftarrows\;\{\text{dg }\mathcal{P}\text{-algebras}\}\,:\,\mathrm{B}_{\alpha}\;,\]

relating conilpotent dg \(\mathcal{C}\)-coalgebras and dg \(\mathcal{P}\)-algebras.
This adjunction allows one to transfer (when it exists) the model category structure of dg \(\mathcal{P}\)-algebras to dg \(\mathcal{C}\)-coalgebras, and when the operad \(\mathcal{P}\) is given by the cobar construction \(\Omega\mathcal{C}\) of \(\mathcal{C}\), the bar-cobar adjunction becomes a Quillen equivalence. Fibrant-cofibrant dg \(\mathcal{C}\)-coalgebras are then exactly the quasi-free ones, which makes it possible to define \(\infty\)-morphisms and \(\infty\)-quasi-isomorphisms of dg \(\Omega\mathcal{C}\)-algebras; the latter are "invertible", and a homotopy transfer theorem holds for dg \(\Omega\mathcal{C}\)-algebras. If one wants to encode all coalgebras (say, coassociative, cocommutative, etc.) without a conilpotency condition, one cannot use cooperads; here one uses operads instead. In this case, given a twisting morphism \(\alpha:\mathcal{C}\longrightarrow\mathcal{P}\), there is a _complete_ bar-cobar adjunction between dg \(\mathcal{P}\)-coalgebras and complete dg \(\mathcal{C}\)-algebras. The notion of an algebra over a cooperad gives a new type of algebraic structures called _absolute algebras_. These are algebraic structures endowed with a meaningful notion of infinite sums of operations without presupposing an underlying topology. Here _complete_ means that the canonical topology induced by the absolute structure is separated. Most of the classical algebraic structures encoded by an operad (Lie algebras, associative algebras, \(\mathcal{L}_{\infty}\)-algebras, etc.) have an absolute analogue. Once again, one can transfer (when it exists) the model category structure of dg \(\mathcal{P}\)-coalgebras to dg \(\mathcal{C}\)-algebras using the complete bar-cobar adjunction. And if the operad \(\mathcal{P}\) considered is given by the cobar construction \(\Omega\mathcal{C}\) of \(\mathcal{C}\), then the complete bar-cobar adjunction becomes a Quillen equivalence. The aforementioned methods, as well as the key notion of an absolute algebra, were used by the second author in [11] to develop the integration theory of curved absolute \(\mathcal{L}_{\infty}\)-algebras, extending the work of [10] and [14]. Fibrant-cofibrant dg \(\mathcal{C}\)-algebras are also given by quasi-free objects: one can then define \(\infty\)-morphisms and \(\infty\)-quasi-isomorphisms for coalgebras. They were shown to be "invertible" as well, which makes them a useful tool to study the homotopy theory of \(\mathcal{P}\)-coalgebras. Finally, an analogous version of the homotopy transfer theorem also holds for dg \(\Omega\mathcal{C}\)-coalgebras. Note that coalgebras over an operad can also be interpreted from a properadic perspective, and \(\infty\)-morphisms coincide with those introduced by E. Hoffbeck, J. Leray and B. Vallette in [11]. Linear duality is a bridge between these two bar-cobar adjunctions, and thus between the homotopy theory of algebras and coalgebras. Indeed, the linear dual of a dg \(\mathcal{P}\)-coalgebra is always a dg \(\mathcal{P}\)-algebra, and the linear dual of a dg \(\mathcal{C}\)-coalgebra is a dg \(\mathcal{C}\)-algebra. These functors admit adjoints, and one obtains a _duality square_ made of four commuting adjunctions. This square can be promoted (under some conditions) into a square of Quillen adjunctions, using the model category structures constructed so far. This duality square allowed the second author to show that the homotopy theory of algebras and coalgebras are equivalent on objects with finite dimensional homology, see [11] for more details on these constructions.
Finally, let us mention that all these results also extend to non-augmented operads, which encode counital types of coalgebras, where on the other side one needs to consider curved algebras over a curved cooperad. **An overview of the positive characteristic case.** A (dg) operad is a collection \(\{\mathcal{P}(n)\}_{n\in\mathbb{N}}\) of (dg) \(\Bbbk[S_{n}]\)-modules, for all \(n\geq 0\), together with an additional structure that allows one to compose elements. The category of \(\Bbbk[S_{n}]\)-modules is fairly well-behaved when \(\Bbbk\) is a characteristic zero field: it is semi-simple, there is a combinatorial description of simple objects, and much more. In particular, the homotopy theory of dg \(\Bbbk[S_{n}]\)-modules behaves in a similar fashion to the homotopy theory of dg \(\Bbbk\)-modules. Things change drastically over a positive characteristic field. Understanding the representation theory of the symmetric groups is an active subject of research, with many open questions, see for instance [13]. The homotopy category of dg \(\Bbbk[S_{n}]\)-modules becomes very rich and hard to understand. A concrete consequence of these facts is that many statements in fail or at least, require extra hypothesis, over a positive characteristic field. The first obstruction is given by the fact that, in general, dg \(\mathcal{P}\)-algebras do not admit a model category structure transferred from dg modules. Under the assumption that \(\mathcal{P}(n)\) is a projective dg \(\Bbbk[\mathcal{S}_{n}]\)-module for all \(n\geq 0\), M. Spitzweck constructed in [14] a _semi-model_ category structure on dg \(\mathcal{P}\)-algebras, which is a weaker notion of a model category structure. B. Fresse then showed in [15] that the category of dg operad themselves admits a semi-model category structure and that any quasi-isomorphism between \(\mathcal{S}\)-projective dg operads induces an Quillen equivalence of semi-model categories. Parallel to these results, there is also the axiomatic point of view developed by C. Berger and I. Moerdijk in [1]. They consider _admissible_ operads, those that admit a transferred model structure from dg modules, and give conditions under which an operad is admissible (for instance, if it is cofibrant). These results already have breakthrough applications. As an example, understanding the homotopy theory of \(\mathcal{E}_{\infty}\)-algebras in positive characteristic allowed M. Mandell to construct algebraic models for \(p\)-adic homotopy types, see [13, 14]. The homotopy theory of operads themselves plays a crucial role in homotopical operadic calculus. In this direction, M. Dehling and B. Vallette developed in [13] a general method that gives cofibrant resolutions of the operadic structure _and_ of the action of the symmetric groups _at the same time_. Furthermore, they show that their cofibrant resolution, applied to an operad \(\mathcal{P}\), is in fact isomorphic to \(\Omega\mathsf{B}(\mathcal{E}\otimes\mathcal{P})\): here \(\Omega,\mathsf{B}\) are respectively the cobar and bar constructions at the operadic level, and \(\mathcal{E}\) is the Barratt-Eccles of [10]. This gives a clear indication that the Barratt-Eccles operad provides a universal way to construct resolutions that also take into account the action of the symmetric groups. In a different direction, L. Brantner, R. Campos and J. Nuiten develop in [1] a theory of divided power operads, in order to encode divided power algebras. In particular, they endow these categories with semi-model category structures. 
This allows them to give point-set models of partition Lie algebras as defined in [1], where they were shown to encode formal moduli problems over a positive characteristic field. Finally, let us mention several results in the direction of classical Koszul duality which are valid over any ring [15, 16, 17]. **Main results.** The purpose of this paper is to generalize all the aforementioned methods of homotopical operadic calculus over any field. Let us mention first that the main motivation for developing these methods in positive characteristic is the article [11]. In _op. cit._, we provide a new point of view on formal moduli problems via the duality square constructed here together with the main results of [10]. Putting these two results together allows us to give a new proof of many of the main results concerning formal moduli problems, while generalizing the Koszul duality inherent to these types of results over a field of any characteristic. Another application is the paper by the second author about the integration theory of partition \(\mathcal{L}_{\infty}\)-algebras [14], which also extensively uses the results of this paper. We expect that they will find many more applications, as their analogues in characteristic zero have so far. From now on, let us assume we work over a field \(\Bbbk\). The main idea to bypass all the major difficulties that arise over a positive characteristic field is to introduce the notion of a quasi-planar dg cooperad. Informally speaking, it amounts to a dg cooperad whose underlying graded cooperad is non-symmetric/planar, and even if its differential can interact in non-trivial ways with the underlying action of the symmetric groups, this interaction is still "controlled" by a suitable filtration on the cooperad. Note that even if our results are valid in the more general unital/curved setting, we will state them in the augmented/dg setting for simplicity. **Definition** (Quasi-planar dg cooperad).: _Let \((\mathcal{C},d_{\mathcal{C}})\) be a conilpotent dg cooperad. It is quasi-planar if there exists a ladder of conilpotent dg cooperads_ \[\mathcal{C}^{(0)}\longrightarrow\mathcal{C}^{(1)}\longrightarrow\cdots\longrightarrow\mathcal{C}^{(l)}\longrightarrow\cdots\] _such that \(\mathcal{C}\) is the colimit of the diagram and such that the following conditions are satisfied._ 1. _For all_ \(i\)_, the underlying conilpotent graded cooperad_ \(\mathcal{C}^{(i)}_{\mathrm{gr}}\) _is planar, meaning that there exists a conilpotent graded non-symmetric cooperad_ \(\mathcal{C}^{(i)}_{\mathrm{pl}}\) _and a given isomorphism_ \[\mathcal{C}^{(i)}_{\mathrm{pl}}\otimes\mathbb{S}\cong\mathcal{C}^{(i)}_{\mathrm{gr}}\,\] _where_ \((-\otimes\mathbb{S})\) _denotes the left adjoint to the forgetful functor from conilpotent graded cooperads to conilpotent graded non-symmetric cooperads. Moreover, the filtration preserves this planar structure in the sense that the graded map_ \(\mathcal{C}^{(i)}\to\mathcal{C}^{(j)}\) _(for_ \(i<j\)_) is the image through the functor_ \(-\otimes\mathbb{S}\) _of a map_ \(\mathcal{C}^{(i)}_{\mathrm{pl}}\to\mathcal{C}^{(j)}_{\mathrm{pl}}\)_._ 2.
_For all_ \(i\)_, the restriction of the coderivation_ \(d_{\mathcal{C}^{(i+1)}}\) _to_ \(\mathcal{C}^{(i+1)}_{\mathrm{pl}}\otimes 1\) _factors through_ \[\mathcal{C}^{(i+1)}_{\mathrm{pl}}\otimes 1\longrightarrow\left(\mathcal{C}^{(i+1)}_{\mathrm{pl}}\otimes 1\right)+\mathcal{C}^{(i)}\hookrightarrow\mathcal{C}^{(i+1)};\] _in other words, the differential of_ \(\mathrm{gr}_{i+1}\mathcal{C}=(\mathcal{C}^{(i+1)}_{\mathrm{pl}}/\mathcal{C}^{(i)}_{\mathrm{pl}})\otimes\mathbb{S}\) _has the form_ \(d_{\mathrm{pl}}\otimes\mathrm{Id}_{\mathbb{S}}\)_._ A first version of this notion is already present in [10]. Let us point out that the ladder can be indexed by any small ordinal \(\alpha\), not only by \(\omega\). Nevertheless, using Theorem A, we will show that any quasi-planar cooperad \(\mathcal{C}\) admits a _canonical_ quasi-planar ladder indexed by \(\omega\). There are two key observations. Firstly, for any dg operad \(\mathcal{P}\), the dg cooperad \(\mathsf{B}(\mathcal{E}\otimes\mathcal{P})\) is a quasi-planar cooperad. Secondly, any operad \(\mathcal{P}\) can be replaced by \(\Omega\mathsf{B}(\mathcal{E}\otimes\mathcal{P})\) up to quasi-isomorphism. If \(\mathcal{P}\) is \(\mathbb{S}\)-projective, this replacement does not change the underlying homotopy category. And if it is not, there was no meaningful homotopy category for dg \(\mathcal{P}\)-algebras to begin with. So, for all intents and purposes, one can restrict to operads of the form \(\Omega\mathcal{C}\), where \(\mathcal{C}\) is a quasi-planar conilpotent dg cooperad. Therefore, let us fix a quasi-planar dg cooperad \(\mathcal{C}\). We show that its cobar construction \(\Omega\mathcal{C}\) is a cofibrant object in the semi-model category of dg operads of [10]. Not only that, we also show that any dg operad of the form \(\Omega\mathcal{C}\), with \(\mathcal{C}\) quasi-planar, admits a canonical \(\mathcal{E}\)-comonoid structure, given by explicit formulas. **Theorem A** (Theorem 2).: _Let \(\mathcal{C}\) be a quasi-planar dg cooperad. There is a canonical morphism of dg operads_ \[\Delta_{\mathcal{E},\mathcal{C}}:\Omega\mathcal{C}\longrightarrow\mathcal{E}\otimes\Omega\mathcal{C}\] _which endows the dg operad \(\Omega\mathcal{C}\) with a left \(\mathcal{E}\)-comodule structure._ This theorem can be seen as the positive characteristic analogue of the following fact: any dg operad \(\mathcal{P}\) admits a canonical \(\mathsf{u}\mathcal{C}\mathrm{om}\)-comonoid structure. Notice that it is precisely this \(\mathsf{u}\mathcal{C}\mathrm{om}\)-comonoid structure that gives the canonical convolution curved \(\mathcal{L}_{\infty}\)-algebra structure on the hom-graded modules between types of coalgebras and types of algebras which are linked by Koszul duality. Here, it will produce a canonical convolution (curved absolute) _partition_ \(\mathcal{L}_{\infty}\)-algebra structure on the hom-graded modules between types of coalgebras and types of algebras, in the sense of [1], see also [11]. Exploring these convolution structures will be the subject of future work; we refer to [11]. By standard axiomatic arguments in the same spirit as in [1], the dg operad \(\Omega\mathcal{C}\) is both admissible and coadmissible, meaning that both dg \(\Omega\mathcal{C}\)-algebras and dg \(\Omega\mathcal{C}\)-coalgebras admit a transferred structure from dg modules.
Then we consider the bar-cobar adjunction relative to the universal twisting morphism \(\iota:\mathcal{C}\longrightarrow\Omega\mathcal{C}\) \[\Omega_{\iota}\,:\,\mathsf{dg}\ \mathcal{C}\text{-cog}\ \rightleftarrows\ \mathsf{dg}\ \Omega\mathcal{C}\text{-alg}\,:\,\mathrm{B}_{\iota}\.\] **Theorem B** (Theorems 6 and 7).: _There exists a combinatorial model category structure on the category of dg \(\mathcal{C}\)-coalgebras given by the following sets of maps:_ 1. _the set of weak-equivalences is given by morphisms_ \(f\) _such that_ \(\Omega_{\iota}(f)\) _is a quasi-isomorphism,_ 2. _the set of cofibrations is given by degree-wise monomorphisms,_ 3. _the set of fibrations is given by morphisms with the right lifting property with respect to acyclic cofibrations._ _Furthermore, this model structure promotes the bar-cobar adjunction into a Quillen equivalence._ This means that the homotopy theory of dg \(\Omega\mathcal{C}\)-algebras can be read using their Koszul dual dg \(\mathcal{C}\)-coalgebras. Here again, a great advantage is that fibrant-cofibrant dg \(\mathcal{C}\)-coalgebras are exactly quasi-free dg \(\mathcal{C}\)-coalgebras which are also images of dg \(\Omega\mathcal{C}\)-algebras. Notice that, in principle, the notion of a coalgebra over a cooperad in positive characteristic encodes types of _divided power_ conilpotent coalgebras. Nevertheless, since \(\mathcal{C}\) is quasi-planar, there are no divided power operations that appear at the level of algebraic structures. We define \(\infty\)-morphisms and \(\infty\)-quasi-isomorphisms of dg \(\Omega\mathcal{C}\)-algebras: we show that they are "invertible", therefore two dg \(\Omega\mathcal{C}\)-algebras are linked by a zig-zag of quasi-isomorphisms if and only if there exists an \(\infty\)-quasi-isomorphism between them. We also prove a version of the homotopy transfer theorem: given a dg \(\Omega\mathcal{C}\)-algebra \(A\), we construct an \(\infty\)-quasi-isomorphic dg \(\Omega\mathcal{C}\)-algebra structure on the homology of \(A\). This means that \(A\) can be replaced by its homology without losing any homotopical data. These methods should lead to applications in the study of formality questions over a positive characteristic field, in the same spirit as in [1]. Then we turn to dg \(\Omega\mathcal{C}\)-coalgebras, and generalize the results of [1] in this case. Consider the complete bar-cobar adjunction relative to \(\iota\) \[\widehat{\Omega}_{\iota}\,:\,\text{dg }\Omega\mathcal{C}\text{-cog}\ \rightleftarrows\ \text{dg }\mathcal{C}\text{-alg}^{\text{qp-comp}}\,:\,\widehat{\mathbb{B}}_{\iota}\,\] between the category of dg \(\Omega\mathcal{C}\)-coalgebras and the category of _qp-complete_ dg \(\mathcal{C}\)-algebras. Here, qp-complete refers to algebras which are complete for the _canonical_ quasi-planar ladder of the quasi-planar cooperad \(\mathcal{C}\). **Theorem C** (Theorems 10 and 11).: _There exists a combinatorial model category structure on the category of qp-complete dg \(\mathcal{C}\)-algebras given by the following sets of maps:_ 1. _the set of weak-equivalences is given by morphisms_ \(f\) _such that_ \(\widehat{\mathbb{B}}_{\iota}(f)\) _is a quasi-isomorphism,_ 2. _the set of fibrations is given by degree-wise epimorphisms,_ 3.
_the set of cofibrations is given by morphisms with the left lifting property with respect to acyclic fibrations._ _Furthermore, this model structure promotes the complete bar-cobar adjunction into a Quillen equivalence._ Again, this means that one can read the homotopy theory of dg \(\Omega\mathcal{C}\)-coalgebras by using dg \(\mathcal{C}\)-algebras. Here again, fibrant-cofibrant dg \(\mathcal{C}\)-algebras are much simpler as they are exactly given by quasi-free dg \(\mathcal{C}\)-algebras which are essentially the images of the dg \(\Omega\mathcal{C}\)-coalgebras under the complete cobar functor. The notion of a dg \(\mathcal{C}\)-algebra corresponds to the absolute version of divided power dg \(\mathcal{C}^{*}\)-algebras, and is also naturally endowed with divided power operations. We define \(\infty\)-morphisms and \(\infty\)-quasi-isomorphisms of dg \(\Omega\mathcal{C}\)-coalgebras too: we show that they are "invertible", therefore two dg \(\Omega\mathcal{C}\)-coalgebras are linked by a zig-zag of quasi-isomorphisms if and only if there exists an \(\infty\)-quasi-isomorphism between them. We also prove a similar version of the homotopy transfer theorem: given a dg \(\Omega\mathcal{C}\)-coalgebra \(V\), we construct an \(\infty\)-quasi-isomorphic dg \(\Omega\mathcal{C}\)-coalgebra structure on the homology of \(V\), which allows one to pass from a coalgebra \(V\) to its homology without losing homotopical data. Note that once the theory is established for operads of the form \(\Omega\mathcal{C}\), with \(\mathcal{C}\) a quasi-planar dg cooperad, then we can easily extend all the above results to any cofibrant dg operad \(\mathcal{P}\). Indeed, given a dg operad \(\mathcal{P}\), we construct what we call the _quasi-planar_ bar-cobar adjunctions \[\mathsf{dg}\ \mathcal{P}\text{-alg}\ \rightleftarrows\ \mathsf{curv}\ \mathsf{B}(\mathcal{E}\otimes\mathcal{P})\text{-cog}\qquad\text{and}\qquad\mathsf{dg}\ \mathcal{P}\text{-cog}\ \rightleftarrows\ \mathsf{curv}\ \mathsf{B}(\mathcal{E}\otimes\mathcal{P})\text{-alg}^{\text{qp-comp}}\,\] and in the case when \(\mathcal{P}\) is cofibrant, all the above results can be translated _mutatis mutandis_ to this setting. Finally, we extend the _duality square_ of [144] to this new setting. Recall that it intertwines the bar-cobar adjunction with the complete bar-cobar adjunction via a pair of duality adjunctions. Using the model category structures constructed so far, we prove that all the functors in this duality square are Quillen adjunctions. As in the characteristic zero case, this allows us to show that the homotopy category of dg \(\mathcal{P}\)-algebras with finite dimensional homology is equivalent to the homotopy category of dg \(\mathcal{P}\)-coalgebras with finite dimensional homology, where \(\mathcal{P}\) is a cofibrant dg operad. These results also open the door to a classical theory of Koszul duality in the positive characteristic setting, both at the operadic level and at the algebra/coalgebra level. Since quasi-planar dg cooperads seem to be the right notion, it would be interesting to understand, given an operad \(\mathcal{P}\), what is (and when does it exist) the _minimal_ quasi-planar dg cooperad \(\mathcal{C}\) such that there exists a quasi-isomorphism \(\Omega\mathcal{C}\xrightarrow{\simeq}\mathcal{P}\). In this case, one could think of \(\mathcal{C}\) as the Koszul dual of \(\mathcal{P}\), in the original sense of [111, 112].
Very similar ideas were already explained in [113, Section 2.7], where the authors also suggest a similar approach using their theory of _higher cooperads_. Finally, at the algebra level, one should be able to ask similar questions using the bar-cobar constructions of this paper, knowing that they have a reasonable homotopical behaviour. These questions are all beyond the reach of this paper. Nevertheless, let us mention that comparing the approach carried out in this paper with [113], and specially comparing quasi-planar cooperads with higher cooperads, shall be the subject of future work in [112]. ### Acknowledgements We would like to thank Guille Carrion, Aras Ergus, Geoffroy Horel, Joost Nuiten and Bruno Vallette for interesting discussions. We would also like to thank Bruno Vallette for useful comments on the draft version. ### Notations and conventions 1. _Universe_. We fix a universe \(\mathcal{U}\). A small set is an element of this universe. A large set is a subset of it. 2. _Ordinal products_. Let \(\omega\) be the first infinite ordinal which is the poset of natural numbers. Let \(\omega\cdot\omega\) be the ordinal whose underlying poset is the two times product of \(\omega\) equipped with the lexicographic order. In general, if \(\alpha\) and \(\beta\) are two small ordinals, we will denote \(\alpha\cdot\beta\) the small ordinal whose underlying poset is the product poset equipped with the lexicographic order. Given a small ordinal \(\alpha\) (and more generally a poset), one can view it as a category. Objects are indexed by the ordinal \(\alpha\), and there is only one non-trivial arrow between two objects \(i\) and \(j\) in \(\alpha\), if and only if \(i<j\). 3. _Ladders_. Let \(\mathsf{C}\) be a cocomplete category and let \(\alpha\) be a small ordinal. The data of an \(\alpha\)-ladder amounts to the data of a functor cocontinuous functor \[c:1+\alpha\longrightarrow\mathsf{C}\.\] This corresponds to a "ladder" diagram of objects in \(\mathsf{C}\) \[0\longrightarrow c(0)\longrightarrow c(1)\longrightarrow\cdots \longrightarrow c(i)\longrightarrow\cdots\] indexed by \(\alpha\), such that for every limit ordinal \(k\in\alpha\), we have \[c(k)\cong\underset{i<k}{\text{colim}}\ c(i)\.\] The colimit of the ladder diagram will be denoted by \[c(\alpha)\coloneqq\underset{i\in\alpha}{\text{colim}}\ c(i)\.\] Usually and depending on the category \(\mathbf{C}\), additional requirements will be made on the transition maps \(c(i)\longrightarrow c(i+1)\). For instance, they will be required to be monomorphisms or some kind of cofibrations. We considering ladder diagrams in categories of operads and cooperads, the notation \(c(i)\) for the image of \(i\in\alpha\) will be replaces by \(c^{(i)}\) in order to avoid confusions with the arity. 4. _Coladders._ Let \(\mathbf{D}\) be a complete category and let \(\alpha\) be a small ordinal. 
The data of an \(\alpha\)-coladder amounts to the data of a functor continuous functor \[d:(1+\alpha)^{\text{op}}\longrightarrow\mathbf{D}\.\] This corresponds to a "coladder" diagram of objects in \(\mathbf{D}\) \[0\longleftarrow d(0)\longleftarrow d(1)\longleftarrow\cdots\longleftarrow d (i)\longleftarrow\cdots\] indexed by \(\alpha\), such that for every limit ordinal \(k\in\alpha\), we have \[d(k)\cong\lim_{i<k}d(i)\.\] The limit of the coladder diagram will be denoted by \[d(\alpha)\coloneqq\lim_{i\in\alpha}d(i)\.\] Usually and depending on the category \(\mathbf{D}\), additional requirements will be made on the transition maps \(d(i+1)\longrightarrow d(i)\). For instance, they will be required to be epimorphisms or some kind of fibrations. 5. _Permutations and shuffles._ The symmetric group on \(n\) elements, given by the set of bijections between \(\{1,\cdots,n\}\) and itself, will be denoted by \(\mathbb{S}_{n}\). Elements in \(\mathbb{S}_{n}\) are called permutations. A permutation \(\sigma\) in \(\mathbb{S}_{n}\) is determined by its values \(\sigma(1),\cdots,\sigma(n)\). Let \(\sigma\) be a permutation in \(\mathbb{S}_{a+b}\). It is a \((a,b)\)-shuffle if it satisfies that \[\sigma(1)<\cdots<\sigma(a)\quad\text{and}\quad\sigma(a+1)<\cdots<\sigma(a+b)\.\] The set of all \((a,b)\)-shuffles in \(\mathbb{S}_{a+b}\) will be denoted by \(\text{Sh}(a,b)\). ## 1. Differential graded modules, S-modules, operads and cooperads In this section, we introduce the underlying categories of graded, pre-differential and differential modules over a field \(\Bbbk\) of any characteristic. We recall some results about the homotopy theory of dg \(\Bbbk[G]\)-modules, where \(G\) is a finite group. These results will be useful when considering dg \(\mathsf{S}\)-modules. Finally, we recall the definitions of planar and symmetric (co)operads. ### Graded modules, differential graded modules and pre-differential graded modules **Definition 1** (Graded modules).: Let \(\mathsf{gr}\)\(\Bbbk\)-mod be the category of graded \(\Bbbk\)-modules. Objects are given by families \(X=\{X_{n}\}_{n\in\mathbb{Z}}\) of \(\Bbbk\)-modules indexed by the set of integers \(\mathbb{Z}\), and morphisms by collections of linear maps indexed by \(\mathbb{Z}\) which respect the grading. It forms a closed symmetric monoidal category endowed with the tensor product \(X\otimes Y\) given by \[(X\otimes Y)_{n}:=\bigoplus_{i+j=n}X_{i}\otimes Y_{j}\,\] where the internal hom \([X,Y]\) is given by \[[X,Y]_{n}:=\prod_{k}[X_{k},Y_{n+k}],\] for every two graded \(\Bbbk\)-modules \(X,Y\) and every integer \(n\) in \(\mathbb{Z}\). An element \(x\) of \(X_{n}\) will be called an homogeneous element of degree \(n\) of \(X\). We denote the degree of a homogeneous element by \(|x|\). A degree \(n\) map from \(X\) to \(Y\) is just a homogeneous element of degree \(n\) of \([X,Y]\). Moreover, let \(f\in[X,X^{\prime}]_{n}\) and \(g\in[Y,Y^{\prime}]_{m}\). We denote \(f\otimes g\) the element of \([X\otimes Y,X^{\prime}\otimes Y^{\prime}]_{n+m}\) defined as \[(f\otimes g)(x\otimes y):=(-1)^{|g||x|}f(x)\otimes g(y)\] and \([f,g]\) the element of \([[X^{\prime},Y],[X,Y^{\prime}]]_{n+m}\) defined as \[[f,g](h):=(-1)^{|f||h}g\circ h\circ f.\] The category of graded \(\Bbbk\)-modules enjoys several categorical properties. 1. The (monadic) forgetful functor towards graded sets commutes with sifted colimits (it commutes with filtered colimits and reflexive coequalisers). 2. The tensor product commutes with colimits and with finite limits. 3. 
Subsequently, the functor \(X\mapsto X^{\otimes n}\) commutes with sifted colimits and with _finite_ cosifted limits. 4. Filtered colimits commute with finite limits (since both are computed in graded sets). 5. Coproducts commute with finite limits. 6. Products commute with finite colimits. Example 1.: However, cofiltered limits do not commute in general with finite colimits. Indeed, in the context where \(\Bbbk=\mathbb{R}\), let \(X\) be the sub \(\mathbb{R}\)-module of \(\mathbb{R}^{\mathbb{N}}\) spanned by sequences \((x_{0},x_{1},\ldots)\) such that \(\sum_{n}|x_{n}|<+\infty\), and let \(s:X\longrightarrow X\) be the shift endomorphism \[s(x_{0},x_{1},x_{2},\ldots)=(0,x_{0},x_{1},x_{2},\ldots).\] Moreover, let \(Y\subseteq X\) be the sub \(\mathbb{R}\)-module spanned by sequences whose sum is zero. It is stable through \(s\). Moreover, the following sequence is exact \[0\longrightarrow Y\hookrightarrow X\stackrel{{\Sigma}}{{\longrightarrow}}\mathbb{R}\longrightarrow 0\] where \(\Sigma\) denotes the sum. The towers \((Y,s)\), \((X,s)\) and \((\mathbb{R},\mathrm{id})\) fit into a commutative diagram whose columns are copies of this exact sequence. The limits of the first two rows are \(0\), while the limit of the last row is \(\mathbb{R}\). Thus, it cannot be the quotient of the second limit by the first one. **Definition 2** (Pre-differential graded modules).: We denote by pdg \(\Bbbk\)-mod the category of pre-differential graded (pdg) \(\Bbbk\)-modules. Objects are given by graded \(\Bbbk\)-modules \(\{X_{n}\}_{n\in\mathbb{Z}}\) equipped with a degree \(-1\) endomorphism \(d:X_{(-)}\longrightarrow X_{(-)-1}\), and morphisms by morphisms of graded modules which commute with the pre-differentials. The closed symmetric monoidal category structure on graded \(\Bbbk\)-modules induces a closed symmetric monoidal category structure on pdg \(\Bbbk\)-modules. For any two pdg \(\Bbbk\)-modules \(X,Y\), the graded tensor product \(X\otimes Y\) is equipped with the pre-differential \[d_{X\otimes Y}\coloneqq d_{X}\otimes\operatorname{Id}_{Y}+\operatorname{Id}_{X}\otimes d_{Y}\,\] and the internal hom \([X,Y]\) with the pre-differential \[d_{[X,Y]}\coloneqq[\operatorname{Id},\,d_{Y}]-[d_{X},\operatorname{Id}].\] **Definition 3** (Differential graded modules).: We denote by dg \(\Bbbk\)-mod the category of differential graded (dg) \(\Bbbk\)-modules. It is the full subcategory of pdg \(\Bbbk\)-modules \((X,d)\) such that \(d^{2}=d\circ d=0\). Remark 1.: A differential graded module is also referred to as a chain complex. It inherits from pdg \(\Bbbk\)-modules the structure of a closed symmetric monoidal category. One can check that if \(X\) and \(Y\) are dg modules, their tensor product \(X\otimes Y\) and their internal hom \([X,Y]\) are again dg modules. **Definition 4** (Spheres and disks).: Let \(n\in\mathbb{Z}\) be an integer. The \(n\)-th disk, denoted by \(D^{n}\), is the dg module given by \[D^{n}_{m}\coloneqq\left\{\begin{array}{ll}\Bbbk&\text{if }m=n-1,n\,\\ 0&\text{otherwise}\,\end{array}\right.\] where the differential \(D^{n}_{n}\longrightarrow D^{n}_{n-1}\) is the identity map of \(\Bbbk\). The \(n\)-th sphere, denoted by \(S^{n}\), is the dg module given by \[S^{n}_{m}\coloneqq\left\{\begin{array}{ll}\Bbbk&\text{if }m=n\,\\ 0&\text{otherwise}.\end{array}\right.\] **Definition 5** (Suspensions).: Let \(k\in\mathbb{Z}\) be an integer and let \(X\) be a graded (resp. pdg, dg) \(\Bbbk\)-module. The tensor product \(S^{k}\otimes X\) is referred to as the \(k\)-th suspension of \(X\) and will be denoted by \(s^{k}X\).
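For instance, unwinding the definition of the tensor product of graded modules, the \(k\)-th suspension simply shifts degrees: \[(s^{k}X)_{n}=(S^{k}\otimes X)_{n}=\bigoplus_{i+j=n}S^{k}_{i}\otimes X_{j}\cong X_{n-k}\,\] so that a homogeneous element of degree \(n\) in \(X\) has degree \(n+k\) in \(s^{k}X\); in particular \(s^{k}S^{n}\cong S^{n+k}\) and \(s^{k}D^{n}\cong D^{n+k}\).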
The category of dg \(\Bbbk\)-modules can be endowed with a combinatorial model category structure, determined by the following sets of maps: 1. the set of weak equivalences is given by quasi-isomorphisms; 2. the set of fibrations is given by degree-wise epimorphisms; 3. the set of cofibrations is given by degree-wise injections; 4. the set of generating cofibrations is given by the canonical inclusion maps \(S^{n}\longrightarrow D^{n+1}\) for all \(n\) in \(\mathbb{Z}\); 5. the set of generating acyclic cofibrations is given by the inclusion maps \(0\longrightarrow D^{n+1}\) for all \(n\) in \(\mathbb{Z}\). Remark 2. Since \(\Bbbk\) is a field, this model category structure is both the _injective_ model structure and the _projective_ model structure. This model category structure can be left-transferred along the forgetful-truncation adjunction to dg \(\Bbbk\)-modules in non-negative degrees. It can also be right-transferred via the truncation-forgetful adjunction to dg \(\Bbbk\)-modules in non-positive degrees. ### Finite group action on dg modules Let \(G\) be a finite group. The data of an action of \(G\) on a dg \(\Bbbk\)-module is equivalent to the data of a dg \(\Bbbk[G]\)-module, where \(\Bbbk[G]\) denotes the group algebra of \(G\). The functor \(\Bbbk[G]\otimes-\) is both left and right adjoint to the forgetful functor \(U_{G}\), yielding an adjunction triple \(\Bbbk[G]\otimes-\dashv U_{G}\dashv\Bbbk[G]\otimes-\). There is a bialgebra structure on \(\Bbbk[G]\), where the coproduct \(\Delta\) is induced by the set-theoretical diagonal of \(G\). It allows us to endow the category of dg \(\Bbbk[G]\)-modules with a monoidal category structure. The tensor product of two dg \(\Bbbk[G]\)-modules \(X,Y\) is given by the underlying dg module \(X\otimes Y\) endowed with the following dg \(\Bbbk[G]\)-module structure: \[\Bbbk[G]\otimes(X\otimes Y)\xrightarrow{\Delta\otimes\text{id}}\Bbbk[G\times G]\otimes(X\otimes Y)\xrightarrow{\cong}(\Bbbk[G]\otimes X)\otimes(\Bbbk[G]\otimes Y)\xrightarrow{\gamma_{X}\otimes\gamma_{Y}}X\otimes Y\,\] where \(\gamma_{X},\gamma_{Y}\) denote, respectively, the dg \(\Bbbk[G]\)-module structures of \(X\) and \(Y\). One can either right-transfer or left-transfer the model category structure on dg modules to dg \(\Bbbk[G]\)-modules. This gives two _different combinatorial model structures_. 1. The right-transferred structure is called the _projective model structure_. Fibrations are given by degree-wise epimorphisms and weak-equivalences by quasi-isomorphisms. 2. The left-transferred structure is called the _injective model structure_. Cofibrations are given by degree-wise monomorphisms and weak-equivalences by quasi-isomorphisms. **Definition 6** (Projective and injective modules).: Let \(X\) be a dg \(\Bbbk[G]\)-module. 1. \(X\) is said to be _projective_ if it is a fibrant-cofibrant object in the projective model structure. 2. \(X\) is said to be _injective_ if it is a fibrant-cofibrant object in the injective model structure. **Lemma 1**.: _Let \(X\) be a dg \(\Bbbk[G]\)-module._ 1. _If_ \(X\) _is injective, then its linear dual_ \(X^{*}\) _is projective._ 2. _If_ \(X\) _is projective, then its linear dual_ \(X^{*}\) _is injective._ Proof.: Linear duality defines a Quillen adjunction when the category on the left is endowed with the projective model structure and the category on the right with the opposite of the injective model structure. The two adjunctions \(\Bbbk[G]\otimes-\dashv U_{G}\dashv\Bbbk[G]\otimes-\) restrict to dg \(\Bbbk\)-modules and dg \(\Bbbk[G]\)-modules which are either in non-negative degrees or in non-positive degrees. 1.
The category of dg \(\Bbbk[G]\)-modules in non-negative degrees can be endowed with the _projective model structure_. Fibrations are given by degree-wise epimorphisms and weak-equivalences by quasi-isomorphisms. Moreover, cofibrations are degree-wise injections whose cokernel is degree-ewise projective. 2. The category of dg \(\Bbbk[G]\)-modules in non-positive degrees can be endowed with the _injective model structure_. Cofibrations are given by degree-wise injections and weak-equivalences by quasi-isomorphisms. Moreover, fibrations are degree-wise epimorphisms whose kernel is degree-wise injective. **Definition 7** (Quasi-free modules).: A dg \(\Bbbk[G]\)-module is _quasi-free_ if its underlying graded \(\Bbbk[G]\)-module is free, that is, in the essential image of the functor \(\Bbbk[G]\otimes-\) from graded k-module to graded \(\Bbbk[G]\)-modules. A quasi-free dg \(\Bbbk[G]\)-module \(X\) is degree-wise projective and degree-wise injective. Nevertheless, it might not be cofibrant in the projective model structure nor fibrant in the injective model structure. 1. If \(X\) is _bounded below_, it is cofibrant in the projective model structure since, up to some a finite degree translation, it is the image of a cofibrant object of dg \(\Bbbk[G]\)-\(\mathsf{mod}_{\geq 0}\). 2. If \(X\) is _bounded above_, it is fibrant in the injective model structure since, up to some a finite degree translation, it is the image of a fibrant object of dg \(\Bbbk[G]\)-\(\mathsf{mod}_{\leq 0}\). Remark 3.: To the best of our knowledge, these results are particular instances of results in [1]. As we were unable to find it, we refer to [1] for more details. **Proposition 1**.: _Quasi-free dg \(\Bbbk[G]\)-modules are stable through tensor product in the following sense: \(X\otimes Y\) is quasi-free whenever \(X\) or \(Y\) is quasi-free._ Proof.: Let \(X\) and \(Y\) be dg \(\Bbbk[G]\)-modules, and assume that \(Y\cong\Bbbk[G]\otimes Z\) as graded \(\Bbbk[G]\)-modules. The canonical morphism of dg \(\Bbbk[G]\)-modules \[\Bbbk[G]\otimes(U_{G}(X)\otimes Z)\longrightarrow X\otimes Y\] is an isomorphism, with inverse \[x\otimes g\otimes z\mapsto g\otimes(g^{-1}(x)\otimes z).\] Remark 4.: The above proposition holds if one replaces \(\Bbbk[G]\) by any Hopf algebra \(H\). **Proposition 2**.: _Let \(X\) be a quasi-free dg \(\Bbbk[G]\)-module. The canonical norm map_ \[\mathbb{N}_{X}:X_{G}\longrightarrow X^{G}\] _from the coinvariants to the invariants is an isomorphism of dg modules._ Proof.: Follows from the fact that \(X\) is quasi-free and that limits and colimits are computed degree-wise. Finally, the projective and injective model category structures are compatible in the following sense. **Proposition 3**.: _The category of dg \(\Bbbk[G]\)-modules together with the projective model structure is homotopically enriched-tensored-cotensored over the category dg \(\Bbbk[G]\)-module together with the injective model structure. For every injective cofibration (i.e: a degree-wise injection) \(f:A\mapsto B\) and every projective cofibration \(g:X\mapsto Y\), the morphism_ \[f\circ g:(A\otimes Y)\coprod_{A\otimes X}(B\otimes X)\mapsto B\otimes Y\] _is a projective cofibration. Furthermore, it is acyclic whenever \(f\) or \(g\) is._ Proof.: Let us suppose that \(g\) is a generating projective cofibration, given by the inclusion \(S^{k}\otimes\Bbbk[G]\longrightarrow D^{k+1}\otimes\Bbbk[G]\). 
In that case, one has a canonical isomorphism of diagrams between the commutative square and the commutative square Thus, the map \(f\circ g\) is isomorphic to the image through the functor \(-\otimes\Bbbk[G]\) of the map \[(U_{G}(A)\otimes D^{k+1})\coprod_{(U_{G}(A)\otimes S^{k})}((U_{G}(B)\otimes S ^{k}))\to(U_{G}(B)\otimes D^{k+1})\,\] which is a cofibration of dg \(\Bbbk\)-modules. Hence \(f\circ g\) is a projective cofibration. Now the set of morphisms \(g\) so that \(f\circ g\) is a projective cofibration is stable through pushouts, transfinite composition and retracts. Hence it contains all the projective cofibrations. Finally, if \(f\) is acyclic or \(g\) are acyclic, then \(f\circ g\) is acyclic too, as it is an acylic cofibration of dg \(\Bbbk\)-modules and since the model category on dg \(\Bbbk\)-modules is a monoidal model category. ### N-modules, planar operads and planar cooperads We consider N as a category where objects are natural integers and where there are only identity morphisms. **Definition 8** (dg \(\mathbb{N}\)-modules).: A dg \(\mathbb{N}\)-modules amounts to the data of a functor \[X:\mathbb{N}\longrightarrow\text{dg }\Bbbk\text{-mod}.\] The object \(X(n)\) is called the arity \(n\) part of \(X\). We denote dg \(\mathbb{N}\)-mod the category of dg \(\mathbb{N}\)-modules. Remark 5.: The category of dg \(\mathbb{N}\)-modules admits a canonical combinatorial model category structure, determined by the following sets of maps: 1. the set of weak equivalences \(f:X\xrightarrow{\text{\rm dg}}Y\) is given by arity-wise quasi-isomorphisms \(f(n):X(n)\xrightarrow{\text{\rm dg}}Y(n)\) for all \(n\geq 0\); 2. the set of fibrations \(f:X\twoheadrightarrow Y\) is given by arity-wise degree-wise epimorphisms \(f(n):X(n)\twoheadrightarrow Y(n)\) for all \(n\geq 0\); 3. the set of cofibrations \(f:X\mapsto Y\) is given by arity-wise degree-wise injections \(f(n):X(n)\mapsto Y(n)\) for all \(n\geq 0\). The planar horizontal product on dg \(\mathbb{N}\)-modules \(X,Y\) is given by the Day convolution product \[(X\oplus_{\text{pl}}Y)(n)\coloneqq\bigoplus_{k+l=n}X(k)\otimes Y(l)\.\] This endows dg \(\mathbb{N}\) with a symmetric monoidal category structure, where the unit is given by \(\Bbbk\) concentrated in arity \(0\). There is another monoidal structure given by the _planar composition product_ \[(X\circ_{\text{pl}}Y)(n)\coloneqq\bigoplus_{k\geq 0}X(k)\otimes Y^{\otimes_{ \text{pl}}k}(n)\.\] The unit for the composition is given by \(\mathcal{I}\), defined as follows \[1(n)\coloneqq\left\{\begin{aligned} & 0\text{ if }n\neq 1\,\\ &\Bbbk\text{ if }n=1.\end{aligned}\right.\] **Definition 9** (Planar dg operad).: A _planar dg operad_\(\mathcal{P}\) amounts to the data of a monoid \((\mathcal{P},\boldsymbol{\gamma},\boldsymbol{\eta})\) in the category of dg \(\mathbb{N}\)-modules with respect to the composition product. **Definition 10** (Augmented planar dg operad).: An _augmented planar dg operad_\(\mathcal{P}\) amounts to the data of a planar dg operad \((\mathcal{P},\boldsymbol{\gamma},\boldsymbol{\eta})\) equipped with a morphism of planar dg operads \(\boldsymbol{\nu}:\mathcal{P}\longrightarrow\mathcal{I}\) such that \(\boldsymbol{\nu}\circ\boldsymbol{\eta}=\text{id}\). Given an augmented planar dg operad \(\mathcal{P}\), we will denote by \(\overline{\mathcal{P}}\) the kernel of the augmentation map. 
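To make the planar composition product concrete, here is a small unfolding in arity \(2\), under the simplifying assumption that \(Y(0)=0\) (so that only finitely many summands survive): \[(X\circ_{\mathrm{pl}}Y)(2)\;\cong\;\big(X(1)\otimes Y(2)\big)\,\oplus\,\big(X(2)\otimes Y(1)\otimes Y(1)\big)\,\] since \(Y^{\otimes_{\mathrm{pl}}k}(2)=\bigoplus_{l_{1}+\cdots+l_{k}=2}Y(l_{1})\otimes\cdots\otimes Y(l_{k})\) vanishes for \(k>2\) when \(Y(0)=0\). For a planar operad, the structure map \(\boldsymbol{\gamma}\) on the summand \(X(2)\otimes Y(1)\otimes Y(1)\) records the plugging of two unary operations into a binary one.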
**Definition 11** (Planar dg cooperad).: A _planar dg cooperad_ \(\mathcal{C}\) amounts to the data of a comonoid \((\mathcal{C},\Delta,\epsilon)\) in the category of dg \(\mathbb{N}\)-modules with respect to the composition product. Given a planar dg cooperad \(\mathcal{C}\), we will denote by \(\overline{\mathcal{C}}\) the kernel of the counit map. **Definition 12** (Coaugmented planar dg cooperad).: A _coaugmented planar dg cooperad_ \(\mathcal{C}\) amounts to the data of a planar dg cooperad \((\mathcal{C},\Delta,\epsilon)\) equipped with a morphism of planar dg cooperads \(\boldsymbol{\mu}:\mathcal{I}\longrightarrow\mathcal{C}\) such that \(\epsilon\circ\boldsymbol{\mu}=\text{id}\). Remark 6.: All the definitions of this subsection make sense in the graded or the pre-differential setting. ### S-modules, operads and cooperads In this subsection, we deal with dg (resp. graded or pdg) \(\mathbb{S}\)-modules. **Definition 13** (dg \(\mathbb{S}\)-module).: Let \(\mathbb{S}\) be the groupoid whose objects are natural integers and whose morphisms are given by \[\hom_{\mathbb{S}}(n,m)=\left\{\begin{aligned} &\emptyset\text{ if }n\neq m\,\\ &\mathbb{S}_{n}\text{ if }n=m\.\end{aligned}\right.\] A _dg \(\mathbb{S}\)-module_ \(M\) amounts to the data of a functor \[M:\mathbb{S}^{\text{op}}\longrightarrow\text{dg }\Bbbk\text{-mod}\] from \(\mathbb{S}^{\text{op}}\) to dg modules. It corresponds to a collection of dg modules \(\{M(n)\}\) for \(n\geq 0\), where each \(M(n)\) is endowed with a (right) action of \(\mathbb{S}_{n}\). We denote by dg \(\mathbb{S}\)-mod the category of dg \(\mathbb{S}\)-modules. Remark 7.: We define analogously the categories of graded or pdg \(\mathbb{S}\)-modules. There is a diagram of adjunctions between the categories of dg \(\mathbb{N}\)-modules and of dg \(\mathbb{S}\)-modules. The functor \(-\otimes\mathbb{S}\) is given by \[(X\otimes\mathbb{S})(n)\coloneqq X(n)\otimes\Bbbk[\mathbb{S}_{n}]\,\] for all \(n\geq 0\). We will also denote by \(-\otimes\mathbb{S}\) the endofunctor of dg \(\mathbb{N}\)-modules that is given by the (co)free dg \(\mathbb{S}\)-module functor composed with the forgetful functor. One can either right-transfer or left-transfer the model category structure on dg \(\mathbb{N}\)-modules to dg \(\mathbb{S}\)-modules. This gives _two different combinatorial model structures_. 1. The right-transferred structure is called the _projective model structure_. Fibrations are given by degree-wise arity-wise epimorphisms and weak-equivalences by arity-wise quasi-isomorphisms. 2. The left-transferred structure is called the _injective model structure_. Cofibrations are given by degree-wise arity-wise monomorphisms and weak-equivalences by arity-wise quasi-isomorphisms. Remark 8. All the results of Subsection 1.2 can be translated to dg \(\mathbb{S}\)-modules, as they hold for any finite group \(G\) and since the homotopy theory of dg \(\mathbb{S}\)-modules is determined arity-wise. The _composition product_ \(\circ\) of two dg \(\mathbb{S}\)-modules \(M,N\) is defined as follows \[(M\circ N)(n)\coloneqq\bigoplus_{k\geq 0}M(k)\otimes_{\mathbb{S}_{k}}\left(\bigoplus_{i_{1}+\cdots+i_{k}=n}\operatorname{Ind}_{\mathbb{S}_{i_{1}}\times\cdots\times\mathbb{S}_{i_{k}}}^{\mathbb{S}_{n}}\big(N(i_{1})\otimes\cdots\otimes N(i_{k})\big)\right)\.\] The unit for the composition is given by \(\mathcal{I}\), defined as follows \[\mathcal{I}(n)\coloneqq\left\{\begin{array}{ll}0&\text{if }n\neq 1\,\\ \Bbbk&\text{if }n=1.\end{array}\right.\] They endow the category of dg \(\mathbb{S}\)-modules with a monoidal category structure.
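As an illustration of the role of the induced representations, consider the arity \(2\) part of \(M\circ N\) under the simplifying assumption that \(N(0)=0\): the only contributions come from \(k=1\) and \(k=2\), giving \[(M\circ N)(2)\;\cong\;\big(M(1)\otimes N(2)\big)\,\oplus\,\Big(M(2)\otimes_{\mathbb{S}_{2}}\operatorname{Ind}_{\mathbb{S}_{1}\times\mathbb{S}_{1}}^{\mathbb{S}_{2}}\big(N(1)\otimes N(1)\big)\Big)\,\] and since induction from the trivial subgroup is the free \(\Bbbk[\mathbb{S}_{2}]\)-module functor, the second summand is isomorphic, as a dg module, to \(M(2)\otimes N(1)\otimes N(1)\).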
**Definition 14** (dg operad).: A _dg operad_\(\mathcal{P}\) amounts to the data of a monoid \((\mathcal{P},\boldsymbol{\gamma},\boldsymbol{\eta})\) in the category of dg S-modules with respect to the composition product. **Definition 15** (augmented dg operad).: An _augmented dg operad_\(\mathcal{P}\) amounts to the data of a dg operad \((\mathcal{P},\boldsymbol{\gamma},\boldsymbol{\eta})\) equipped with a morphism of dg operads \(\boldsymbol{\nu}:\mathcal{P}\longrightarrow\mathcal{I}\) such that \(\boldsymbol{\nu}\circ\boldsymbol{\eta}=\operatorname{id}\). Given an augmented planar dg operad \(\mathcal{P}\), we will denote by \(\overline{\mathcal{P}}\) the kernel of the augmentation map. **Definition 16** (S-something dg operad).: Let \(\mathcal{P}\) be a dg operad. 1. It is called S-_projective_ if its underlying dg S-module is cofibrant for the projective model structure. 2. It is called S-_injective_ if its underlying dg S-module is fibrant for the injective model structure. 3. It is called S-_quasi-free_ if its underlying dg S-module is quasi-free. Remark 9. An S-projective dg operad is usually called an S-cofibrant dg operad in the literature. We adopt this non-standard terminology in order to be able to differentiate between S-projective and S-injective dg operads, which both have cofibrant underlying dg S-modules, but in different model category structures. **Definition 17** (dg cooperad).: A _dg cooperad_\(\mathcal{C}\) amounts to the data of a comonoid \((\mathcal{C},\Delta,\epsilon)\) in the category of dg S-modules with respect to the composition product. Given a dg cooperad \(\mathcal{C}\), we will denote by \(\overline{\mathcal{C}}\) the kernel of the counit map. **Definition 18** (coaugmented dg cooperad).: A _coaugmented dg cooperad_\(\mathcal{C}\) amounts to the data of a dg cooperad \((\mathcal{C},\Delta,\epsilon)\) equipped together with a morphism of planar dg cooperads \(\mu:\mathcal{I}\longrightarrow\mathcal{C}\) such that \(\epsilon\circ\mu=\operatorname{id}\). There is a strong monoidal structure on the functor \(-\otimes\mathbb{S}\) which yields two adjunctions that lift, respectively, the adjunction \(-\otimes\mathbb{S}\dashv U_{\mathbb{S}}\) and the adjunction \(U_{\mathbb{S}}\dashv-\otimes\mathbb{S}\) that relate dg \(\mathbb{S}\)-modules to dg \(\mathbb{N}\)-modules. Remark 10.: The adjunction \(-\otimes\mathbb{S}\dashv U_{\mathbb{S}}\) relating dg operads to planar dg operads is monadic since its right adjoint preserves coreflexive equalisers and is conservative. However, the other adjunction \(U_{\mathbb{S}}\dashv-\otimes\mathbb{S}\) is not a priori comonadic. Remark 11.: All the definitions of this subsection make sense in the graded or the pre-differential setting. ### Tree modules and conilpotent cooperads In this subsection, we briefly recall how operads are algebras over the tree monad and how conilpotent cooperads are exactly coalgebras over the tree comonad, both in the planar and in the symmetric case. These constructions can all be found in [11]. For a more detailed discussion about the point of view adopted here, see the forthcoming note [10]. **Planar tree endofunctor.** For every dg \(\mathbb{N}\)-module \(X\), one can define the _planar tree module_\(\mathbb{T}_{\text{pl}}(X)\) of \(X\), which is the dg \(\mathbb{N}\)-module given, for \(m\geq 0\), by \[\mathbb{T}_{\text{pl}}(X)(m)=\bigoplus_{t}t(X)\,\] where the sum is taken over the isomorphism classes of planar trees with \(m\) leaves. Let \(n\) be in \(\mathbb{N}\). 
We define the following sub-functors of the planar tree module: * The _reduced planar tree endofunctor_\(\overline{\mathbb{T}}_{\text{pl}}(X)\), given by the sum over all non-trivial planar trees; * The _\(n\)-leveled planar tree endofunctor_\(\mathbb{T}_{\text{pl},\leq n}(X)\), given by the sum over planar trees whose height is equal or lower than \(n\) (recall that the height of the trivial tree with no node is \(0\)); * The _\(n\)-weight planar tree endofunctor_\(\mathbb{T}_{\text{pl}}^{(\leq n)}(X)\), given by the sum over planar trees with \(n\) nodes or less. All these constructions are natural in \(X\) and define endofunctors of the category of dg \(\mathbb{N}\)-modules. **Planar operads.** The planar tree module endofunctor \(\mathbb{T}_{\text{pl}}\) admits a monad structure induced by the grafting of planar trees. **Proposition 4**.: _The category of algebras over the monad \(\mathbb{T}_{\text{pl}}\) is canonically isomorphic to the category of planar dg operads._ **Conilpotent planar cooperads.** The _reduced_ planar tree module endofunctor \(\overline{\mathbb{T}}_{\text{pl}}\) admits a comonad structure induced by partitioning planar trees. Furthermore, there is a fully faithful functor \[\text{Conil}:\text{dg}\ \overline{\mathbb{T}}_{\text{pl}}\text{-cg}\longrightarrow( \text{dg}\ \text{Cooperads}_{\text{pl}})_{\mathcal{I}_{\mathcal{I}}}\] from dg \(\overline{\mathbb{T}}_{\text{pl}}\)-coalgebras to coaugmented planar dg cooperads. **Definition 19** (Conilpotent planar dg cooperad).: Let \(\mathcal{C}\) be a coaugmented planar dg cooperad. It is _conilpotent_ if it belongs to the essential image of the functor \(\text{Conil}\) from dg \(\overline{\mathbb{T}}_{\text{pl}}\)-coalgebras to planar dg cooperads. We denote dg \(\text{Cooperads}_{\text{pl}}^{\text{conil}}\) the full sub-category of coaugmented planar dg cooperads spanned by conilpotent ones. Remark 12. The idea behind this definition is the following: a (non-counital) cooperad can also be described in terms of partial decomposition maps \[\Delta_{i}:C(n+k-1)\longrightarrow\mathcal{C}(n)\otimes\mathcal{C}(k)\,\] and it is conilpotent if and only if any iteration of these partial decompositions is eventually trivial. If this is the case, then the data of all the possible iterations is exactly encoded by the \(\overline{\mathbb{T}}_{\mathrm{pl}}\)-coalgebra structure. For every natural integer \(n\geq 1\), the comonad structure on \(\overline{\mathbb{T}}_{\mathrm{pl}}\) restricts to \(\overline{\mathbb{T}}_{\mathrm{pl}}^{(\leq n)}\). The inclusion of comonads \(\overline{\mathbb{T}}_{\mathrm{pl}}^{(\leq n)}\longrightarrow\overline{ \mathbb{T}}_{\mathrm{pl}}\) induces an endofunctor \(F_{n}^{\mathrm{rad}}\) in the category of conilpotent planar dg cooperads. **Definition 20** (Coradical filtration).: Let \(\mathcal{C}\) be a conilpotent planar dg cooperad. Its _\(n\)-coradical filtration_ is given by the conilpotent planar dg cooperad \(F_{n}^{\mathrm{rad}}\mathcal{C}\). It induces a ladder diagram \[F_{0}^{\mathrm{rad}}\mathcal{C}\hookrightarrow F_{1}^{\mathrm{rad}}\mathcal{C }\hookrightarrow\cdots F_{n}^{\mathrm{rad}}\mathcal{C}\hookrightarrow\cdots\,\] indexed by \(\mathbb{N}\), where all the arrows are monomorphisms. 
For any conilpotent planar dg cooperad \(\mathcal{C}\), there is a canonical isomorphism between \(\mathcal{C}\) and the colimit of the following ladder diagram \[F_{0}^{\mathrm{rad}}\mathcal{C}\hookrightarrow F_{1}^{\mathrm{rad}}\mathcal{C }\hookrightarrow\cdots F_{n}^{\mathrm{rad}}\mathcal{C}\hookrightarrow\cdots\] in the category of conilpotent planar dg cooperads. **Proposition 5**.: _Let \(n\geq 0\) and \(\mathcal{C}\) be a conilpotent planar dg cooperad. Then \(F_{n}^{\mathrm{rad}}\mathcal{C}\) fits in the following pullback_ _in the category of dg \(\mathbb{N}\)-modules, where \(\delta_{\mathcal{C}}\) denotes the dg \(\overline{\mathbb{T}}_{\mathrm{pl}}\)-coalgebra structure of \(\mathcal{C}\)._ Proof.: This is a direct application of Proposition 71. **Tree endofunctor.** Let \(M\) be a dg \(\mathbb{S}\)-module, we can define the _tree endofunctor_\(\mathbb{T}(M)\) as the following reflexive coequalizer where one of the maps is build from the dg \(\mathbb{S}\)-module structure of \(M\) and the other using the monad structures \(-\otimes\mathbb{S}\). Note that this definition is equivalent to the more standard one in [12]. Notation. Let \(n\) be a natural integer. We define analogously variants of the tree endofunctor by replacing the planar tree endofunctor \(\overline{\mathbb{T}}_{\mathrm{pl}}\) in the above coequalizer. * The _reduced tree_ endofunctor \(\overline{\mathbb{T}}\) is given by replacing \(\mathbb{T}_{\mathrm{pl}}\) with \(\overline{\mathbb{T}}_{\mathrm{pl}}\). * We denote by \(\mathbb{T}_{\leq n}\) the endofunctor obtained by replacing \(\mathbb{T}_{\mathrm{pl}}\) with \(\overline{\mathbb{T}}_{\mathrm{pl}\leq n}\). * We denoted by \(\mathbb{T}^{(n)}\) the endofunctor obtained by replacing \(\mathbb{T}_{\mathrm{pl}}\) with \(\overline{\mathbb{T}}_{\mathrm{pl}}^{(n)}\). * We denoted by \(\mathbb{T}^{(\leq n)}\) the endofunctor obtained by replacing \(\mathbb{T}_{\mathrm{pl}}\) with \(\overline{\mathbb{T}}_{\mathrm{pl}}^{(\leq n)}\). **Proposition 6**.: _Let \(X\) be a dg \(\mathbb{N}\)-module. The canonical map_ \[\nu_{X}:\mathbb{T}(X\otimes\mathbb{S})\longrightarrow\mathbb{T}_{\mathrm{pl}} (X)\otimes\mathbb{S}\] _is a isomorphism of dg \(\mathbb{S}\)-modules, natural in \(X\)._ **Operads.** There is a monad structure on the tree endofunctor \(\mathbb{T}\) which can be constructed using the monad structure on the planar tree endofunctor. We refer to [10] for more details. **Proposition 7**.: _The category of algebras over the monad \(\mathbb{T}\) is canonically isomorphic to the category of dg operads._ **Conilpotent cooperads.** There is a comonad structure on the reduced tree endofunctor \(\overline{\mathbb{T}}\), which again can be constructed from the comonad structure on the reduced planar tree endofunctor \(\overline{\mathbb{T}}_{\text{pl}}\). We refer to [10] for more details. There is a fully faithful functor \[\text{Conil}:\text{dg}\ \overline{\mathbb{T}}\text{-}\text{cg}\longrightarrow( \text{dg}\ \text{Cooperads})_{\mathbb{T}/}\] from dg \(\overline{\mathbb{T}}\)-coalgebras to coaugmented dg cooperads. **Definition 21** (Conilpotent dg cooperad).: Let \(\mathcal{C}\) be a coaugmented dg cooperad. It is _conilpotent_ if it belongs to the essential image of the functor Conil from dg \(\overline{\mathbb{T}}\)-coalgebras to dg cooperads. We denote dg \(\text{Cooperads}^{\text{conil}}\) the full sub-category of coaugmented dg cooperads spanned by conilpotent ones. 
The adjunction restricts to an adjunction \[\text{dg}\ \text{Cooperads}^{\text{conil}}\xrightarrow[\text{$\begin{array}{c} -\otimes\mathbb{S}\\ \overline{\mathbb{T}}\\ \overline{\mathbb{T}}\\ \overline{\mathbb{U}_{\text{S}}}\end{array}$}\text{dg}\ \text{Cooperads}^{\text{conil}},\] between conilpotent planar dg cooperads and conilpotent dg cooperads. For every natural integer \(n\geq 1\), the comonad structure on \(\overline{\mathbb{T}}\) restricts to \(\overline{\mathbb{T}}^{(\leq n)}\). The inclusion of comonads \(\overline{\mathbb{T}}^{(\leq n)}\mapsto\overline{\mathbb{T}}\) induces an endofunctor \(\text{F}_{n}^{\text{rad}}\) in the category of conilpotent dg cooperads. **Definition 22** (Coradical filtration).: Let \(\mathcal{C}\) be a conilpotent dg cooperad. Its \(n\)-_coradical filtration_ is given by the conilpotent dg cooperad \(\text{F}_{n}^{\text{rad}}\mathcal{C}\). It induces a ladder diagram \[\text{F}_{0}^{\text{rad}}\mathcal{C}\mapsto\text{F}_{1}^{\text{rad}}\mathcal{ C}\mapsto\cdots\text{F}_{n}^{\text{rad}}\mathcal{C}\mapsto\cdots\,\] indexed by \(\mathbb{N}\), where all the arrows are monomorphisms. For any conilpotent dg cooperad \(\mathcal{C}\), there is a canonical isomorphism between \(\mathcal{C}\) and the colimit of the following ladder diagram \[\text{F}_{0}^{\text{rad}}\mathcal{C}\mapsto\text{F}_{1}^{\text{rad}}\mathcal{ C}\mapsto\cdots\text{F}_{n}^{\text{rad}}\mathcal{C}\mapsto\cdots\] in the category of conilpotent dg cooperads. All the construction we have performed in this section commutes with the forgetful functors \[\text{dg}\ \text{k-mod}\longrightarrow\text{pdg}\ \text{k-mod}\longrightarrow \text{gr}\ \text{k-mod}\.\] This implies that if a dg (or pdg) conilpotent cooperad \(\mathcal{C}\) proceeds, as a graded conilpotent cooperad, from a planar one \(\mathcal{C}_{\text{pl}}\) (in the sense that there is an isomorphism \(\mathcal{C}\cong\mathcal{C}_{\text{pl}}\otimes\mathbb{S}\) of graded cooperads), then the conilpotent dg cooperad \(\overline{\text{F}}_{n}\mathcal{C}\) fits in the following pullback diagram in the category of dg \(\mathbb{S}\)-modules, where \(\delta_{\mathcal{C}}\) denotes the dg \(\overline{\mathbb{T}}\)-coalgebra structure of \(\mathcal{C}\). Remark 13.: If the characteristic of the base field \(\Bbbk\) is zero, both the tree module \(\mathbb{T}(-)\) and the composition product \(\circ\) preserve **finite** cosifted limits. Therefore the forgetful functors from (conilpotent) dg cooperads to dg \(\mathbb{S}\)-modules preserve these limits. Thus, as in the planar case, given a dg cooperad \(\mathcal{C}\), the conilpotent dg cooperad \(\overline{\text{F}}_{n}\mathcal{C}\) fits in a pullback diagram as above. ## 2. Homotopy theory of operads and quasi-planar cooperads The main goal of this section is to introduce the notion of a _quasi-planar cooperad_. First, we recall the bar-cobar constructions at the operadic level (the unital/curved version of [11]). Then, we recall the semi-model category structure of [10], which encodes the homotopy theory of dg operads. Finally, we introduce the notion of a quasi-planar cooperad, construct basic examples, and show their cobar constructions provide us with cofibrant resolutions which are particularly well-behaved when working over a field of any characteristic. Furthermore, for any quasi-planar conilpotent curved cooperad \(\mathcal{C}\), where construct a canonical \(\mathcal{E}\)-comonoid structure over \(\mathcal{DC}\), where \(\mathcal{E}\) is the Barratt-Eccles operad. 
This allows us to define a _canonical_ quasi-planar ladder for any quasi-planar conilpotent curved cooperad \(\mathcal{C}\), which can be viewed as the positive characteristic analogue of the coradical filtration. ### Conilpotent curved cooperads and the operadic bar-cobar adjunction The Koszul dual notion of a non-necessarily augmented dg operad is a conilpotent curved cooperad. We recall the bar-cobar adjunction at the operadic level that links these two notions constructed in [11]. **Definition 23** (Curved cooperad).: A _curved cooperad_\(\mathcal{C}\) amounts to the data of a pdg cooperad \((\mathcal{C},\Delta,\epsilon,d)\) equipped with a degree \(-2\) map \(\theta:\mathcal{C}\longrightarrow\mathcal{I}\) called the _curvature_ such that \(\theta\circ d=0\), and such that the following diagram commutes: Here \(\Delta_{(1)}\) is given by the sum over all possible decompositions into pairs. Remark 14.: A curved cooperad with zero curvature is a dg cooperad. Remark 15.: A curved cooperad is said to be coaugmented (resp. conilpotent) if its underlying pdg cooperad is. **The operadic bar construction.** Given a dg operad \(\mathcal{P}\), one can build a conilpotent curved cooperad \(\mathcal{B}\mathcal{P}\) whose underlying conilpotent graded cooperad is given by \(\mathbb{T}(s\mathcal{P}\oplus s^{2}\mathcal{I})\). We endow it with the unique coderivation whose projection onto the generators is the sum of the following maps: 1. the map \[\mathbb{T}(s\mathcal{P}\oplus s^{2}\mathcal{I})\twoheadrightarrow\mathbb{T}^{( 2)}(s\mathcal{P}) \longrightarrow s\mathcal{P}\] \[(sp\otimes_{i}sp^{\prime})\otimes\{\sigma\} \mapsto(-1)^{|\rho|}s(\rho\circ_{i}p^{\prime})^{\sigma}\] where \(sp\otimes_{i}sp^{\prime}\) labels a \(2\) nodes tree whose second node is plugged at the \(i^{th}\) leaf of the root node, 2. the map \[\mathbb{T}(s\mathcal{P}\oplus s^{2}\mathcal{I})\twoheadrightarrow s \mathcal{P} \longrightarrow s\mathcal{P}\] \[sp\mapsto-sd(\rho)\,\] 3. the map \[\mathbb{T}(s\mathcal{P}\oplus s^{2}\mathcal{I})\twoheadrightarrow s ^{2}\mathcal{I} \longrightarrow s\mathcal{P}\] \[s^{2}1 \mapsto-s1_{\mathcal{P}}.\] We denote by \(d_{\gamma}\), \(d_{\mathcal{P}}\) and \(d_{u}\) the respective coderivations induced by these maps. The curvature is given by the following map \[\Theta:\mathbb{T}(s\mathcal{P}\oplus s^{2}\mathcal{I})\twoheadrightarrow s ^{2}\mathcal{I} \longrightarrow\mathcal{I}\] \[s^{2}1 \mapsto 1.\] One can check that \((\mathbb{T}(s\mathcal{P}\oplus s^{2}\mathcal{I}),d_{\gamma}+d_{P}+d_{u},\Theta)\) forms a conilpotent curved cooperad. **The operadic cobar construction.** Given a coaugmented curved cooperad \(\mathcal{C}\), one can build a dg operad \(\Omega\mathcal{C}\) whose underlying graded operad is \(\mathbb{T}(s^{-1}\overline{\mathcal{C}})\). We endow it with the unique derivation whose restriction to the generators is the sum of the following maps: 1. the map \[s^{-1}\overline{\mathcal{C}}\longrightarrow\mathbb{T}^{(2)}(s^{-1} \overline{\mathcal{C}})\hookrightarrow\mathbb{T}(s^{-1}\overline{\mathcal{C}})\] \[s^{-1}c\mapsto-\sum(-1)^{|c_{(1)}|}sc_{(1)}\otimes sc_{(2)} \otimes\{\sigma\}\] where \(\Delta_{(1)}(c)=\sum c_{(1)}\otimes c_{(2)}\otimes\{\sigma\}\) denotes the sum of all possible decompositions of \(c\) into a pairs, 2. the map \[s^{-1}\overline{\mathcal{C}}\longrightarrow s^{-1}\overline{ \mathcal{C}}\hookrightarrow\mathbb{T}(s^{-1}\overline{\mathcal{C}})\] \[s^{-1}c\mapsto-s^{-1}d(c)\] 3. 
the map \[s^{-1}\overline{\mathcal{C}}\longrightarrow\mathcal{I}\hookrightarrow \mathbb{T}(s^{-1}\overline{\mathcal{C}})\] \[s^{-1}c\mapsto\theta(c).\] We denote by \(d_{\Delta},d_{\mathcal{C}}\) and \(d_{b}\) the respective derivations induced by these maps. One can check \((\mathbb{T}(s^{-1}\overline{\mathcal{C}}),d_{\Delta}+d_{\mathcal{C}}+d_{b})\) forms a dg operad. **The operadic bar-cobar adjunction.** This two functors form an adjunction between dg operads and conilpotent curved cooperads. Remark 16. If a dg operad \(\mathcal{P}\) is augmented, then its bar construction is in fact a conilpotent dg cooperad. Up to natural weak-equivalences, this adjunction coincides with the bar-cobar adjunction of [12] between augmented dg operads and conilpotent dg cooperads. ### The Barratt-Eccles operad and the Hadamard tensor product Let us review the construction of the Barratt-Eccles operad of C. Berger and B. Fresse in [1]. **Definition 24** (Barratt-Eccles dg operad).: The _unital barrat-Eccles dg operad_, denoted by \(\mathcal{E}\), is defined as follows. For \(n\geq 2\), the arity \(n\) component \(\mathcal{E}(n)\) is given, in degree \(m\), by the free \(\Bbbk\)-module generated by the sequences of distinct permutations \[(\sigma_{0},\sigma_{1},\ldots,\sigma_{m})\in\mathbb{S}_{n}^{m+1}\] where \(\sigma_{i}\neq\sigma_{i+1}\) for \(0\leq i\leq m-1\). The right \(\mathbb{S}_{n}\)-action is given by \[(\sigma_{0},\sigma_{1},\ldots,\sigma_{m})^{\sigma}=(\sigma_{0}\sigma,\sigma_{ 1}\sigma,\ldots,\sigma_{m}\sigma).\] The differential of \(\mathcal{E}(n)\) is given as follows: \[d((\sigma_{0},\sigma_{1},\ldots,\sigma_{m}))=(\sigma_{1},\ldots,\sigma_{m})-( \sigma_{0},\sigma_{2},\ldots,\sigma_{m})+\cdots+(-1)^{m}(\sigma_{0},\sigma_{ 1},\ldots,\sigma_{m-1})\.\] For \(n=0,1\), \(\mathcal{E}(0)=\mathcal{E}(1)=\Bbbk\) endowed with the trivial action and the zero differential. We refer to [1, Section 1.13] for the specific formulae of the operadic compositions. Remark 17. Notice that, for all \(n\geq 0\), the dg \(\Bbbk[S_{n}]\)-module \(\mathcal{E}(n)\) is quasi-free and concentrated in positive degrees, therefore it is also projective. Therefore \(\mathcal{E}\) is both an S-quasi-free and an S-projective dg operad. The canonical morphism of operads \(\mathcal{E}\longrightarrow\mathfrak{u}\mathcal{C}\mathfrak{o}\mathfrak{m}\) is arity-wise a quasi-isomorphism. To see this, it suffices to notice it admits a section and that the degree \(1\) endomorphism of \(\mathcal{E}\) \[h:(\sigma_{0},\sigma_{1},\ldots,\sigma_{m})\mapsto(1,\sigma_{0},\sigma_{1}, \ldots,\sigma_{m})\] satisfies \(\partial(h)=\pi_{\mathfrak{u}\mathcal{C}\mathfrak{o}\mathfrak{m}}\), where \(\pi_{\mathfrak{u}\mathcal{C}\mathfrak{o}\mathfrak{m}}\) is the projection onto the image of this section. **Definition 25**.: We define \(\mathcal{E}_{\mathfrak{p}\mathfrak{l}}\) as the sub-graded \(\mathbb{N}\)-module of \(\mathcal{E}\) given, in arity \(n\geq 2\) and degree \(m\), by the free \(\Bbbk\)-module generated by the sequences \[(\sigma_{0},\sigma_{1},\ldots,\sigma_{m})\in\mathbb{S}_{n}^{m+1}\] where \(\sigma_{i}\neq\sigma_{i+1}\) for \(0\leq i\leq m-1\) and such that \(\sigma_{0}=1\). In arities \(n=0,1\), we have \(\mathcal{E}_{\mathfrak{p}\mathfrak{l}}(0)=\mathcal{E}_{\mathfrak{p}\mathfrak{ l}}(1)=\Bbbk\). The canonical morphism of graded \(\mathbb{S}\)-modules \[\mathcal{E}_{\mathfrak{p}\mathfrak{l}}\otimes\mathbb{S}\longrightarrow \mathcal{E}\] is an isomorphism. 
**Definition 26** (The Hadamard tensor product).: Let \(\mathcal{P}\) be a dg operad. We denote by \(\mathcal{E}\otimes\mathcal{P}\) the _Hadamard tensor_ product of \(\mathcal{P}\) with \(\mathcal{E}\). It is the dg operad whose underlying dg \(\mathbb{S}\)-module is \[(\mathcal{E}\otimes\mathcal{P})(n)=\mathcal{E}(n)\otimes\mathcal{P}(n)\] equipped with the diagonal action of \(\mathbb{S}_{n}\). The operad structure is given by the map \[(\mathcal{E}\otimes\mathcal{P})\circ(\mathcal{E}\otimes\mathcal{P}) \longrightarrow(\mathcal{E}\otimes\mathcal{E})\otimes(\mathcal{P}\circ \mathcal{P})\longrightarrow\mathcal{E}\otimes\mathcal{P}\.\] **Proposition 8**.: _Let \(\mathcal{P}\) be a dg operad. The dg \(\mathbb{S}\)-module of the Hadamard tensor product \(\mathcal{E}\otimes\mathcal{P}\) is quasi-free with generators \(\mathcal{E}_{\mathfrak{p}\mathfrak{l}}\otimes U_{\mathbb{S}}(\mathcal{P})\)._ Proof.: Follows from Proposition 1. Notice that the canonical map \(\mathcal{E}\otimes\mathcal{P}\xrightarrow{}\mathcal{P}\) is also an arity-wise quasi-isomorphims. Remark 18.: We choose, as a convention, to systematically consider \(\mathcal{E}\) on the left hand side of the tensor products \(\mathcal{E}\otimes\mathcal{P}\) for consistency reasons that will become more apparent later. One could have chosen the opposite convention, all the results also hold. ### Another presentation of the Barratt-Eccles operad In this subsection, we give a new presentation of the Barratt-Eccles operad and its composition. The computations introduced here will be used in Subsections 2.8 and 2.9. Let us describe the composition in the Barratt-Eccles operad in terms of shuffle permutations. Throughout this subsection and the next one, given two permutations \(\mu\in\mathbb{S}_{p}\), \(\psi\in\mathbb{S}_{q}\) and \(1\leq i\leq p\), \(\mu\circ_{i}\psi\) refers to the composition of \(\mu\) with \(\psi\) at the \(i\)-th leaf. This composition is given by the partial compositions of the associative operad in the category of sets. **Definition 27** (Admissible permutations).: Let \(p,q\) be two natural integers with \(p\geq 1\), let \(1\leq i\leq p\) and let \(n=p+q-1\). We say that a permutation \(\sigma\in\mathbb{S}_{n}\) is 1. \((p,q,i)\)_-bottom admissible_ if there exists a permutation \(\mu\in\mathbb{S}_{p}\) such that \[\sigma=\mu\circ_{\mu^{-1}(i)}\operatorname{Id}_{q}\ ;\] 2. \((p,q,i)\)_-top admissible_ if there exists a permutation \(\psi\in\mathbb{S}_{q}\) such that \[\sigma=\operatorname{Id}_{p}\circ_{i}\psi\ ;\] 3. \((p,q,i)\)_-admissible_ if it is \((p,q,i)\)-top admissible or \((p,q,i)\)-bottom admissible; 4. \((p,q,i)\)_-non admissible_ if it is not \((p,q,i)\)-admissible. **Lemma 2**.: _Given \(p,q\geq 1\) and \(1\leq i\leq p\), the function_ \[\mathbb{S}_{p}\times\mathbb{S}_{q} \longrightarrow\mathbb{S}_{n}\] \[(\mu,\psi) \mapsto\mu\circ_{\mu^{-1}(i)}\psi\] _is injective. In particular, the intersection of \((p,q,i)\)-bottom admissible permutations with \((p,q,i)\)-top admissible permutations only contains the trivial permutation._ Proof.: A permutation \(\sigma\) in \(\mathbb{S}_{n}\) can be written as a composition \(\mu\circ_{\mu^{-1}(i)}\psi\), with \(\mu\in\mathbb{S}_{p}\) and \(\psi\in\mathbb{S}_{q}\) if and only if \(\sigma^{-1}\) sends the segment \(\{i,\cdots,i+q-1\}\) to a segment. Then the permutation \(\psi\) is determined by the induced function between these two segments and \(\mu\) is determined by the function induced by collapsing these each of these segments into one element. 
Notice that for every \(\sigma_{1},\sigma_{2}\in\mathbb{S}_{n}\) and \(\mu_{1},\mu_{2}\in\mathbb{S}_{m}\) and \(1\leq 1\leq n\) we have \[(\sigma_{2}\sigma_{1})\circ_{i}(\mu_{2}\mu_{1})=(\sigma_{2}\circ_{\sigma_{1}( i)}\mu_{2})(\sigma_{1}\circ_{i}\mu_{1})\.\] **Definition 28**.: Let \(k,p\geq 1,q\geq 0\), \(1\leq i\leq p\) and \(n=p+q-1\) be natural integers and let us consider two sequences of permutations \(\underline{\mu}\in\mathbb{S}_{p}^{k}\) and \(\underline{\psi}\in\mathbb{S}_{q}^{k}\). We define \(\underline{\mu}\ltimes i\underline{\psi}\) as the sequence \(\underline{\sigma}=(\sigma_{1},\ldots,\sigma_{k})\in\mathbb{S}_{n}^{k}\) given by \[\left\{\begin{array}{c}\sigma_{1}\coloneqq\mu_{1}\circ_{\mu_{1}^{-1}(i)} \psi_{1};\\ \vdots\\ \sigma_{j}\coloneqq\mu_{j}\circ_{\mu_{j}^{-1}\cdots\mu_{1}^{-1}(i)}\psi_{j} \quad\text{for}\quad 1<j<k;\\ \vdots\\ \sigma_{k}\coloneqq\mu_{k}\circ_{\mu_{k}^{-1}\cdots\mu_{1}^{-1}(i)}\psi_{k}. \end{array}\right.\] One can notice that \[(\mu_{1},\ldots,\mu_{k-1},\mu_{k})\ltimes_{i}(\psi_{1},\ldots,\psi_{k-1},\psi_ {k})=(\mu_{1}\circ_{\mu_{1}^{-1}(i)}\psi_{1})\sqcup((\mu_{2},\ldots,\mu_{k}) \ltimes_{\mu_{1}^{-1}(i)}(\psi_{2},\ldots,\psi_{k}))\,\] where \(\sqcup\) stands for the concatenation of sequences of permutations. **Lemma 3**.: _If \(q\geq 1\), the function \(-\ltimes_{i}-:\mathbb{S}_{p}^{k}\times\mathbb{S}_{q}^{k}\longrightarrow \mathbb{S}_{n}^{k}\) is injective._ Proof.: This follows from a straightforward induction on \(k\) using Lemma 2. **Definition 29** (Admissible sequence).: Let \(p,q,k\) be three natural integers such that \(p,k\geq 1\), let \(1\leq i\leq p\) and let \(n=p+q-1\). A sequence of non-trivial permutations \(\underline{\sigma}\in\mathbb{S}_{n}^{k}\) is 1. \((p,q,i)\)_-admissible_ if there exists two sequences of permutations \(\underline{\mu}\in\mathbb{S}_{p}^{k}\) and \(\underline{\psi}\in\mathbb{S}_{q}^{k}\) such that \(\underline{\sigma}=\underline{\mu}\ltimes_{i}\underline{\psi}\), and if for every \(1\leq j\leq k\) at least (and necessarily at most) one of the two permuations \(\mu_{j},\psi_{j}\) is trivial; 2. \((p,q,i)\)_-non admissible_ if it is not \((p,q,i)\)-admissible. **Definition 30**.: Let \(\underline{\mu}\in\mathbb{S}_{p}^{a}\) and \(\underline{\psi}\in\mathbb{S}_{q}^{b}\) be two sequences of permutations with \(p,a,b\geq 1\) and let \(\phi\) be a \((a,b)\)-shuffle. We denote \(\underline{\mu}\circ_{i,\phi}\underline{\psi}\) the sequence of permutation in \(\mathbb{S}_{n}^{k}\) (where \(k=a+b\) and \(n=p+q-1\)) given by \[\underline{\mu}\circ_{i,\phi}\underline{\psi}\coloneqq\phi(\underline{\mu} \sqcup\operatorname{Id}_{p}^{b})\ltimes_{i}\phi(\operatorname{Id}_{q}^{a} \sqcup\underline{\psi})\,\] where 1. \(\phi(\underline{\mu}\sqcup\operatorname{Id}_{p}^{b})\in\mathbb{S}_{p}^{k}\) is the sequence of permutations \((\mu_{1}^{\prime},\ldots,\mu_{k}^{\prime})\) such that \(\mu_{j}^{\prime}=\mu_{\phi^{-1}(j)}\) if \(\phi^{-1}(j)\leq a\) and \(\mu_{j}^{\prime}=\operatorname{Id}_{p}\) otherwise; 2. \(\phi(\operatorname{Id}_{q}^{a}\sqcup\underline{\psi})\in\mathbb{S}_{q}^{k}\) is the sequence of permutations \((\psi_{1}^{\prime},\ldots,\psi_{k}^{\prime})\) such that \(\psi_{j}^{\prime}=\psi_{\phi^{-1}(j)-a}\) if \(\phi^{-1}(j)\geq a+1\) and \(\psi_{j}^{\prime}=\operatorname{Id}_{q}\) otherwise. 
**Lemma 4**.: _The function_ \[\prod_{a+b=k}(\mathbb{S}_{p}-\{\mathrm{Id}_{\rho}\})^{a}\times(\mathbb{S}_{q}-\{ \mathrm{Id}_{\rho}\})^{b}\times\mathrm{Sh}(a,b)\xrightarrow{}\left(\mathbb{S}_{n }-\{\mathrm{Id}_{\rho}\}\right)^{k}\] \[(\underline{\mu},\underline{\psi},\phi)\xmapsto{\underline{\mu}}\circ_{i,\phi} \underline{\psi}\] _is injective, and its image is given by \((p,q,i)\)-admissible sequences._ Proof.: This is a direct consequence of the definition of a \((p,q,i)\)-admissible sequence combined with Lemma 3. Let again \(\mathcal{E}\) denote the Barratt-Eccles dg operad of [1]. **Definition 31**.: Let \(\underline{\sigma}:=(\sigma_{1},\ldots,\sigma_{k})\in\mathbb{S}_{n}^{k}\) be a sequence of permutations, we define \(\rho(\underline{\sigma})\), an element of \(\mathcal{E}(n)_{k}\), as follows as follows 1. if \(k=0\), then \(\rho(\underline{\sigma})\coloneqq\rho(*)=(\mathrm{Id}_{n})\), 2. if \(k\geq 1\) and \(\underline{\sigma}=(\sigma_{1},\ldots,\sigma_{k})\), \[\rho(\underline{\sigma})\coloneqq(\mathrm{Id}_{n},\sigma_{k},\sigma_{k-1} \sigma_{k},\ldots,\sigma_{1}\cdots\sigma_{k})\.\] Remark 19.: A direct computation shows that \(\{\rho(\underline{\sigma})\}_{\underline{\sigma}}\) for \(\underline{\sigma}\coloneqq(\sigma_{1},\ldots,\sigma_{k})\in\mathbb{S}_{n}^{k}\) forms a basis of \((\mathcal{E}(n)_{\mathrm{pl}})_{k}\) for all \(n,k\geq 0\) as a graded \(\mathbb{N}\)-module. Thus it is a basis of \(\mathcal{E}(n)_{k}\) as a graded \(\mathbb{S}\)-module. ### Partial compositions Let us recall the partial compositions of the Barratt-Eccles dg operad, constructed in [1, Section 1.1.3]. For two elements \(x=(\mu_{0},\ldots,\mu_{a})\in\mathcal{E}(p)_{a}\) and \(y=(\psi_{0},\ldots,\psi_{b})\in\mathcal{E}(p)_{b}\), given \(1\leq i\leq a\) and an \((a,b)\)-shuffle \(\phi\), let us consider \[x\circ_{i,\phi}^{\mathcal{E}}y\in\mathcal{E}(p+q-1)_{a+b}\.\] It is the sequence of permutations \((\sigma_{0},\ldots,\sigma_{a+b})\) defined by \[\sigma_{j}=\mu_{j,a}\circ_{i}\psi_{j,a}\,\] where \[j_{d} =\#\{k\in\mathbb{N}|1\leq k\leq a,\ \phi(k)\leq j\}\,\] \[j_{u} =\#\{k\in\mathbb{N}|a+1\leq k\leq a+b,\ \phi(k)\leq j\}.\] The partial compositions in \(\mathcal{E}\) are given by \[x\circ_{i}^{\mathcal{E}}y=\sum_{\phi\in\mathrm{Sh}(a,b)}(-1)^{\epsilon(\phi)} x\circ_{i,\phi}^{\mathcal{E}}y\,\] where the sum is taken over all \((a,b)\)-shuffles and where \(\epsilon(\phi)\) is the sign of the permutation \(\phi\). Notation. For every \((\sigma_{1},\ldots,\sigma_{n})\in\mathcal{E}(k)_{n+1}\) we denote \[\mathrm{inv}((\sigma_{1},\ldots,\sigma_{n}))=(\sigma_{n},\ldots,\sigma_{1}) \in\mathcal{E}(k)_{n+1}.\] **Lemma 5**.: _For every \(\underline{\mu}\in\mathbb{S}_{p}^{a},\underline{\psi}\in\mathbb{S}_{q}^{b}\) and every \((a,b)\)-shuffle \(\phi\), we have_ \[(\mathrm{inv}\rho(\underline{\mu}))\circ_{\underline{\mu}^{-1}(i),\phi}^{ \mathcal{E}}(\mathrm{inv}\rho(\underline{\psi}))=\mathrm{inv}\rho(\underline{ \mu}\circ_{i,\phi}\underline{\psi})\,\] _where \(\underline{\mu}^{-1}(i)=\mu_{a}^{-1}\cdots\mu_{1}^{-1}(i)\). 
Subsequently,_ \[(\mathrm{inv}\rho(\underline{\mu}))\circ_{\underline{\mu}^{-1}(i)}(\mathrm{ inv}\rho(\underline{\psi}))=\sum_{\phi\in\mathrm{Sh}(a,b)}(-1)^{\epsilon(\phi)} \mathrm{inv}\rho(\underline{\mu}\circ_{i,\phi}\underline{\psi})\,\] _where the sum is taken over the \((a,b)\)-shuffles and \(\epsilon(\phi)\) is the sign of the shuffle \(\phi\)._ Proof.: For \(1\leq j\leq a+b\) we denote \(\mu^{\prime}_{j}=\mu_{\phi^{-1}(j)}\) if \(\phi^{-1}(j)\leq a\) and \(\mu^{\prime}_{j}=\operatorname{Id}_{p}\) otherwise; similarly \(\psi^{\prime}_{j}=\psi_{\phi^{-1}(j)-a}\) if \(\phi^{-1}(j)\geq a+1\) and \(\mu^{\prime}_{j}=\operatorname{Id}_{q}\) otherwise. One can check that both \[(\operatorname{inv}\!\rho(\underline{\mu}))\circ_{\underline{\mu}^{-1}(j), \phi}^{\mathcal{E}}(\operatorname{inv}\!\rho(\underline{\psi}))\quad\text{ and}\quad\operatorname{inv}\!\rho(\underline{\mu}\circ_{i,\phi}\underline{\psi})\] are equal to \[(\operatorname{Id}_{a+b},\mu^{\prime}_{a+b}\circ_{\underline{\mu}^{-1}(j)} \psi^{\prime}_{a+b},\cdots,(\mu^{\prime}_{1}\cdots\mu^{\prime}_{a+b})\circ_{ \underline{\mu}^{-1}(j)}(\psi^{\prime}_{1}\cdots\psi^{\prime}_{a+b}))\.\] **Definition 32** (Opposite shuffle).: Let \(\phi\) be a \((a,b)\)-shuffle. Its _opposite \((a,b)\)-shuffle \(\overline{\phi}\)_ is given by \[\overline{\phi}(j)=\left\{\begin{array}{l}a+b+1-\phi(a+1-j)\text{ if }j\leq a \ ;\\ a+b+1-\phi(2a+b1-j)\text{ if }j\geq a+1\.\end{array}\right.\] In other words, the opposite \((a,b)\)-shuffle \(\overline{\phi}\) is given by inverting indepedently the segments \(\{1,\ldots,a\}\) and \(\{a+1,\ldots,a+b\}\), applying \(\phi\) to them, and finally inverting the whole segment \(\{1,\ldots,a+b\}\). By inverting, we mean applying the unique permutation that yields the maximum number of inversions. Notice that the signatures of a \((a,b)\)-shuffle and its opposite are related by the following formula \[(-1)^{\epsilon(\overline{\phi})}=(-1)^{ab+\epsilon(\phi)}\.\] **Lemma 6**.: _For every \((\mu_{0},\ldots,\mu_{a})\in\mathcal{E}(p)_{a},(\psi_{0},\ldots,\psi_{b})\in \mathcal{E}(q)_{b}\), \(1\leq i\leq p\) and every \((a,b)\)-shuffle \(\phi\) we have_ \[\operatorname{inv}\!\left(\mu_{0},\ldots,\mu_{a}\right)\circ_{i,\phi}^{ \mathcal{E}}\operatorname{inv}\!\left(\psi_{0},\ldots,\psi_{b}\right)= \operatorname{inv}\!\left((\mu_{0},\ldots,\mu_{a})\circ_{i,\overline{\phi}}^{ \mathcal{E}}(\psi_{0},\ldots,\psi_{b})\right)\.\] _Subsequently_ \[\operatorname{inv}\!\left(\mu_{0},\ldots,\mu_{a}\right)\circ_{i}\operatorname{ inv}\!\left(\psi_{0},\ldots,\psi_{b}\right)=(-1)^{ab}\operatorname{inv}\! \left((\mu_{0},\ldots,\mu_{a})\circ_{i}(\psi_{0},\ldots,\psi_{b})\right)\.\] Proof.: Let us denote \[(\sigma_{0},\ldots,\sigma_{a+b})=\operatorname{inv}\!\left(\operatorname{inv} \!\left(\mu_{0},\ldots,\mu_{a}\right)\circ_{i,\phi}^{\mathcal{E}}\operatorname {inv}\!\left(\psi_{0},\ldots,\psi_{b}\right)\right)\.\] By definition, one has \[\sigma_{a+b-j}=\mu_{a-j_{d}}\circ_{i}\psi_{b-j_{u}}\] for every \(0\leq j\leq a+b\), where as above \[j_{d} =\#\{k\in\mathbb{N}|1\leq k\leq a,\ \phi(k)\leq j\}\] \[j_{u} =\#\{k\in\mathbb{N}|a+1\leq k\leq a+b,\ \phi(k)\leq j\}.\] By denoting \(\overline{j}=a+b-j\), \(\overline{j}_{d}=a-j_{d}\) and \(\overline{j}_{u}=b-j_{u}\), this rewrites as \[\sigma_{\overline{j}}=\mu_{\overline{j}_{a}}\circ_{i}\psi_{\overline{j}_{u}}\] for every \(0\leq\overline{j}\leq a+b\). 
Combined with the fact that \[\overline{j}_{d}=a-j_{d} =\#\{k\in\mathbb{N}|1\leq k\leq a,\ \overline{\phi}(k)>j\}\] \[=\#\{k\in\mathbb{N}|1\leq k\leq a,\ \overline{\phi}(k)\geq j+1\}\] \[=\#\{k\in\mathbb{N}|1\leq k\leq a,\ a+b+1-\overline{\phi}(k)\leq a +b+1-(j+1)\}\] \[=\#\{k\in\mathbb{N}|1\leq k\leq a,\ a+b+1-\overline{\phi}(a+1-k) \leq\overline{j}\}\] \[=\#\{k\in\mathbb{N}|1\leq k\leq a,\ \overline{\phi}(k)\leq \overline{j}\}\,\] and similarly \[\overline{j}_{u}=\#\{k\in\mathbb{N}|a+1\leq k\leq a+b,\ \overline{\phi}(k)\leq \overline{j}\}\,\] we get that \[(\sigma_{0},\ldots,\sigma_{a+b})=(\mu_{0},\ldots,\mu_{a})\circ_{i,\overline{\phi} }^{\mathcal{E}}(\psi_{0},\ldots,\psi_{b})\.\] **Lemma 7**.: _For every \(\underline{\mu}\in\mathbb{S}_{\rho}^{a},\underline{\psi}\in\mathbb{S}_{q}^{b}\), we have_ \[\rho(\underline{\mu})\circ_{\underline{\mu}^{-1}(i)}\rho(\underline{\psi})=(-1)^ {ab}\sum_{\phi\in\operatorname{Sh}(a,b)}\epsilon(\phi)\rho(\underline{\mu} \circ_{i,\phi}\underline{\psi})\,\] _where \(\underline{\mu}^{-1}(i)=\mu_{a}^{-1}\cdots\mu_{1}^{-1}(i)\), where the sum is taken over the \((a,b)\)-shuffles and where \(\epsilon(\phi)\) is the sign of the \((a,b)\)-shuffle \(\phi\)._ Proof.: This is a combination of Lemma 6 and Lemma 5. **Comonoid structure.** As described in [1, Section 1.1], the Barratt-Eccles dg operad \(\mathcal{E}\) has the structure of a comonoid for the Hadamard tensor product. This structure is given by the map **Lemma 8**.: _For every \((\sigma_{0},\ldots,\sigma_{k})\in\mathcal{E}(n)_{k}\), one has that_ \[\Delta_{\mathcal{E}}(\rho((\sigma_{0},\ldots,\sigma_{k})))=\sum_{i=1}^{k+1} \rho((\sigma_{i},\ldots,\sigma_{k}))\otimes\rho((\sigma_{1},\ldots,\sigma_{i- 1}))^{\sigma_{i-}\sigma_{k}}\,\] _where \(\Delta_{\mathcal{E}}\) is the structure of a comonoid of the Barratt-Eccles dg operad \(\mathcal{E}\) for the Hadamard tensor product._ Proof.: This follows from direct inspection. ### Homotopy theory of operads There is a semi-model category structure on the category of dg operads constructed by B. Fresse in [11]. **Theorem 1** ([11, Chapter 12]).: _The category of dg operads admits a semi-model structure, determined by the following sets of maps:_ * _the set of fibrations is given by morphisms of dg operads which are arity-wise degree-wise epimorphisms,_ * _the set of weak-equivalences is given by morphisms of dg operads which are arity-wise quasi-isomorphism,_ * _the set of cofibrations is given by morphisms of dg operads which have the left lifting property with respect to maps that are both fibrations and weak-equivalences._ Let us denote by [1] the category \(0\longrightarrow 1\) with two objects and a single non-trivial arrow. A functor from this category is equivalent to the data of two objects and a morphism between them. **Lemma 9**.: _Let us consider a morphism of dg operad \(\mathcal{P}^{(0)}\longrightarrow\mathcal{P}^{(1)}\). Suppose there exists a morphism of graded \(\mathbb{N}\)-modules \(X^{(0)}\longrightarrow X^{(1)}\) such that the following diagram of functors_ _commutes. Furthermore, suppose that_ 1. _the map_ \(X^{(0)}\longrightarrow X^{(1)}\) _is a degree-wise injection;_ 2. _the restriction of the derivation of_ \(\mathcal{P}^{(1)}\) _to the generators_ \(X^{(1)}\) _factors as_ \[X^{(1)}\longrightarrow\mathcal{P}^{(0)}+X^{(1)}\hookrightarrow\mathcal{P}^{(1 )}\.\] _Then the map \(\mathcal{P}^{(0)}\longrightarrow\mathcal{P}^{(1)}\) is a cofibration._ Proof.: Let us decompose \(X^{(1)}\cong X^{(0)}\oplus Y\) as graded \(\mathbb{N}\)-modules. 
The restriction to \(Y\) of the derivation of \(\mathcal{P}^{(1)}\) is a degree \(-1\) map \((d_{\nu},\phi):Y\longrightarrow Y\oplus\mathcal{P}^{(0)}\). The fact that the whole derivation of \(\mathcal{P}^{(1)}\) squares to zero amounts to the facts that 1. the map \(d_{\nu}\) squares to zero and thus \(Y\) gets the structure of a dg module ; 2. the map \(\psi:S^{-1}\otimes Y\longrightarrow\mathcal{P}^{(0)}\), that sends \(s\otimes y\) to \(\phi(y)\) is a morphism of dg modules. Thus the following diagram is a pushout square of dg operads. Since the left vertical map is cofibration, so is the right vertical one. Given a small ordinal \(\alpha\), one can view it as a category. Objects are indexed by the ordinal \(\alpha\), and there is only one non-trivial arrow between two objects \(i\) and \(j\) in \(\alpha\), if and only if \(i<j\). Cocontinuous functors from \(\alpha\) to a category give what we call \(\alpha\)-indexed _ladders_. We refer to Subsection for more details. **Proposition 9**.: _Let \(\alpha\) be a small ordinal and let_ \[\mathcal{P}^{(0)}\longrightarrow\mathcal{P}^{(1)}\longrightarrow\mathcal{P}^ {(2)}\longrightarrow\cdots\longrightarrow\mathcal{P}^{(i)}\longrightarrow\cdots\,,\] _be an \(\alpha\)-indexed ladder of dg operads. Suppose there exists an \(\alpha\)-indexed ladder_ \[X^{(0)}\longrightarrow X^{(1)}\longrightarrow X^{(2)}\longrightarrow\cdots \longrightarrow X^{(i)}\longrightarrow\cdots\,,\] _of graded \(\mathbb{N}\)-modules such that the following diagram of functors_ _commutes. Furthermore, suppose that_ 1. _for every_ \(i,i+1\in\alpha\)_, the map_ \(X^{(i)}\longrightarrow X^{(i+1)}\) _is a degree-wise injection;_ 2. _for every_ \(i\in\alpha\)_, the restriction of the derivation of_ \(\mathcal{P}^{(i+1)}\) _to the generators_ \(X^{(i+1)}\) _factors as_ \[X^{(i+1)}\longrightarrow\mathcal{P}^{(i)}+X^{(i+1)}\hookrightarrow\mathcal{P} ^{(i+1)}.\] _Then the colimit dg operad_ \[\mathcal{P}^{(\alpha)}\coloneqq\operatorname*{colim}_{i\in\alpha}\,\mathcal{ P}^{(i)}\] _is cofibrant._ Proof.: Let \(\nu\) be the subset of \(\alpha+1\) of elements \(i\) so that \(\mathcal{P}^{(i)}\) is cofibrant. It satisfies the following properties: 1. it contains the first element \(0\in\alpha+1\) since \(\mathcal{P}^{(0)}\cong\mathcal{I}\); 2. for \(i,i+1\in\alpha+1\), if \(i\in\nu\), then \(i+1\in\nu\) by Lemma 9; 3. if \(i\in\alpha+1\) is a limit ordinal and if \(\nu\cap\{j<i\}=\{j<i\}\), then \(i\in\nu\) since cofibrations are stable through transfinite composition. Therefore \(\nu=\alpha\), which implies that \(P^{(\alpha)}\) is cofibrant. ### Conilpotent cooperad ladders **Definition 33** (Cooperad ladder).: Let \(\alpha\) be a small ordinal. An \(\alpha\)-_cooperad ladder_ amounts to the data of a cocontinuous functor \[\mathcal{C}:\alpha\longrightarrow\mathsf{curv\,\,Cooperads}^{\mathrm{conil}}\,\] which satisfies the following conditions: 1. for every \(i,i+1\in\alpha\), the map \(\mathcal{C}^{(i)}\longrightarrow\mathcal{C}^{(i+1)}\) is an arity-wise degree-wise injection; 2. the decomposition map of the underlying pdg conilpotent cooperad \[\mathcal{C}^{(i+1)}\xrightarrow{}\mathbb{T}\mathcal{C}^{(i+1)}\xrightarrow{} \mathbb{T}\mathcal{C}^{(i+1)}\] factors through \(\mathbb{T}\mathcal{C}^{(i)}\). We denote \[\mathcal{C}^{(\alpha)}\coloneqq\underset{i\in\alpha}{\text{colim}}\ \mathcal{C}^{(i)}\,\] the colimit of the cooperad ladder. 
The _associated graded_ of this ladder is given by \[\mathrm{gr}_{i}\ \mathcal{C}\coloneqq\mathcal{C}^{(i)}/\underset{j<i}{\text{colim}} \ \mathcal{C}^{(i)}\.\] In fact, we have that \(\mathrm{gr}_{i}\ \mathcal{C}\cong\mathcal{C}^{(i)}/\mathcal{C}^{(i-1)}\) and that \(\mathrm{gr}_{0}\ \mathcal{C}=\mathcal{C}^{(0)}\). Furthermore, \(\mathrm{gr}_{i}\ \mathcal{C}=0\) when \(i\) is a limit ordinal. Remark 20.: For all \(i\in\alpha\), the associated graded object \(\mathrm{gr}_{i}\mathcal{C}\) is a dg module. Indeed, the induced coderivation on it squares to zero because of the condition in Definition 23. Example 2 (Coradical ladder).: Let \(\mathcal{C}\) be a conilpotent curved cooperad. The cradical filtration of \(\mathcal{C}\) \[\mathrm{F}_{0}^{\mathrm{rad}}\ \mathcal{C}\mapsto\mathrm{F}_{1}^{\mathrm{rad}} \ \mathcal{C}\mapsto\cdots\mapsto\mathrm{F}_{n}^{\mathrm{rad}}\ \mathcal{C}\mapsto\cdots\mapsto\underset{i\in\omega}{\text{colim}}\ \mathrm{F}_{i}^{\mathrm{rad}}\ \mathcal{C}\cong\mathcal{C}\] is a cooperad ladder called the _coradical ladder_. Notice that it is also the cradical development of the constant ladder. ### Quasi-planar cooperad ladders and quasi-planar cooperads **Definition 34** (Quasi-planar cooperad ladder).: A _quasi-planar cooperad ladder_ amounts to the data of a small ordinal \(\alpha\) and a commutative diagram of functors where the natural isomorphism \(\varphi\) that makes the diagram commute is specified, meaning there is a given family \[\varphi^{(i)}:\mathcal{C}^{(i)}_{\mathrm{pl}}\otimes\mathbb{S}\longrightarrow \mathcal{C}^{(i)}\] of isomorphisms of graded conilpotent cooperads for all \(i\in\alpha\), called the _quasi-planar isomorphisms_. This data satisfies the following conditions: 1. the functor \(\mathcal{C}\) is a cooperad ladder (thus \(\mathcal{C}_{\mathrm{pl}}\) is also cocontinuous); 2. for every \(i,i+1\in\alpha\), the restriction of the coderivation of \(\mathcal{C}^{(i+1)}\) to \(\mathcal{C}^{(i+1)}_{\mathrm{pl}}\otimes 1\) factors through \[\mathcal{C}^{(i+1)}_{\mathrm{pl}}\otimes 1\longrightarrow\mathcal{C}^{(i+1)}_{ \mathrm{pl}}\otimes 1+\mathcal{C}^{(i)}\hookrightarrow\mathcal{C}^{(i+1)};\] in other words, the differential of \(\mathrm{gr}_{i+1}\ \mathcal{C}=\left(\mathcal{C}^{(i+1)}_{\mathrm{pl}}/\mathcal{C}^{(i)}_{ \mathrm{pl}}\right)\otimes\mathbb{S}\) has the form \(d_{\mathrm{pl}}\otimes\mathrm{Id}_{\mathbb{S}}\). **Definition 35** (Quasi-planar cooperad).: A conilpotent curved cooperad \(\mathcal{C}\) is _quasi-planar_ if it is isomorphic to the colimit of a quasi-planar cooperad ladder. Remark 21.: The main ideas for this definition where already in [11, Definition 11.2]. Remark 22.: It would be very interesting to compare the notion of a quasi-planar cooperad with the notion of a _higher cooperad_ introduced in [10]. **Definition 36** (Quasi-planar morphism).: Let \(f:\mathcal{C}\longrightarrow\mathcal{D}\) be a morphism of curved cooperads between two quasi-planar conilpotent curved cooperads. It is _quasi-planar_ if it restricts to a morphism \(f_{\mathsf{pl}}:\mathcal{C}_{\mathsf{pl}}\longrightarrow\mathcal{D}_{\mathsf{ pl}}\) such that the following diagram commutes where \(\varphi_{\mathcal{C}}\) and \(\varphi_{\mathcal{D}}\) are the quasi-planar isomorphisms. There are many examples of quasi-planar conilpotent curved cooperads. The main example is the following. **Proposition 10**.: _Let \(\mathcal{P}\) be a dg operad. 
The conilpotent curved cooperad \(\mathsf{B}(\mathcal{P}\otimes\mathcal{E})\) is quasi-planar._ Remark 23.: A first version of this proposition was proven in [11, Proposition 11.31]. We will give a new proof in Subsection 2.9, Proposition 15. Example 3.: Let \(f:\mathcal{P}\longrightarrow\mathcal{Q}\) be a morphism of dg operads. It induces a _quasi-planar_ morphism \(\mathsf{B}(f\otimes\mathcal{E}):\mathsf{B}(\mathcal{E}\otimes\mathcal{P}) \longrightarrow\mathsf{B}(\mathcal{E}\otimes\mathcal{Q})\) of quasi-planar conilpotent curved cooperads. Over a characteristic zero field, dg operads admit a model category structure where weak-equivalences are given by arity-wise quasi-isomorphisms and where fibrations are given by arity-wise degree-wise epimorphisms. One can then consider the transferred model category structure, along the operadic bar-cobar adjunction, on conilpotent curved cooperads constructed in [11]. Every conilpotent curved cooperad is then shown to be weak-equivalent to a quasi-planar conilpotent curved cooperad. **Proposition 11** ([11, Lemma 11.32]).: _Let \(\Bbbk\) be a field of characteristic zero and let \(\mathcal{C}\) be a conilpotent curved cooperad. There exists a quasi-planar conilpotent curved cooperad \(\mathcal{C}^{\prime}\) and an acyclic cofibration \(\mathcal{C}\xrightarrow{}\mathcal{C}^{\prime}\)._ Proof.: The canonical quasi-morphism \(\mathcal{E}\otimes\Omega\mathcal{C}\xrightarrow{}\Omega\mathcal{C}\) admits a section, since both dg operads are cofibrant. Therefore it suffices to consider the following composition \[\mathcal{C}\rightarrow\mathsf{B}\Omega\mathcal{C}\rightarrow\mathsf{B}( \mathcal{E}\otimes\Omega\mathcal{C})\,\] which is an acylic cofibration since it is the composition of two acyclic cofibrations. Remark 24.: Proposition 11 will imply that when \(\Bbbk\) is a field of characteristic zero, one does recover the previously known homotopical operadic calculus of [11] and [11]. **Proposition 12**.: _Let \(\mathcal{C}\) be a quasi-planar conilpotent dg cooperad. The underlying dg \(\mathsf{S}\)-module of \(\mathcal{C}\) is projective._ Proof.: Let us prove this by induction on the quasi-planar ladder that defines \(\mathcal{C}\). The dg \(\mathsf{S}\)-module \(\mathcal{C}^{(0)}\) is free as a dg \(\mathsf{S}\)-module, therefore it is projective. Let us write \(\mathcal{C}^{(i)}\) as the direct sum \(\mathcal{C}^{(i-1)}\oplus Y\) in the category of graded \(\mathsf{S}\)-modules. Then \(\mathcal{C}^{(i)}\) is obtained by attaching \(Y\) to \(\mathcal{C}^{(i-1)}\) via a pushout of the form \[\diagram{\diagram{\diagram{\diagram{\diagram{\diagram{\diagram{\diagram{\diagram{\diagram{ \diagram{\diagram{\diagram{\diagram{\diagram{\hskip-\hskip-22pt}}}}}}}}}}}}}}}\] \mathcal{C}^{(i-1)}\] which implies that \(\mathcal{C}^{(i)}\) is again cofibrant in the category of dg S-modules endowed with the projective model structure. We conclude by the fact that the colimit of cofibrant objects is again a cofibrant. Remark 25.: When \(\mathcal{C}\) is a quasi-planar conilpotent curved cooperad, the pre-differential of its underlying dg S-module can still be constructed by attaching cells at each step. Quasi-planar conilpotent curved cooperads are sent to cofibrant dg operads in the semi-model category structure of [10] by the operadic cobar construction of Subsection 2.1. **Proposition 13**.: _Let \(\mathcal{C}\) be a quasi-planar conilpotent curved cooperad. 
Then the dg operad \(\Omega\mathcal{C}\) is cofibrant._ Proof.: We can notice that the composition of the following functors satisfies the conditions of Proposition 9, which allows us to conclude. **Corollary 1**.: _Let \(\mathcal{C}\) be a conilpotent curved cooperad. The following assertions are equivalent_ 1. _the operad_ \(\Omega\mathcal{C}\) _is cofibrant;_ 2. _the morphism of dg operads_ \(\Omega\mathcal{C}\otimes\mathcal{E}\longrightarrow\Omega\mathcal{C}\) _admits a section._ Proof.: If \(\Omega\mathcal{C}\) is cofibrant, it is clear that the map \(\Omega\mathcal{C}\otimes\mathcal{E}\longrightarrow\Omega\mathcal{C}\) admits a section. Conversely, let us suppose that this map admits a section. Then the projection \(\Omega\mathsf{B}\big{(}(\Omega\mathcal{C})\otimes\mathcal{E}\big{)} \longrightarrow\Omega\mathcal{C}\) also admits a section given by \[\Omega\mathcal{C}\longrightarrow\Omega\mathsf{B}\Omega\mathcal{C} \longrightarrow\Omega\mathsf{B}(\mathcal{E}\otimes\Omega\mathcal{C})\.\] Therefore \(\Omega\mathcal{C}\) is a retract of \(\Omega\mathsf{B}(\mathcal{E}\otimes\Omega\mathcal{C})\). Since \(\Omega\mathsf{B}(\mathcal{E}\otimes\Omega\mathcal{C})\) is cofibrant (by Proposition 13 combined with Proposition 10), then its retract \(\Omega\mathcal{C}\) is also cofibrant. Remark 26.: In fact, for any \(\mathcal{C}\) be a quasi-planar conilpotent curved cooperad, the dg operad \(\Omega\mathcal{C}\) is not only \(\mathcal{E}\)-split, but there exists explicit canonical map \(\Omega\mathcal{C}\longrightarrow\mathcal{E}\otimes\Omega\mathcal{C}\) which endows \(\Omega\mathcal{C}\) with a \(\mathcal{E}\)-comodule structure. This is done in Subsection 2.8. ### The cofibrant resolution Finally, for any dg operad \(\mathcal{P}\), there is always a cofibrant replacement of the form \(\Omega\mathcal{C}\), where \(\mathcal{C}\) is a quasi-planar conilpotent curved cooperad. This coincides a previous result for augmented dg operads, see [14, Theorem 2, Proposition 17]. **Proposition 14**.: _For every operad \(\mathcal{P}\), \(\Omega\mathsf{B}(\mathcal{E}\otimes\mathcal{P})\) is cofibrant and the canonical morphism_ \[\psi:\Omega\mathsf{B}(\mathcal{E}\otimes\mathcal{P})\xrightarrow{}\mathcal{P}\] _is an arity-wise quasi-isomorphism._ Proof.: The fact that \(\Omega\mathsf{B}(\mathcal{E}\otimes\mathcal{P})\) is cofibrant follows from the fact that \(\mathsf{B}(\mathcal{E}\otimes\mathcal{P})\) is quasi-planar and that \(\Omega\) sends quasi-planar conilpotent curved cooperads to cofibrant dg operads. Let us us show that the map \(\psi:\Omega\mathsf{B}(\mathcal{E}\otimes\mathcal{P})\longrightarrow\mathcal{P}\) is an arity-wise quasi-isomorphism. It suffices to show that the counit morphism \(\epsilon:\Omega\mathsf{B}(\mathcal{E}\otimes\mathcal{P})\longrightarrow \mathcal{E}\otimes\mathcal{P}\) is an arity-wise quasi-isomorphism. It admits a canonical section \(s\) in the category of dg S-modules, given by the following composition: \[\mathcal{E}\otimes\mathcal{P}\cong s^{-1}s(\mathcal{E}\otimes\mathcal{P}) \longrightarrow s^{-1}\mathsf{B}(\mathcal{E}\otimes\mathcal{P}) \longrightarrow\Omega\mathsf{B}(\mathcal{E}\otimes\mathcal{P})\.\] Let us denote \(\pi_{\mathcal{P}}=s\circ\epsilon\). Since \(\epsilon\circ s=\mathsf{Id}\), the counit \(\epsilon\) is an arity-wise quasi-isomorphism if and only if \(\pi_{\mathcal{P}}\) is arity-wise quasi-isomorphism. So let us prove that \(\pi_{\mathcal{P}}\) is an arity-wise quasi-isomorphism. 
Let us first assume that the unit map \(\eta:\mathcal{I}\longrightarrow\mathcal{P}\) has a left inverse in the category of graded S-modules. Let us denote \(\mathcal{Q}\) the graded S-module defined as follows \[\mathcal{Q}\coloneqq\mathcal{E}_{\mathrm{pl}}\otimes\mathcal{P}\.\] The map \(\mathcal{I}\longrightarrow\mathcal{Q}\) also has a left inverse. We denote \(\overline{\mathcal{Q}}\) the kernel of this section. Let us denote \(\mathcal{C}\) the following planar graded conilpotent cooperad \[\mathcal{C}\coloneqq\mathbb{T}_{\mathrm{pl}}(s\mathcal{Q}\oplus s^{2}\mathcal{ I})\cong\mathbb{T}_{\mathrm{pl}}(s\overline{\mathcal{Q}}\oplus s\mathcal{I}\oplus s ^{2}\mathcal{I})\.\] We have a canonical isomorphism of graded operads \[\Omega\mathsf{B}(\mathcal{E}\otimes\mathcal{P})\simeq\mathbb{T}_{\mathrm{pl}} (s^{-1}\overline{\mathcal{C}})\otimes\mathbb{S}\.\] Let \(h\) be the degree \(1\) endomorphism of \(\Omega\mathsf{B}(\mathcal{E}\otimes\mathcal{P})\) given as follows. 1. On the trivial tree (with no node): \[\mathcal{I}\longrightarrow s^{-1}s^{2}\mathcal{I}\longrightarrow s^{-1} \overline{\mathcal{C}}\] \[1\mapsto s^{-1}s^{2}1\.\] 2. On planar trees with one node, it is zero. 3. On planar trees \(t\) with two nodes or more it is \[\begin{array}{c}t(\overline{\mathcal{C}})\otimes\mathbb{S}\\ \\ s^{-1}\overline{\mathcal{C}}(n_{1})\otimes s^{-1}\overline{\mathcal{C}}(n_{2}) \otimes\cdots\otimes\mathbb{S}\\ \\ s^{-1}F_{1}^{\mathrm{rad}}(\overline{\mathcal{C}})(n_{1})\otimes s^{-1} \overline{\mathcal{C}}(n_{2})\otimes\cdots\otimes\mathbb{S}\\ \\ s^{-2}F_{1}^{\mathrm{rad}}(\overline{\mathcal{C}})(n_{1})\otimes\overline{ \mathcal{C}}(n_{2})\otimes\cdots\otimes\mathbb{S}\\ \\ s^{-1}\overline{\mathcal{C}}(n_{1})\otimes\overline{\mathcal{C}}(n_{2}) \otimes\cdots\otimes\mathbb{S}\\ \\ s^{-1}\overline{\mathcal{C}}(n_{1}+n_{2}-1)\otimes\cdots\otimes\mathbb{S}\\ \end{array}\] where the third and the fourth map are given by \[s^{-1}x\otimes s^{-1}y\otimes\cdots\otimes\{\sigma\}\mapsto-(-1)^{|x|}s^{-2 }x\otimes y\otimes\cdots\otimes\{\sigma\}\mapsto-(-1)^{|x|}s^{-1}x\otimes y \otimes\cdots\otimes\{\sigma\}\.\] Let show that \(\partial(h)+\pi_{\mathcal{P}}\) is an isomorphism. We filter \(\Omega\mathsf{B}(\mathcal{E}\otimes\mathcal{P})\) using the tree filtration relative to the coradical filtration of \(\mathsf{B}(\mathcal{E}\otimes\mathcal{P})\), with the difference that we define the fist part not to be the trivial tree but \(0\). This gives \[F_{0}\Omega\mathsf{B}(\mathcal{E}\otimes\mathcal{P}) =0\,\] \[F_{1}\Omega\mathsf{B}(\mathcal{E}\otimes\mathcal{P}) =\mathcal{I}\oplus s^{-1}(s(\mathcal{E}\otimes\mathcal{P})\oplus s ^{2}\mathcal{I})\.\] This filtration is preserved by \(h\), \(\pi_{\mathcal{P}}\) and the derivation. Let us show that the map on the associated graded object \(\mathrm{gr}(\partial(h)+\pi_{\mathcal{P}})\) is an isomorphism. 1. On \(\mathrm{gr}_{1}(\Omega\mathsf{B}(\mathcal{E}\otimes\mathcal{P}))\) it is the identity. 2. For \(n\geq 2\), we filter \(\mathrm{gr}_{n}(\Omega\mathsf{B}(\mathcal{E}\otimes\mathcal{P}))\) by the total sum of the degrees of elements of \(\mathcal{E}\) that appear in the tensors. This filtration is again preserved by all maps of interest. On the associated graded object, the map \(\mathrm{gr}_{*}\mathrm{gr}_{n}(\partial(h)+\pi_{\mathcal{P}})\) is the identity. Thus \(\mathrm{gr}(\partial(h)+\pi_{\mathcal{P}})\) is an isomorphism and therefore so is \(\partial(h)+\pi_{\mathcal{P}}\). 
This means that \(\pi_{\mathcal{P}}\) is an arity-wise quasi-isomorphism as it is chain homotopic to an arity-wise quasi-isomorphism. Finally, if the unit \(\mathcal{I}\longrightarrow\mathcal{P}\) does not admit a left inverse in the category of graded \(\mathbb{S}\)-modules, then \(\mathcal{P}=0\) and the result holds trivially. ### The universal structure of a Barratt-Eccles comodule In this subsection, we equip any dg operad of the form \(\varOmega\mathcal{C}\) with a \(\mathcal{E}\)-comodule structure, where \(\mathcal{C}\) is again a quasi-planar conilpotent curved cooperad and where \(\mathcal{E}\) denotes the Barratt-Eccles operad. This comonoid structure can be viewed as the positive characteristic analogue of the \(\mathfrak{u}\mathcal{C}\)om-comonoid structure that exists for any dg operad. By that, we mean that it produces universal convolution (curved absolute) partition \(\mathcal{L}_{\infty}\)-algebra structures on the hom-graded modules between types of coalgebras and types of algebras. For more details, we refer to [13]. Recall that the restriction of the differential on \(\varOmega\mathcal{C}\) to the generators \(s^{-1}\overline{\mathcal{C}}_{\mathrm{pl}}\) decomposes into three maps * \(d_{s^{-1}\overline{\mathcal{C}}}:s^{-1}\overline{\mathcal{C}}_{\mathrm{pl}} \longrightarrow s^{-1}\overline{\mathcal{C}}_{\mathrm{pl}}\otimes\mathbb{S} \cong s^{-1}\overline{\mathcal{C}}\), which is given by \(s^{-1}d_{\overline{\mathcal{C}}}\); * \(d_{\mathrm{a}}:s^{-1}\overline{\mathcal{C}}_{\mathrm{pl}}\longrightarrow \mathbb{T}_{\mathrm{pl}}^{(2)}s^{-1}\overline{\mathcal{C}}_{\mathrm{pl}}\), induced by the (planar) partial decompositions; * \(d_{\mathrm{b}}:s^{-1}\overline{\mathcal{C}}_{\mathrm{pl}}\longrightarrow \mathcal{I}\), induced by the curvature of \(\mathcal{C}\). For \(p\geq 1\), \(q\geq 0\), \(1\leq i\leq p\), let us denote \[\Delta_{i}^{p,q}:s^{-1}\overline{\mathcal{C}}_{\mathrm{pl}}(p+q-1) \longrightarrow s^{-1}\overline{\mathcal{C}}_{\mathrm{pl}}(p)\otimes s^{-1} \overline{\mathcal{C}}_{\mathrm{pl}}(q)\] the \(i\)-th (planar) partial decomposition map of the cooperad \(\mathcal{C}_{\mathrm{pl}}\) into arity \(p\) and arity \(q\) elements. It is the composition of the map \(d_{\Delta}\) at arity \(p+q-1\) with the projection onto planar trees \(\mathfrak{f}\) with two nodes * the root node with \(p\) leaves; * the second node with \(q\) leaves and that is plugged to the root node at its \(\mathrm{ith}\) leaf. Let \(M\) be a pdg \(\mathbb{S}\)-module that is _quasi-free_, that is, such that there exists a graded \(\mathbb{N}\)-module \(N\) and an isomorphism of graded \(\mathbb{S}\)-module \(M\cong N\otimes\mathbb{S}\). We are interested in how the pre-differential interacts with the action of the symmetric groups. Note for any given \(n\geq 0\), there is an isomorphism \[M(n)\cong\bigoplus_{\sigma\in\mathbb{S}_{n}}N(n)\otimes\{\sigma\}\,\] of graded modules. **Definition 37** (\(\sigma\) pre-differential).: Let \(M\) be a pdg \(\mathbb{S}\)-module that is _quasi-free_, that is, such that there exists a graded \(\mathbb{N}\)-module \(N\) and an isomorphism of graded \(\mathbb{S}\)-module \(M\cong N\otimes\mathbb{S}\). 
Let \(\sigma\) be in \(\mathbb{S}_{n}\), the \(\sigma\)_pre-differential_, denoted by \(d_{\sigma}\), is the degree \(-1\) endomorphism of \(N(n)\) given by the composite \[N(n)\simeq N(n)\otimes\{\mathrm{Id}\}\hookrightarrow M(n)\xrightarrow{du_{s}} M(n)\twoheadrightarrow N(n)\otimes\{\sigma\}\simeq N(n)\.\] We denote by \(D_{\sigma}\) the \(\mathbb{S}_{n}\)-equivariant degree \(-1\) endomorphism of \(M(n)\) whose restriction to the generators \(N\) is the composition \[N(n)\otimes\{\mathrm{Id}\}\hookrightarrow M(n)\xrightarrow{du_{s}}M(n) \twoheadrightarrow N(n)\otimes\{\sigma\}\hookrightarrow M(n)\.\] For any \(x\) in \(N(n)\), we have \(D_{\sigma}(X\otimes\{\mathrm{Id}\})=d_{\sigma}(x)\otimes\{\sigma\}\). Similarly, for a sequence of permutations \(\underline{\sigma}:=(\sigma_{1},\ldots,\sigma_{k})\in\mathbb{S}_{n}^{k}\), we set \[d_{\underline{\sigma}}:=d_{\sigma_{1}}\cdots d_{\sigma_{k}}\quad\text{and} \quad D_{\underline{\sigma}}:=D_{\sigma_{1}}\cdots D_{\sigma_{k}}\.\] Notice that, for any \(x\) in \(M(n)\), we have \[D_{\emptyset}(x\otimes\{\mathrm{Id}\})=x\otimes\{\mathrm{Id}\}\ ;\quad D_{(\sigma_{1},\ldots, \sigma_{k})}(x\otimes\{\mathrm{Id}\})=d_{(\sigma_{1},\ldots,\sigma_{k})}(x) \otimes\{\sigma_{1}\cdots\sigma_{k}\}\.\] The case of interest is when \(M\) is the underlying pdg \(\mathbb{S}\)-module of the quasi-planar curved cooperad \(\mathcal{C}\) and \(N\) is the graded \(\mathbb{N}\)-module \(\mathcal{C}_{\mathrm{pl}}\) of planar generators. **Lemma 10**.: _Let \(n\geq 0\) and let \(\mathcal{C}\) be a quasi-planar conilpotent curved cooperad. For every planar generator \(x\in C_{\mathfrak{pl}}(n)\), there exists a natural integer \(k_{\kappa}\) such that for every sequence of permutations \((\sigma_{1},\dots,\sigma_{k})\in\mathbb{S}_{n}^{k}\) of length \(k\geq k_{\kappa}\), we have_ \[d_{(\sigma_{1},\dots,\sigma_{k})}(x)=0\.\] _The same result holds, mutatis mutandis, for \(s^{-1}\overline{\mathcal{C}}\) equipped with the pre-differential \(d_{s^{-1}\overline{\mathcal{C}}}=s^{-1}d_{\overline{\mathcal{C}}}\)._ Proof.: Since the cooperad \(\mathcal{C}\) is quasi-planar, it is the colimit of a quasi-planar ladder \(\{\mathcal{C}^{(i)}\}_{i\in\alpha}\). Let us prove by an ordinal induction that \(\mathcal{C}^{(i)}\) satisfies the property of the lemma for every \(i<\alpha+1\). The conilpotent curved cooperad \(\mathcal{C}^{(0)}\) is planar, therefore \(d_{\sigma}\) is zero whenever the permutation \(\sigma\) is non-trivial. So we only need to worry about the planar pre-differential \(D_{\mathrm{Id}}\). Now notice that by definition \(\mathcal{C}^{(0)}\) admits no non-trivial decomposition, therefore its pre-differential squares to zero. This proves the property for \(\mathcal{C}^{(0)}\). Now, let us suppose that \(\mathcal{C}^{(i)}\) satisfies the property for every \(j<i\). If \(i\) is a limit ordinal, \(\mathcal{C}^{(i)}\) also satisfies the property. Otherwise, \(i\) has the form \(i=l+1\). One can notice that on \(\operatorname{gr}_{i}\mathcal{C}\), the maps \(D_{\sigma}\) is zero whenever the permutation \(\sigma\) is non-trivial and that \(d^{2}=D_{\mathrm{Id}}^{2}=0\). So, for every \(x\in\mathcal{C}^{(i)}(n)\), and for every two permutations \(\sigma_{1},\sigma_{2}\), \(D_{(\sigma_{1},\sigma_{2})}(x)\) is in \(\mathcal{C}^{(i)}(n)\). This proves the property for \(\mathcal{C}^{(i)}\), and allows us to conclude. Finally, the result for \(s^{-1}\overline{\mathcal{C}}\) is a direct consequence of the previous one. 
**The comodule map.** Let us consider the morphism of graded \(\mathbb{N}\)-modules where \(s^{-1}x\in s^{-1}\overline{\mathcal{C}}_{\mathfrak{pl}}(n)\) is a planar generator and where the sum is taken over all sequences of permutations \(\underline{\sigma}=(\sigma_{1},\dots,\sigma_{k})\in\mathbb{S}_{n}^{k}\) for all \(k\geq 0\). The formula is well-defined by Lemma 10. Moreover, notice that \(\Delta_{\mathcal{E},\mathcal{C}}\) is the restriction of the morphism of operads \[\Omega\mathcal{C}=\mathbb{T}(s^{-1}\overline{\mathcal{C}})\xrightarrow{ \Delta_{\mathcal{E},\mathcal{C}}}\mathbb{T}(\mathcal{E}\otimes s^{-1} \overline{\mathcal{C}})\longrightarrow\mathbb{T}(\mathcal{E})\otimes\mathbb{ T}(s^{-1}\overline{\mathcal{C}})\longrightarrow\mathcal{E}\otimes\Omega \mathcal{C}\,\] which we also denote by \(\Delta_{\mathcal{E},\mathcal{C}}\). **Theorem 2**.: _The map_ _where \(s^{-1}x\in s^{-1}\overline{\mathcal{C}}_{\mathfrak{pl}}(n)\) is a planar generator and where the sum is taken over all sequences of permutations \(\underline{\sigma}=(\sigma_{1},\dots,\sigma_{k})\in\mathbb{S}_{n}^{k}\) for all \(k\geq 0\), induces a morphism of dg operads_ \[\Omega\mathcal{C}\longrightarrow\mathcal{E}\otimes\Omega\mathcal{C}\] _which endows \(\Omega\mathcal{C}\) with a left \(\mathcal{E}\)-comodule structure._ Remark 27.: There is an analogue right \(\mathcal{E}\)-comodule structure, induced by the same formula. Proof.: By definition, it induces a morphism of graded operads. By Lemma 11, Lemma 13 and Lemma 17, this induced morphism commutes with the differentials. Thus, it is a morphism of dg operads. By Lemma 18, it defines a left \(\mathcal{E}\)-comodule structure. The rest of this subsection is devoted to proving Theorem 2. **Lemma 11**.: _The diagram_ _is commutative, where \(d_{\theta}\) is the term of the differential in \(\Omega\mathcal{C}\) induced by the curvature \(\theta\) of \(\mathcal{C}\)._ Proof.: The diagram trivially commutes at arity \(n\neq 1\). In arity \(1\), the commutation is a direct consequence of the fact that \(\Delta_{\mathcal{E},\mathcal{C}}(x\otimes\mathrm{Id})=(\mathrm{Id})\otimes x \otimes\{\mathrm{Id}\}\) for any \(x\in s^{-1}\overline{\mathcal{C}}_{\mathrm{pl}}(1)\). 
**Lemma 12**.: _For every natural integers, \(n\geq 2,k\geq 1\), we have an equality of degree \(-1\) maps from \(s^{-1}\overline{\mathcal{C}}(n)\) to \(\mathcal{E}(n)\otimes s^{-1}\overline{\mathcal{C}}(n)\):_ \[\sum_{\underline{\sigma}\in\mathbb{S}_{n}^{k}}d_{\mathcal{E}}\rho(\underline{ \sigma})\otimes D_{\underline{\sigma}}(-)=\sum_{\underline{\sigma}\in\mathbb{ S}_{n}^{k}}\rho((\sigma_{1},\ldots,\sigma_{k-1}))^{\sigma_{k}}\otimes D_{ \underline{\sigma}}(-)+(-1)^{k}\rho((\sigma_{2},\ldots,\sigma_{k}))\otimes D_ {\underline{\sigma}}(-)\.\] Proof.: We have \[d_{\mathcal{E}}(\rho(\underline{\sigma}))=\rho((\sigma_{1},\ldots,\sigma_{k-1 }))^{\sigma_{k}}+\sum_{0<i<k}(-1)^{k-i}\delta_{i}(\rho(\underline{\sigma}))+( -1)^{k}\rho((\sigma_{2},\ldots,\sigma_{k}))\,\] where \[\delta_{i}(\rho(\underline{\sigma}))=\rho(\sigma_{1}\ldots,\sigma_{i}\sigma_ {i+1},\ldots,\sigma_{k})\.\] This gives us \[\sum_{\underline{\sigma}\in\mathbb{S}_{n}^{k}}d_{\mathcal{E}}\rho (\overline{\sigma})\otimes D_{\underline{\sigma}}(-)= \sum_{\underline{\sigma}\in\mathbb{S}_{n}^{k}}\Big{(}\rho(( \sigma_{1},\ldots,\sigma_{k-1}))^{\sigma_{k}}\otimes D_{\underline{\sigma}}(-)\] \[+\sum_{0<i<k}(-1)^{k-i}\sum_{\underline{\sigma}}\delta_{i}(\rho( \underline{\sigma}))\otimes D_{\underline{\sigma}}(-)\Big{)}\] \[+(-1)^{k}\sum_{\underline{\sigma}\in\mathbb{S}_{n}^{k}}\rho(( \sigma_{2},\ldots,\sigma_{k}))\otimes D_{\underline{\sigma}}(-)\.\] Since the underlying graded conilpotent cooperad of \(\mathcal{C}\) is planar, the curvature equation tells us that \(d_{s^{-1}\overline{\mathcal{C}}}^{2}\) is planar and that, for every permutation \(\sigma\in\mathbb{S}_{n}\), \[\sum_{\mu\notin\sigma}D_{\mu}D_{\xi}=\left\{\begin{array}{l}d_{s^{-1} \overline{\mathcal{C}}}^{2}\ \text{if}\ \sigma=\mathrm{Id}\,\\ 0\ \text{otherwise}.\end{array}\right.\] Noticing that \(\delta_{i}(\rho(\underline{\sigma}))\) depends only on the product \(\sigma_{i}\sigma_{i+1}\) and not on the particular values of \(\sigma_{i}\) and \(\sigma_{i+1}\), we have \[\sum_{\underline{\sigma}\in\mathbb{S}_{n}^{k}}\delta_{i}(\rho(\underline{ \sigma}))\otimes D_{\underline{\sigma}}(-)=\sum_{\underline{\mu}=(\mu_{1}, \ldots,\mu_{k-1})\in\mathbb{S}_{n}^{k-1}}\sum_{\sigma\sigma^{\prime}=\mu_{i} }\rho(\underline{\mu})\otimes D_{(\mu_{1},\ldots,\sigma\sigma^{\prime},\ldots,\mu_{k-1})}(-)\.\] If \(\mu_{i}=\mathrm{Id}\) then \(\rho(\underline{\mu})=0\) and if \(\mu_{i}\neq\mathrm{Id}\) then \(D_{(\mu_{1},\ldots,\sigma\sigma^{\prime},\ldots,\mu_{k-1})}=0\). Thus \[\sum_{\underline{\sigma}\in\mathbb{S}_{n}^{k}}\delta_{i}(\rho(\underline{ \sigma}))\otimes D_{\underline{\sigma}}(-)=0\,\] which concludes the proof. **Lemma 13**.: _The following diagram_ _is commutative._ Proof.: In arity \(0\) and \(1\), the commutation is straightforward to check. In arity \(n\geq 2\), we have \(d_{s^{-1}\overline{c}}=\sum_{\sigma\in\mathbb{S}_{n}}D_{\sigma}\). 
This gives the following equalities of maps on \(s^{-1}\overline{c}(n)\) \[\Delta_{\mathcal{E},\mathcal{C}}d_{s^{-1}\overline{c}}(-) =\sum_{\underline{\sigma}\in\mathbb{S}_{n}^{+},\ \sigma\in\mathbb{S}_{n}}\rho(\underline{\sigma})^{\sigma}\otimes D_{ \underline{\sigma}}D_{\sigma}(-)\,\] \[(\mathrm{Id}\otimes d_{s^{-1}\overline{c}})\Delta_{\mathcal{E}, \mathcal{C}}(-) =\sum_{k\geq 0}\sum_{\underline{\sigma}\in\mathbb{S}_{n}^{+},\ \sigma\in\mathbb{S}_{n}}(-1)^{k}\rho(\underline{\sigma})\otimes D_{ \sigma}D_{\underline{\sigma}}(-)\.\] It follows from Lemma 12 that \[\Delta_{\mathcal{E},\mathcal{C}}d_{s^{-1}\overline{c}}(-)=(d_{\mathcal{E}} \otimes\mathrm{Id})\Delta_{\mathcal{E},\mathcal{C}}(-)+(\mathrm{Id}\otimes d_ {s^{-1}\overline{c}})\Delta_{\mathcal{E},\mathcal{C}}(-)\,\] and thus that the square also commutes in arity \(n\geq 2\). **Lemma 14**.: _Let \(p,q\geq 1\), \(1\leq i\leq p,n=p+q-1\) and let \(\sigma\in\mathbb{S}_{n}\) be a non-trivial permutation. One has the following egalites between maps from \(s^{-1}\overline{c}_{\mathrm{pl}})(n)\) to \((\mathbb{T}_{\mathrm{pl}}^{(2)}(s^{-1}\overline{c}_{\mathrm{pl}}))(n)\otimes \mathbb{S}_{n}\) depending on the \((p,q,i)\)-admissibility of \(\sigma\)._ 1. _If_ \(\sigma\) _is_ \((p,q,i)\)_-top admissible, that is, if there exists a unique non-trivial permutation_ \(\psi\in\mathbb{S}_{q}\) _such that_ \(\sigma=\mathrm{Id}_{p}\circ_{i}\psi\)_, then_ \[\Delta_{i}^{p,q}D_{\sigma}=-(\mathrm{Id}\otimes D_{\psi})\Delta_{i}^{p,q}\.\] 2. _If_ \(\sigma\) _is_ \((p,q,i)\)_-bottom admissible, that is, if there exists a unique non-trivial permutation_ \(\mu\in\mathbb{S}_{p}\) _such that_ \(\sigma=\mu\circ_{\mu^{-1}(i)}\mathrm{Id}_{q}\)_, then_ \[\Delta_{i}^{p,q}D_{\sigma}=-(D_{\mu}\otimes\mathrm{Id})\Delta_{\mu^{-1}(i)}^{ p,q}\.\] 3. _If_ \(\sigma\) _is_ \((p,q,i)\)_-non admissible, then_ \[\Delta_{i}^{p,q}D_{\sigma}=0\.\] Proof.: Since the derivation of \(\Omega\mathcal{C}\) squares to zero, one has the following equation \[d_{\Delta}d_{s^{-1}\overline{c}}+(d_{s^{-1}\overline{c}}\otimes\mathrm{Id}+ \mathrm{Id}\otimes d_{s^{-1}\overline{c}})d_{\Delta}=0\] between maps from \(s^{-1}\overline{c}_{\mathrm{pl}}\) to \((\mathbb{T}^{(2)}(s^{-1}\overline{c}))\simeq(\mathbb{T}_{\mathrm{pl}}^{(2)}(s ^{-1}\overline{c}_{\mathrm{pl}}))\otimes\mathbb{S}\). In arity \(n\), we denote \(\pi_{p,q,i,\sigma}\) the projection \[(\mathbb{T}_{\mathrm{pl}}^{(2)}(s^{-1}\overline{c}_{\mathrm{pl}}))(n)\otimes \mathbb{S}_{n}\to\mathbb{T}_{\mathrm{pl},p,q,i}^{(2)}(s^{-1}\overline{c}_{ \mathrm{pl}})\otimes\{\sigma\}\,\] where \(\mathbb{T}_{\mathrm{pl},p,q,i}^{(2)}(s^{-1}\overline{c}_{\mathrm{pl}})\) denotes the planar trees labelled by \(s^{-1}\overline{c}_{\mathrm{pl}}\) that have two nodes: the root node with \(p\) input and another node with \(q\) inputs, where the top node is attached to the \(i^{th}\) input of the root node. 
One has \[\pi_{p,q,i,\sigma}d_{\Delta}d_{s^{-1}\overline{c}}=\Delta_{i}^{p,q}D_{\sigma}.\] Moreover, \[\pi_{p,q,i,\sigma}(d_{s^{-1}\overline{c}}\otimes\mathrm{Id})d_{\Delta}= \left\{\begin{array}{ll}(D_{\mu}\otimes\mathrm{Id})\Delta_{\mu^{-1}(i)}^{p,q }\ \text{if $\sigma$ is bottom admissible;}\\ 0\ \text{otherwise.}\end{array}\right.\] and \[\pi_{p,q,i,\sigma}(\mathrm{Id}\otimes d_{s^{-1}\overline{c}})d_{\Delta}= \left\{\begin{array}{ll}(\mathrm{Id}\otimes D_{\psi})\Delta_{i}^{p,q}\ \text{if $\sigma$ is top admissible;}\\ 0\ \text{otherwise.}\end{array}\right.\] Then, the three equations of the proposition are just the three possible cases (in terms of \((p,q,i)\)-admissibility of \(\sigma\)) of the equation \[\mathbf{\pi}_{p,q,i,\sigma}(d_{\mathsf{a}}d_{\mathsf{s}^{-1}\overline{\mathcal{C}}}+ (d_{\mathsf{s}^{-1}\overline{\mathcal{C}}}\otimes\mathsf{Id}+\mathsf{Id} \otimes d_{\mathsf{s}^{-1}\overline{\mathcal{C}}})d_{\mathsf{a}})=0\.\] **Lemma 15**.: _Let us denote by \(u\) the generator element of \(u\mathcal{Ass}(0)=\mathcal{E}(0)\). Let \(p\geq 1\), let \(\sigma\in\mathbb{S}_{p}\) be a non-trivial permutation and let \(1\leq i\leq p+1\). Then,_ \[\sum_{\mu\ \circ_{\mu-1}(0)}(D_{\mu}\otimes\mathsf{Id})\Delta_{\mu^{-1}(i)}^{p,0 }=-\Delta_{i}^{p,0}D_{\sigma}\,\] _where the sum ranges over all \(\mu\) in \(\mathbb{S}_{p+1}\) such that \(\mu\ \circ_{\mu^{-1}(i)}u=\sigma\)._ Proof.: It follows from the same arguments as those used in the proof of Lemma 14. **Lemma 16**.: _Let \(k\geq 0\), \(p\geq 1\), \(q\geq 0\), \(1\leq i\leq p\), \(n=p+q-1\) and let \(\underline{\sigma}\in\mathbb{S}_{n}^{k}\) be a sequence of non-trivial permutations. Then_ \[\Delta_{i}^{p,q}D_{\underline{\sigma}}(-)=\sum_{\underline{\mu}\circ_{\mu} \underline{\sigma}=\underline{\sigma}}(-1)^{k}\epsilon(\phi)(D_{\underline{ \mu}}\otimes D_{\underline{\psi}})\Delta_{\underline{\mu}^{-1}(i)}^{p,q}(-)\,\] _where the sum ranges over all \(\underline{\mu}\in\mathbb{S}_{p}^{q}\) and all \(\underline{\psi}\in\mathbb{S}_{q}^{b}\), with \(a\geq 1\), \(b\geq 1\) and \(a+b=k\), such that there exists a \((a,b)\)-shuffle \(\phi\) such that \(\underline{\mu}\circ_{i,\underline{\psi}}\underline{\psi}=\underline{\sigma}\). and where \(\underline{\mu}^{-1}(i)=\mu_{a}^{-1}\cdots\mu_{1}^{-1}(i)\)._ Proof.: First, in the case where \(q\geq 1\), a straightforward induction using Lemma 14 shows that: 1. if \(\underline{\sigma}\) is \((p,q,i)\)-admissible, that is, if there exist an unique shuffle permutation \(\phi\in\mathbb{S}_{k}\) (which is an \((a,b)\)-shuffle) and an unique pair of sequences of permutations \(\underline{\mu}\in\mathbb{S}_{p}^{q}\), \(\underline{\psi}\in\mathbb{S}_{q}^{b}\) such that \[\underline{\sigma}=\underline{\mu}\circ_{i,\phi}\underline{\psi}\,\] then \[\Delta_{i}^{p,q}D_{\underline{\sigma}}(-)=(-1)^{k}\epsilon(\phi)(D_{\underline{ \mu}}\otimes D_{\underline{\psi}})\Delta_{j}^{p,q}(-)\,\] where \(j=\mu_{a}^{-1}\cdots\mu_{1}^{-1}(i)\). 2. If \(\underline{\sigma}\) is not \((p,q,i)\)-admissible, then \[\Delta_{i}^{p,q}D_{\underline{\sigma}}(-)=0\.\] In the case where \(q=0\), the result follows from a similar induction using Lemma 15. **Lemma 17**.: _The following diagram_ _is commutative, where \(\gamma\) is the composition within the operad \(\mathcal{E}\otimes\Omega\mathcal{C}\)._ Proof.: First let \(p\geq 1\), \(q\geq 0\), \(1\leq i\leq p\) and \(n=p+q-1\). 
As in the proof of Lemma 14, we denote \(\mathbb{T}_{\mathrm{pl},p,q,i}^{(2)}(s^{-1}\overline{\mathcal{C}}_{\mathrm{pl}})\) the planar trees labelled by \(s^{-1}\overline{\mathcal{C}}_{\mathrm{pl}}\) that have two nodes: the root node with \(p\) inputs and another node with \(q\) inputs, which is attached to the \(i^{th}\) input of the root node. Let us prove that, in arity \(n\), the compositions of the two maps
\[s^{-1}\overline{\mathcal{C}}_{\mathrm{pl}}(n)\rightrightarrows\mathcal{E}(n)\otimes\mathbb{T}_{\mathrm{pl}}^{(2)}(s^{-1}\overline{\mathcal{C}}_{\mathrm{pl}})(n)\otimes\mathbb{S}_{n}\]
with the projection
\[\mathrm{Id}\otimes\pi_{p,q,i}\otimes\mathrm{Id}:\mathcal{E}(n)\otimes\mathbb{T}_{\mathrm{pl}}^{(2)}(s^{-1}\overline{\mathcal{C}}_{\mathrm{pl}})(n)\otimes\mathbb{S}_{n}\twoheadrightarrow\mathcal{E}(n)\otimes\mathbb{T}_{\mathrm{pl},p,q,i}^{(2)}(s^{-1}\overline{\mathcal{C}}_{\mathrm{pl}})\otimes\mathbb{S}_{n}\]
are equal. This amounts to showing that
\[\sum_{\underline{\mu},\,\underline{\psi}}(-1)^{|\underline{\mu}||\underline{\psi}|}\big(\rho(\underline{\mu})\circ_{\underline{\mu}^{-1}(i)}\rho(\underline{\psi})\big)\otimes\big((D_{\underline{\mu}}\otimes D_{\underline{\psi}})\Delta_{\underline{\mu}^{-1}(i)}^{p,q}\big)=(\mathrm{Id}\otimes\Delta_{i}^{p,q})\left(\sum_{\underline{\sigma}}\rho(\underline{\sigma})\otimes D_{\underline{\sigma}}\right)\ ,\]
where \(\underline{\mu}\) and \(\underline{\psi}\) range over sequences of non-trivial permutations in \(\mathbb{S}_{p}\) and \(\mathbb{S}_{q}\) respectively, and \(\underline{\sigma}\) over sequences of non-trivial permutations in \(\mathbb{S}_{n}\). This follows from the sequence of equalities
\[\begin{aligned}
\sum_{\underline{\mu},\,\underline{\psi}}(-1)^{|\underline{\mu}||\underline{\psi}|}&\big(\rho(\underline{\mu})\circ_{\underline{\mu}^{-1}(i)}\rho(\underline{\psi})\big)\otimes\big((D_{\underline{\mu}}\otimes D_{\underline{\psi}})\Delta_{\underline{\mu}^{-1}(i)}^{p,q}\big)\\
&=\sum_{\underline{\sigma}}\rho(\underline{\sigma})\otimes\left(\sum_{\underline{\mu}\circ_{i,\phi}\underline{\psi}=\underline{\sigma}}(-1)^{|\underline{\sigma}|}(D_{\underline{\mu}}\otimes D_{\underline{\psi}})\Delta_{\underline{\mu}^{-1}(i)}^{p,q}\right)\\
&=\sum_{\underline{\sigma}}\rho(\underline{\sigma})\otimes\left((-1)^{|\underline{\sigma}|}\,\Delta_{i}^{p,q}D_{\underline{\sigma}}\right)\qquad\text{by Lemma 16}.
\end{aligned}\]
Let \(\mathcal{P}\) be a dg operad. One filters \(s(\mathcal{E}_{\mathrm{pl}}\otimes\mathcal{P})\oplus s^{2}\mathcal{I}\) as
\[\bar{F}_{0}(s(\mathcal{E}_{\mathrm{pl}}\otimes\mathcal{P})\oplus s^{2}\mathcal{I})=0\ ,\]
\[\bar{F}_{n}(s(\mathcal{E}_{\mathrm{pl}}\otimes\mathcal{P})\oplus s^{2}\mathcal{I})=\bar{F}_{n}\big(s(\mathcal{E}_{\mathrm{pl}}\otimes\mathcal{P})\big)\oplus s^{2}\mathcal{I},\quad n\geq 1.\]
Thus, one can also filter \(\mathbb{T}_{\mathrm{pl}}(s(\mathcal{E}_{\mathrm{pl}}\otimes\mathcal{P})\oplus s^{2}\mathcal{I})\) by
\[\bar{F}_{n}\mathbb{T}_{\mathrm{pl}}(s(\mathcal{E}_{\mathrm{pl}}\otimes\mathcal{P})\oplus s^{2}\mathcal{I})(m)=\bigoplus_{t}\ \sum_{i_{1}+\dots+i_{k}=n}\ \bigotimes_{j=1}^{k}\bar{F}_{i_{j}}(s(\mathcal{E}_{\mathrm{pl}}\otimes\mathcal{P})\oplus s^{2}\mathcal{I})(l_{j})\ ,\]
where the first sum is taken over the planar trees \(t\) with \(m\) leaves; such a tree has \(k\) nodes whose arities are \(l_{1},\dots,l_{k}\). Finally, this induces a filtration \(\bar{F}_{n}\mathcal{B}(\mathcal{E}\otimes\mathcal{P})\) on the underlying graded conilpotent cooperad of \(\mathcal{B}(\mathcal{E}\otimes\mathcal{P})\), which is canonically isomorphic to \(\mathbb{T}_{\mathrm{pl}}(s\mathcal{E}_{\mathrm{pl}}\otimes\mathcal{P})\otimes\mathbb{S}\), where the isomorphism is induced by the above isomorphism \(\mathcal{E}\otimes\mathcal{P}\cong(\mathcal{E}_{\mathrm{pl}}\otimes\mathcal{P})\otimes\mathbb{S}\).

**Proposition 15**.: _For every natural integer \(n\), \(\bar{F}_{n}\mathcal{B}(\mathcal{E}\otimes\mathcal{P})\) is a conilpotent curved subcooperad of \(\mathcal{B}(\mathcal{E}\otimes\mathcal{P})\). Moreover, the diagram_
\[\bar{F}_{0}\mathcal{B}(\mathcal{E}\otimes\mathcal{P})\longrightarrow\bar{F}_{1}\mathcal{B}(\mathcal{E}\otimes\mathcal{P})\longrightarrow\dots\longrightarrow\bar{F}_{n}\mathcal{B}(\mathcal{E}\otimes\mathcal{P})\longrightarrow\dots\]
_is a quasi-planar \(\omega\)-ladder whose colimit is \(\mathcal{B}(\mathcal{E}\otimes\mathcal{P})\)._

Proof.: The only point that is not clear is the fact that the non-planar component of the pre-differential of \(\bar{F}_{n+1}\mathcal{B}(\mathcal{E}\otimes\mathcal{P})\) actually targets \(\bar{F}_{n}\mathcal{B}(\mathcal{E}\otimes\mathcal{P})\). This follows from the fact that this non-planar component is given by applying the non-planar part of the differential of \(\mathcal{E}\). 

Let \(\mathcal{C}\) be a quasi-planar conilpotent curved cooperad that is the colimit of a quasi-planar cooperad ladder \((\mathcal{C}^{(i)})_{i\in\alpha}\).
We built in the previous subsection 2.8 a morphism of dg operads \[\Omega\mathcal{C}\longrightarrow\mathcal{E}\otimes\Omega\mathcal{C}\] whose restriction to \(s^{-1}\overline{\mathcal{C}}_{\mathrm{pl}}\) factors through \[s^{-1}\overline{\mathcal{C}}_{\mathrm{pl}}\longrightarrow\mathcal{E}_{ \mathrm{pl}}\otimes s^{-1}\overline{\mathcal{C}}.\] Subsequently, the restriction of the adjoint map \(\mathcal{C}\longrightarrow\mathcal{B}(\mathcal{E}\otimes\Omega\mathcal{C})\) to \(\mathcal{C}_{\mathrm{pl}}\) factor through \[\mathcal{C}_{\mathrm{pl}}\longrightarrow\mathbb{T}_{\mathrm{pl}}(\mathcal{E}_ {\mathrm{pl}}\otimes\Omega\mathcal{C})\.\] **Definition 38**.: Let \(\mathcal{C}\) be a quasi-planar conilpotent curved cooperad. For every natural integer \(n\), we define the _quasi-planar ladder_\(F_{n}^{\mathrm{ap}}\mathcal{C}_{\mathrm{pl}}\) of \(\mathcal{C}_{\mathrm{pl}}\) as the following pullback in the category of graded conilpotent cooperads It is the largest sub-planar conilpotent cooperad of \(\mathcal{C}_{\mathrm{pl}}\) whose image in \(\mathcal{B}(\mathcal{E}\otimes\Omega\mathcal{C})_{\mathrm{pl}}\) lies inside \(\bar{F}_{n}\mathcal{B}(\mathcal{E}\otimes\Omega\mathcal{C})_{\mathrm{pl}}\). **Lemma 19**.: _The above square that defines \(F_{n}^{\mathrm{ap}}\mathcal{C}_{\mathrm{pl}}\) is a pullback in the category of graded \(\Bbbk\)-modules. In particular, the map \(F_{n}^{\mathrm{ap}}\mathcal{C}_{\mathrm{pl}}\mapsto\mathcal{C}_{\mathrm{pl}}\) is degree-wise injective._ Proof.: This is a direct consequence of the fact that the tensor product of graded \(\Bbbk\)-modules preserves intersections. Notice that the diagram of graded planar conilpotent cooperads \[F_{0}^{\mathrm{pl}}\mathcal{C}_{\mathrm{pl}}\mapsto F_{1}^{\mathrm{pl}} \mathcal{C}_{\mathrm{pl}}\rightarrow\dots\mapsto F_{n}^{\mathrm{pl}}\mathcal{C} _{\mathrm{pl}}\rightarrow\dots\] is an \(\omega\)-ladder whose colimit is \(\mathcal{C}_{\mathrm{pl}}\). **Definition 39** (Canonical quasi-planar ladder).: Let \(\mathcal{C}\) be a quasi-planar conilpotent curved cooperad. For every natural integer \(n\), we define the _canonical quasi-planar ladder_\(F_{n}^{\mathrm{ap}}\mathcal{C}\) of \(\mathcal{C}\) as the following pullback in the category of conilpotent curved cooperads **Proposition 16**.: _Let \(\mathcal{C}\) be a quasi-planar conilpotent curved cooperad._ 1. _There is a canonical isomorphism of graded conilpotent cooperads_ \[\left(F_{n}^{\mathrm{pl}}\mathcal{C}_{\mathrm{pl}}\right)\otimes\mathbb{S} \cong F_{n}^{\mathrm{ap}}\mathcal{C}\.\] _Furthermore, it is natural with respect to the inclusions_ \(F_{n}^{\mathrm{pl}}\mathcal{C}_{\mathrm{pl}}\mapsto F_{n+1}^{\mathrm{ap}} \mathcal{C}_{\mathrm{pl}}\) _and_ \(F_{n}^{\mathrm{ap}}\mathcal{C}\mapsto F_{n+1}^{\mathrm{ap}}\mathcal{C}\)_._ 2. _The diagram of conilpotent curved cooperads_ \[F_{0}^{\mathrm{ap}}\mathcal{C}\mapsto F_{1}^{\mathrm{ap}}\mathcal{C}\mapsto \cdots\mapsto F_{n}^{\mathrm{ap}}\mathcal{C}\mapsto\cdots\] _is a quasi-planar_ \(\omega\)_-ladder whose colimit is_ \(\mathcal{C}\)_._ Proof.: Let us prove the first point: let \(\mathcal{D}\) be the following pullback in the category of conilpotent curved cooperads Both the functor \(-\otimes\mathbb{S}\) from graded planar conilpotent cooperads to graded conilpotent cooperads and the forgetful functor from \(\mathrm{pdg}\) conilpotent cooperads to graded conilpotent cooperads preserve limits. 
Thus there is a canonical isomorphism of graded conilpotent cooperads \[\left(F_{n}^{\mathrm{ap}}\mathcal{C}_{\mathrm{pl}}\right)\otimes\mathbb{S} \cong\mathcal{D}.\] The map \(\mathcal{D}\rightarrow\mathcal{C}\) is degree-wise injective. Subsequently \(\mathcal{D}\) is curved and the above square is also a pullback in the category of conilpotent curved cooperads. Hence \(\mathcal{D}\cong F_{n}^{\mathrm{ap}}\mathcal{C}\). The fact that the diagram forms quasi-planar ladder follows from the fact that \[\tilde{F}_{0}B(\mathcal{E}\otimes\mathcal{P})\longrightarrow\tilde{F}_{1}B( \mathcal{E}\otimes\mathcal{P})\longrightarrow\cdots\longrightarrow\tilde{F}_ {n}B(\mathcal{E}\otimes\mathcal{P})\longrightarrow\cdots\] is a quasi-planar \(\omega\)-ladder. The fact that its colimit is \(\mathcal{C}\) is a direct consequence of Lemma 19. Remark 28.: Notice that for a dg operad \(\mathcal{P}\), we have defined two canonical quasi-planar cooperad \(\omega\)-ladder on \(B(\mathcal{E}\otimes\mathcal{P})\) that are \(\tilde{F}_{n}B(\mathcal{E}\otimes\mathcal{P})\) and \(F_{n}^{\mathrm{ap}}B(\mathcal{E}\otimes\mathcal{P})\). With some more work, one can check that these two quasi-planar cooperad ladders are equal. This will be done in a future work. **Proposition 17**.: _Let \(g:\mathcal{C}\longrightarrow\mathcal{D}\) be a quasi-planar morphism of quasi-planar conilpotent curved cooperads. Then the restriction of \(g\) to \(F_{n}^{\mathrm{ap}}\mathcal{C}\) factors through \(F_{n}^{\mathrm{ap}}\mathcal{D}\), and the following diagram commutes_ Proof.: It is straightforward to check that any quasi-planar morphism \(g:\mathcal{C}\longrightarrow\mathcal{D}\) induces a morphism of left \(\mathcal{E}\)-comodules \(\Omega(g):\Omega\mathcal{C}\longrightarrow\Omega\mathcal{D}\). Therefore it induces a morphism of diagrams between the pullbacks that define \(F_{n}^{\mathrm{ap}}\mathcal{C}\) and \(F_{n}^{\mathrm{ap}}\mathcal{D}\), which allows us to conclude. ## 3. Algebras, coalgebras, and Bar-Cobar adjunctions In this section, we recall the various notions of (co)algebras over a (co)operads, as well as the bar-cobar adjunctions that interrelate them. Along the way, we develop some of their main categorical and algebraic properties. Finally, we review the different methods that can give a homotopical meaning to the categories of dg (co)algebras over an dg operad. Finally, we review the different methods that can give a homotopical meaning to the categories of dg (co)algebras over an dg operad. ### From \(\mathbb{S}\)-modules to functors The category of dg modules is enriched and tensored in dg \(\mathbb{S}\)-modules as follows. 1. For every dg modules \(X,Y\), the mapping dg \(\mathbb{S}\)-module is \(\operatorname{Mult}(X,Y)\) given by \[\operatorname{Mult}(X,Y)(n)\coloneqq[X^{\otimes n},Y].\] In the case where \(Y=X\), we set \(\operatorname{End}(X)\coloneqq\operatorname{Mult}(X,X)\). 2. For every dg module \(X\) and every dg \(\mathbb{S}\)-module \(M\), the tensorisation of \(X\) by \(M\) is just given by \(M\circ X\) as dg module, where \(X\) is considered as dg \(\mathbb{S}\)-modules concentrated in arity zero. Thus for every additional dg \(\mathbb{S}\)-module \(N\), the structural map \[(N\circ M)\circ X\longrightarrow N\circ(M\circ X)\] is an isomorphism. It is also enriched and cotensored in dg \(\mathbb{S}\)-modules as follows. 1. 
For every dg modules \(X,Y\), the mapping dg \(\mathbb{S}\)-module is \(\operatorname{coMult}(X,Y)\) given by \[\operatorname{coMult}(X,Y)(n)\coloneqq[X,Y^{\otimes n}].\] In the case where \(Y=X\), we set \(\operatorname{coEnd}(X)\coloneqq\operatorname{coMult}(X,X)\). 2. For every dg module \(X\) and every dg \(\mathbb{S}\)-module \(M\), the cotensorisation of \(X\) by \(M\) is \(X^{M}\) given by \[X^{M}=\prod_{n\geq 0}\left[M(n),X^{\otimes n}\right]^{\mathbb{S}_{n}}\.\] For every additional dg \(\mathbb{S}\)-module \(N\), the structural map \[\varphi_{M,N}:\left(X^{M}\right)^{N}\longrightarrow X^{N\circ M}\] is a degree-wise injection. Similarly, dg modules are on the one hand enriched and tensored over dg \(\mathbb{N}\)-module through the bifunctors \[X,Y\mapsto\operatorname{Mult}_{\operatorname{pl}}(X,Y)\coloneqq[X^{\otimes-},Y];\quad M,X\mapsto M\circ_{\operatorname{pl}}X\] and they are on the other hand enriched and cotensored over dg \(\mathbb{N}\)-module through the bifunctors \[X,Y\mapsto\operatorname{coMult}_{\operatorname{pl}}(X,Y)\coloneqq[X,Y^{ \otimes-}];\quad X,M\mapsto X^{M}\prod_{n}[M(n),X^{\otimes n}]\.\] There are canonical isomorphisms of dg \(\mathbb{N}\)-modules \(\operatorname{Mult}_{\operatorname{pl}}(-,-)\cong U_{\mathbb{S}}\operatorname {Mult}(-,-)\) and \(\operatorname{coMult}_{\operatorname{pl}}(-,-)\cong U_{\mathbb{S}}\operatorname {coMult}(-,-)\). We obtain by transposing along the adjunction two canonical natural isomorphisms \[(M\otimes\mathbb{S})\circ X\simeq M\circ_{\operatorname{pl}}X\,\quad X^{M \otimes\mathbb{S}}\simeq X^{M}\,\] for every dg module \(X\) and every dg \(\mathbb{N}\)-module \(M\). Let us state some categorical properties that the constructions perfomed so far satisfy. **Lemma 20**.: _Let \(M\) be a dg \(\mathbb{S}\)-module. The endofunctor \(M\circ-\) commutes with sifted colimits and the endofunctor \((-)^{M}\) commutes with \(\beta\)-filtered colimits for a small regular cardinal \(\beta\)._ Proof.: The assertion about \(M\circ-\) can be deduced from the following facts 1. the tensor product \(X\mapsto X^{\otimes n}\) commutes with sifted colimits; 2. the functor \(Y\mapsto M(n)\otimes_{\mathbb{S}}Y\) commutes with colimits for every natural integer \(n\); 3. coproducts commute with colimits. Now, let \(\beta\) be a regular cardinal such that \(\beta>\omega\) and such that every \(M(n)\), for every \(n\) in \(\mathbb{N}\), is \(\beta\)-small. Then the endofunctor \((-)^{M}\) commutes with \(\beta\)-filtered colimits since 1. the tensor product \(X\mapsto X^{\otimes n}\) commutes with sifted colimits, in particular \(\beta\)-filtered colimits; 2. the construction \(Y\mapsto[M(n),Y]\) commutes with \(\beta\)-filtered colimits; 3. the construction \(Z\mapsto Z^{\mathbb{S}_{n}}\) commutes with filtered colimits; 4. countable products commute with \(\beta\)-filtered colimits since \(\beta>\omega\). **Lemma 21**.: _Let \(M\) be a dg \(\mathbb{S}\)-module. The endofunctor \((-)^{M}\) commutes with_ **finite** _cosifted limits._ Proof.: This can be deduced from the following facts: 1. the tensor product \(X\mapsto X^{\otimes n}\) commutes with finite cosifted limits, 2. the functor \(Y\mapsto[M(n),Y]^{\mathbb{S}_{n}}\) commutes, limits for every natural integer \(n\); 3. products commute with limits. One can perform the same constructions with graded modules or pdg modules instead of dg modules. 
Moreover, these constructions commute strictly with the forgetful functors dg \(\Bbbk\text{-}\mathsf{mod}\longrightarrow\mathsf{pdg}\ \Bbbk\text{-}\mathsf{mod}\). Finally, let us notice that when a dg \(\mathbb{S}\)-module \(M\) is planar as a graded \(\mathbb{S}\)-module, one has extra properties on these endofunctors. **Lemma 22**.: _Let \(M\) be a dg \(\mathbb{S}\)-module. Let us suppose that the underlying graded \(\mathbb{S}\)-module of \(M\) has the form \(M_{\mathsf{pl}}\otimes\mathbb{S}\). Equivalently, \(M(n)\) is a quasi-free dg \(\Bbbk[\mathbb{S}_{n}]\)-module for all \(n\geq 0\). Then_ 1. _the endofunctor_ \((-)^{M}\) _commutes with_ **finite** _sifted colimits;_ 2. _the endofunctor_ \(M\circ(-)\) _commutes with_ **finite** _cosifted limits._ Proof.: 1. For the endofunctor \((-)^{M}\), it is a consequence of the fact that the tensor product, the functors \([M_{\mathsf{pl}}(n),-]\) and products commute with finite sifted colimits. 2. For the endofunctor \(M\circ(-)\), it is a consequence of the fact that the tensor product, the functors \(M_{\mathsf{pl}}(n)\otimes-\) and coproducts commute with finite cosifted limits. Notation. Let \(f:X\longrightarrow Y\) be a map of degree \(0\) and \(g:X\longrightarrow Y\) be a map of degree \(p\) between graded modules \(X,Y\). Let us consider \[\shuffle_{n}(f,g)\coloneqq\sum_{i=0}^{n}f^{\otimes i-1}\otimes g\otimes f^{ \otimes n-i}:X^{\otimes n}\longrightarrow Y^{\otimes n}\,\] which is an \(\mathbb{S}_{n}\)-equivariant map of degree \(p\). Let \(M\) be an graded \(\mathbb{S}\)-module. 1. It induces a map of degree \(p\) \[\bigoplus_{n\geq 0}\operatorname{Id}_{M(n)}\otimes_{\mathbb{S}_{n}}\shuffle_{n}( f,g):\bigoplus_{n\geq 0}M(n)\otimes_{\mathbb{S}_{n}}X^{\otimes n}\longrightarrow \bigoplus_{n\geq 0}M(n)\otimes_{\mathbb{S}_{n}}Y^{\otimes n}\.\] By a slight abuse of notation, we denote this morphism by \(M\circ\shuffle(f,g)\). 2. It induces a map of degree \(p\) \[\prod_{n\geq 0}[[\mathrm{d}_{M(n)},\sqcup_{n}(f,g)]^{\mathbb{S}_{n}}:\prod_{n \geq 0}[M(n),X^{\otimes n}]^{\mathbb{S}_{n}}\longrightarrow\prod_{n\geq 0}[M(n),Y^{ \otimes n}]^{\mathbb{S}_{n}}\] By a slight abuse of notation, this map will be denoted by \(\sqcup(f,g)^{M}\). ### Algebras and coalgebras over (co)operads Let \(\mathcal{P}\) be a dg operad and let \(\mathcal{C}\) be a dg cooperad. We recall the definitions of a dg \(\mathcal{P}\)-algebra and dg \(\mathcal{P}\)-coalgebra, as well as the definitions of a dg \(\mathcal{C}\)-coalgebra and of a dg \(\mathcal{C}\)-algebra. See [12, Section 3]. **Definition 40** (dg \(\mathcal{P}\)-algebra).: A dg \(\mathcal{P}\)_-algebra \(A\)_ amounts to the data of a dg module \((A,d_{A})\) equipped with a morphism of dg operads \(\Gamma_{A}:\mathcal{P}\longrightarrow\mathrm{End}(A)\). This data is equivalent to a structural map \(\gamma_{A}:\mathcal{P}\circ A\longrightarrow A\) that satisfies unitality and associativity conditions. In other words, it is equivalent to the data of an algebra over the monad \(\mathcal{P}\circ-\). Remark 29.: More precisely, the data of a dg \(\mathcal{P}\)-algebra structure on \((A,d_{A})\) amounts to a collection of maps \[\gamma_{A}^{n}:\mathcal{P}\otimes_{\mathbb{S}_{n}}A^{\otimes n}\longrightarrow A\.\] for all \(n\geq 0\), which satisfy compatibility conditions. Since we consider on the left-hand side _coinvariants_ with respect to the action of \(\mathbb{S}_{n}\), no divided power operations appear. 
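To illustrate the previous remark, consider for instance the operad \(\mathcal{C}om\) defined by \(\mathcal{C}om(n)=\Bbbk\) with the trivial \(\mathbb{S}_{n}\)-action for \(n\geq 1\) and \(\mathcal{C}om(0)=0\). In this case the structural maps above reduce to maps \((A^{\otimes n})_{\mathbb{S}_{n}}\longrightarrow A\), so that a dg \(\mathcal{C}om\)-algebra is simply a (non-unital) dg commutative algebra; divided power operations, which would require maps defined on the invariants \((A^{\otimes n})^{\mathbb{S}_{n}}\), are not part of this structure.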
The definition of a dg \(\mathcal{P}\)-algebra encodes classical types of algebraic structures over a positive characteristic field. For instance, dg Lie-algebras, dg commutative algebras, dg Poisson algebras, etc. **Definition 41** (dg \(\mathcal{P}\)-coalgebra).: A dg \(\mathcal{P}\)-coalgebra \(V\) amounts to the data of a dg module \((V,d_{V})\) equipped with a morphism of dg operads \(\delta_{V}:\mathcal{P}\longrightarrow\mathrm{coEnd}(V)\). This data is equivalent to a structural map \(\Delta_{V}:V\longrightarrow V^{\mathcal{P}}\) such that the following diagrams commute where \(\boldsymbol{\gamma}:\mathcal{P}\circ\mathcal{P}\longrightarrow\mathcal{P}\) and \(\boldsymbol{\eta}:\mathcal{I}\longrightarrow\mathcal{P}\) are, respectively, the composition morphism and the unit of the dg operad \(\mathcal{P}\). Remark 30.: The data of a dg \(\mathcal{P}\)-coalgebra structure on \((V,d_{V})\) also amounts to a collection of maps \[\Delta_{V}^{n}:V\longrightarrow[\mathcal{P}(n),V^{\otimes n}]^{\mathbb{S}_{n}}\,\] for all \(n\geq 0\), which satisfy compatibility conditions. Since we consider on the right-hand side _invariants_ with respect to the action of \(\mathbb{S}_{n}\), no divided power operations appear. The definition of a dg \(\mathcal{P}\)-coalgebra encodes classical types of coalgebraic structures, _without any conilpotency condition_, over a positive characteristic field. For instance, dg Lie-coalgebras, dg cocommutative coalgebras, dg Poisson coalgebras, etc. Even though \((-)^{\mathcal{P}}\) fails to be a comonad, the forgetful functor from dg \(\mathcal{P}\)-coalgebras to dg modules is comonadic. **Theorem 3** ([14, Theorem 2.7.11]).: _The forgetful functor from dg \(\mathcal{P}\)-coalgebras to dg modules is comonadic. The related comonad \(L^{\mathcal{P}}(-)\) is given, for a dg module \(X\), by the following pullback _square_ _in the category of dg modules, where \(\gamma:\mathcal{P}\circ\mathcal{P}\longrightarrow\mathcal{P}\) is the composition map of \(\mathcal{P}\)._ _The structure of a comonad on \(L^{\mathcal{P}}\) is given by the decomposition map \(L^{\mathcal{P}}X\longrightarrow(X^{\mathcal{P}})^{\mathcal{P}}\), which factors through \(L^{\mathcal{P}}L^{\mathcal{P}}X\), and by the following counit map \(L^{\mathcal{P}}X\longrightarrow X^{\mathcal{P}}\longrightarrow X\)._ **Definition 42** (dg \(\mathcal{C}\)-coalgebra).: A dg \(\mathcal{C}\)-coalgebra \(W\) amounts to the data of a dg module \((W,d_{W})\) equipped with structural map \(\Delta_{W}:W\longrightarrow\mathcal{C}\circ W\) such that the following diagrams commute where \(\Delta:\mathcal{C}\longrightarrow\mathcal{C}\circ\mathcal{C}\) and \(\epsilon:\mathcal{C}\longrightarrow\mathcal{I}\) are, respectively, the decompositon map and the counit map of the cooper \(\mathcal{C}\). In other words, it is a coalgebra over the comonad \(\mathcal{C}\circ-\). _Remark 31_.: The structural map of a dg \(\mathcal{C}\)-coalgebra \(W\) \[\Delta_{W}:W\longrightarrow\bigoplus_{n\geq 0}\mathcal{C}(n)\otimes_{S_{n}}W^{ \otimes n}\,\] lands on the direct sum. Therefore any element \(w\) in \(W\) can only be decomposed into a _finite sum_. This implies that a dg \(\mathcal{C}\)-coalgebras always satisfies some type _conilpotency_ condition. Furthermore, the fact that \(\Delta_{W}\) lands on the _coinvariants_ on the right-hand side implies that _divided power operations_ will appear for this type of structures. Thus the definition of a dg \(\mathcal{C}\)-coalgebra encodes divided powers conilpotent types of coalgebraic structures. 
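For instance, consider the cooperad \(\mathcal{C}om^{c}\) given by \(\mathcal{C}om^{c}(n)=\Bbbk\) with the trivial \(\mathbb{S}_{n}\)-action for \(n\geq 1\) and \(\mathcal{C}om^{c}(0)=0\). A dg \(\mathcal{C}om^{c}\)-coalgebra is then a dg module \(W\) with a structural map \(\Delta_{W}:W\longrightarrow\bigoplus_{n\geq 1}(W^{\otimes n})_{\mathbb{S}_{n}}\): every element decomposes as a finite sum of symmetrised tensors, which is the conilpotency phenomenon alluded to above, and since over a field of positive characteristic the coinvariants \((W^{\otimes n})_{\mathbb{S}_{n}}\) differ in general from the invariants \((W^{\otimes n})^{\mathbb{S}_{n}}\), this is the source of the divided power operations mentioned above.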
**Definition 43** (dg \(\mathcal{C}\)-algebra).: A dg \(\mathcal{C}\)_-algebra_\(\Lambda\) amounts to the data of a dg module \((\Lambda,d_{\Lambda})\) equipped with a structural morphism \(\gamma_{\Lambda}:\Lambda^{\mathcal{C}}\longrightarrow\Lambda\) such that the following diagrams commute where \(\Delta:\mathcal{C}\longrightarrow\mathcal{C}\circ\mathcal{C}\) and \(\epsilon:\mathcal{C}\longrightarrow\mathcal{I}\) are, respectively, the decompositon morphism and the counit morphism of the cooperad \(\mathcal{C}\). In other words, it is an algebra over the monad \((-)^{\mathcal{C}}\). _Remark 32_.: The structural map of a dg \(\mathcal{C}\)-algebra \(\Lambda\) \[\gamma_{\Lambda}:\prod_{n\geq 0}[\mathcal{C}(n),\Lambda^{\otimes n}]^{S_{n}} \longrightarrow\Lambda\,\] comes from an infinite product. Therefore any formal infinite sum of operations admits a well-defined image in \(\Lambda\)_by definition_, without presupposing any topology. We refer to these phenomena as _absolute_ algebraic structures. They generalize the notion of a _contramodule_, see [11] for a precise definition of them. Furthermore, the presence of _invariants_ on the left-hand side implies that these algebraic structures are also endowed with _divided powers_ operations. Therefore the definition of a dg \(\mathcal{C}\)-algebra encodes divided powers absolute types of algebraic structures. Remark 33 (Planar definitions). One can also define the dg (co)algebras over a planar dg operad \(\mathcal{P}\) as the (co)algebras over the dg operad \(\mathcal{P}\otimes\mathbb{S}\). And the dg (co)algebras over a dg planar cooperad \(\mathcal{C}\) as the (co)algebras over the dg cooperad \(\mathcal{C}\otimes\mathbb{S}\). In this case, no _divided powers_ operations appear in dg \(\mathcal{C}\)-coalgebras nor in dg \(\mathcal{C}\)-algebras. **Proposition 18**.: _The category of dg \(\mathcal{P}\)-algebras is \(\omega\)-presentable. The categories of dg \(\mathcal{P}\)-coalgebras, dg \(\mathcal{C}\)-coalgebras and dg \(\mathcal{C}\)-algebras are presentable._ Proof.: The monad \(\mathcal{P}\circ-\) and the comonad \(\mathcal{C}\circ-\) are \(\omega\)-accessible. The monad \((-)^{\mathcal{C}}\) and the comonad \(L^{\mathcal{P}}\) are also accessible but for a larger small cardinal (\(\aleph_{1}\)). We conclude by the fact that for a regular small cardinal \(\alpha\), the category of algebras over an \(\alpha\)-accessible monad in an \(\alpha\)-presentable category is \(\alpha\)-presentable. See [1] for more details. Moreover, the category of coalgebras over an accessible comonad in a presentable category is presentable, see [1]. One can consider the same definitions and perform the same constructions with graded modules or pdg modules instead of dg module. These constructions commute strictly with the forgetful functors dg \(\Bbbk\text{-}\text{mod}\longrightarrow\text{pdg}\Bbbk\text{-}\text{mod} \longrightarrow\text{gr}\Bbbk\text{-}\text{mod}\). ### Curved algebras and coalgebras Let \(\mathcal{C}\) be a curved cooperad. It is in particular a pdg cooperad, and one can define the categories of pdg \(\mathcal{C}\)-coalgebras and of pdg \(\mathcal{C}\)-algebras. Among these, the full subcategories of _curved_ objects will be of particular interest. **Definition 44** (Curved \(\mathcal{C}\)-coalgebra).: Let \(\mathcal{C}\) be a curved cooperad. A pdg coalgebra \((W,\Delta_{W},d_{W})\) is _curved_ if the following diagram commutes where \(\theta\) denotes the curvature of the cooperad \(\mathcal{C}\). 
The category curv \(\mathcal{C}\)-cog of curved \(\mathcal{C}\)-coalgebras is the full subcategory of pdg \(\mathcal{C}\)-coalgebras spanned by those which are curved. Remark 34.: If the curvature of \(\mathcal{C}\) is zero, then the category of curved \(\mathcal{C}\)-coalgebras in pdg modules is isomorphic to the category of dg \(\mathcal{C}\)-coalgebras. **Definition 45** (Curved \(\mathcal{C}\)-algebra).: Let \(\mathcal{C}\) be a curved cooperad. A pdg \(\mathcal{C}\)-algebra \((\Lambda,\gamma_{\Lambda},d_{\Lambda})\) is _curved_ if the following diagram commutes: The category curv \(\mathcal{C}\)-alg of curved \(\mathcal{C}\)-algebras is the full subcategory of pdg \(\mathcal{C}\)-algebras spanned by those which are curved. Remark 35.: If the curvature of \(\mathcal{C}\) is zero, then the category of curved \(\mathcal{C}\)-algebras in pdg modules is isomorphic to the category of dg \(\mathcal{C}\)-algebras. ### Categorical properties of the category of (co)algebras over a operad Let \(\mathcal{P}\) be a dg operad. Let us consider the following commutative diagram of forgetful functors All these categories are presentable. Moreover, all the functors are right adjoints because they preserve limits and filtered colimits. They are conservative and they also preserve sifted colimits. Therefore, all these forgetful functors are monadic. Let us consider the following commutative diagram of forgetful functors All these categories are presentable. Moreover, all the functors are left adjoints since they preserve colimits. They are also conservative and preserve finite cosifted limits. Therefore, all the functors are comonadic. ### Categorical properties of coalgebras over a conilpotent curved cooperad Let \(\mathcal{C}\) be a conilpotent curved cooperad whose underlying graded conilpotent cooperad is the image through \(-\otimes\mathbb{S}\) of a graded planar conilpotent cooperad \(\mathcal{C}_{\mathrm{pl}}\). **Proposition 19**.: _The forgetful functor from curved \(\mathcal{C}\)-coalgebras to pdg \(\mathcal{C}\)-coalgebras admits a right adjoint denoted by \(\operatorname{Curv}\). This gives an adjunction_ _Hence the category of curved \(\mathcal{C}\)-coalgebras forms a coreflexive full subcategory of the category of pdg \(\mathcal{C}\)-coalgebras. In other words, they are coalgebras over an idempotent comonad in the category of pdg \(\mathcal{C}\)-coalgebras._ Proof.: Let \((W,\Delta_{W},d_{W})\) be a pdg \(\mathcal{C}\)-coalgebra, we consider the following two maps of graded k-modules where \(\theta:\mathcal{C}\longrightarrow\mathcal{I}\) is the curvature of the cooperad \(\mathcal{C}\). Let \(Z_{W}\) be the cofree pdg k-module obtained from the graded k module \(s^{2}W\). Since a pdg k-module is a graded k-module together with a degree \(-1\) endomorphism, \(Z_{W}\) is given by \(s^{2}W\otimes\Bbbk[\mathrm{u}]\), where u is a generator in degree \(-1\). From the above two maps, one gets a natural coreflexive pair of maps of pdg \(\mathcal{C}\)-coalgebras where the rightward maps are obtained from the previous diagram of graded k-modules, using the forgetful-coffree adjunction between graded k-modules and pdg k-modules, and where the leftward map is just the projection onto \(W\). This pair has an equaliser \((H,\Delta_{H},d_{H})\) in pdg \(\mathcal{C}\)-coalgebras, which can be computed in graded k-modules. In particular, the morphism of pdg \(\mathcal{C}\)-coalgebras \(H\longrightarrow W\) is a degree-wise monomorphism. 
Furthermore, the following diagram commutes The map from \(H\) to \(s^{2}W\) is zero by the universal property of \(H\). Since the bottom map is a monomorphism, the left map is also zero. So \(H\) is curved. We set \(\operatorname{Curv}(W)\coloneqq H\), and it is straightforward to show that it defines a right adjoint to the forgetful functor. **Corollary 2**.: _The category of curved \(\mathcal{C}\)-coalgebras is the category of coalgebra over an idempotent comand over pdg \(\mathcal{C}\)-coalgebras which is accessible and preserves coreflexive equalisers. Hence, the category of curved \(\mathcal{C}\)-coalgebras is presentable._ Proof.: Let \(W\) be a pdg \(\mathcal{C}\)-coalgebra. Its image by the functor \(\operatorname{Curv}\) is also given by the coreflexive equaliser of a the pair of maps of the form This coreflexive equaliser is constructed from the one that defines the functor \(\operatorname{Curv}\) and from the one that gives \(W\) as a pdg \(\mathcal{C}\)-coalgebra. The comand that defines curved \(\mathcal{C}\)-coalgebras preserves finite cosifted limits since its composition with the the forgetful functor to graded \(\Bbbk\)-modules does. Indeed, we have that: 1. the forgetful functor preserves and creates finite cosifted limits; 2. the constructions \(X\mapsto\mathcal{C}\circ X\) and \(W\mapsto\mathcal{C}\circ((\mathcal{C}\circ W)\oplus Z_{W})\) preserves finite cosifted limits; 3. then the construction \(W\mapsto\lim(W\rightrightarrows\mathcal{C}\circ W\oplus Z_{W})\) preserves finite cosifted limits. To prove that this comand preserves filtered colimits, it suffices to prove that its composition with the forgetful functor towards pdg \(\Bbbk\)-modules preserves filtered colimits. This follows from: 1. the forgetful functor preserves and creates filtered colimits and coreflexive equalisers; 2. the constructions \(W\mapsto\mathcal{C}\circ W\) and \(W\mapsto\mathcal{C}\circ((\mathcal{C}\circ W)\oplus Z_{W})\) preserve filtered colimits; 3. filtered colimits and coreflexive equalisers commute in graded \(\Bbbk\)-modules. Let us consider the following commutative diagram of forgetful functors All the categories are presentable and all the functors are left adjoint (since they preserve colimits), conservative and they also preserve coreflexive equalisers. Therefore all the functors are comonadic. ### Curved algebras over a conilpotent curved cooperad Again, let \(\mathcal{C}\) be a conilpotent curved cooperad whose underlying graded conilpotent cooperad is the image through \(-\otimes\mathbb{S}\) of a graded planar conilpotent cooperad \(\mathcal{C}_{\text{pl}}\). **Proposition 20**.: _The forgetful functor from curved \(\mathcal{C}\)-algebras to pdg \(\mathcal{C}\)-algebras admits a left adjoint denoted by \(\operatorname{Curv}\). This gives an adjunction_ \[\operatorname{\mathsf{curv}}\,\,\mathcal{C}\text{-alg}\,\,\xrightarrow{ \operatorname{U}}\,\,\xrightarrow{\operatorname{T}}\,\,\operatorname{pdg}\, \,\mathcal{C}\text{-alg}\,\,.\] _Hence the category of curved \(\mathcal{C}\)-algebras forms a reflexive full subcategory of the category of pdg \(\mathcal{C}\)-algebras. In other words, they are algebras over an idempotent monad in the category of pdg \(\mathcal{C}\)-algebras._ Proof.: Let \((\Lambda,\gamma_{\Lambda},d_{\Lambda})\) be a pdg \(\mathcal{C}\)-algebra. 
We consider the two maps of graded \(\Bbbk\)-modules
\[s^{-2}\Lambda\rightrightarrows\Lambda\ ,\]
one of which is given by the composite \(\gamma_{\Lambda}\,\Lambda^{\theta}\), where \(\theta\) denotes the curvature of the cooperad \(\mathcal{C}\).
Dually to Corollary 2, the monad that defines curved \(\mathcal{C}\)-algebras preserves finite sifted colimits since its composition with the forgetful functor to graded \(\Bbbk\)-modules does. Indeed, we have that: 1. this forgetful functor preserves and creates finite sifted colimits; 2. the construction \(X\mapsto X^{\mathcal{C}}\) preserves finite sifted colimits; 3.
the construction \(\Lambda\mapsto Z_{\Lambda}\) preserves finite sifted colimits; 4. then the construction \(\Lambda\mapsto\varinjlim((Z_{\Lambda}\oplus\Lambda^{\mathcal{C}})^{\mathcal{C}}\rightrightarrows\Lambda^{\mathcal{C}})\) preserves finite sifted colimits. Let us prove that this monad is accessible. Let \(\beta\) be a small regular cardinal so that the monad \((-)^{\mathcal{C}}\) is \(\beta\)-accessible. To prove that the idempotent "curvature" monad preserves \(\beta\)-filtered colimits, it suffices to prove that its composition with the forgetful functor towards pdg \(\Bbbk\)-modules preserves these colimits. This follows from the following facts: 1. the forgetful functor preserves and creates reflexive coequalisers and \(\beta\)-filtered colimits; 2. the constructions \(\Lambda\mapsto\Lambda^{\mathcal{C}}\) and \(\Lambda\mapsto(Z_{\Lambda}\oplus\Lambda^{\mathcal{C}})^{\mathcal{C}}\) preserve \(\beta\)-filtered colimits; 3. reflexive coequalisers and \(\beta\)-filtered colimits commute in pdg \(\Bbbk\)-modules. 

Let us consider the following commutative diagram of forgetful functors: All the categories are presentable. Moreover, all the functors are right adjoint (they are accessible and preserve limits), conservative and they also preserve finite sifted colimits. Therefore they are monadic. 

### Complete algebras over the canonical ladder of a quasi-planar cooperad 

This section is the mirror of [10, Section 4.4] in any characteristic, for a quasi-planar cooperad ladder equipped with its canonical quasi-planar filtration. For the rest of this section, we fix a quasi-planar curved conilpotent cooperad \(\mathcal{C}\). 

**Lemma 23**.: _Let \(n\) be a natural integer and let \(X\) be a pdg \(\Bbbk\)-module. Let us consider the inclusion \(i_{n}:F_{n}^{\mathrm{qp}}\mathcal{C}\hookrightarrow\mathcal{C}\). The natural map_
\[(\mathrm{Id})^{i_{n}}:X^{\mathcal{C}}\twoheadrightarrow X^{F_{n}^{\mathrm{qp}}\mathcal{C}}\]
_is a degree-wise epimorphism. Subsequently, the forgetful functor from pdg \(F_{n}^{\mathrm{qp}}\mathcal{C}\)-algebras to pdg \(\mathcal{C}\)-algebras is fully faithful._ 

Proof.: In the category of graded \(\Bbbk\)-modules, this map may be rewritten as
\[X^{\mathcal{C}_{\mathrm{pl}}}\twoheadrightarrow X^{F_{n}^{\mathrm{qp}}\mathcal{C}_{\mathrm{pl}}}\ .\]
Moreover, the map of graded \(\mathbb{N}\)-modules \(F_{n}^{\mathrm{qp}}\mathcal{C}_{\mathrm{pl}}\hookrightarrow\mathcal{C}_{\mathrm{pl}}\) has a left inverse. Therefore the above map has a section, which implies that it is a degree-wise epimorphism. We thus get a morphism of monads that is object-wise an epimorphism. Therefore the related right adjoint forgetful functor is fully faithful. 

For every natural integer \(n\), we denote by
\[F_{\mathrm{qp}}^{n}:\text{pdg $\mathcal{C}$-alg}\longrightarrow\text{pdg $F_{n}^{\mathrm{qp}}\mathcal{C}$-alg}\]
the functor that is left adjoint to the forgetful functor. It sends every pdg \(\mathcal{C}\)-algebra \((\Lambda,\gamma_{\Lambda},d_{\Lambda})\) to the coequaliser of the pair
\[(\Lambda^{\mathcal{C}})^{F_{n}^{\mathrm{qp}}\mathcal{C}}\rightrightarrows\Lambda^{F_{n}^{\mathrm{qp}}\mathcal{C}}\]
between the map \((\gamma_{\Lambda})^{F_{n}^{\mathrm{qp}}\mathcal{C}}\) and the map induced by the inclusion \(i_{n}\) composed with the monad structure of \((-)^{F_{n}^{\mathrm{qp}}\mathcal{C}}\). This coequaliser is also the pushout of a span. Both of these colimits can be computed in the category of pdg \(F_{n}^{\mathrm{qp}}\mathcal{C}\)-algebras or in the underlying category of graded \(\Bbbk\)-modules.
This gives a natural diagram \[\Lambda\longrightarrow\cdots\longrightarrow F_{\text{qp}}^{n}\Lambda \longrightarrow F_{\text{qp}}^{n-1}\Lambda\longrightarrow\cdots F_{\text{qp}}^ {1}\Lambda\longrightarrow F_{\text{qp}}^{0}\Lambda\,\] where we set \(F_{\text{qp}}^{-1}(\Lambda)=0\). For every natural integer \(n\) in \(\mathbb{N}\), the _associated graded_ is given by \[\text{gr}_{\text{qp}}^{n}\Lambda:=\text{Ker}\left(F_{\text{qp}}^{n}(\Lambda) \longrightarrow F_{\text{qp}}^{n-1}(\Lambda)\right)\.\] **Definition 46** (qp-completion functor).: The _qp-completion functor_, denoted by \(\widehat{(-)}\), is the endo-functor of pdg \(\mathcal{C}\)-algebras that sends an algebra \(\Lambda\) to the limit \[\widehat{\Lambda}:=\lim_{n\in\text{\sf{wp}}^{n}}F_{\text{qp}}^{n}(\Lambda)\.\] **Definition 47** (qp-complete pdg \(\mathcal{C}\)-algebra).: A pdg \(\mathcal{C}\)-algebra \(\Lambda\) is called _qp-complete_ if the canonical morphism \(\Lambda\longrightarrow\widehat{\Lambda}\) is an isomorphism. We denote pdg \(\mathcal{C}\)-alg\({}^{\text{\sf{ap-comp}}}\) the full subcategory of that of pdg \(\mathcal{C}\)-algebras spanned by the qp-complete ones. **Lemma 24** (After [1, Lemma 4.22]).: _Let \(f:\Lambda\longrightarrow\Gamma\) be a morphism of pdg \(\mathcal{C}\)-algebras. Let us suppose that for every natural integer \(n\), the induced map_ \[\text{gr}^{n}(f):\text{gr}_{\text{qp}}^{n}\Lambda\twoheadrightarrow\text{gr}_ {\text{qp}}^{n}\Gamma\] _is a degree-wise epimorphism._ 1. _For every_ \(n\)_, the map_ \[F_{\text{qp}}^{n+1}\Lambda\twoheadrightarrow F_{\text{qp}}^{n}\Lambda \times_{F_{\text{qp}}\Gamma}F_{\text{qp}}^{n+1}\Gamma\] _is a degree-wise epimorphism._ 2. _The map_ \(\widehat{f}:\widehat{\Lambda}\twoheadrightarrow\widehat{\Gamma}\) _is a degree-wise epimorphism._ Proof.: Let us prove the first assertion. Let us consider the square diagram Given an element \((y,z)\in F_{\text{qp}}^{n}\Lambda\times_{F_{\text{qp}}\Gamma}F_{\text{qp}}^{n +1}\Gamma\) one can find \(x^{\prime}\) so that \(p_{n}(x^{\prime})=y\). Then \(q_{n}(z-F_{rad}^{n+1}(f)(x^{\prime}))=0\). In other words \[z-F_{\text{qp}}^{n+1}(f)(x^{\prime})\in\text{gr}_{\text{qp}}^{n+1}\Gamma\.\] Let \(x^{\prime\prime}\) be one of its antecedent in \(\text{gr}_{\text{qp}}^{n+1}\Lambda\). Then \(x=x^{\prime}+x^{\prime\prime}\) is an antecedent of \((y,z)\). This proves the first point. Let us prove the second point. Let \(z\) be a degree \(k\) element of \(\widehat{\Lambda}\). It is a sequence \((z_{n})_{n\in\mathbb{N}}\in\prod_{n}F_{\text{qp}}^{n}\widehat{\Lambda}_{k}\), where each \(z_{n}\) is the image of \(z_{n+1}\) via the projection \(F_{\text{qp}}^{n+1}(\widehat{\Lambda})\twoheadrightarrow F_{\text{qp}}^{n}( \widehat{\Lambda})\). Using 1. the fact that the map \(\text{gr}^{0}(f)\) is a degree-wise epimorphism; 2. the first assertion of the lemma; 3. the axiom of choice; one can build a sequence \(x=(x_{n})_{n\in\mathbb{N}}\in\prod_{n}F_{\text{ap}}^{n}\Lambda_{k}\), where each \(x_{n}\) is the image of \(x_{n+1}\) via the projection \(F_{\text{ap}}^{n+1}(\Lambda)\to F_{\text{ap}}^{n}(\Lambda)\) and such that \(F_{\text{ap}}^{n}(f)(x_{n})=z_{n}\). Thus \(x\) is an antecedent of \(z\) in \(\widehat{\Lambda}\). **Lemma 25** (After [2, Proposition 4.23]).: _The completion endofunctor preserves degree-wise epimorphisms._ Proof.: Let \(f:\Lambda\longrightarrow\Gamma\) be a degree-wise epimorphism. 
In order to show that \(\widehat{f}\) is a degree-wise epimorphism, it suffices using Lemma 24 to prove that \(\operatorname{gr}_{\text{ap}}^{n}(f)\) is a degree-wise epimorphism for every natural integer \(n\). Infinite products in graded \(\Bbbk\)-modules are exact, the contravariant functor \([-,X]\) for every \(X\) is also exact, therefore the sequence \[0\longrightarrow\Lambda^{F_{\text{ap}}^{\text{ap}}\mathcal{C}_{\mu}/F_{\text{ ap}-1}^{\text{ap}}\mathcal{C}_{\mu}}\longrightarrow\Lambda^{F_{\text{ap}}^{ \text{ap}}\mathcal{C}_{\mu}}\longrightarrow\Lambda^{F_{\text{ap}}^{\text{ap} }\mathcal{C}_{\mu}}\longrightarrow 0\] is exact. Thus, the following commutative diagram of graded \(\Bbbk\)-modules is a pushout Therefore the kernel \(\operatorname{gr}_{\text{ap}}^{n}\Lambda\) is the image in \(F_{\text{ap}}^{n}\Lambda\) of the map \(\Lambda^{F_{\text{ap}}^{\text{ap}}\mathcal{C}/F_{\text{ap}-1}^{\text{ap}} \mathcal{C}}\longrightarrow F_{\text{ap}}^{n}\Lambda\). Let us consider the following commutative diagram of graded \(\Bbbk\)-modules Since the top map and the right map are degree-wise epimorphisms, so is bottom map, which proves the result. **Lemma 26** (After [2, Proposition 4.21 and 4.24]).: 1. _The_ \(X\) _be a pdg module_ \(X\)_, the free pdg_ \(\mathcal{C}\)_-algebra_ \(X^{\mathcal{C}}\) _is qp-complete._ 2. _The natural map_ \[\Lambda\rightarrow\widehat{\Lambda}\] _is an degree-wise epimorphism for every dg_ \(\mathcal{C}\)_-algebra_ \(\Lambda\)_._ Proof.: For every free pdg \(\mathcal{C}\)-algebra \(X^{\mathcal{C}}\) on a pdg module \(X\), one has \(F_{\text{ap}}^{n}X^{\mathcal{C}}=X^{F_{\text{ap}}^{\text{ap}}\mathcal{C}}\) and that it is complete: \[\widehat{X^{\mathcal{C}}}=\lim_{n\in\text{ap}}X^{F_{\text{ap}}^{n}\mathcal{C}} \cong X^{\mathcal{C}}.\] Now, let \(\Lambda\) be any pdg \(\mathcal{C}\)-algebra. We consider the following commutative square The top horizontal map is an isomorphism and the right vertical map is a degree-wise epimorphism by Lemma 25. The map from \(\Lambda^{\mathcal{C}}\) to \(\Lambda\) is also epimorphism. Therefore so is the map \(\Lambda\longrightarrow\widehat{\Lambda}\). **Lemma 27**.: _The canonical map_ \[\widehat{\Lambda}\longrightarrow\widehat{\widehat{\Lambda}}\] _is an isomorphism. Therefore the completion functor equipped with the natural map \(\Lambda\longrightarrow\widehat{\Lambda}\) is an idempotent monad in the category of pdg \(\mathcal{C}\)-algebras._ Proof.: For every natural integer \(n\), both maps \(\Lambda\longrightarrow\widehat{\Lambda}\longrightarrow F_{n}^{\text{op}}\Lambda\) are degree-wise epimorphisms. Applying the left adjoint functor \(F_{\text{qp}}^{n}\) which preserves degree-wise epimorphisms, one gets a diagram The first map is both a degree-wise epimorphism and a degree-wise monomorphism. Hence it is an isomorphism and so is the second map. Taking the limit over \(n\in\omega^{\text{op}}\), one gets the required isomorphism. **Proposition 21**.: _The category of qp-complete pdg \(\mathcal{C}\)-algebra is presentable._ Proof.: It is a reflexive full subcategory of the category of pdg \(\mathcal{C}\)-algebras which is presentable. The reflector is the completion \(\Lambda\mapsto\widehat{\Lambda}\). One can find a small regular cardinal \(\beta\) so that every functor \(F_{\text{qp}}^{n}\) is \(\beta\)-accessible and so that \(\beta\)-filtered colimits commute with limits of diagrams indexed by \(\omega^{\text{op}}\). Then, the completion functor is \(\beta\)-accessible. 
Hence the category of qp-complete pdg \(\mathcal{C}\)-algebras is presentable. Remark 36.: The category of qp-complete pdg \(\mathcal{C}\)-algebra is the homotopy limit of the (pseudo) diagram of categories \[\cdots\longrightarrow\text{pdg }F_{n}^{\text{qp}}\mathcal{C}\text{-alg} \longrightarrow\text{pdg }F_{n-1}^{\text{op}}\mathcal{C}\text{-alg}\longrightarrow\cdots \longrightarrow\text{pdg }F_{1}^{\text{qp}}\mathcal{C}\text{-alg} \longrightarrow\text{pdg }F_{0}^{\text{qp}}\mathcal{C}\text{-alg}.\] It is presentable as a small limit of presentable categories and left adjoint functors. Remark 37.: The analogue definitions and results are valid for dg \(\mathcal{C}\)-algebras or graded \(\mathcal{C}\)-algebras as well. ### Categorical properties of qp-complete curved algebras over a quasi-planar cooperad **Proposition 22**.: _The qp-completion idempotent monad on pdg \(\mathcal{C}\)-algebras preserves curved algebras. Moreover, the restricted monad (to curved algebras) is accessible. Hence, the category of qp-complete curved \(\mathcal{C}\)-algebras is presentable and the forgetful functor towards curved \(\mathcal{C}\)-algebras is accessible._ Proof.: For every curved \(\mathcal{C}\)-algebra \(\Lambda\), \(\widehat{\Lambda}\) is also curved since the map \(\Lambda\longrightarrow\widehat{\Lambda}\) is a degree-wise epimorphism. Therefore, the qp-completion monad restricts to curved \(\mathcal{C}\)-algebras. Let \(\beta\) be regular small cardinal such that 1. the forgetful functor curv \(\mathcal{C}\)-alg \(\longrightarrow\) pdg \(\mathcal{C}\)-alg is a \(\beta\)-accessible functor between \(\beta\)-accessible categories; 2. the completion monad on pdg \(\mathcal{C}\)-alg is \(\beta\)-accessible. Then, the restriction of this monad to curved algebras is also \(\beta\)-accessible. Remark 38.: Let us consider the following commutative diagram of functors. The vertical arrows are monadic and accessible but they do not preserve reflexive coequalisers in general; otherwise all \(\mathcal{C}\)-algebras would be qp-complete. ### Naturality of (co)algebras of a dg operad The definitions of algebras and coalgebras are compatible with morphisms of operads. This will mean that any such morphism induces an adjunction between the appropriate categories. Let \(f:\mathcal{P}\longrightarrow\mathcal{Q}\) be a morphism of dg operads. **Naturality of algebras.** The morphism \(f\) induces a morphism of monads \(\mathcal{P}\circ-\longrightarrow\mathcal{Q}\circ-\), which yields an adjunction since the category of dg \(\mathcal{Q}\)-algebras is cocomplete, by the adjoint lifting theorem. See Appendix A.1 for more details. Since the monad \(\mathcal{Q}\circ-\) commutes with sifted colimits, the left adjoint functor \(f\) sends a dg \(\mathcal{P}\)-algebra \(A\) to the reflexive coequaliser of the pair between the structural map \(\gamma_{A}\) of \(A\) and the map induced by the morphism \(f\) composed with the monad structure \(\gamma_{Q}\circ-\) of \(\mathcal{Q}\circ-\). This is a coequalizer both in dg \(\mathcal{Q}\)-algebras and in graded k-modules. **Naturality of coalgebras.** The morphism \(f\) also induces a morphism of comonads \(L^{\mathcal{P}}\longrightarrow L^{\mathcal{Q}}\), which yields an adjunction since the category of dg \(\mathcal{Q}\)-coalgebras is complete, again using the adjoint lifting theorem. 
Since the comonad \(L^{\mathcal{Q}}\) commutes with _finite_ cosifted limits, the functor \(f_{!}\) sends a dg \(\mathcal{P}\)-coalgebra \(V\) to the coreflexive equaliser of the pair between the structure map \(\Delta_{V}\) of \(V\) and the map given by the comonad structure \(\omega_{\mathcal{Q}}\) of \(L^{\mathcal{Q}}\) composed with the morphism induced by \(f\). This is an equalizer both in dg \(\mathcal{Q}\)-coalgebra and in graded k-modules. ### Naturality of (co)algebras over a cooperad The definitions of algebras and coalgebras are compatible with morphisms of of cooperads. This will mean that any such morphism induces an adjunction between the appropriate categories. Let \(\mathcal{C},\mathcal{D}\) be two conilpotent curved cooperads whose underlying graded conilpotent cooperads are the image through \(-\otimes\mathbb{S}\) of a graded planar conilpotent cooperads respectively \(\mathcal{C}_{\mathrm{pl}},\mathcal{D}_{\mathrm{pl}}\). Let \(g:\mathcal{C}\longrightarrow\mathcal{D}\) be a morphisms of curved cooperads. **Naturality of coalgebras.** The map \(g\) induces a morphism of comonads on the category of pdg \(\Bbbk\)-modules \(\mathcal{C}\circ-\longrightarrow\mathcal{D}\circ-\), which yields an adjunction by the adjoint lifting theorem. Since the comonad \(\mathcal{D}\circ-\) preserves finite sifted limits, the functor \(g^{!}\) sends a pdg \(\mathcal{D}\)-coalgebra \(W\) to the coreflexive equaliser of the pair between the structure map \(\Delta_{W}\) of \(W\) and the map given by the comonad structure \(\Delta_{\mathcal{C}}(-)\) of \(\mathcal{C}\circ-\) composed with the morphism induced by \(g\). The existence of this adjunction and the fact that this equaliser can be computed in graded k-modules follows from Lemma 22. **Proposition 23**.: _The adjunction \(g_{*}\dashv g^{!}\) restrict to an adjunction_ _relating curved \(\mathcal{D}\)-coalgebras and curved \(\mathcal{D}\)-coalgebras._ Proof.: The fact that \(g_{*}\) sends curved objects to curved objects is clear. Now, given a curved \(\mathcal{D}\)-coalgebra \(W\), the squared coderation \(d^{2}\) on \(\mathcal{C}\circ W\) is \[d^{2} =d_{C}^{2}\circ\mathrm{Id}_{W}+\mathrm{Id}_{C}\circ\shuffle( \mathrm{Id}_{W},d_{W}^{2})\] \[=-((\theta_{C}\circ\mathrm{Id}_{C})\Delta_{C})\circ\mathrm{Id}_{ W}+((\mathrm{Id}_{C}\circ\shuffle(\epsilon_{C},\theta_{C}))\Delta_{C})\circ \mathrm{Id}_{W}-\mathrm{Id}_{C}\circ\shuffle(\mathrm{Id}_{W},(\theta_{D}\circ \mathrm{Id}_{W})\Delta_{W})\,\] where \(\upsilon_{C}\) denotes the counit of the cooperad \(\mathcal{C}\). If we denote \(j\) the inclusion \(g^{!}W\hookrightarrow\mathcal{C}\circ W\), then the pre-composition of \(j\) with the sum \[((\mathrm{Id}_{C}\circ\shuffle(\epsilon_{C},\theta_{C}))\Delta_{C})\circ \mathrm{Id}_{W}-\mathrm{Id}_{C}\circ\shuffle(\mathrm{Id}_{W},(\theta_{D}\circ \mathrm{Id}_{W})\Delta_{W})\] is zero. Thus, \[jd^{2}=d^{2}j=-((\theta_{C}\circ\mathrm{Id}_{C})\Delta_{C}\circ\mathrm{Id}_{W}) j=-j((\theta_{C}\circ\mathrm{Id}_{g_{W}})\Delta_{g_{W}})\.\] Since \(j\) is injective, \(g_{W}^{!}\) is curved. Remark 39. Actually, when dealing with coalgebras, we do not need \(\mathcal{C}\) to proceed from a planar cooperad. We will need it in the context of algebras, in order to be able to talk about complete \(\mathcal{C}\)-algebras. **Naturality of algebras.** The morphism \(g\) induces a morphism of monads on pdg k-modules \((-)^{\mathcal{D}}\longrightarrow(-)^{C}\), which yields an adjunction by the adjoint lifting theorem. 
The functor \(g^{!}\) sends a curved \(\mathcal{D}\)-algebra \(\Lambda\) to the reflexive coequaliser of the pair between the structure map \(\gamma_{\Lambda}\) of \(\Lambda\) and the map induced by the morphism \(g\) composed with the monad structure \((-)^{\Delta_{\mathcal{C}}}\) of \((-)^{\mathcal{C}}\). The existence of this adjunction and the fact that this coequaliser can be computed in graded k-modules follows from Lemma 22.

**Proposition 24**.: _The adjunction \(g^{!}\dashv g_{*}\) restricts to an adjunction_

_relating curved \(\mathcal{C}\)-algebras and curved \(\mathcal{D}\)-algebras._

Proof.: This follows from arguments dual to those used to prove Proposition 23. The fact that \(g_{*}\) sends curved algebras to curved algebras is clear. Now, given a curved \(\mathcal{D}\)-algebra \(\Lambda\), the squared derivation \(d^{2}\) on \(\Lambda^{\mathcal{C}}\simeq\Lambda^{\mathcal{C}_{\mathrm{pl}}}\) is \[d^{2}=-\Lambda^{d_{\mathcal{C}}^{2}}+\shuffle(\mathrm{Id}_{\Lambda},d_{\Lambda}^{2})^{\mathcal{C}}\.\] It is thus the sum of three maps \(\delta^{(1)}\), \(\delta^{(2)}\) and \(\delta^{(3)}\), where \(\delta^{(1)}\) is induced by the summand \((\theta_{\mathcal{C}}\circ\mathrm{Id}_{\mathcal{C}})\Delta_{\mathcal{C}}\) of \(d_{\mathcal{C}}^{2}\), where \(\delta^{(2)}\) is induced by the summand \((\mathrm{Id}_{\mathcal{C}}\circ\shuffle(\upsilon_{\mathcal{C}},\theta_{\mathcal{C}}))\Delta_{\mathcal{C}}\) of \(d_{\mathcal{C}}^{2}\), and where \(\delta^{(3)}\) is induced by \(d_{\Lambda}^{2}\), which is given by the curvature \(\theta_{\mathcal{D}}\) acting through the structure map of \(\Lambda\); here \(\upsilon_{\mathcal{C}},\upsilon_{\mathcal{D}}\) denote the counits of the cooperads \(\mathcal{C},\mathcal{D}\).
Since the curvature of a curved cooperad is zero in arities different from \(1\) and since the morphism of cooperads \(g\) commutes with the curvatures and the counits, the map \[\Lambda^{\mathrm{Id}_{\mathcal{C}}\circ\shuffle(\upsilon_{\mathcal{C}},\theta_{\mathcal{C}})}:\Lambda^{\mathcal{C}_{\mathrm{pl}}}\longrightarrow\Lambda^{\mathcal{C}_{\mathrm{pl}}\circ_{\mathrm{pl}}\mathcal{C}_{\mathrm{pl}}}\] is equal to the composition \[\Lambda^{\mathcal{C}_{\mathrm{pl}}}\xrightarrow{\ \shuffle(\Lambda^{\upsilon_{\mathcal{D}}},\Lambda^{\theta_{\mathcal{D}}})^{\mathcal{C}_{\mathrm{pl}}}\ }(\Lambda^{\mathcal{D}_{\mathrm{pl}}})^{\mathcal{C}_{\mathrm{pl}}}\xrightarrow{\ (\Lambda^{g})^{\mathcal{C}_{\mathrm{pl}}}\ }(\Lambda^{\mathcal{C}_{\mathrm{pl}}})^{\mathcal{C}_{\mathrm{pl}}}\hookrightarrow\Lambda^{\mathcal{C}_{\mathrm{pl}}\circ_{\mathrm{pl}}\mathcal{C}_{\mathrm{pl}}}.\] As a consequence, if we denote by \(p\) the projection \(p:\Lambda^{\mathcal{C}_{\mathrm{pl}}}\twoheadrightarrow g^{!}(\Lambda)\), then \[p\,(\delta^{(2)}+\delta^{(3)})=0\.\] The map \[\Lambda^{\theta_{\mathcal{C}}\circ\mathrm{Id}_{\mathcal{C}}}:\Lambda^{\mathcal{C}_{\mathrm{pl}}}\longrightarrow\Lambda^{\mathcal{C}_{\mathrm{pl}}\circ_{\mathrm{pl}}\mathcal{C}_{\mathrm{pl}}}\] is equal to the composition \[\Lambda^{\mathcal{C}_{\mathrm{pl}}}\xrightarrow{\ (\Lambda^{\theta_{\mathcal{C}}})^{\mathcal{C}_{\mathrm{pl}}}\ }(\Lambda^{\mathcal{C}_{\mathrm{pl}}})^{\mathcal{C}_{\mathrm{pl}}}\hookrightarrow\Lambda^{\mathcal{C}_{\mathrm{pl}}\circ_{\mathrm{pl}}\mathcal{C}_{\mathrm{pl}}}\.\] Thus \(\delta^{(1)}=\gamma_{\Lambda^{\mathcal{C}_{\mathrm{pl}}}}(\Lambda^{\mathcal{C}_{\mathrm{pl}}})^{\theta_{\mathcal{C}}}\) and therefore \[d^{2}\,p=p\,d^{2}=p\,\delta^{(1)}=p\,\gamma_{\Lambda^{\mathcal{C}_{\mathrm{pl}}}}(\Lambda^{\mathcal{C}_{\mathrm{pl}}})^{\theta_{\mathcal{C}}}=\gamma_{g^{!}(\Lambda)}(g^{!}(\Lambda))^{\theta_{\mathcal{C}}}\,p\.\] Since \(p\) is surjective, this entails that \[d^{2}=\gamma_{g^{!}(\Lambda)}(g^{!}(\Lambda))^{\theta_{\mathcal{C}}}\,\] which proves the result.

**Proposition 25**.: _Let \(g:\mathcal{C}\longrightarrow\mathcal{D}\) be a quasi-planar morphism of curved cooperads. Then the functor \(g_{*}\) sends qp-complete \(\mathcal{C}\)-algebras to qp-complete \(\mathcal{D}\)-algebras._

Proof.: Since \(g\) is quasi-planar, by Proposition 17, the following diagram of conilpotent curved cooperads commutes for every \(n\geq 0\). Let \(\Lambda\) be a qp-complete pdg \(\mathcal{C}\)-algebra. Writing \(i:F^{n}_{\mathrm{qp}}\mathcal{C}\longrightarrow\mathcal{C}\) and \(j:F^{n}_{\mathrm{qp}}\mathcal{D}\longrightarrow\mathcal{D}\) for the canonical inclusions, the \(\mathcal{D}\)-algebra \(g_{*}(F^{n}_{\mathrm{qp}}\Lambda)\) rewrites as \[g_{*}(F^{n}_{\mathrm{qp}}\Lambda)=g_{*}\,i_{*}(F^{n}_{\mathrm{qp}}\Lambda)\cong j_{*}\,(F^{n}_{\mathrm{qp}}g)_{*}(F^{n}_{\mathrm{qp}}\Lambda)\.\] Thus \(g_{*}(F^{n}_{\mathrm{qp}}\Lambda)\) is qp-complete as it is the image of a pdg \(F^{n}_{\mathrm{qp}}\mathcal{D}\)-algebra. Since \(g_{*}\) preserves limits and since \(\Lambda\) is complete, the map \[g_{*}(\Lambda)\longrightarrow\lim_{n\in\omega^{\mathrm{op}}}g_{*}(F^{n}_{\mathrm{qp}}\Lambda)\] is an isomorphism. So \(g_{*}(\Lambda)\) is complete as it is the limit of a diagram of complete algebras.

**Proposition 26**.: _Let \(g:\mathcal{C}\longrightarrow\mathcal{D}\) be a quasi-planar morphism of curved cooperads._
There is an adjunction_ \[\text{\rm{curv}}\ \mathcal{C}\text{-\rm{alg}}^{\text{\rm{qp-comp}}}\xrightarrow{g^{ \uparrow}}\xrightarrow{\uparrow}\text{\rm{curv}}\ \mathcal{D}\text{-\rm{alg}}^{\text{\rm{qp-comp}}}\.\] _between qp-complete curved \(\mathcal{C}\)-algebras and qp-complete curved \(\mathcal{D}\)-algebras, where the left adjoint \(\widehat{g}^{\downarrow}\) is given by \(g^{\downarrow}\) post-composed with the completion functor of \(\mathcal{C}\)-algebras._ Proof.: Follows from Proposition 24, using the fact that the completion functor of \(\mathcal{C}\)-algebras preserves curved objects. **Embedding of cooperads.** Let us assume that \(g:\mathcal{C}\longrightarrow\mathcal{D}\) is a arity-wise degree-wise injection. Since the underlying graded \(\mathbb{S}\)-module of \(\mathcal{C}\) is cofree and thus injective, we get a section of graded \(\mathbb{S}\)-modules \(p:\mathcal{D}\longrightarrow\mathcal{C}\) that is left inverse to \(g\). As a consequence, the morphism of comonads on pdg \(\Bbbk\)-modules \(\mathcal{C}\circ-\longrightarrow\mathcal{D}\circ-\) is object-wise a monomorphism and the morphism of monads on pdg \(\Bbbk\)-modules \((-)^{\mathcal{D}}\longrightarrow(-)^{\mathcal{C}}\) is object-wise an epimorphism. This implies that the functors \[g_{*}:\text{pdg }\mathcal{C}\text{-}\text{cg}\longrightarrow\text{pdg }\mathcal{D}\text{-}\text{cg}\] \[g_{*}:\text{pdg }\mathcal{C}\text{-}\text{alg}\longrightarrow\text{pdg }\mathcal{D}\text{-}\text{alg}\] are fully faithful. Therefore the first \(g_{*}\) is the forgetful functor related to an idempotent comonad and the second \(g_{*}\) is the forgetful functor related to an idempotent monad. In this case, their respective adjoint functors admit an easy description in terms of pullbacks and pushouts. **Lemma 28**.: _For every pdg \(\mathcal{D}\)-coalgebra \((W,\Delta_{W},d_{W})\), \(g^{\dagger}(W)\) is given by the following pullback_ _in the category of pdg \(\Bbbk\)-modules._ Proof.: Follows directly from the dual of Proposition 71. **Lemma 29**.: _For every pdg \(\mathcal{D}\)-algebra \((\Lambda,\gamma_{\Lambda},d_{\Lambda})\), \(g^{\dagger}(\Lambda)\) is given by the following pushout_ _in the category of pdg \(\Bbbk\)-modules._ Proof.: Follows directly from Proposition 71. ### The Bar-Cobar adjunction Koszul duality between dg operads and conilpotent curved cooperads is encoded by the notion of a curved twisting morphism, see [10] or [11]. Any curved twisting morphism \(\alpha:\mathcal{C}\longrightarrow\mathcal{P}\) between a conilpotent curved cooperad and a dg operad induces a bar-cobar adjunction between the categories of dg \(\mathcal{P}\)-algebras and of curved \(\mathcal{C}\)-coalgebras. **The bar construction relative to \(\alpha\).** Given a dg \(\mathcal{P}\)-algebra \((A,\gamma_{A},d_{A})\), one can construct a curved \(\mathcal{C}\)-coalgebra \(\mathrm{B}_{\alpha}A\) called the bar construction. The underlying graded \(\mathcal{C}\)-coalgebra of \(\mathrm{B}_{\mathcal{C}}A\) is given by \(\mathcal{C}\circ A\). It is equipped with the unique coderivation whose projection onto the generators is the sum of the following maps One can check that the pdg \(\mathcal{C}\)-coalgebra \(\mathrm{B}_{\alpha}A\) is in fact a curved \(\mathcal{C}\)-coalgebra. **The cobar construction relative to \(\alpha\).** Given a curved \(\mathcal{C}\)-coalgebra \((W,\Delta_{W},d_{W})\), one can construct a dg \(\mathcal{P}\)-algebra \(\mathrm{B}_{\alpha}W\) called de cobar construction. 
The underlying graded \(\mathcal{P}\)-algebra is given by \(\mathcal{P}\circ W\). It is equipped with the unique derivation whose restriction to the generators is the sum of the maps One can check that the pdg \(\mathcal{P}\)-algebra \(\Omega_{\alpha}W\) is in fact a dg \(\mathcal{P}\)-algebra. **Definition 48** (Twisting morphism).: Let \((W,\Delta_{W},d_{W})\) be a curved \(\mathcal{C}\)-coalgebra and let \((A,\gamma_{A},d_{A})\) be a dg \(\mathcal{P}\)-algebra. A _twisting morphism relative to \(\alpha\)_ is the data of a graded morphism \(\lambda:W\longrightarrow A\) which satisfies the following equation \[\gamma_{A}\;(\alpha\circ\lambda)\;\Delta_{W}+d_{A}\;\lambda-\lambda\;d_{W}=0\.\] We denote the set of twisting morphisms between \(W\) and \(A\) by \(\operatorname{Tw}^{\alpha}(W,A)\). **Proposition 27**.: _Let \(\alpha:\mathcal{C}\longrightarrow\mathcal{P}\) be a curved twisting morphism between a conilpotent curved cooperad \(\mathcal{C}\) and a dg operad \(\mathcal{P}\). There are natural isomorphisms_ \[\operatorname{Hom}_{\text{dg $\mathcal{P}$-alg}}(\Omega_{\alpha}W,A)\cong \operatorname{Tw}^{\alpha}(W,A)\cong\operatorname{Hom}_{\text{curv $\mathcal{C}$-cog}}(W,\operatorname{B}_{\alpha}A)\.\] _for any curved \(\mathcal{C}\)-coalgebra \(W\) and any dg \(\mathcal{P}\)-algebra \(A\). This gives the bar-cobar adjunction relative to \(\alpha\)_ \[\operatorname{curv $\mathcal{C}$-cog}\xleftarrow{\alpha}\xleftarrow{\alpha} \xleftarrow{\alpha}\xleftarrow{\alpha}\xleftarrow{\alpha}\text{dg $\mathcal{P}$-alg}.\] _between the categories of dg \(\mathcal{P}\)-algebras and the category of curved \(\mathcal{C}\)-coalgebras. Furthermore,_ Proof.: These constructions can be found in [10, Section 4.3]. The case where \(\mathcal{C}\) is a dg cooperad can be found in [11, Section 11]. ### The complete Bar-Cobar adjunction Any curved twisting morphism \(\alpha:\mathcal{C}\longrightarrow\mathcal{P}\) between a conilpotent curved cooperad and a dg operad also induces a complete bar-cobar adjunction between the categories of dg \(\mathcal{P}\)-coalgebras and of curved \(\mathcal{C}\)-algebras. **The complete cobar construction.** Given a dg \(\mathcal{P}\)-coalgebra \((V,\Delta_{V},d_{V})\), one can construct a curved \(\mathcal{C}\)-algebra \(\widehat{\Omega}_{\alpha}V\) called the complete cobar construction. The underlying graded \(\mathcal{C}\)-algebra is given by \(V^{\mathcal{C}}\). It is equipped with the unique derivation whose restriction to the generators is the sum of the maps \[d_{1}:V\xrightarrow{d_{V}}V\xrightarrow{V^{\mathcal{C}}}V^{\mathcal{C}}\,\] \[d_{2}:V\xrightarrow{(-1).(-)}V\xrightarrow{\Delta_{V}}V^{\mathcal{P}} \xrightarrow{(\text{Id})^{\alpha}}V^{\mathcal{C}}\.\] One can check that the pdg \(\mathcal{C}\)-algebra \(\widehat{\Omega}_{\alpha}V\) is in fact a qp-complete curved \(\mathcal{C}\)-algebra. **The complete bar construction.** Given a curved \(\mathcal{C}\)-algebra \((\Lambda,\gamma_{\Lambda},d_{\Lambda})\), one can construct a dg \(\mathcal{P}\)-coalgebra \(\widehat{\operatorname{B}}_{\alpha}\Lambda\) called the complete bar construction. The underlying graded \(\mathcal{P}\)-coalgebra is given by \(L^{\mathcal{P}}\Lambda\). 
It is equipped with the unique coderivation whose projection onto the generators is the sum of the maps \[d_{1}:L^{\mathcal{P}}\Lambda\xrightarrow{d_{\Lambda}}\Lambda\,\] \[d_{2}:L^{\mathcal{P}}\Lambda\xrightarrow{(\text{Id})^{\alpha}}\Lambda^{ \mathcal{C}}\xrightarrow{\gamma_{\Lambda}}\Lambda\.\] Remark 40.: The cofree \(\mathcal{P}\)-coalgebra functor \(L^{\mathcal{P}}\) behaves well with respect to coderivations, which can be induced simply by a graded map onto the cogenerators. We refer to [10, Section 6.5] for these type of results. **Definition 49** (Twisting morphism).: Let \((\Lambda,\gamma_{\Lambda},d_{\Lambda})\) be a curved \(\mathcal{C}\)-algebra and let \((C,\Delta_{C},d_{\mathcal{C}})\) be a dg \(\mathcal{P}\)-coalgebra. A _twisting morphism relative to \(\alpha\)_ is the data of a graded morphism \(\lambda:C\longrightarrow\Lambda\) which satisfies the following equation \[\gamma_{\Lambda}\ (\lambda)^{\alpha}\ \Delta_{C}+d_{\Lambda}\ \lambda-\lambda\ d_{C}=0\.\] We denote the set of twisting morphisms between \(C\) and \(\Lambda\) by \(\operatorname{Tw}^{\alpha}(C,\Lambda)\). **Proposition 28**.: _Let \(\alpha:\mathcal{C}\longrightarrow\mathcal{P}\) be a curved twisting morphism between a conilpotent curved cooperad \(\mathcal{C}\) and a dg operad \(\mathcal{P}\). There are natural isomorphisms_ \[\operatorname{Hom}_{\text{dg }\mathcal{P}\text{-coalg}}\left(C,\widehat{ \operatorname{B}}_{\alpha}\Lambda\right)\cong\operatorname{Tw}^{\alpha}(C, \Lambda)\cong\operatorname{Hom}_{\text{curv }C\text{-alg}}\left(\widehat{\Omega}_{\alpha}C,\Lambda\right)\,\] _for any curved \(\mathcal{C}\)-algebra \(\Lambda\) and any dg \(\mathcal{P}\)-coalgebra \(\mathcal{C}\). This gives the complete bar-cobar adjunction relative to \(\alpha\)_ _between the categories of dg \(\mathcal{P}\)-coalgebras and the category of curved \(\mathcal{C}\)-algebras._ Proof.: These constructions can be found in [11, Section 8]. Remark 41.: It is also an adjunction \[\text{dg }\mathcal{P}\text{-cog }\overbrace{\begin{array}{c}\widehat{ \Omega}_{\alpha}\\ \hline\widehat{\operatorname{B}}_{\alpha}\end{array}}^{\text{$\widehat{\Omega}_ {\alpha}$}}\text{curv }\mathcal{C}\text{-alg}^{\text{qp-comp}}\.\] since \(\widehat{\Omega}_{\alpha}\) naturally lands in qp-complete curved \(\mathcal{C}\)-algebras. ### Compatibility with the morphisms of (co)operads Let \(f:\mathcal{P}\longrightarrow\mathcal{Q}\) be a morphims of dg operads, let \(g:\mathcal{C}\longrightarrow\mathcal{D}\) be a morphism of conilpotent curved cooperads, and let \(\alpha:\mathcal{C}\longrightarrow\mathcal{P}\) and \(\beta:\mathcal{D}\longrightarrow\mathcal{Q}\) be two curved twisting morphisms, such that the following diagram commutes: Then the bar-cobar adjunctions are compatible with the natural adjunctions induced by the morphisms, meaning that the following square of adjunctions is commutative. The complete bar-cobar adjunctions are also compatible with the natural adjunctions induced by the morphisms, meaning that the following square of adjunctions is commutative. Remark 42.: Under the extra assumption that the morphism \(g:\mathcal{C}\longrightarrow\mathcal{D}\) is quasi-planar, one can consider qp-complete algebras in the right hand side of the last commutative square of adjunctions, where \(g^{l}\) is replaced by \(\widehat{g}^{l}\). ### (Co)admissible operads In this subsection, we review various constructions that allow one the give a meaning to the homotopy theory of dg \(\mathcal{P}\)-(co)algebras, where \(\mathcal{P}\) is a dg operad. 
**Definition 50** (Admissible dg operad).: A dg operad \(\mathcal{P}\) is called _admissible_ if the category of dg \(\mathcal{P}\)-algebras admits a combinatorial model category structure right-transferred along the free-forgetful adjunction, determined by the following sets of maps 1. the set of weak-equivalences is given by quasi-isomorphisms; 2. the set of fibrations is given by degree-wise epimorphisms; 3. the generating cofibrations are the maps \(\mathcal{P}\circ S^{n}\longrightarrow\mathcal{P}\circ D^{n+1}\) for \(n\in\mathbb{Z}\); 4. the generating acyclic cofibrations are the maps \(\mathcal{P}\circ 0\longrightarrow\mathcal{P}\circ D^{n+1}\) for \(n\in\mathbb{Z}\).

**Definition 51** (Coadmissible dg operad).: A dg operad \(\mathcal{P}\) is called _coadmissible_ if the category of dg \(\mathcal{P}\)-coalgebras admits a combinatorial model category structure left-transferred along the forgetful-cofree adjunction, determined by the following sets of maps 1. the set of weak-equivalences is given by quasi-isomorphisms, 2. the set of cofibrations is given by degree-wise injections, 3. the generating (acyclic) cofibrations are maps between small objects whose image through the forgetful functor is an (acyclic) cofibration.

Remark 43.: Let \(\Bbbk\) be a characteristic zero field. Then every dg operad is admissible. Even in characteristic zero, not every dg operad is coadmissible: for instance, the operad \(\mathrm{u}\mathcal{C}\mathrm{om}\) encoding unital commutative algebras is not coadmissible.
Let \(\mathcal{E}\) denote the Barratt-Eccles operad. We say that a dg operad \(\mathcal{P}\) is _\(\mathcal{E}\)-split_ if the canonical quasi-isomorphism of dg operads \(\mathcal{E}\otimes\mathcal{P}\longrightarrow\mathcal{P}\) admits a section, that is, if there exists a morphism of dg operads \(\mathcal{P}\longrightarrow\mathcal{E}\otimes\mathcal{P}\) splitting it.

**Proposition 29**.: _Every \(\mathcal{E}\)-split dg operad is admissible and coadmissible._
Proof.: As described in [1], the cellular model of the interval \(I\) has the canonical structure of an \(\mathcal{E}\)-coalgebra. For every dg \(\mathcal{P}\)-algebra \(A\), the dg module \([I,A]\) inherits the structure of a dg \(\mathcal{E}\otimes\mathcal{P}\)-algebra and therefore the structure of a dg \(\mathcal{P}\)-algebra. This provides a natural path object, and proves the existence of the transferred model structure on dg \(\mathcal{P}\)-algebras. For every dg \(\mathcal{P}\)-coalgebra \(V\), the dg module \(V\otimes I\) inherits the structure of a dg \(\mathcal{E}\otimes\mathcal{P}\)-coalgebra and therefore the structure of a dg \(\mathcal{P}\)-coalgebra. This provides a natural cylinder object, and proves the existence of the transferred model structure on dg \(\mathcal{P}\)-coalgebras.

Example 4.: For any dg operad \(\mathcal{P}\), the dg operad \(\mathcal{E}\otimes\mathcal{P}\) is \(\mathcal{E}\)-split. Indeed, the Barratt-Eccles operad is a dg Hopf operad: in particular, there exists a morphism of dg operads \(\mathcal{E}\longrightarrow\mathcal{E}\otimes\mathcal{E}\), which induces an \(\mathcal{E}\)-splitting on \(\mathcal{E}\otimes\mathcal{P}\). Thus, \(\mathcal{E}\otimes\mathcal{P}\) is always admissible and coadmissible.

**Proposition 30**.: _Let \(f:\mathcal{P}\longrightarrow\mathcal{Q}\) be a morphism of dg operads._ 1. _If \(\mathcal{P}\) and \(\mathcal{Q}\) are both admissible, the adjunction \(f_{!}\dashv f^{*}\) relating dg \(\mathcal{P}\)-algebras and dg \(\mathcal{Q}\)-algebras is a Quillen adjunction._ 2. _If \(\mathcal{P}\) and \(\mathcal{Q}\) are both coadmissible, the induced adjunction relating dg \(\mathcal{P}\)-coalgebras and dg \(\mathcal{Q}\)-coalgebras is a Quillen adjunction._

Let us prove that the Quillen adjunctions relating dg \(\mathcal{Q}\)-algebras to dg \(\mathcal{P}\)-algebras are Quillen equivalences. This amounts to proving that the two right Quillen functors \[\operatorname{Ho}(p^{*}):\operatorname{Ho}(\text{dg $\mathcal{P}$-alg})\longrightarrow\operatorname{Ho}(\text{dg $\mathcal{Q}$-alg})\] \[\operatorname{Ho}(i^{*}):\operatorname{Ho}(\text{dg $\mathcal{Q}$-alg})\longrightarrow\operatorname{Ho}(\text{dg $\mathcal{P}$-alg})\] are equivalences. Let us show that these are inverse equivalences. On the one hand, we have a canonical isomorphism \(i^{*}p^{*}\simeq\operatorname{Id}\), thus we get an isomorphism \(\operatorname{Ho}(i^{*})\operatorname{Ho}(p^{*})\simeq\operatorname{Id}\).
Notice that the data of a dg \(\mathcal{Q}\)-algebra structure amounts to the data of a dg \(\mathcal{P}\)-algebra structure and of a morphism of dg module \(K\circ_{\operatorname{pl}}A\longrightarrow A\). We define an endofunctor \((\overset{\raisebox{-0.5pt}{\scalebox{0.5}{$-$}}}{-})\) of the category of dg \(\mathcal{Q}\)-algebras as follows. Let \(A\) be a dg \(\mathcal{Q}\)-algebra. Its image \(\bar{A}\) is the dg \(\mathcal{Q}\)-algebra whose underlying dg module is the path object \([I,A]=I^{*}\otimes A\) (where \(I\) is the cellular model of the interval). Its dg \(\mathcal{P}\)-algebra structure is given by the following maps \[\mathcal{P}(n)\otimes(I^{*}\otimes A)^{\otimes n}\] \[\mathcal{E}(n)\otimes\mathcal{P}(n)\otimes(I^{*}\otimes A)^{ \otimes n}\] \[\mathcal{E}(n)\otimes(I^{*})^{\otimes n}\otimes\mathcal{P}(n) \otimes A^{\otimes n}\] \[I^{*}\otimes A,\] where \(\gamma^{n}_{A|P}\) is given by the dg \(\mathcal{P}\)-algebra structure on \(A\) and where \(\gamma^{n}_{I}\) is given by the dg \(\mathcal{E}\)-algebra structure on \(I\) constructed in [1]. The action of \(K\) is given as follows \[K(n)\otimes(I^{*}\otimes A)^{\otimes n}\xrightarrow{}K(n)\otimes A^{\otimes n }\xrightarrow{}A\xrightarrow{}I^{*}\otimes A\] where the first map results from the inclusion of the first point into the interval \(\Bbbk\longrightarrow I\) which gives a map \(I^{*}\longrightarrow\Bbbk\), and where \(\gamma^{n}_{A|K}\) is the action of \(K\) on \(A\) given by its dg \(\mathcal{Q}\)-algebra structure. We have canonical natural transformations \[p^{*}i^{*}\mathrel{\raisebox{-0.5pt}{\scalebox{0.5}{$\leftrightarrow$}}} \mathrel{\raisebox{-0.5pt}{\scalebox{0.5}{$\leftrightarrow$}}}(\overset{ \raisebox{-0.5pt}{\scalebox{0.5}{$-$}}}{-})\mathrel{\raisebox{-0.5pt}{ \scalebox{0.5}{$\leftrightarrow$}}}\operatorname{Id}\] that are object-wise quasi-isomorphisms. Hence we get an isomorphism \(\operatorname{Ho}(p^{*})\operatorname{Ho}(i^{*})\simeq\operatorname{Id}\). Therefore, \(\operatorname{Ho}(i^{*})\) and \(\operatorname{Ho}(p^{*})\) are inverse to each other and their related Quillen adjunctions are Quillen equivalences. Finally, let us prove that the Quillen adjunctions relating dg \(\mathcal{Q}\)-coalgebras to dg \(\mathcal{P}\)-coalgebras are also Quillen equivalences. Again, this amounts to prove that the two right adjoint functors \[\operatorname{Ho}(p^{*}) :\operatorname{Ho}(\text{dg $\mathcal{P}$-cog})\longrightarrow \operatorname{Ho}(\text{dg $\mathcal{Q}$-cog})\] \[\operatorname{Ho}(i^{*}) :\operatorname{Ho}(\text{dg $\mathcal{Q}$-cog})\longrightarrow \operatorname{Ho}(\text{dg $\mathcal{P}$-cog})\] are equivalences. This follows from similar arguments as those used in the algebra case. There is a canonical isomorphism \(i^{*}p^{*}\simeq\operatorname{Id}\). Again, a dg \(\mathcal{Q}\)-coalgebra structure is completely determined by a dg \(\mathcal{P}\)-coalgebra and a morphism of dg modules \(V\longrightarrow V^{K\otimes\mathbb{S}}\). We define an endofunctor \((\overset{\raisebox{-0.5pt}{\scalebox{0.5}{$-$}}}{-})\) category of dg \(\mathcal{Q}\)-coalgebras as follows. Let \(V\) be a dg \(\mathcal{Q}\)-coalgebra. The underlying dg module of \(\widetilde{V}\) is the cylinder object \(I\otimes V\). 
Its dg \(\mathcal{P}\)-coalgebra structure is given by maps dual to those of the algebra case, using the dg \(\mathcal{E}\)-coalgebra structure on \(I\) and the structure morphism \(V\longrightarrow V^{K\otimes\mathbb{S}}\). One then concludes as in the algebra case that \(\operatorname{Ho}(i^{*})\) and \(\operatorname{Ho}(p^{*})\) are inverse to each other, so that the corresponding Quillen adjunctions are Quillen equivalences.

**Theorem 4** ([11]).: _Let \(\mathcal{P}\) be a \(\mathbb{S}\)-projective dg operad. The category of dg \(\mathcal{P}\)-algebras admits a semi-model category structure, determined by the following sets of maps_ 1. _the set of weak-equivalences is given by quasi-isomorphisms,_ 2. _the set of fibrations is given by degree-wise epimorphisms,_ 3. _the set of cofibrations is given by morphisms with the left lifting property against acyclic fibrations._

These semi-model category structures are stable under quasi-isomorphisms of dg operads.

**Theorem 5** ([10]).: _Let \(f:\mathcal{P}\xrightarrow{\sim}\mathcal{Q}\) be a quasi-isomorphism of \(\mathbb{S}\)-projective dg operads. The induced adjunction_ _is an equivalence of semi-model categories._

In particular, for any \(\mathbb{S}\)-projective dg operad \(\mathcal{P}\), the canonical quasi-isomorphism \(\varphi_{\mathcal{P}}:\mathcal{E}\otimes\mathcal{P}\xrightarrow{\sim}\mathcal{P}\) always induces an equivalence of semi-model categories, where on the left-hand side there is a model category structure on the category of dg \(\mathcal{E}\otimes\mathcal{P}\)-algebras. Recall that \(\mathcal{E}\otimes\mathcal{P}\) is admissible since it is \(\mathcal{E}\)-split. Therefore, by replacing \(\mathcal{P}\) with \(\mathcal{E}\otimes\mathcal{P}\) we can always "rectify" the semi-model structure into a model structure, and both present the same underlying homotopy category.

### Quasi-planar Bar-Cobar adjunctions

In the subsequent sections, we will mainly work with dg operads of the form \(\Omega\mathcal{C}\), where \(\mathcal{C}\) is a quasi-planar conilpotent curved cooperad. Nevertheless, if one starts with a dg operad \(\mathcal{P}\), there is a canonical quasi-planar conilpotent curved cooperad given by \(\mathrm{B}(\mathcal{E}\otimes\mathcal{P})\). One can then construct _quasi-planar_ bar-cobar adjunctions between their respective categories of (co)algebras. It will follow from the results in the subsequent sections that these adjunctions are Quillen adjunctions when \(\mathcal{P}\) is (co)admissible and induce Quillen equivalences when \(\mathcal{P}\) is cofibrant.

Let \(\mathcal{P}\) be a dg operad. Let \(\varphi_{\mathcal{P}}:\mathcal{E}\otimes\mathcal{P}\xrightarrow{\sim}\mathcal{P}\) be the canonical quasi-isomorphism of dg operads, where \(\mathcal{E}\) is the Barratt-Eccles operad.

**Quasi-planar bar-cobar adjunction.** The morphism \(\varphi_{\mathcal{P}}\) induces an adjunction \((\varphi_{\mathcal{P}})_{!}\dashv\varphi_{\mathcal{P}}^{*}\) between dg \(\mathcal{P}\)-algebras and dg \(\mathcal{E}\otimes\mathcal{P}\)-algebras. We define the _quasi-planar bar-cobar adjunction_ as the following composition of adjunctions

**Quasi-planar complete bar-cobar adjunction.** The morphism \(\varphi_{\mathcal{P}}\) induces an adjunction \((\varphi_{\mathcal{P}})_{!}\dashv\varphi_{\mathcal{P}}^{*}\) between dg \(\mathcal{P}\)-coalgebras and dg \(\mathcal{E}\otimes\mathcal{P}\)-coalgebras. We define the _quasi-planar complete bar-cobar adjunction_ as the following composition of adjunctions ## 4.
Model structure on coalgebras over a cooperad The goal of this section is to study the homotopical properties of the bar-cobar adjunction between dg \(\Omega\mathcal{C}\)-algebras and curved \(\mathcal{C}\)-coalgebras, in the case where \(\mathcal{C}\) is a quasi-planar conilpotent curved cooperad. The dg operad \(\Omega\mathcal{C}\) is cofibrant, and therefore admissible by Proposition 29. This means that the category of dg \(\Omega\mathcal{C}\)-algebras admits a model category structure where weak-equivalences are given by qausi-isomorphisms and fibrations by degree-wise epimorphisms. Let us consider the bar-cobar adjunction relative to \(\iota:\mathcal{C}\longrightarrow\Omega\mathcal{C}\), which will be denoted by \(\Omega_{\mathcal{C}},\mathbb{B}_{\mathcal{C}}\) from now on, since we will not consider any other curved twisting morphism. Our first goal is going to be to transfer this model category structure along the bar-cobar adjunction to the category of dg \(\mathcal{C}\)-coalgebras. **Theorem 6**.: _Let \(\mathcal{C}\) be a quasi-planar conilpotent curved cooperad. The exists a combinatorial model category structure on the category of curved \(\mathcal{C}\)-coalgebras given by the following sets of maps:_ 1. _the set of weak-equivalences is given by morphisms_ \(f\) _such that_ \(\Omega_{\mathcal{C}}(f)\) _is a quasi-isomorphism,_ 2. _the set of cofibrations is given by morphisms_ \(f\) _such that_ \(\Omega_{\mathcal{C}}(f)\) _is a cofibration; they correspond to degree-wise injections,_ 3. _the set of fibrations is given by morphisms with the right lifting property with respect to acyclic cofibrations._ Remark 44.: Using the standard transfer theorem for model category structures only gives that cofibrations are morphism which are sent by \(\Omega_{\mathcal{C}}\) to cofibrations. The theorem contains an additional characterization of cofibrations of curved \(\mathcal{C}\)-coalgebras as degree-wise injective maps. ### Outline of the transfer of model structures Let us first start by defining the sets of morphisms of curved \(\mathcal{C}\)-coalgebras that we intend to study. **Definition 52** (Cofibrations).: A morphism \(f\) of curved \(\mathcal{C}\)-coalgebras is a _cofibration_ if \(\Omega_{\mathcal{C}}(f)\) is a cofibration of dg \(\Omega\mathcal{C}\)-algebras. **Definition 53** (Weak-equivalences).: A morphism \(f\) of curved \(\mathcal{C}\)-coalgebras is a _weak-equivalence_ if \(\Omega_{\mathcal{C}}(f)\) is a quasi-isomorphism of dg \(\Omega\mathcal{C}\)-algebras. **Definition 54** (Fibrations).: A morphism of curved \(\mathcal{C}\)-coalgebras is a _fibration_ if it has the right-lifting property against all acyclic cofibrations. Both categories \(\mathsf{curv}\)\(\mathcal{C}\)-\(\mathsf{cog}\) and dg \(\Omega\mathcal{C}\)-\(\mathsf{alg}\) are presentable, therefore by Appendix A.2 it suffices to exhibit a natural cofibrant resolution and a natural cylinder for coalgebras to prove the existence of the transferred model structure. We will show that cofibrations of Definition 52 are given by degree-wise injective maps in Proposition 36 and we will construct a natural cylinder in Proposition 39. For the rest of this section, let us fix a quasi-planar conilpotent curved cooperad \(\mathcal{C}\) whose quasi-planar ladder is indexed by some small ordinal \(\alpha\). ### Elementary cofibrations Elementary cofibrations are a particularly well-behaved set of cofibrations of curved \(\mathcal{C}\)-coalgebras. For any such elementary cofibration, its cokernel is a dg k-module. 
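The guiding example to keep in mind, which the next definition makes precise, is the coradical filtration: for any curved \(\mathcal{C}\)-coalgebra \(W\), the inclusions \(F^{\mathrm{rad}}_{n-1}W\hookrightarrow F^{\mathrm{rad}}_{n}W\) of consecutive stages are elementary cofibrations, as recorded in the coradical ladder of Corollary 4 below.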
**Definition 55** (Elementary cofibrations).: A morphism \(f:W^{\prime}\to W\) of curved \(\mathcal{C}\)-coalgebras is an _elementary cofibration_ if it is degree-wise injective and if the map \(\overline{\Delta}_{W}:W\longrightarrow\overline{\mathcal{C}}\circ W\) factors through the sub-object \(\overline{\mathcal{C}}\circ W^{\prime}\subseteq\overline{\mathcal{C}}\circ W\), that is, if there exists a dashed map that makes the diagram commute. Here \(\Delta_{W}\) denotes the structural map of \(W\). Remark 45.: The induced map \(\overline{\mathcal{C}}\circ W^{\prime}\longrightarrow\overline{\mathcal{C}}\circ W\) is also monomorphism since it is has a left inverse in the category of graded \(\Bbbk\)-modules. **Lemma 31**.: _Let \(f:W^{\prime}\to W\) be an elementary cofibration curved \(\mathcal{C}\)-coalgebras. Then \(\operatorname{Coker}(f)\) is a dg module._ Proof.: Let us denote \(p_{f}:W\twoheadrightarrow\operatorname{Coker}(f)\) the canonical projection. We have \[d^{2}p_{f}=p_{f}d^{2}=-p_{f}(\theta\circ\operatorname{Id}_{W})\overline{ \Delta}_{W}=-(\theta\circ p_{f})\overline{\Delta}_{W}=-(\theta\circ\operatorname {Id}_{\operatorname{Coker}(f)})(\mathcal{C}\circ p_{f})\overline{\Delta}_{W}.\] The definition of an elementary cofibration tells us that \((\mathcal{C}\circ p_{f})\overline{\Delta}_{W}=0\). We conclude by noticing that \(p_{f}\) is an epimorphism. **Proposition 32**.: _Let \(f:W^{\prime}\to W\) be an elementary cofibration. Then \(f\) is a cofibration._ Proof.: Identifying \(W^{\prime}\) with its image in \(W\), we can decompose the underlying graded \(\Bbbk\)-module of \(W\) as the a direct \(W\cong W^{\prime}\oplus U\). The pre-differential of \(W\) decomposes as the pre-differential \(d_{W^{\prime}}\) on \(V\), the differential \(d_{U}\) on \(U\) and a degree \(-1\) map \(\zeta:U\longrightarrow W^{\prime}\). The map of graded \(\Bbbk\)-modules \[g:U\hookrightarrow W\hookrightarrow\mathcal{P}\circ W\] induces a morphism of dg modules \[D^{0}\otimes U\longrightarrow\Omega_{\mathcal{C}}W\,\] whose restriction to \(S^{0}\otimes U\) is \(g\) and whose restriction to \(S^{-1}\otimes U\) is \(-\mathrm{Id}\otimes\partial(g)\). One can notice that this restriction to \(S^{-1}\otimes U\) factors through the sub-dg module \(\Omega_{\mathcal{C}}W^{\prime}\subset\Omega_{\mathcal{C}}W\). We thus get a commutative diagram of dg modules which gives a commutative diagram of dg \(\Omega\mathcal{C}\)-algebras This diagram is a pushout square since the underlying diagram of graded \(\Omega\mathcal{C}\)-algebras is a pushout square. Since the left vertical map is a cofibration, so is the right vertical map \(f\) **Lemma 32**.: _Let us consider a commutative diagram of curved \(\mathcal{C}\)-coalgebras_ _where_ 1. \(i\) _and_ \(j\) _are elementary cofibrations,_ 2. _the map of dg modules induced by_ \(g\)__ \[\bar{g}:W^{\prime}/U\longrightarrow W/U\] _is a quasi-isomorphism._ _Then \(g\) is a weak-equivalence._ Proof.: Let us decompose the underlying graded \(\Bbbk\)-module of \(W\) as a direct sum \(W\cong U\oplus Y\). We decompose the underlying graded \(\Bbbk\)-module of \(W^{\prime}\) as a direct sum \(W^{\prime}\cong U\oplus X\) in such a way so that the restriction of \(g\) to \(X\) lands in \(Y\). The diagrams built in the proof of Proposition 32 fit in the following commutative cube The left and the right face are pushouts squares which are also homotopy pushouts. The two back horizontal maps and the top front horizontal map are quasi-isomorphisms. 
Thus the homotopy pushout map \(\Omega_{\mathcal{C}}g\) is also a quasi-isomorphism, which implies that \(g\) is a weak-equivalence. **Proposition 33**.: _Let us consider a commutative diagram of curved \(\mathcal{C}\)-coalgebras_ _such that_ 1. \(f\) _is a weak-equivalence,_ 2. \(i\) _and_ \(j\) _are elementary cofibrations,_ 3. _the map of dg modules induced by_ \(g\)__ \[\bar{g}:U^{\prime}/U\longrightarrow W^{\prime}/W\] _is a quasi-isomorphism._ _Then \(g\) is a weak-equivalence._ Proof.: Let us consider the following pushout square in the category of curved \(\mathcal{C}\)-coalgebras It yields a pushout square of dg \(\Omega\mathcal{C}\)-algebras which is also an homotopy pushout. Thus, the map \(\Omega_{\mathcal{C}}U^{\prime}\allowbreak\mathrel{\mathop{\kern 0.0pt \kern 0.0pt\rightharpoonup}\limits_{\kern-1.0pt\raise 0.43pt\hbox{$\to$}}}\Omega_{ \mathcal{C}}Z\) is a quasi-isomorphism. To conclude, the map \(Z\mathrel{\mathop{\kern 0.0pt\kern 0.0pt\rightharpoonup}\limits_{\kern-1.0pt\raise 0.43pt\hbox{$ \to$}}}W^{\prime}\) is a weak-equivalence by Lemma 32. ### Ladders and filtered quasi-isomorphisms A sub-set of weak-equivalences of curved \(\mathcal{C}\)-coalgebras is given by filtered quasi-isomorphisms of \(\beta\)-indexed ladders, for any small ordinal \(\beta\). The key example of such ladders is the quasi-planar ladder induced by the canonical quasi-planar filtration of \(\mathcal{C}\). **Definition 56** (\(\beta\)-ladder of curved \(\mathcal{C}\)-coalgebras).: Let \(\beta\) be a small ordinal. A \(\beta\)-_ladder of curved \(\mathcal{C}\)-coalgebras_ is a functor \[W:\beta\longrightarrow\text{curv $\mathcal{C}$-cog}\,\] that sends every limit ordinal \(k\in\beta\) to \[W(k)=\underset{i<k}{\text{colim }}W(i)\,\] with \(W(-1)=0\), and such that every map \[W(i-1)\to W(i)\,i\in\beta\.\] is an _elementary cofibration_. Notation.: We denote by \[W(\beta)\coloneqq\underset{i\in\beta}{\text{colim }}W(i)\] the value of the colimit of this \(\beta\)-ladder. Remark 46.: The first property about limit ordinal is equivalent to the fact that the functor \[(1+\beta) \longrightarrow\text{curv $\mathcal{C}$-cog}\] \[i \mapsto W(i+1)\] is cocontinuous. **Definition 57** (Associated graded of a ladder).: Given a \(\beta\)-ladder \(W\), we define its _associated graded_ as \[\text{gr}_{i}W\coloneqq W(i)/\underset{j<i}{\text{colim }}W(j)\.\] The coderivation squares to zero on this quotient \(\text{gr}_{i}W\), therefore it is a dg module. The following general proposition will allow us to construct \(\beta\)-ladders of coalgebras in a general setting. **Proposition 34**.: _Let \(\beta\) be a small ordinal, and let_ \[\mathcal{D}^{(0)}\longrightarrow\mathcal{D}^{(1)}\longrightarrow\cdots \longrightarrow\mathcal{D}^{(i)}\longrightarrow\cdots\] _be a \(\beta\)-indexed cooperad ladder, where we denote \(\mathcal{D}\coloneqq\mathcal{D}^{(\beta)}\). For every \(i\in\beta\), let \(F_{i}^{\mathcal{D}}(-)\) be the idempotent comonad on curved \(\mathcal{D}\)-coalgebras related to coreflexive full subcategory of curved \(\mathcal{D}^{(i)}\)-coalgebras._ 1. _For every curved_ \(\mathcal{D}\)_-coalgebra_ \(W\) _the diagram_ \[F_{0}^{\mathcal{D}}W\longrightarrow F_{i}^{\mathcal{D}}W\longrightarrow\cdots \longrightarrow F_{i}^{\mathcal{D}}W\longrightarrow\cdots\] _is a_ \(\beta\)_-ladder of curved_ \(\mathcal{D}\)_-coalgebras._ 2. 
_The canonical map_ \[\operatorname*{colim}_{i\in\mathcal{B}}F_{i}^{\mathcal{D}}W\longrightarrow W\] _is an isomorphism, therefore the colimit of the ladder is_ \(W\)_._ Proof.: For every \(i\in\beta\) and for every pdg \(\mathcal{D}\)-coalgebra \(W\), \(F_{i}^{\mathcal{D}}W\) is given as the following pullback square in the category of graded k-modules. Combined with the fact that directed colimits commute with pullbacks in graded k-modules, we get the fact that the map \[\operatorname*{colim}_{i<k}F_{i}^{\mathcal{D}}W\longrightarrow F_{i}^{ \mathcal{D}}W\] is an isomorphism for every limit ordinal \(k\in\beta+1\). It remains to show that for every \(i\in\beta\), the map \(F_{i}^{\mathcal{D}}W\longrightarrow F_{i+1}^{\mathcal{D}}W\) is an elementary cofibration. It fits in the following pullback diagram of graded k-modules Since degree-wise injections are preserved by the tensor product and by pullbacks, the map \(F_{i}^{\mathcal{D}}W\longrightarrow F_{i+1}^{\mathcal{D}}W\) is a degree-wise injection. It remains to show that for every \(p,q,j\) (with \(p\geq 1\) and \(1\leq j\leq p\)) the decomposition map factors through the sub-object \[\overline{\mathcal{D}}_{\text{pl}}^{(i+1)}(p)\otimes(F_{i+1}^{\mathcal{D}}W)^ {\otimes j-1}\otimes\overline{\mathcal{D}}^{(i)}(q)\otimes(F_{i+1}^{\mathcal{ D}}W)^{\otimes q}\otimes(F_{i+1}^{\mathcal{D}}W)^{\otimes p-j}\.\] Using coassociativity, one can rewrite the map as Since the sequence \((\mathcal{D}^{(i)})_{i\in\beta}\) is a cooperad ladder, the map \[\Delta_{j}:\overline{\mathcal{D}}_{\mathrm{pl}}^{(i+1)}(p+q-1) \longrightarrow\overline{\mathcal{D}}_{\mathrm{pl}}^{(i+1)}(p)\otimes \overline{\mathcal{D}}_{\mathrm{pl}}^{(i+1)}(q)\] factors through \(\overline{\mathcal{D}}_{\mathrm{pl}}^{(i)}(p)\otimes\overline{\mathcal{D}}_{ \mathrm{pl}}^{(i)}(q)\), which proves the result. **Corollary 4** (Coradical ladder).: _Let \(\mathcal{C}\) be a quasi-planar conlpotent curved cooperad. For every \(n\in\omega\), let \(F_{n}^{\mathrm{rad}}(-)\) be the idempotent comand of curved \(\mathcal{C}\)-coalgebras that coreflects onto the full subcategory of curved \(F_{n}^{\mathrm{rad}}\mathcal{C}\)-coalgebras, where \(F_{n}^{\mathrm{rad}}\mathcal{C}\) is the \(n\)-stage of the coradical ladder._ _Let \(W\) be a curved \(\mathcal{C}\)-coalgebra. The diagram_ \[F_{0}^{\mathrm{rad}}W\hookrightarrow\cdots\hookrightarrow F_{i}^{\mathrm{ rad}}W\hookrightarrow\cdots\] _is an \(\omega\)-ladder of curved \(\mathcal{C}\)-coalgebras, called the coradical ladder. The colimit of this diagram is again \(W\)._ Proof.: Follows directly from Proposition 34. **Corollary 5** (Quasi-planar ladder).: _Let \(\mathcal{C}\) be a quasi-planar conlpotent curved cooperad. Recall from Subsection 2.9 that since \(\mathcal{C}\) is quasi-planar, it admits a canonical quasi-planar \(\omega\)-ladder_ \[F_{0}^{\mathrm{qp}}\mathcal{C}\longrightarrow F_{1}^{\mathrm{qp}}\mathcal{C} \longrightarrow\cdots\longrightarrow F_{n}^{\mathrm{qp}}\mathcal{C}\longrightarrow\cdots\] _whose colimit is \(\mathcal{C}\). For every \(i\in\omega\), let \(F_{i}^{\mathrm{qp}}(-)\) be the idempotent comand of curved \(\mathcal{C}\)-coalgebras that coreflects onto the full subcategory of curved \(F_{i}^{\mathrm{qp}}\mathcal{C}\)-coalgebras._ _Let \(W\) be a curved \(\mathcal{C}\)-coalgebra. 
The diagram_ \[F_{0}^{\mathrm{qp}}W\longrightarrow F_{1}^{\mathrm{qp}}W\longrightarrow\cdots\longrightarrow F_{n}^{\mathrm{qp}}W\longrightarrow\cdots\] _is an \(\omega\)-ladder of curved \(\mathcal{C}\)-coalgebras, called the quasi-planar ladder. The colimit of this diagram is again \(W\)._

Proof.: Follows directly from Proposition 34.

Using ladders, we can define the notion of a filtered quasi-isomorphism of ladders.

**Definition 58** (Filtered quasi-isomorphism of ladders).: A morphism of \(\beta\)-ladders of curved \(\mathcal{C}\)-coalgebras \(f:W\longrightarrow W^{\prime}\) is a _filtered quasi-isomorphism_ if \[\mathrm{gr}_{i}(f):\mathrm{gr}_{i}W\longrightarrow\mathrm{gr}_{i}W^{\prime}\] is a quasi-isomorphism for all \(i\in\beta\).

**Proposition 35**.: _Let \(f:A\longrightarrow A^{\prime}\) be a quasi-isomorphism of dg \(\Omega\mathcal{C}\)-algebras. The morphism of quasi-planar ladders_ \[F_{i}^{\mathrm{qp}}\mathrm{B}_{\mathcal{C}}(f):F_{i}^{\mathrm{qp}}\mathrm{B}_{\mathcal{C}}A\longrightarrow F_{i}^{\mathrm{qp}}\mathrm{B}_{\mathcal{C}}A^{\prime}\] _is a filtered quasi-isomorphism._

Proof.: For every \(i\in\omega\), the map \[\operatorname{gr}_{i}^{\mathrm{qp}}(\mathrm{B}_{\mathcal{C}}A)\longrightarrow\operatorname{gr}_{i}^{\mathrm{qp}}(\mathrm{B}_{\mathcal{C}}A^{\prime})\] can be rewritten as the morphism of dg modules \[\operatorname{gr}_{i}^{\mathrm{qp}}(\mathcal{C}_{\mathrm{pl}})\circ_{\mathrm{pl}}A\longrightarrow\operatorname{gr}_{i}^{\mathrm{qp}}(\mathcal{C}_{\mathrm{pl}})\circ_{\mathrm{pl}}A^{\prime}\] which is a quasi-isomorphism.

### Cofibrations

We characterize cofibrations as degree-wise injections and subsequently prove that any filtered quasi-isomorphism is a weak-equivalence.

**Lemma 33**.: _A cofibration of dg \(\Omega\mathcal{C}\)-algebras is in particular a degree-wise injection._

Proof.: Let us consider the acyclic dg module \(D^{1}\) equipped with its canonical structure of a unital associative commutative algebra. Let \(f:A\longrightarrow A^{\prime}\) be a cofibration of dg \(\Omega\mathcal{C}\)-algebras. Its lifting property and the fact that the map \(D^{1}\otimes A\longrightarrow 0\) is an acyclic fibration imply that the inclusion \(A\hookrightarrow D^{1}\otimes A\), which is a degree-wise injection, factors through \(f\). Therefore \(f\) is also a degree-wise injection.

**Proposition 36**.: _A morphism of curved \(\mathcal{C}\)-coalgebras is a cofibration if and only if it is a degree-wise injection._

Proof.: Let \(f:W\longrightarrow W^{\prime}\) be a morphism of curved \(\mathcal{C}\)-coalgebras. On the one hand, let us suppose that \(f\) is a degree-wise injection. Then it can be recovered as the transfinite composition of the sequence \[W\longrightarrow W+F_{1}^{\mathrm{rad}}W^{\prime}\longrightarrow\cdots\longrightarrow W+F_{n}^{\mathrm{rad}}W^{\prime}\longrightarrow\cdots\.\] Every morphism in this sequence is a cofibration since it is an elementary cofibration. Cofibrations are stable by transfinite compositions, therefore \(f\) is also a cofibration. On the other hand, let us suppose that \(f\) is a cofibration. We consider the following diagram of graded k-modules The top horizontal maps are clearly degree-wise injections. The map \(\Omega_{\mathcal{C}}(f)\) is by definition a cofibration of dg \(\Omega\mathcal{C}\)-algebras, therefore it is in particular a degree-wise injection by Lemma 33. This implies that \(f\) is also a degree-wise injection.
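Let us record an immediate consequence of this characterization: for every curved \(\mathcal{C}\)-coalgebra \(W\), the map \(0\longrightarrow W\) is a degree-wise injection, hence a cofibration, so that every object of the category of curved \(\mathcal{C}\)-coalgebras is cofibrant in the transferred structure.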
**Proposition 37**.: _Let \(f:W\longrightarrow W^{\prime}\) be a filtered quasi-isomorphism of \(\beta\)-ladders of curved \(\mathcal{C}\)-coalgebras. The map \(f(\beta):W(\beta)\xrightarrow{\ \ \ }W^{\prime}(\beta)\) is a weak-equivalence._ Proof.: Notice that the following holds. 1. The map \(f(0)\) is a weak-equivalence, since it is the identity of the zero object \(0\). 2. If \(i\in\beta+1\) is a limit ordinal so that \(f(j)\) is an equivalence for every \(j<i\), then the colimits \[\Omega_{C}W(i)=\operatorname*{colim}_{j<i}\Omega_{C}W(j),\quad\Omega_{C}W^{ \prime}(i)=\operatorname*{colim}_{j<i}\Omega_{C}W^{\prime}(j)\] are homotopy colimits, and therefore the map \(\Omega_{C}f(i)\) is a quasi-isomorphism. Thus \(f(i)\) is a weak equivalence. 3. By Proposition 33, \(f(i+1)\) is a weak-equivalence whenever \(f(i)\) is a weak-equivalence. We conclude by an ordinal induction. ### The cylinder object Let \(W\) be a curved \(\mathcal{C}\)-coalgebra. Let \(A\) be a cylinder object of \(\Omega_{\mathcal{C}}W\) in the category of dg \(\Omega\mathcal{C}\)-algebras, that is, a dg \(\Omega\mathcal{C}\)-algebra together with a factorisation where \(i\) is a cofibration and \(p\) is an acyclic fibration of dg \(\Omega\mathcal{C}\)-algebras. Let \(\operatorname{Cyl}(W)\) be the following pullback in in the category of curved \(\mathcal{C}\)-coalgebras \[\operatorname{Cyl}(W)\coloneqq\operatorname{B}_{\mathcal{C}}A\times_{ \operatorname{B}_{\mathcal{C}}\Omega_{\mathcal{C}}(W)}W.\] Our goal is to show that \(\operatorname{Cyl}(W)\) is a natural cylinder object for \(W\). Notice that \(\operatorname{Cyl}(W)\) fits in the following diagram of curved \(\mathcal{C}\)-coalgebras. The morphism \(i^{\dagger}:W\oplus W\longrightarrow\operatorname{B}_{\mathcal{C}}A\) is the transpose of \(i\) and the morphism \(\nabla:W\oplus W\longrightarrow W\) is the universal codiagonal morphism. The morphism \(\boldsymbol{\eta}\) is the unit of the bar-cobar adjunction \(\Omega_{\mathcal{C}}+\operatorname{B}_{\mathcal{C}}\), which is a degree-wise injection, and thus a cofibration by Proposition 36. Notice that \(\operatorname{B}_{\mathcal{C}}(p)\) is a filtered quasi-isomorphism of quasi-planar ladders by Proposition 35. Let us choose a particular summand \(W\) in the direct sum \(W\oplus W\). We choose one of the two sections \(s:W\longrightarrow\operatorname{Cyl}(W)\) of the map \(\operatorname{proj}_{W}:\operatorname{Cyl}(W)\longrightarrow W\) and one of the two sections \(d:\Omega_{\mathcal{C}}W\longrightarrow A\) of the map \(p:A\longrightarrow\Omega_{\mathcal{C}}W\), in such a way that the following diagram of curved \(\mathcal{C}\)-coalgebras commutes. Moreover, the dg module \(A\) decomposes into \(A=\Omega_{\mathcal{C}}W\oplus K\) where \(K\) is the kernel of the map \(p:A\longrightarrow\Omega_{\mathcal{C}}W\). The dg module \(K\) is acyclic, since \(p\) is a quasi-isomorphism. Let us take a contracting homotopy \(h\) of \(K\), that is, a degree \(1\) endomorphism of \(K\) such that \[\partial(h)=d_{K}h+hd_{K}=\operatorname{Id}_{K}.\] We can extend \(h\) to \(A\) by zero on \(\Omega_{\mathcal{C}}W\). Then \(\partial(h)=\pi_{K}\). 
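Spelled out on the two summands (a routine check, recorded here for convenience): since the section \(d\) and the kernel \(K\) give a decomposition of dg modules \(A\cong\Omega_{\mathcal{C}}W\oplus K\), and since \(h\) vanishes on \(\Omega_{\mathcal{C}}W\), one has \[\partial(h)\big|_{\Omega_{\mathcal{C}}W}=0\ ,\qquad\partial(h)\big|_{K}=d_{K}h+hd_{K}=\operatorname{Id}_{K}\ ,\] so that \(\partial(h)\) is indeed the projection \(\pi_{K}\).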
Now let \(H\) be the degree \(1\) endormophism of the graded complex \(\operatorname{B}_{\mathcal{C}}A=\mathcal{C}_{\operatorname{pl}}\circ_{ \operatorname{pl}}A\) defined as follows \[H\coloneqq\sum_{k=0}^{n}\operatorname{Id}_{\mathcal{C}_{\operatorname{pl}}(n) }\otimes(\pi_{\Omega_{\mathcal{C}}W}^{\otimes k}\otimes h\otimes\operatorname {Id}_{A}^{\otimes n-k-1})\quad\text{ on }\quad\mathcal{C}_{\operatorname{pl}}(n) \otimes A^{\otimes n}\quad\text{for}\quad n\geq 1,\] and \[H=0\quad\text{on}\quad\mathcal{C}_{\operatorname{pl}}(0)\.\] In particular, for every \(0\leq k\leq n-1\), the restriction of \(H\) to \(\mathcal{C}_{\mathrm{pl}}(n)\otimes((\Omega_{\mathcal{C}}W)^{\otimes k}\otimes K \otimes A^{\otimes n-k-1})\) is given by \[\mathcal{C}_{\mathrm{pl}}(n)\otimes((\Omega_{\mathcal{C}}W)^{\otimes k}\otimes K \otimes A^{\otimes n-k-1})\] \[\mathcal{C}_{\mathrm{pl}}(n)\otimes((\Omega_{\mathcal{C}}W)^{\otimes k}\otimes K \otimes A^{\otimes n-k-1})\] \[\mathcal{C}_{\mathrm{pl}}\circ_{\mathrm{pl}}A\.\] One can extend \(H\) to \(\mathcal{C}_{\mathrm{pl}}\circ_{\mathrm{pl}}B_{\mathcal{C}}A=\mathcal{C}_{ \mathrm{pl}}\circ_{\mathrm{pl}}\mathcal{C}_{\mathrm{pl}}\circ_{\mathrm{pl}}A\) through the same formula \[H:=\left\{\begin{array}{ll}\sum_{k=0}^{n}\mathrm{Id}_{\mathcal{C}_{\mathrm{ pl}}(n)}\otimes(\pi_{\mathbb{B}_{\mathcal{C}}\circ_{\mathrm{pl}}W}^{\otimes k} \otimes H\otimes\mathrm{Id}_{A}^{\otimes n-k-1})\ \text{on}\ \mathcal{C}_{\mathrm{pl}}(n)\otimes(B_{ \mathcal{C}}A)^{\otimes n}\quad\text{for}\quad n\geq 1;\\ 0\quad\text{on}\quad\mathcal{C}_{\mathrm{pl}}(0)\.\end{array}\right.\] The same formula mutatis mutandis allows us to extend \(H\) to \(\mathcal{C}_{\mathrm{pl}}\circ_{\mathrm{pl}}\mathcal{C}_{\mathrm{pl}}\circ_{ \mathrm{pl}}\mathcal{B}_{\mathcal{C}}A=\mathcal{C}_{\mathrm{pl}}\circ_{\mathrm{ pl}}\mathcal{C}_{\mathrm{pl}}\circ_{\mathrm{pl}}\mathcal{C}_{\mathrm{pl}}\circ_{ \mathrm{pl}}A\). One can notice then that \(H\) commutes with the maps \[B_{\mathcal{C}}A\longrightarrow\mathcal{C}_{\mathrm{pl}}\circ_{\mathrm{pl}}B_{ \mathcal{C}}A\rightrightarrows\mathcal{C}_{\mathrm{pl}}\circ_{\mathrm{pl}} \mathcal{C}_{\mathrm{pl}}\circ_{\mathrm{pl}}B_{\mathcal{C}}A.\] **Lemma 34**.: _The subobject \(\mathrm{Cyl}(W)\longrightarrow B_{\mathcal{C}}A\) is stable by \(H\)._ Proof.: Let \(\mathrm{Cyl}(W)^{\prime}\) be the pullback in the category of pdg \(\mathcal{C}\)-coalgebras. By universal property, one has a morphism of pdg \(\mathcal{C}\)-coalgebras \(\mathrm{Cyl}(W)\longrightarrow\mathrm{Cyl}(W)^{\prime}\). Let us show that \(\mathrm{Cyl}(W)^{\prime}\) is curved and therefore isomorphic to \(\mathrm{Cyl}(W)\). Let \(X\) be the pullback of the span \(B_{\mathcal{C}}A\longrightarrow B_{\mathcal{C}}\Omega_{\mathcal{C}}W \longleftarrow W\) in the category of pdg k-modules and let \(Y\) be the pullback of the span \(\mathcal{C}_{\mathrm{pl}}\circ_{\mathrm{pl}}B_{\mathcal{C}}A\longrightarrow \mathcal{C}_{\mathrm{pl}}\circ_{\mathrm{pl}}B_{\mathcal{C}}\Omega_{\mathcal{C}}W \longleftarrow\mathcal{C}_{\mathrm{pl}}\circ_{\mathrm{pl}}W\) in the category of pdg k-modules. The pdg \(\mathcal{C}\)-coalgebra \(\mathrm{Cyl}(W)^{\prime}\) can be computed as the following equalizer both in the category pdg \(\mathcal{C}\)-coalgebras and in the category of pdg k-modules. This follows from the results of Subsection 3.4, as the forgetful functor preserves finite cosifted limits. The following square of pdg \(\mathcal{C}\)-coalgebras commutes. 
The bottom horizontal and the left vertical maps are monomorphisms, which implies that the map \(\mathrm{Cyl}(W)^{\prime}\longrightarrow \operatorname{B}_{\mathcal{C}}A\) is also a monomorphism. Therefore \(\mathrm{Cyl}(W)^{\prime}\) is also curved and the canonical morphism \(\mathrm{Cyl}(W)\longrightarrow\mathrm{Cyl}(W)^{\prime}\) is an isomorphism of curved \(\mathcal{C}\)-coalgebras. It is clear that the sub-objects \(\mathcal{C}\circ X\subset\mathcal{C}_{\mathrm{pl}}\circ_{\mathrm{pl}}\operatorname{B}_{ \mathcal{C}}A\) and \(\mathcal{C}\circ Y\subset\mathcal{C}_{\mathrm{pl}}\circ_{\mathrm{pl}}\mathcal{C} _{\mathrm{pl}}\circ_{\mathrm{pl}}\operatorname{B}_{\mathcal{C}}A\) are stable by \(H\). Hence, so is the sub-object \(\mathrm{Cyl}(W)\subset \operatorname{B}_{\mathcal{C}}A\). **Lemma 35**.: _The subobjects \(F^{\mathrm{ap}}_{i}\operatorname{B}_{\mathcal{C}}A\) and \(F^{\mathrm{ap}}_{i}\mathrm{Cyl}(W)\) are stable by \(H\) for every \(i\in\alpha\)._ Proof.: It is clear by definition of \(H\) that \(F_i^{\text{ap}}\text{B}_{C}A=F_i^{\text{ap}}\mathcal{C}\circ A\) is stable by it. Recall that \(F_i^{\text{ap}}\text{Cyl}(W)\) is given by the following pullback square in the category of graded k-modules. We already know that \(\text{Cyl}(W)\) and \(F_i^{\text{ap}}\text{B}_{C}A\) are stable by \(H\). Therefore \(F_i^{\text{ap}}\text{Cyl}(W)\) is also stable by \(H\). **Lemma 36**.: _For every \(i+1\in\alpha\), the endomorphism \(\partial(H)\) of \(\text{gr}_{i+1}^{\text{ap}}\text{B}_{C}A\) is equal to the identity minus the projection onto \(\text{gr}_{i+1}^{\text{ap}}\text{B}_{C}\Omega_{C}W\)._ Proof.: For every natural integer \(n\) and every \(0\leq k\leq n-1\), the summands \[\text{gr}_{i+1}^{\text{ap}}\mathcal{C}_{\text{pl}}(n)\otimes(\Omega_{C}W)^{ \otimes n}\subset\text{gr}_{i+1}^{\text{ap}}\mathcal{C}_{\text{pl}}\circ_{ \text{pl}}A\cong\text{gr}_{i+1}^{\text{ap}}\text{B}_{C}A\,\] \[\text{gr}_{i+1}^{\text{ap}}\mathcal{C}_{\text{pl}}(n)\otimes(\Omega_{C}W)^{ \otimes k}\otimes K\otimes A^{\otimes n-k-1}\subset\text{gr}_{i+1}^{\text{ap}}\mathcal{ C}_{\text{pl}}\circ_{\text{pl}}A\cong\text{gr}_{i+1}^{\text{ap}}\text{B}_{C}A\,\] are both stable by the differential and by \(H\). Then, a direct inspection shows that \(\partial(H)\) is zero on the first summand and the identity on the second one, which concludes the proof. **Proposition 38**.: _For every \(i+1\in\alpha\), the endomorphism \(\partial(H)\) of \(\text{gr}_{i+1}^{\text{ap}}\text{Cyl}(W)\) is equal to the identity minus the projection onto \(\text{gr}_{i+1}^{\text{ap}}W\). Therefore the maps_ \[\text{gr}_{i+1}^{\text{ap}}W\xrightarrow{\text{gr}_{i+1}^{\text{ap}}(s)}\text {gr}_{i+1}^{\text{ap}}\text{Cyl}(W)\xrightarrow{\text{gr}_{i+1}^{ \text{ap}}(\text{proj}_{W})}\text{gr}_{i+1}^{\text{ap}}W\] _are quasi-isomorphisms._ Proof.: Let us consider the following commutative diagram of dg modules Let us denote by \(\pi_{W}\) the composition \(\text{gr}_{i+1}^{\text{ap}}(s)\)\(\text{gr}_{i+1}^{\text{ap}}(\text{proj}_{W})\) and by \(\pi_{\text{B}_{C}\Omega_{C}W}\) the composition \(\text{gr}_{i+1}^{\text{ap}}(\text{B}_{C}(d))\)\(\text{gr}_{i+1}^{\text{ap}}(\text{B}_{C}(p))\). 
By Lemma 36, we know that: \[\partial(H)=(\text{Id}-\pi_{\text{B}_{C}\Omega_{C}W})\.\] Since \(\text{gr}_{i+1}^{\text{ap}}(j)\) commutes with the differential \(d\) and \(H\), we have that \(\text{gr}_{i+1}^{\text{ap}}(j)\partial(H)=\partial(H)\text{gr}_{i+1}^{\text{ap}} (j)\) and we compute that \[\text{gr}_{i+1}^{\text{ap}}(j)\partial(H)=\text{gr}_{i+1}^{\text{ap}}(j)( \text{Id}-\pi_{W})\.\] Since \(\text{gr}_{i+1}^{\text{ap}}(j)\) is a monomorphism, it implies that \(\partial(H)=\text{Id}-\pi_{W}\). **Proposition 39**.: _The factorisation_ \[W\oplus W\xrightarrow{i^{\dagger}\times\nabla}\operatorname{Cyl}(W)\xrightarrow{ \operatorname{proj}_{W}}W\] _makes \(\operatorname{Cyl}(W)\) a good cylinder object of \(W\), in the sense that_ 1. _the map_ \(i^{\dagger}\times\nabla:W\oplus W\rightarrow\operatorname{Cyl}(W)\) _is a cofibration,_ 2. _the map_ \(\operatorname{proj}_{W}:\operatorname{Cyl}(W)\xrightarrow{\simeq}W\) _is a weak-equivalence._ Proof.: The morphism \(i^{\dagger}\times\nabla:W\oplus W\longrightarrow\operatorname{Cyl}(W)\) is a degree-wise monomorphism since both \(\boldsymbol{\eta}_{W\oplus W}:W\oplus W\longrightarrow\operatorname{B}_{\mathcal{C}}\Omega_{\mathcal{C}}(W \oplus W)\) and \(\operatorname{B}_{\mathcal{C}}(i):\operatorname{B}_{\mathcal{C}}\Omega_{ \mathcal{C}}(W\oplus W)\longrightarrow\operatorname{B}_{\mathcal{C}}A\) are. Thus, it is a cofibration. To conclude, Proposition 38 tells us that the map \(\operatorname{proj}_{W}:\operatorname{Cyl}(W)\longrightarrow W\) is a filtered quasi-isomorphism. Thus it is a weak-equivalence. Remark 47.: Actually, the map \(\operatorname{proj}_{W}:\operatorname{Cyl}(W)\twoheadrightarrow W\) is also a fibration as the pullback of a fibration. ## 5. A Quillen equivalence, \(\infty\)-morphisms and homotopy transfer theorems for algebras We show that the bar-cobar adjunction induces a Quillen equivalence. This allows us to give another presentation of the homotopy category of dg \(\Omega\mathcal{C}\)-algebras in terms of curved \(\mathcal{C}\)-coalgebras together with their transferred model category structure. We introduce \(\infty\)-morphisms and show that they are invertible. We prove a homotopy transfer theorem for dg \(\Omega\mathcal{C}\)-algebras. Finally, we show how other model category structures on the category of curved \(\mathcal{C}\)-coalgebras can be obtained by left Bousfield localisations. ### The Quillen equivalence The goal of this subsection is to prove the following theorem. Similar theorems in characteristic zero were proven in [11, 12, 13, 14]. Here we leverage the quasi-planar context to build a contracting homotopy that works in positive characteristic. **Theorem 7**.: _The bar-cobar adjunction \(\Omega_{\mathcal{C}}\dashv\operatorname{B}_{\mathcal{C}}\) is a Quillen equivalence, when one considers the transferred model category structure on the category of curved \(\mathcal{C}\)-coalgebras._ Proof.: The theorem follows directly from Lemma 37. Before going into the proofs, let us spell out a direct consequence of Theorem 7. **Corollary 6**.: _Let \(\mathcal{P}\) be a cofibrant dg operad. 
The quasi-planar bar-cobar adjunction_ _is a Quillen equivalence, when curved \(\mathsf{B}(\mathcal{E}\otimes\mathcal{P})\)-coalgebras are endowed with the transferred structure from dg \(\Omega\mathsf{B}(\mathcal{E}\otimes\mathcal{P})\)-algebras._ Proof.: Let us denote \(\psi_{P}=\Omega\mathsf{B}(\varphi_{P}):\Omega\mathsf{B}(\mathcal{E}\otimes \mathcal{P})\xrightarrow{\sim}\mathcal{P}\) the canonical quasi-isomorphism, and \(\epsilon:\Omega\mathsf{B}(\mathcal{E}\otimes\mathcal{P})\xrightarrow{\sim} \mathcal{E}\otimes\mathcal{P}\) the quasi-isomorphism given by the counit map. First notice that the adjunction \((\psi_{P})_{!}\dashv\psi_{P}^{*}\) is a Quillen equivalence since both \(\Omega\mathsf{B}(\mathcal{E}\otimes\mathcal{P})\) and \(\mathcal{P}\) are cofibrant dg operads. The adjunctions \((\varphi_{P})_{!}\dashv\varphi_{P}^{*}\) and \((\epsilon)_{!}\dashv\epsilon^{*}\) are in general just Quillen adjunctions. Let \(\mathcal{A}\) be a dg \(\mathcal{P}\)-algebra. Our goal is to show that the counit \[\epsilon_{\mathcal{A}}:\Omega^{\alpha,\rho}_{\pi}\mathsf{B}^{\alpha,\rho}_{ \pi}\mathcal{A}\longrightarrow \mathcal{A}\] is a quasi-isomorphism. A direct computation shows that \[\Omega^{\alpha,\rho}_{\pi}\mathsf{B}^{\alpha,\rho}_{\pi}\mathcal{A}\cong(\varphi_{P})_{! }\,\Omega_{\pi}\mathsf{B}_{\pi}\,\varphi_{P}^{*}\mathcal{A}\cong(\psi_{P})_{!}( \epsilon)_{!}\,\Omega_{\pi}\mathsf{B}_{\pi}\,\epsilon^{*}\psi_{P}^{*}\mathcal{A}\cong(\psi_{ P})_{!}\,\Omega_{\iota}\mathsf{B}_{\iota}\,\psi_{P}^{*}\mathcal{A}\.\] Therefore, since the counits of \((\psi_{P})_{!}\dashv\psi_{P}^{*}\) and of \(\Omega_{\iota}\dashv\mathsf{B}_{\iota}\) are quasi-isomorphisms, so is \(\epsilon_{\mathcal{A}}\). Showing that for a curved \(\mathsf{B}(\mathcal{E}\otimes\mathcal{P})\)-coalgebra, the unit of the adjunction is a weak-equivalence is completely analogous. Remark 48.: If the dg operad \(\mathcal{P}\) is just \(\mathsf{S}\)-projective, the above result can be adapted, and the homotopy category of curved \(\mathsf{B}(\mathcal{E}\otimes\mathcal{P})\)-coalgebras given by localizing at transferred weak-equivalences is still equivalent to the homotopy category of dg \(\mathcal{P}\)-algebras given by localizing at quasi-isomorphisms. **Lemma 37**.: _For every dg \(\Omega\mathcal{C}\)-algebra \(A\), the counit map \(\epsilon_{A}:\Omega_{\mathcal{C}}\mathsf{B}_{\mathcal{C}}A \xrightarrow{\sim}A\) is a quasi-isomorphism._ In the following paragraphs, our strategy to prove Lemma 37 is the following. First, we construct a section \(\zeta_{A}\) of the counit map \(\epsilon_{A}:\Omega_{C}\mathbb{B}_{C}A\longrightarrow A\) in the category of dg modules. The goal is then to show that \(\pi_{A}=\zeta_{A}\epsilon_{A}\) is homotopic to an isomorphism on \(\Omega_{C}\mathbb{B}_{C}A\). This is done by constructing an explicit degree \(1\) endomorphism \(H\) and by showing in Proposition 40 that \(\partial(H)+\pi_{A}\) is an isomorphism. Therefore \(\epsilon_{A}\) admits a right inverse, namely \(\zeta_{A}\), and a left inverse, namely \((\partial(H)+\pi_{A})^{-1}\zeta_{A}\), in the homotopy category of dg modules. This implies that \(\epsilon_{A}\) is an isomorphism in the homotopy category, and therefore a quasi-isomorphism. Notation. We abbreviate the dg operad \(\Omega C\) by \(\mathcal{P}\) for the rest of this subsection. Let us denote: 1. \(\mathcal{P}_{\mathrm{pl}}=\mathbb{T}_{\mathrm{pl}}s^{-1}\overline{\mathcal{C} _{\mathrm{pl}}}\) and \(\overline{\mathcal{P}_{\mathrm{pl}}}=\overline{\mathbb{T}_{\mathrm{pl}}}s^{-1} \overline{\mathcal{C}_{\mathrm{pl}}}\). 2. 
\(QA=\mathcal{P}_{\mathrm{pl}}\circ_{\mathrm{pl}}\mathcal{C}_{\mathrm{pl}}\circ _{\mathrm{pl}}A\), the underlying graded \(\Bbbk\)-module of \(\Omega_{C}\mathrm{B}_{C}A\). 3. \(RA=\overline{\mathcal{P}_{\mathrm{pl}}}\circ_{\mathrm{pl}}\mathcal{C}_{\mathrm{ pl}}\circ_{\mathrm{pl}}A\oplus\overline{\mathcal{C}_{\mathrm{pl}}}\circ_{ \mathrm{pl}}A\subset QA\), where we have a canonical isomorphism of graded \(\Bbbk\)-modules \(QA\cong A\oplus RA\). 4. \(p_{A}:QA\twoheadrightarrow A\) the projection onto \(A\) with respect to \(RA\). **The section.** The counit map \(\epsilon_{A}:QA\longrightarrow A\) has a canonical section in the category of dg modules \(\zeta_{A}:A\longrightarrow QA\) whose underlying graded map is the inclusion \[A\cong\Bbbk\circ_{\mathrm{pl}}\Bbbk\circ_{\mathrm{pl}}A\hookrightarrow \mathcal{P}_{\mathrm{pl}}\circ_{\mathrm{pl}}\mathcal{C}_{\mathrm{pl}}\circ_{ \mathrm{pl}}A.\] There is a canonical isomorphism of dg modules \[\Omega_{C}\mathrm{B}_{C}A\cong A\oplus K\] where \(K\) is the kernel of the counit. Our goal is to prove that \(\pi_{A}=\zeta_{A}\epsilon_{A}\) is a quasi-isomorphism. **The contracting homotopy.** Let \(H\) be the degree \(1\) endomorphism of \(QA\) such that 1. its restriction to \(A\) is zero, 2. its restriction to \(\overline{\mathcal{C}_{\mathrm{pl}}}\circ_{\mathrm{pl}}A\) is \[c\otimes(a_{1}\otimes\cdots\otimes a_{n})\mapsto s^{-1}c\otimes(a_{1}\otimes \cdots\otimes a_{n}).\] 3. it is then defined by induction on \(s^{-1}\overline{\mathcal{C}_{\mathrm{pl}}}(n)\circ_{\mathrm{pl}}\mathcal{C}_{ \mathrm{pl}}\circ_{\mathrm{pl}}A\) by \[\sum_{i=0}^{n-1}\mathrm{Id}_{s^{-1}\overline{\mathcal{C}_{\mathrm{pl}}}(n )}\otimes(\pi_{A}^{\otimes i}\otimes H\otimes\mathrm{Id}^{\otimes n-i-1}).\] In other words, the graded \(\mathbb{S}\)-module \(\mathcal{P}_{\mathrm{pl}}\circ_{\mathrm{pl}}\mathcal{C}_{\mathrm{pl}}\) consists of planar trees whose nodes are labelled by elements of \(s^{-1}\overline{\mathcal{C}_{\mathrm{pl}}}\) and \(\mathcal{C}_{\mathrm{pl}}\), the nodes that are not at the top belonging to \(s^{-1}\overline{\mathcal{C}_{\mathrm{pl}}}\); the map \(H\) adds \(s^{-1}\) to the leftmost top vertex if it belongs to \(\overline{\mathcal{C}_{\mathrm{pl}}}\), and is zero otherwise. The differential on the graded \(\Bbbk\)-module \(QA=\mathcal{P}_{\mathrm{pl}}\circ_{\mathrm{pl}}\mathcal{C}_{\mathrm{pl}}\circ _{\mathrm{pl}}A\) may be decomposed into the following terms, summed up in the display after the list. 1. The pre-differential \(d_{C}\) that comes from the pre-differential of \(\mathcal{C}\) which is non-zero on elements in \(s^{-1}\overline{\mathcal{C}}\) and in \(\mathcal{C}\). 2. The differential \(d_{A}\) on \(A\). 3. The pre-differential \(d_{CP}\) induced by the operadic curved twisting morphism \(\iota:\mathcal{C}\longrightarrow\mathcal{P}\), which sends elements in \(\mathcal{C}\) to elements in \(\mathcal{P}\). 4. The pre-differential \(d_{\Delta}\) induced by the decomposition morphism \(\Delta\) of the cooperad \(\mathcal{C}\) which is non-zero on elements in \(\mathcal{P}\). 5. The pre-differential \(d_{\theta}\) induced by the curvature of \(\mathcal{C}\) which is non-zero on elements in \(\mathcal{P}\). 6. The pre-differential \(d_{CA}\) induced by the dg \(\Omega C\)-algebra structure of \(A\), which sends elements in \(\mathcal{C}\) to elements in \(A\). 
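The decomposition just listed can be recorded in a single formula; each summand is one of the pre-differentials named above:
\[d_{QA}\;=\;d_{C}\;+\;d_{A}\;+\;d_{CP}\;+\;d_{\Delta}\;+\;d_{\theta}\;+\;d_{CA}\.\]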
**The coradical filtration.** First we filtrate \(QA\) with respect to the coradical filtration of \(\mathcal{C}_{\mathrm{pl}}\) \[F_{0}^{\mathrm{rad}}(QA) \coloneqq A\,\] \[F_{n}^{\mathrm{rad}}(QA) \coloneqq(F_{n}^{\mathrm{rad}}\mathcal{C}_{\mathrm{pl}}\circ_{\mathrm{pl}}A) \oplus\bigoplus_{k\geq 0}\ \sum_{j_{0}+j_{1}+\cdots+j_{k}=n,\ j_{0}\geq 1}\left(s^{-1}F_{j_{0}}^{\mathrm{rad}} \overline{\mathcal{C}_{\mathrm{pl}}}(k)\right)\otimes\left(F_{j_{1}}^{\mathrm{rad} }QA\otimes\cdots\otimes F_{j_{k}}^{\mathrm{rad}}QA\right)\,\] where the sum symbol \(\sum\) stands for the union of these sub-objects inside \(s^{-1}\overline{\mathcal{C}_{\mathrm{pl}}}(k)\otimes(QA)^{\otimes k}\). This filtration is exhaustive, that is, \[QA\cong\underset{n\in\omega}{\mathrm{colim}}\ F_{n}^{\mathrm{rad}}QA\.\] Moreover, the coradical filtration is stable by the pre-differentials \(d_{A},d_{CA},d_{C},d_{\theta},d_{CP},d_{\Delta}\) and by \(H\). It is also stable through \(\pi_{A}=\zeta_{A}\epsilon_{A}\). Furthermore, one has canonical isomorphisms of graded \(\Bbbk\)-modules \[\mathrm{gr}_{0}^{\mathrm{rad}}QA\cong A\] \[\mathrm{gr}_{n}^{\mathrm{rad}}QA\cong\mathrm{gr}_{n}^{\mathrm{rad }}\mathcal{C}_{\mathrm{pl}}\circ_{\mathrm{pl}}A\oplus\bigoplus_{k\geq 0}\bigoplus_{j_{0}+j_{1}+\cdots+j_{k}=n,\ j_{0}\geq 1}s^{-1} \mathrm{gr}_{j_{0}}^{\mathrm{rad}}\overline{\mathcal{C}_{\mathrm{pl}}}(k)\otimes \left(\mathrm{gr}_{j_{1}}^{\mathrm{rad}}QA\otimes\cdots\otimes\mathrm{gr}_{j_{k}}^{ \mathrm{rad}}QA\right)\,,\,n\geq 1.\] One can notice that \(d_{\theta}\) and \(d_{CA}\) vanish on \(\mathrm{gr}^{\mathrm{rad}}(QA)\). Besides, the map \(\pi_{A}\) is given by 1. the identity on \(\mathrm{gr}_{0}^{\mathrm{rad}}(QA)\), while \(\partial(H)\) is zero; 2. zero on \(\mathrm{gr}_{n}^{\mathrm{rad}}(QA)\) for every \(n\geq 1\). **Proposition 40**.: _For every \(n\geq 0\), the map \(\partial(H)+\pi_{A}\) on \(\mathrm{gr}_{n}^{\mathrm{rad}}(QA)\) is an isomorphism._ **The ladder filtration.** In order to show Proposition 40, we introduce the ladder filtration which will allow us to conclude. Let \(n\) be in \(\mathbb{N}\). We have a colimiting diagram of dg \(\mathbb{N}\)-modules indexed by the ordinal \(\alpha+1\) \[\mathrm{gr}_{n}^{\mathrm{rad}}\mathcal{C}_{\mathrm{pl}}^{(0)}\longrightarrow\mathrm{ gr}_{n}^{\mathrm{rad}}\mathcal{C}_{\mathrm{pl}}^{(1)}\longrightarrow\cdots\longrightarrow\mathrm{ gr}_{n}^{\mathrm{rad}}\mathcal{C}_{\mathrm{pl}}^{(i)}\longrightarrow\cdots\longrightarrow\mathrm{ gr}_{n}^{\mathrm{rad}}\mathcal{C}_{\mathrm{pl}}^{(\alpha)}\] that preserves non-empty directed colimits and whose maps are degree-wise injections. For \(n\geq 1\), we can filtrate further \(\mathrm{gr}_{n}^{\mathrm{rad}}(QA)\) using this diagram \[F_{i}^{\mathrm{ladder}}\mathrm{gr}_{n}^{\mathrm{rad}}QA\coloneqq\mathrm{gr}_{ n}^{\mathrm{rad}}\mathcal{C}_{\mathrm{pl}}^{(i)}\circ_{\mathrm{pl}}A\oplus \bigoplus_{k\geq 0}\bigoplus_{j_{0}+j_{1}+\cdots+j_{k}=n,\ j_{0}\geq 1}s^{-1}\mathrm{gr}_{j_{0}}^{ \mathrm{rad}}\overline{\mathcal{C}_{\mathrm{pl}}^{(i)}}(k)\otimes\left(\mathrm{ gr}_{j_{1}}^{\mathrm{rad}}QA\otimes\cdots\otimes\mathrm{gr}_{j_{k}}^{\mathrm{rad}}QA\right).\] This filtration is stable by all the pre-differentials \(d_{A},d_{CA},d_{C},d_{\theta},d_{CP},d_{\Delta}\) and by \(H\). 
For \(i+1\in\alpha\), let us denote \[\mathrm{gr}_{i+1,n}\mathcal{C}_{\mathrm{pl}}\coloneqq\mathrm{gr}_{n}^{\mathrm{rad}}\mathcal{ C}_{\mathrm{pl}}^{(i+1)}/\mathrm{gr}_{n}^{\mathrm{rad}}\mathcal{C}_{\mathrm{pl}}^{(i)}.\] There is a canonical isomorphism of graded \(\Bbbk\)-modules \[\mathrm{gr}_{i+1}^{\mathrm{ladder}}\mathrm{gr}_{n}^{\mathrm{rad}}QA\cong \mathrm{gr}_{i+1,n}\mathcal{C}_{\mathrm{pl}}\circ_{\mathrm{pl}}A\oplus \bigoplus_{k\geq 0}\bigoplus_{j_{0}+j_{1}+\cdots+j_{k}=n,\ j_{0}\geq 1}s^{-1}\mathrm{gr}_{i+1,j_{0} }\overline{\mathcal{C}_{\mathrm{pl}}}(k)\otimes\left(\mathrm{gr}_{j_{1}}^{\mathrm{ rad}}QA\otimes\cdots\otimes\mathrm{gr}_{j_{k}}^{\mathrm{rad}}QA\right).\] Let \(n\geq 1\) and \(i+1\in\alpha\). The restriction of the endomorphism \(\partial(H)\) of \(\mathrm{gr}_{i+1}^{\mathrm{ladder}}\mathrm{gr}_{n}^{\mathrm{rad}}QA\) 1. to \(\mathrm{gr}_{i+1,n}\mathcal{C}_{\mathrm{pl}}\circ_{\mathrm{pl}}A\) is the identity; 2. to \[s^{-1}\mathrm{gr}_{i+1,n}\overline{\mathcal{C}_{\mathrm{pl}}}(k)\otimes\left( \mathrm{gr}_{0}^{\mathrm{rad}}QA\otimes\cdots\otimes\mathrm{gr}_{0}^{\mathrm{ rad}}QA\right),\] is the identity for every \(k\geq 0\); 3. to \[\bigoplus_{j_{0}+j_{1}+\cdots+j_{k}=n,\ n>j_{0}\geq 1}s^{-1}\mathrm{gr}_{i+1,j_{0}}\overline {\mathcal{C}_{\mathrm{pl}}}(k)\otimes\left(\mathrm{gr}_{j_{1}}^{\mathrm{rad}}QA\otimes \cdots\otimes\mathrm{gr}_{j_{k}}^{\mathrm{rad}}QA\right)\] is given by the following formula for every \(k\geq 0\) \[\sum_{u=0}^{k-1}\operatorname{Id}_{s^{-1}\mathrm{gr}_{i+1,j_{0}}\overline{\mathcal{C}_{\mathrm{pl}}}(k)}\otimes\left(\pi_{A}^{\otimes u}\otimes\partial(H)\otimes \operatorname{Id}^{\otimes k-u-1}\right).\] Proof of Proposition 40.: Let us prove the result by induction. 1. For \(n=0\), we have \(\partial(H)+\pi_{A}=\pi_{A}=\operatorname{Id}\) on \(\mathrm{gr}_{0}^{\mathrm{rad}}(QA)\). Thus \(\partial(H)+\pi_{A}\) is an isomorphism. 2. For \(n=1\), we have \(\partial(H)+\pi_{A}=\partial(H)=\operatorname{Id}\) on \(\mathrm{gr}_{1}^{\mathrm{rad}}(QA)\). Thus \(\partial(H)+\pi_{A}\) is an isomorphism. 3. Let \(n\geq 1\) and let us suppose that \(\partial(H)+\pi_{A}\) is an isomorphism on \(\mathrm{gr}_{m}^{\mathrm{rad}}(QA)\) for every \(0\leq m\leq n\). One can then prove with the formulas given just above that \(\partial(H)\) is an isomorphism on \(\mathrm{gr}_{i+1}^{\mathrm{ladder}}\mathrm{gr}_{n+1}^{\mathrm{rad}}QA\) for every \(i+1\in\alpha\). Hence, by a straightforward ordinal induction, \(\partial(H)+\pi_{A}=\partial(H)\) is an isomorphism on \(\mathrm{gr}_{n+1}^{\mathrm{rad}}(QA)\), which concludes the proof. ### Fibrant objects Fibrant objects in the transferred model structure on curved \(\mathcal{C}\)-coalgebras admit a much simpler description than cofibrant objects in the model structure of dg \(\Omega\mathcal{C}\)-algebras. They are given by quasi-cofree \(\mathcal{C}\)-coalgebras, that is, curved \(\mathcal{C}\)-coalgebras whose underlying graded \(\mathcal{C}\)-coalgebra is cofree. **Lemma 38**.: _The functor \(\mathsf{B}_{\mathcal{C}}\) commutes with sifted colimits._ Proof.: This follows from the fact that sifted colimits in dg \(\Omega\mathcal{C}\)-algebras and in curved \(\mathcal{C}\)-coalgebras are computed in graded \(\Bbbk\)-modules and that the endofunctor of graded \(\Bbbk\)-modules \(\mathcal{C}\circ(-)\) preserves sifted colimits. Remark 49.: The functor \(\mathsf{B}_{\mathcal{C}}\) is also conservative and thus it is monadic. Moreover, \(\Omega_{\mathcal{C}}\) preserves finite cosifted limits and is conservative, thus it is comonadic. 
Hence, the adjunction \(\Omega_{\mathcal{C}}\dashv\mathsf{B}_{\mathcal{C}}\) is bimonadic; see [10] for more details. **Proposition 41**.: _Let \(W\) be a curved \(\mathcal{C}\)-coalgebra. The following assertions are equivalent:_ 1. \(W\) _is fibrant;_ 2. \(W\) _is in the essential image of the functor_ \(\mathsf{B}_{\mathcal{C}}\)_;_ 3. \(W\) _is a quasi-cofree_ \(\mathcal{C}\)_-coalgebra, that is, its underlying graded_ \(\mathcal{C}\)_-coalgebra is cofree._ Proof.: Let \(W\) be a quasi-cofree \(\mathcal{C}\)-coalgebra, that is, a curved \(\mathcal{C}\)-coalgebra whose underlying graded \(\mathcal{C}\)-coalgebra is of the form \(W\cong\mathcal{C}\circ A\). A straightforward check shows that the degree \(0\) map \[s^{-1}\overline{\mathcal{C}}\circ A\longrightarrow\overline{\mathcal{C}} \circ A\hookrightarrow\mathcal{C}\circ A\xrightarrow{d_{W}}\mathcal{C}\circ A \twoheadrightarrow A\] defines the structure of a dg \(\Omega\mathcal{C}\)-algebra on \(A\). Therefore, \(W\cong\mathsf{B}_{\mathcal{C}}A\). Conversely, every image of a dg \(\Omega\mathcal{C}\)-algebra \(A\) through \(\mathsf{B}_{\mathcal{C}}\) is quasi-cofree by definition. Now, let \(W\) be a fibrant object in curved \(\mathcal{C}\)-coalgebras. Let us prove that it is quasi-cofree. The unit of adjunction \(\eta_{\mathcal{C}}:W\longrightarrow\mathsf{B}_{\mathcal{C}}\Omega_{\mathcal{C }}W\) admits a retraction \(r\), since it is an acyclic cofibration and since \(W\) is fibrant. Let \(A\) be the colimit of the reflexive pair of maps in the category of dg \(\Omega\mathcal{C}\)-algebras, where \(\epsilon_{\Omega_{\mathcal{C}}W}:\Omega_{\mathcal{C}}\mathsf{B}_{\mathcal{C}} \Omega_{\mathcal{C}}W\longrightarrow\Omega_{\mathcal{C}}W\) is the counit of adjunction. By Lemma 38, the colimit of the image of this diagram in curved \(\mathcal{C}\)-coalgebras is \(\mathsf{B}_{\mathcal{C}}A\), since \(\mathsf{B}_{\mathcal{C}}\) preserves sifted colimits. It is also clear that this colimit is \(W\). So there exists a canonical isomorphism \(W\cong\mathsf{B}_{\mathcal{C}}A\). ### Infinity morphisms The notion of \(\infty\)-morphism extends the usual notion of morphisms of dg \(\Omega\mathcal{C}\)-algebras. Their main advantage is that \(\infty\)-quasi-isomorphisms are invertible, and therefore one can replace a zig-zag of quasi-isomorphisms of dg \(\Omega\mathcal{C}\)-algebras with two inverse \(\infty\)-quasi-isomorphisms. This provides a powerful tool to describe the homotopy category of dg \(\Omega\mathcal{C}\)-algebras. Recall that for every dg \(\Omega\mathcal{C}\)-algebra \(A\), the counit map \(\epsilon_{A}:\Omega_{\mathcal{C}}\mathsf{B}_{\mathcal{C}}A\longrightarrow A\) has a canonical section \(\zeta_{A}:A\longrightarrow\Omega_{\mathcal{C}}\mathsf{B}_{\mathcal{C}}A\) in the category of dg modules. Thus, if \(K\) is the kernel of this counit map, one has a canonical decomposition of the dg module \(\Omega_{\mathcal{C}}\mathsf{B}_{\mathcal{C}}A\) as \[\Omega_{\mathcal{C}}\mathsf{B}_{\mathcal{C}}A\cong K\oplus A.\] **Definition 59** (\(\infty\)-morphisms).: Let \(A,A^{\prime}\) be two dg \(\Omega\mathcal{C}\)-algebras. An \(\infty\)-_morphism_ \(f:A\rightsquigarrow A^{\prime}\) amounts to the data of, equivalently: 1. a morphism of dg \(\Omega\mathcal{C}\)-algebras \(f:\Omega_{\mathcal{C}}\mathsf{B}_{\mathcal{C}}A\longrightarrow A^{\prime}\), 2. a morphism of curved \(\mathcal{C}\)-coalgebras \(f^{\dagger}:\mathsf{B}_{\mathcal{C}}A\longrightarrow\mathsf{B}_{\mathcal{C} }A^{\prime}\). 
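Although composition is not spelled out at this point of the text, description (2) suggests the expected definition: the composite of \(f:A\rightsquigarrow A^{\prime}\) with a second \(\infty\)-morphism \(g:A^{\prime}\rightsquigarrow A^{\prime\prime}\) should correspond to the composite of the associated morphisms of curved \(\mathcal{C}\)-coalgebras,
\[(g\circ f)^{\dagger}\coloneqq g^{\dagger}\circ f^{\dagger}:\mathsf{B}_{\mathcal{C}}A\longrightarrow\mathsf{B}_{\mathcal{C}}A^{\prime}\longrightarrow\mathsf{B}_{\mathcal{C}}A^{\prime\prime}\,,\]
so that, together with Proposition 41, dg \(\Omega\mathcal{C}\)-algebras and \(\infty\)-morphisms should identify with the full subcategory of quasi-cofree objects inside curved \(\mathcal{C}\)-coalgebras.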
**Linear part.** Let \(f:A\rightsquigarrow A^{\prime}\) be an \(\infty\)-morphism of dg \(\Omega\mathcal{C}\)-algebras. Its _linear part_\(f_{\mathrm{dg}}\) is the morphism of dg modules given by the composition Let us denote \(\epsilon:\mathcal{C}\longrightarrow\mathcal{I}\) and \(\mu:\mathcal{I}\longrightarrow\mathcal{C}\) the counit and the coaugmentation of the quasi-planar conilpotent curved cooperad \(\mathcal{C}\), respectively. The linear part of \(f:A\rightsquigarrow A^{\prime}\) is equivalently given by It induces a graded endomorphism \(t\) of the graded \(\mathcal{C}\)-coalgebra \(\mathcal{C}\circ A\) given by \[t:\mathcal{C}\circ A\xrightarrow{\Delta\circ\text{Id}}\mathcal{C}\circ\mathcal{C} \circ A\xrightarrow{\text{Id}\circ\tau}\mathcal{C}\circ A\,\] that is an isomorphism since \(\operatorname{gr}^{\operatorname{rad}}(t)=\operatorname{Id}\). Let us denote \(D\) the coderivation of \(\operatorname{B}_{\mathcal{C}}A\) and let \[\bar{D}\coloneqq tDt^{-1}.\] This is again a coderivation of the graded \(\mathcal{C}\)-coalgebras \(\mathcal{C}\circ A\) such that \(\bar{D}^{2}=(\theta\circ\operatorname{Id})\Delta\). Therefore it defines a dg \(\Omega\mathcal{C}\)-algebra structure on \(A\), and \(t:A\leadsto A\) becomes an \(\infty\)-isotopy from \((\mathcal{C}\circ A,D)\) to \((\mathcal{C}\circ A,\bar{D})\). Moreover, we have \[(\mathcal{C}\circ f_{\operatorname{dg}})t=f^{\dagger}\.\] **Proposition 42**.: _Let \(f:A\leadsto A^{\prime}\) be an \(\infty\)-morphism of dg \(\Omega\mathcal{C}\)-algebras. The morphism of curved \(\mathcal{C}\)-coalgebras \(f^{\dagger}:\operatorname{B}_{\mathcal{C}}A\longrightarrow\operatorname{B}_{ \mathcal{C}}A^{\prime}\) is_ 1. _a weak-equivalence if and only if the linear part_ \(f_{\operatorname{dg}}\) _is a quasi-isomorphism;_ 2. _an isomorphism if and only if the dg part_ \(f_{\operatorname{dg}}\) _is an isomorphism;_ 3. _a fibration if and only if the dg part_ \(f_{\operatorname{dg}}\) _is a degree-wise epimorphisms;_ 4. _a cofibration if and only if the dg part_ \(f_{\operatorname{dg}}\) _is a degree-wise injection._ Proof.: 1. Let us consider the following commutative diagram in the category of dg modules. The two horizontal maps, \(\zeta_{A}\) and \(\zeta_{A^{\prime}}\), are quasi-isomorphisms. Thus, the left vertical map is a quasi-isomorphism if and only if the right vertical map is a quasi-isomorphism. 2. If \(f^{\dagger}\) is an isomorphism, then \(f_{\operatorname{dg}}=\operatorname{gr}_{0}^{\operatorname{rad}}(f^{\dagger})\) is an isomorphism. Conversely, if \(f_{\operatorname{dg}}\) is an isomorphism, then \[\operatorname{gr}_{n}^{\operatorname{rad}}(f^{\dagger})=\operatorname{Id}_{ \operatorname{gr}_{n}(\mathcal{C})}\circ f_{\operatorname{dg}}\] is an isomorphism. Hence, \(f^{\dagger}\) is an isomorphism. 3. Since \(\mathcal{C}\) is coaugmented, one has a functor from dg modules to curved \(\mathcal{C}\)-coalgebras that sends a dg module \(X\) to \(X\) endowed with the trivial curved \(\mathcal{C}\)-coalgebra structure. This structure is given by the zero structural map \(0:X\longrightarrow\overline{\mathcal{C}}\circ X\). This functor sends acyclic cofibrations of dg modules to filtered quasi-isomorphisms that are also degree-wise inclusions. They are in particular acyclic cofibrations of curved \(\mathcal{C}\)-coalgebras. If \(f^{\dagger}\) is a fibration, then it has the right lifting property with respect to all acyclic cofibration, and in particular with respect to such acyclic cofibrations of dg modules. 
Subsequently, \(f_{\operatorname{dg}}\) has the right lifting property with respect to every acyclic cofibration of dg modules. So it is a fibration of dg modules, hence a degree-wise epimorphism. Conversely, if \(f_{\operatorname{dg}}\) is a fibration, then \(f^{\dagger}\) is a fibration as a direct consequence of Lemma 39. 4. Let us consider the same commutative square diagram shown in point (1). If \(f^{\dagger}\) is a cofibration, then the right vertical map \(\Omega_{\mathcal{C}}(f^{\dagger})\) and the top horizontal map of this square are degree-wise injections. Thus the left vertical map \(f_{\operatorname{dg}}\) is also a degree-wise injection. Conversely, let us suppose that \(f_{\operatorname{dg}}\) is a degree-wise injection. Then, it has a left inverse \(g\) in the category of graded \(\Bbbk\)-modules. Let us consider the following endomorphism \(h\) of graded \(\mathcal{C}\)-coalgebras. Its linear part is the identity of \(A\). The same arguments as those used to prove point (2) show that \(h\) is a graded isomorphism. In particular, it is a degree-wise injection. So \(f^{\dagger}\) is a degree-wise injection, hence a cofibration. **Proposition 43**.: _Let \(A\) and \(A^{\prime}\) be two dg \(\Omega\mathcal{C}\)-algebras. There exists a zig-zag of quasi-isomorphisms of dg \(\Omega\mathcal{C}\)-algebras_ \[A\xleftarrow{\;\sim\;}\cdot\xrightarrow{\;\sim\;}A^{\prime}\] _if and only if there exists an \(\infty\)-quasi-isomorphism \(A\rightsquigarrow A^{\prime}\)._ **Definition 62**.: Let \(D_{\beta}\) be the degree \(-1\) coderivation on the graded \(\mathsf{B}\mathcal{P}\)-coalgebra \(\mathsf{B}\mathcal{P}\circ A\) that arises from the structure of an \(\Omega\mathsf{B}\mathcal{P}\)-algebra on the dg module \(A\), in the sense that \[(\mathsf{B}\mathcal{P}\circ A,D_{\beta})=\mathsf{B}_{\mathsf{B}\mathcal{P}}A.\] Moreover, let \(\beta\) be the composition of \(D_{\beta}\) with the projection onto \(A\). One can notice that the restriction of \(\beta\) to \(A\cong\mathcal{I}\circ A\) is the differential \(d_{A}\) and that its restriction to \(\overline{\mathsf{B}\mathcal{P}}\circ A\) is \[\overline{\mathsf{B}\mathcal{P}}\circ A\to sM\circ A\twoheadrightarrow s \mathcal{P}\circ A\xrightarrow{s\gamma}sA\longrightarrow A,\] where the last map just sends \(sa\) to \(a\). The degree \(0\) map \[sM\circ A\xrightarrow{\beta}A\xrightarrow{-h}A\] yields a morphism of graded \(\mathbb{S}\)-modules \(sM\longrightarrow\operatorname{End}(A)\). Therefore there is a morphism of graded operads \(\mathbb{T}(sM)\longrightarrow\operatorname{End}(A)\), which in turn yields a degree \(0\) map of graded \(\Bbbk\)-modules \[\phi:\mathsf{B}\mathcal{P}\circ A=\mathbb{T}(sM)\circ A\longrightarrow A.\] Let \(f\) be the related morphism of graded \(\mathsf{B}\mathcal{P}\)-coalgebras \[f:\mathsf{B}\mathcal{P}\circ A\longrightarrow\mathsf{B}\mathcal{P}\circ \mathsf{B}\mathcal{P}\circ A\xrightarrow{\operatorname{Id}\circ\phi}\mathsf{ B}\mathcal{P}\circ A.\] **Definition 63**.: Let \(\boldsymbol{\chi}\) be the degree \(-1\) morphism from \(\mathbb{T}(sM)\circ A\) to \(A\) whose restriction to \(A\) is zero and whose restriction to \(\overline{\mathbb{T}}(sM)\circ A\) is \[\overline{\mathbb{T}}(sM)\circ A\cong sM\circ\mathbb{T}(sM)\circ A \xrightarrow{\operatorname{Id}\circ\phi}sM\circ A\xrightarrow{\beta}A.\] Notice that \(\phi\) is given by the sum of \(-h\,\boldsymbol{\chi}\) and of the projection onto \(A\). Recall that the conilpotent curved cooperad \(\mathsf{B}\mathcal{P}\) is given by the conilpotent graded cooperad \(\mathbb{T}(sM)\) endowed with the following pre-differentials: 1. 
the pre-differential \(d_{\gamma}\) which is induced by the operad structure of \(\mathcal{P}\); 2. the pre-differential \(d_{P}\) which is induced by the differential of \(\mathcal{P}\); 3. the pre-differential \(d_{u}\), which maps \(s^{2}\mathcal{I}\) to the unit of \(\mathcal{P}\). We will denote the latter two pre-differentials by \(sd_{M}\), as they are defined on the generators of \(\mathsf{B}\mathcal{P}\). We refer to Subsection 2.1 for more details. **Lemma 40**.: _The following diagram in graded \(\Bbbk\)-modules_ _commutes._ Proof.: This follows from a straightforward checking. **Lemma 41**.: _The map \(\beta\ f:\mathbb{T}(sM)\circ A\longrightarrow A\) is equal to \(d_{A}\,\phi+\boldsymbol{\chi}\)._ Proof.: This follows from a straightforward computation. **Definition 64**.: Let \(\mu\) be the degree \(-1\) map from \(\mathsf{B}\mathcal{P}\circ A\) to \(A\) whose restriction to \(A\cong\mathcal{I}\circ A\) is \(d_{A}\) and whose restriction to \(\overline{\mathsf{B}\mathcal{P}}\circ A\) is the sum of the two maps \[\overline{\mathsf{B}\mathcal{P}}\circ A\xrightarrow{\beta}A\xrightarrow{ \pi_{X}}A\quad\text{and}\quad\overline{\mathsf{B}\mathcal{P}}\circ A\xrightarrow{\theta\circ\operatorname{Id}}A\xrightarrow{-h}A.\] Moreover, let \(D_{\mu}\) be the unique degree \(-1\) coderivation on the graded \(\mathsf{B}\mathcal{P}\)-coalgebra \(\mathsf{B}\mathcal{P}\circ A\) whose projection onto \(A\) is \(\mu\). **Lemma 42**.: _The degree \(-1\) composite map \(\beta f-\mu:\mathbb{T}(sM)\circ A\longrightarrow A\) is equal to the sum of the two maps_ \[\overline{\mathbb{T}}(sM)\circ A\cong sM\circ\mathbb{T}(sM)\circ A\xrightarrow{ \operatorname{Id}\circ\phi}sM\circ A\xrightarrow{sd_{M}\circ\operatorname{Id}_{A}}sM\circ A\xrightarrow{\phi}A\,\] \[\overline{\mathbb{T}}(sM)\circ A\cong sM\circ\mathbb{T}(sM)\circ A\xrightarrow{ \operatorname{Id}\circ\phi}sM\circ A\xrightarrow{sM\circ\shuffle(\operatorname{Id},d_{A})}sM\circ A\xrightarrow{\phi}A.\] Proof.: Let us denote \(g=d_{A}\ \phi+\chi-\mu\) and \(g^{\prime}\) the sum of the two maps described in the lemma. We want to show that \(g=g^{\prime}\). Since \(g\) and \(g^{\prime}\) are respectively equal to the compositions \[sM\circ\mathbb{T}(sM)\circ A\xrightarrow{\operatorname{Id}\circ\phi}sM\circ A\xrightarrow{g}A\] \[sM\circ\mathbb{T}(sM)\circ A\xrightarrow{\operatorname{Id}\circ\phi}sM\circ A \xrightarrow{g^{\prime}}A\] it suffices to prove the result on \(sM\circ A\). We can notice that, on \(sM\circ A\) we have \[g=d_{A}\ \phi+\chi-\mu=-d_{A}h\beta+\beta-\pi_{X}\ \beta+h(\theta\circ \operatorname{Id})=hd_{A}\beta+h(\theta\circ\operatorname{Id})\.\] Still on \(sM\circ A\), this gives \[d_{A}\beta=\beta D_{\beta}-\beta(d_{sM}\circ A)-\beta(sM\circ\shuffle( \operatorname{Id},d_{A}))=-\theta\circ\operatorname{Id}_{A}-\beta(d_{sM}\circ A )-\beta(sM\circ\shuffle(\operatorname{Id},d_{A})).\] Therefore \[g=-h\beta(d_{sM}\circ A)-h\beta(sM\circ\shuffle(\operatorname{Id},d_{A}))=g^ {\prime}\.\] **Proposition 44**.: _The following equality between degree \(-1\) maps from \(\mathsf{B}\mathcal{P}\circ A\) to \(A\)_ \[\phi D_{\mu}=\beta f\] _holds. Thus \(fD_{\mu}=D_{\beta}f\)._ Proof.: Let us prove the result by induction on the height of the trees that make up \(\mathsf{B}\mathcal{P}=\mathbb{T}(sM)\). First, on \(A\cong\mathcal{I}\circ A\), one has \[\phi D_{\mu}=d_{A}=\beta f.\] Let us assume that the restrictions of \(\phi D_{\mu}\) and \(\beta f\) are equal on \(\mathbb{T}_{\leq n}(sM)\circ A\) for some natural integer \(n\). 
On larger trees \(\overline{\mathbb{T}}_{\leq n+1}(sM)\circ A\simeq sM\circ\mathbb{T}_{\leq n}(sM)\circ A\), \(\phi D_{\mu}\) is the sum of the maps \[sM\circ\mathbb{T}_{\leq n}(sM)\circ A\xrightarrow{\operatorname{\mathit{sd }}_{\operatorname{Id\omega}}}sM\circ\mathbb{T}_{\leq n}(sM)\circ X \xrightarrow{\operatorname{\mathit{sd}}}A\, \tag{1}\] \[sM\circ\mathbb{T}_{\leq n}(sM)\circ X\xrightarrow{\operatorname{\mathit{ld \omega}}(\operatorname{\psi},\operatorname{\mathcal{WD}}_{n})}sM\circ A \xrightarrow{\phi}A\, \tag{2}\] together with the contribution (4) to the coderivation of \(\mathcal{BP}\) given by the composition of \(\mathcal{P}\) at the root level \[\overline{\mathbb{T}}_{\leq n+1}(sM)\circ A\] \[sM\circ(A\oplus sM\circ\mathbb{T}_{\leq n-1}(sM)\circ A)\] \[sM\circ(A\oplus sM\circ A)\] \[\xrightarrow{\operatorname{\mathit{ld\omega}}(\operatorname{\mathit{ld \omega}}sM\circ A)}\] As a consequence of Lemma 40, this last map (4) is equal to \[sM\circ\mathbb{T}_{\leq n}(sM)\circ X\xrightarrow{\operatorname{Id}\cup(\Psi,-X) }sM\circ A\xrightarrow{\phi}A\.\] By the induction hypothesis (\(\beta f=\phi D_{\mu}\) on \(\mathbb{T}_{\leq n}(sM)\circ A\)) and by Lemma 41, the sum of the contributions (3) and (4) is \[sM\circ\mathbb{T}_{\leq n}(sM)\circ A\xrightarrow{\operatorname{Id}\cup(\phi,d _{\mu}\phi)}sM\circ A\xrightarrow{\phi}A\,\] which rewrites as \[sM\circ\mathbb{T}_{\leq n}(sM)\circ A\xrightarrow{\operatorname{Id}\phi}sM \circ A\xrightarrow{\operatorname{Id}\cup(\operatorname{Id},d_{\mu})}sM\circ A \xrightarrow{\phi}A\.\] We conclude by Lemma 42. **Proposition 45**.: _The coderivation \(D_{\mu}\) endows \(\mathsf{B}\mathcal{P}\circ A\) with a curved \(\mathcal{B}\mathcal{P}\)-coalgebra structure._ Proof.: Since \(fD_{\mu}=D_{\beta}f\) and since \(f\) is an isomorphism (by a standard filtration argument): \[D_{\mu}=f^{-1}D_{\beta}f.\] Thus, the coderivation \(D_{\mu}\) makes \(\mathsf{B}\mathcal{P}\circ A\) a curved \(\mathsf{B}\mathcal{P}\)-coalgebra because so does the coderivation \(D_{\beta}\). **Proposition 46**.: _The sub-graded \(\mathcal{B}\mathcal{P}\)-coalgebra \(\mathsf{B}\mathcal{P}\circ X\) of \(\mathsf{B}\mathcal{P}\circ A\) is stable through \(D_{\mu}\)._ Proof.: The sub-graded \(\mathsf{B}\mathcal{P}\)-coalgebra \(\mathsf{B}\mathcal{P}\circ X\) is actually the quotient/kernel of the idempotent endomorphism \((\operatorname{Id}\circ\pi_{X})\) of \(\mathsf{B}\mathcal{P}\circ A\). One can notice that the projection onto \(A\) of \((\operatorname{Id}\circ\pi_{X})\)\(D_{\mu}\)\((\operatorname{Id}\circ\pi_{X})\) and \(D_{\mu}\)\((\operatorname{Id}\circ\pi_{X})\) are equal: \[(\operatorname{Id}\circ\pi_{X})\ \mu\ \pi_{X}=(\operatorname{Id}\circ\pi_{X})\ \mu.\] Thus \[(\operatorname{Id}\circ\pi_{X})\ D_{\mu}\ (\operatorname{Id}\circ\pi_{X})=D_{ \mu}\ (\operatorname{Id}\circ\pi_{X})\] which proves the result. To conclude, we have a composition of morphisms of curved \(\mathsf{B}\mathcal{P}\)-coalgebras \[(\mathsf{B}\mathcal{P}\circ X,D_{\mu})\hookrightarrow(\mathsf{B}\mathcal{P} \circ A,D_{\mu})\xrightarrow{f}(\mathsf{B}\mathcal{P}\circ A,D_{\beta}).\] #### 5.4.2. The cooperad version of the homotopy transfer theorem for algebras **Theorem 8**.: _Let \(i:X\longrightarrow A\) be an acyclic cofibration of dg modules and let \(\gamma_{A}:\Omega\mathcal{C}\circ A\longrightarrow A\) be a dg \(\Omega\mathcal{C}\)-algebra structure on \(A\). 
There exists another dg \(\Omega\mathcal{C}\)-algebra structure_ \[\mu_{A}:\Omega\mathcal{C}\circ A\longrightarrow A\] _which restricts to \(X\), together with an \(\infty\)-isotopy_ \[(A,\mu_{A})\rightsquigarrow(A,\gamma_{A})\] _of dg \(\Omega\mathcal{C}\)-algebras._ Proof.: Since \(i\) is an acyclic cofibration of dg modules, it has a left inverse \(p\) and one can decompose \(A\) as \(X\oplus K\) where \(K\) is the kernel of \(p\). The paragraph just above gives us a diagram of curved \(\mathsf{B}\Omega\mathcal{C}\)-coalgebras \[(\mathsf{B}\Omega\mathcal{C}\circ X,D_{\mu})\hookrightarrow(\mathsf{B} \Omega\mathcal{C}\circ A,D_{\mu})\xrightarrow{f}(\mathsf{B}\Omega\mathcal{C}\circ A,D_{\beta})\.\] Applying the right adjoint functor from curved \(\mathsf{B}\Omega\mathcal{C}\)-coalgebras to curved \(\mathcal{C}\)-coalgebras that results from the unit map \(\mathcal{C}\longrightarrow\mathsf{B}\Omega\mathcal{C}\), we get a diagram of curved \(\mathcal{C}\)-coalgebras \[(\mathcal{C}\circ X,\tilde{D}_{\mu})\hookrightarrow(\mathcal{C}\circ A,\tilde {D}_{\mu})\xrightarrow{\tilde{f}}(\mathcal{C}\circ A,\tilde{D}_{\beta})= \mathsf{B}_{\mathcal{C}}(A,\gamma_{A})\.\] In that context, \(\tilde{D}_{\mu}\) is the coderivation on \(\mathcal{C}\circ A\) that induces the expected dg \(\Omega\mathcal{C}\)-algebra structure \(\mu_{A}\) on \(A\) and \(\tilde{f}\) is the expected \(\infty\)-isotopy. #### 5.4.3. The homotopy transfer theorem for algebras **Theorem 9**.: _Let \(\mathcal{Q}\) be a cofibrant dg operad, let \(i:X\longrightarrow A\) be an acyclic cofibration of dg modules and let \(\gamma_{A}:\mathcal{Q}\circ A\longrightarrow A\) be a dg \(\mathcal{Q}\)-algebra structure on \(A\). There exists a dg \(\mathcal{Q}\)-algebra structure \(\mu_{X}\) on \(X\), together with a zig-zag of quasi-isomorphisms_ \[(A,\gamma_{A})\xleftarrow{\;\sim\;}\cdot\xrightarrow{\;\sim\;}\cdot\xleftarrow{\;\sim\;}(X,\mu_{X})\] _of dg \(\mathcal{Q}\)-algebras. Furthermore, the maps in this zig-zag are homotopic to \(i\) in the model category of dg modules._ Proof.: Taking \(\mathcal{C}\) to be the quasi-planar conilpotent curved cooperad \(\mathrm{B}(\mathcal{Q}\otimes\mathcal{E})\), Theorem 8 yields the dg \(\Omega\mathcal{C}\)-algebra structure \(\mu_{X}\) on \(X\) together with a zig-zag of quasi-isomorphisms of dg \(\Omega\mathcal{C}\)-algebras \[(X,\mu_{X})\xrightarrow{i}(A,\mu_{A})\longleftarrow\Omega_{\mathcal{C}}\mathcal{B }_{\mathcal{C}}(A,\mu_{A})\xrightarrow{f}(A,\gamma_{A})\.\] Moreover, the acyclic fibration of dg operads \(\Omega\mathcal{C}\longrightarrow\mathcal{Q}\) has a section since \(\mathcal{Q}\) is cofibrant. Applying the induced right adjoint functor from dg \(\Omega\mathcal{C}\)-algebras to dg \(\mathcal{Q}\)-algebras yields the expected zig-zag in the category of dg \(\mathcal{Q}\)-algebras. Remark 50.: This last result also follows from model-categorical arguments, as developed in [10]. ### Further localisations and divided power operations Let \(\mathcal{Q}\) be an admissible dg operad and let \(\mathcal{C}\) be a quasi-planar conilpotent curved cooperad. Let us consider a morphism of dg operads \(f:\Omega\mathcal{C}\longrightarrow\mathcal{Q}\). We have two Quillen adjunctions. Let us denote by \(\Omega_{f}\) the composite left adjoint and by \(\mathrm{B}_{f}\) the composite right adjoint. 
**Proposition 47**.: _There exists a combinatorial model structure on curved \(\mathcal{C}\)-coalgebras called the \(f\)-model structure, transferred from that of dg \(\mathcal{Q}\)-algebras, determined by the following sets of morphisms_ 1. _the set of_ \(f\)_-cofibrations is given by morphisms_ \(g\) _such that_ \(\Omega_{f}(g)\) _is a cofibration,_ 2. _the set of_ \(f\)_-weak-equivalences is given by morphisms_ \(g\) _such that_ \(\Omega_{f}(g)\) _is a weak-equivalence,_ 3. _the set of_ \(f\)_-fibrations is determined by the right lifting property against all acyclic cofibrations._ _Moreover, this is a left Bousfield localisation of the canonical model structure transferred from dg \(\Omega\mathcal{C}\)-algebras, meaning that the identity functor of curved \(\mathcal{C}\)-coalgebras, endowed at the source with the canonical model structure and at the target with the \(f\)-model structure, is a left Quillen functor._ Proof.: The \(f\)-cofibrations and the \(f\)-weak-equivalences respectively contain the cofibrations and the weak-equivalences of the canonical model structure transferred along the bar-cobar adjunction. Hence, every object is cofibrant and a natural cylinder object is provided by Proposition 39. This proves the existence of the transferred \(f\)-model structure (see Appendix A.2). To prove that this is a left Bousfield localisation of the model structure transferred from dg \(\Omega\mathcal{C}\)-algebras, it suffices to show that \(f\)-cofibrations are in particular degree-wise injections, which follows from the same arguments as those used to prove Proposition 36. **Localizing at quasi-isomorphisms.** Let \(\mathcal{C}\) be a quasi-planar conilpotent _differential graded_ cooperad. The cobar construction \(\Omega\mathcal{C}\) is augmented since \(\mathcal{C}\) has zero curvature. Let us denote \(\nu:\Omega\mathcal{C}\longrightarrow\mathcal{I}\) the canonical morphism of dg operads given by the augmentation. We have the following adjunctions where the adjunction \(\nu_{!}\dashv\nu^{*}\) is in fact given by the indecomposables functor \(\mathrm{Indec}\) (which is \(\nu_{!}\)) and by the trivial structure functor \(\mathrm{Triv}\) (which is \(\nu^{*}\)). Notice that since \(\mathcal{C}\) has zero curvature, curved \(\mathcal{C}\)-coalgebras in pdg modules are precisely given by dg \(\mathcal{C}\)-coalgebras. **Proposition 48**.: _The set of \(\nu\)-weak-equivalences is precisely the set of quasi-isomorphisms of dg \(\mathcal{C}\)-coalgebras._ Proof.: The composition \(\operatorname{Indec}\,\Omega_{\mathcal{C}}\) is isomorphic to the forgetful functor from dg \(\mathcal{C}\)-coalgebras to dg modules. **Corollary 7**.: _Let \(\mathcal{C}\) be a quasi-planar conilpotent dg cooperad. The set of weak-equivalences in the canonical model structure on dg \(\mathcal{C}\)-coalgebras is contained in the set of quasi-isomorphisms._ Proof.: It suffices to apply Proposition 47 to the morphism of dg operads \(\nu:\Omega\mathcal{C}\longrightarrow\mathcal{I}\), combining it with Proposition 48. **Divided power operations in the homotopical setting.** Let \(\mathcal{C}\) be a quasi-planar conilpotent dg cooperad. By Proposition 47, the category of dg \(\mathcal{C}\)-coalgebras admits a model category structure where 1. the set of cofibrations is given by degree-wise injections; 2. the set of weak-equivalences is given by quasi-isomorphisms; 3. the set of fibrations is given by maps with the right lifting property with respect to acyclic cofibrations. 
Let \((W,\Delta_{W},d_{W})\) be a dg \(\mathcal{C}\)-coalgebra. The structural map \[\Delta_{W}:W\longrightarrow\bigoplus_{n\geq 0}\mathcal{C}(n)\otimes_{\mathbb{S}_ {n}}W^{\otimes n}\,\] lands in the coinvariants on the right-hand side, therefore divided power operations should appear. Nevertheless, since \(\mathcal{C}\) is quasi-planar, there is a natural isomorphism \[\bigoplus_{n\geq 0}\mathcal{C}(n)\otimes_{\mathbb{S}_{n}}W^{\otimes n} \cong\bigoplus_{n\geq 0}\left(\mathcal{C}(n)\otimes W^{\otimes n}\right)^{ \mathbb{S}_{n}}\,\] of dg modules induced by the norm map (Proposition 2). Therefore no divided power operations appear at the algebraic level. These divided power operations do not disappear at the \(\infty\)-categorical level. Indeed, \(\mathcal{C}(n)\) is a quasi-free dg \(\Bbbk[\mathbb{S}_{n}]\)-module which is furthermore _projective_ by Proposition 12. Therefore we have equivalences \[\bigoplus_{n\geq 0}\mathcal{C}(n)\otimes_{h\mathbb{S}_{n}}W^{\otimes n} \simeq\bigoplus_{n\geq 0}\mathcal{C}(n)\otimes_{\mathbb{S}_{n}}W^{ \otimes n}\cong\bigoplus_{n\geq 0}\left(\mathcal{C}(n)\otimes W^{ \otimes n}\right)^{\mathbb{S}_{n}}\simeq\bigoplus_{n\geq 0}\left(\mathcal{C}(n)\otimes W^{ \otimes n}\right)^{h\mathbb{S}_{n}}\,\] where on the far left-hand side we consider _homotopy coinvariants_ and on the far right-hand side we consider _homotopy invariants_. This means that the \(\infty\)-category of dg \(\mathcal{C}\)-coalgebras localized at quasi-isomorphisms behaves like an \(\infty\)-category of _divided power conilpotent coalgebras_. ## 6. Model structure on complete algebras over a cooperad The goal of this section is to study the homotopical properties of the complete bar-cobar adjunction between dg \(\Omega\mathcal{C}\)-coalgebras and complete curved \(\mathcal{C}\)-algebras, in the case where \(\mathcal{C}\) is a quasi-planar conilpotent curved cooperad. The dg operad \(\Omega\mathcal{C}\) is cofibrant, and therefore also coadmissible by Proposition 29. This means that the category of dg \(\Omega\mathcal{C}\)-coalgebras admits a model category structure where weak-equivalences are given by quasi-isomorphisms and cofibrations by degree-wise injections. Let us consider the complete bar-cobar adjunction relative to \(\iota:\mathcal{C}\longrightarrow\Omega\mathcal{C}\), which will be denoted by \(\widehat{\Omega}_{\mathcal{C}},\widehat{\mathrm{B}}_{\mathcal{C}}\) from now on. Our first goal is to transfer the model structure on dg \(\Omega\mathcal{C}\)-coalgebras along this adjunction to the category of qp-complete curved \(\mathcal{C}\)-algebras. **Theorem 10**.: _Let \(\mathcal{C}\) be a quasi-planar curved conilpotent cooperad. The category of qp-complete curved \(\mathcal{C}\)-algebras has the structure of a combinatorial model category given by the following sets of maps:_ 1. _the set of weak-equivalences is given by morphisms_ \(f\) _such that_ \(\widehat{\mathrm{B}}_{\mathcal{C}}(f)\) _is a quasi-isomorphism,_ 2. _the set of fibrations is given by morphisms_ \(f\) _such that_ \(\widehat{\mathrm{B}}_{\mathcal{C}}(f)\) _is a fibration; these are the degree-wise epimorphisms,_ 3. _the set of cofibrations is given by morphisms with the left-lifting property with respect to acyclic fibrations._ Remark 51.: Using the standard transfer theorem for model category structures only gives that fibrations are morphisms which are sent by \(\widehat{\mathrm{B}}_{\mathcal{C}}\) to fibrations. The theorem contains an additional characterization of fibrations of complete curved \(\mathcal{C}\)-algebras as degree-wise surjective maps. 
### Outline of the transfer of model structures The proof is somewhat dual to the proof of Theorem 6, except that we deal not with all curved \(\mathcal{C}\)-algebras but only with those that are qp-complete. However, we extend our definitions of fibrations and weak-equivalences to morphisms between any pair of curved \(\mathcal{C}\)-algebras. **Definition 65** (Fibrations).: A morphism \(f\) of curved \(\mathcal{C}\)-algebras is a _fibration_ if \(\widehat{\mathrm{B}}_{\mathcal{C}}(f)\) is a fibration of dg \(\Omega\mathcal{C}\)-coalgebras. **Definition 66** (Weak-equivalences).: A morphism \(f\) of curved \(\mathcal{C}\)-algebras is a _weak-equivalence_ if \(\widehat{\mathrm{B}}_{\mathcal{C}}(f)\) is a quasi-isomorphism of dg \(\Omega\mathcal{C}\)-coalgebras. **Definition 67** (Cofibrations).: A morphism of qp-complete curved \(\mathcal{C}\)-algebras is a _cofibration_ if it has the left-lifting property against all acyclic fibrations between qp-complete curved \(\mathcal{C}\)-algebras. Both categories \(\mathsf{curv}\)\(\mathcal{C}\)-\(\mathsf{alg}^{\mathsf{qp-comp}}\) and dg \(\Omega\mathcal{C}\)-\(\mathsf{cog}\) are presentable, therefore by Appendix A.2 it suffices to exhibit a natural fibrant resolution and a natural path object for qp-complete curved algebras to prove the existence of the transferred model structure. We will show that fibrations of Definition 65 are given by degree-wise surjective maps in Proposition 52 and we will construct a natural path object in Proposition 56. For the rest of this section, let us fix a quasi-planar conilpotent curved cooperad \(\mathcal{C}\) whose quasi-planar ladder is indexed by some small ordinal \(\alpha\). ### Elementary fibrations Elementary fibrations are a particularly well-behaved set of fibrations of (qp-complete) curved \(\mathcal{C}\)-algebras: the kernel of any elementary fibration is a dg module. **Definition 68** (Elementary fibrations).: A morphism \(f:\Lambda\to\Lambda^{\prime}\) of curved \(\mathcal{C}\)-algebras is an _elementary fibration_ if it is degree-wise surjective and if the map \(\overline{\gamma}_{\Lambda}:\Lambda^{\overline{\mathcal{C}}}\longrightarrow\Lambda\) factors through \((\Lambda^{\prime})^{\overline{\mathcal{C}}}\), that is, if there exists a dotted arrow such that the diagram commutes, where \(\gamma_{\Lambda}\) denotes the structural map of \(\Lambda\). Remark 52.: The map \(\Lambda^{\overline{\mathcal{C}}}\longrightarrow(\Lambda^{\prime})^{\overline{ \mathcal{C}}}\) is a degree-wise epimorphism since it admits a section in the category of graded \(\Bbbk\)-modules. **Lemma 43**.: _Let \(f:\Lambda\twoheadrightarrow\Lambda^{\prime}\) be an elementary fibration of curved \(\mathcal{C}\)-algebras. Then \(\operatorname{Ker}(f)\) is a dg module._ Proof.: Let \(i_{f}:\operatorname{Ker}(f)\longrightarrow\Lambda\) denote the canonical inclusion. We have \[i_{f}\,d^{2}=d^{2}i_{f}=\overline{\gamma}_{\Lambda}\,\left(\operatorname{Id} \right)^{\theta}i_{f}=\overline{\gamma}_{\Lambda}\,\left(i_{f}\right)^{ \operatorname{Id}}\,\left(\operatorname{Id}\right)^{\theta}\,,\] and \(\overline{\gamma}_{\Lambda}\,(i_{f})^{\operatorname{Id}}=0\) since \(f\) is an elementary fibration. We conclude using that \(i_{f}\) is a monomorphism. **Proposition 49**.: _Let \(f:\Lambda\twoheadrightarrow\Lambda^{\prime}\) be an elementary fibration of curved \(\mathcal{C}\)-algebras. It is in particular a fibration._ Proof.: This amounts to proving that \(\widehat{\operatorname{B}}_{\mathcal{C}}(f)\) is a fibration. Let \(U\) be the kernel of \(f\). 
Let us decompose the underlying graded k-module of \(A\) as \(\Lambda\cong\Lambda\oplus U\). The pre-differential of \(\Lambda\) rewrites as the sum of the differential \(d_{U}\) on \(U\), the pre-differential \(d_{N}\) on \(\Lambda^{\prime}\), and a degree \(-1\) map \(\xi:\Lambda^{\prime}\longrightarrow U\). Let us consider the morphism of graded k-modules \(\pi_{U}:\widehat{\operatorname{B}}_{\mathcal{C}}\Lambda\longrightarrow \Lambda\longrightarrow U\). It induces the morphism of dg modules \[p:D^{0}\otimes\widehat{\operatorname{B}}_{\mathcal{C}}\Lambda\longrightarrow U\] whose restriction to \(S^{0}\otimes\widehat{\operatorname{B}}_{\mathcal{C}}\Lambda\) is \(\operatorname{Id}\otimes\pi_{U}\) and whose restriction to \(S^{-1}\otimes\widehat{\operatorname{B}}_{\mathcal{C}}\Lambda\) is \(-\operatorname{Id}\otimes\partial(\pi_{U})\). The fact that \(f\) is an elementary fibration implies that the restriction of \(p\) to \(S^{-1}\otimes\widehat{\operatorname{B}}_{\mathcal{C}}\Lambda\) factors through the projection \(S^{-1}\otimes\widehat{\operatorname{B}}_{\mathcal{C}}\Lambda\longrightarrow S ^{-1}\otimes\widehat{\operatorname{B}}_{\mathcal{C}}\Lambda^{\prime}\). One thus gets a commutative square of dg modules and thus a commutative square of dg \(\Omega\mathcal{C}\)-coalgebras This square is a pullback square since the underlying square of graded \(\Omega\mathcal{C}\)-coalgebras is a pullback square. Moreover, the map \([D^{0},U]\longrightarrow[S^{-1},U]\) is a fibration of dg modules, thus \(L^{P}[D^{0},U]\longrightarrow L^{P}[S^{-1},U]\) is a fibration of dg \(\Omega\mathcal{C}\)-coalgebras. This implies that the pullback map \(\widehat{\operatorname{B}}_{\mathcal{C}}\Lambda\longrightarrow\widehat{ \operatorname{B}}_{\mathcal{C}}\Lambda^{\prime}\) is a fibration. **Lemma 44**.: _Let us consider a commutative diagram of curved \(\mathcal{C}\)-algebras_ _where_ 1. \(p\) _and_ \(q\) _are elementary fibrations;_ 2. _the map of dg modules induced by_ \(g\)__ \[\operatorname{Ker}(g):\operatorname{Ker}(p)\longrightarrow\operatorname{Ker} (q)\] _is a quasi-isomorphism._ _Then \(g\) is a weak-equivalence._ Proof.: Let us decompose the underlying graded \(\Bbbk\)-module of \(\bigwedge\) as \(\bigwedge\cong U\oplus Z\). We can then decompose the underlying graded \(\Bbbk\)-module of \(\bigwedge^{\prime}\) as \(\bigwedge^{\prime}\cong U^{\prime}\oplus Z\) in such a way so that the restriction of \(g\) to \(U\) targets \(U^{\prime}\). The square diagrams built in the proof of Proposition 32, fit in the following commutative cube diagram The left and the right faces are pullback (homotopy) squares. The two front horizontal maps and the bottom back horizontal map are quasi-isomorphisms. Thus the homotopy pullback map \(\widehat{\mathbb{B}}_{C}g\) is also a quasi-isomorphism. **Proposition 50**.: _Let us consider a commutative diagram of curved \(\mathcal{C}\)-algebras_ _where_ 1. \(f\) _is an equivalence,_ 2. \(i\) _and_ \(j\) _are elementary fibrations,_ 3. _the map of dg modules induced by_ \(g\)__ \[\operatorname{Ker}(g):\operatorname{Ker}(p)\longrightarrow\operatorname{Ker} (q)\] _is a quasi-isomorphism._ _Then \(g\) is a weak-equivalence._ Proof.: Let us consider the following pullback square in the category of curved \(\mathcal{C}\)-algebras. It yields a pullback square of dg \(\Omega\mathcal{C}\)-coalgebras which is also an homotopy pullback square. Thus, the map \(\widehat{\mathsf{B}}_{\mathcal{C}}U\longrightarrow\widehat{\mathsf{B}}_{ \mathcal{C}}\Lambda^{\prime}\) is a quasi-isomorphism. 
Moreover, the map \(\Lambda\longrightarrow U\) is a weak-equivalence by Lemma 44. ### Coladders We introduce coladders of qp-complete curved \(\mathcal{C}\)-algebras, which lead to the notion of a cofiltered quasi-isomorphism. We will show that these cofiltered quasi-isomorphisms form a subset of the weak-equivalences of qp-complete curved \(\mathcal{C}\)-algebras. The key example of such coladders is the quasi-planar coladder induced by the canonical quasi-planar filtration of \(\mathcal{C}\). **Definition 69** (\(\beta\)-coladder).: Let \(\beta\) be a small ordinal. A \(\beta\)-_indexed curved \(\mathcal{C}\)-coladder_ is a functor \[\Lambda:\beta^{\mathrm{op}}\longrightarrow\mathsf{curv}\ \mathcal{C}\text{-alg}\] that sends every limit ordinal \(k\in\beta\) to \[\Lambda(k)=\lim_{i<k}\Lambda(i)\,\] with \(\Lambda(-1)=0\), and such that every map \[\Lambda(i)\longrightarrow\Lambda(i-1),\quad i\in\beta\] is an _elementary fibration_. A \(\mathcal{C}\)-coladder \(\Lambda\) is called _qp-complete_ if \(\Lambda(i)\) is qp-complete for all \(i<\beta\). We denote by \[\Lambda(\beta)\coloneqq\lim_{i\in\beta}\ \Lambda(i)\,\] the value of the limit of this \(\beta\)-coladder. It is qp-complete whenever the coladder is. Remark 53.: The property about limit ordinals is equivalent to the fact that the functor \[(1+\beta)^{\mathrm{op}} \longrightarrow\mathsf{curv}\ \mathcal{C}\text{-alg}\] \[0 \mapsto 0\] \[i+1<\omega \mapsto\Lambda(i)\] \[\omega\leq i<\beta \mapsto\Lambda(i)\] is continuous. **Definition 70** (Associated graded of a coladder).: Given a \(\beta\)-coladder \(\Lambda\), we define its _associated graded_ as \[\mathrm{gr}^{i}\Lambda\coloneqq\operatorname{Ker}\big(\Lambda(i)\longrightarrow\lim_{j<i} \Lambda(j)\big)\.\] Notice that the pre-differential squares to zero on this kernel \(\mathrm{gr}^{i}\Lambda\), therefore it is a dg module. The following proposition will allow us to construct \(\beta\)-coladders in a general setting. **Proposition 51**.: _Let \(\beta\) be a small ordinal, and let_ \[\mathcal{D}^{(0)}\longrightarrow\mathcal{D}^{(1)}\longrightarrow\cdots \longrightarrow\mathcal{D}^{(i)}\longrightarrow\cdots\] _be a \(\beta\)-indexed cooperad ladder, where we denote \(\mathcal{D}\coloneqq\mathcal{D}^{(\beta)}\). For every \(i\in\beta\), let \(F^{i}_{\mathcal{D}}(-)\) be the idempotent monad on curved \(\mathcal{D}\)-algebras related to the reflexive full subcategory made up of curved \(\mathcal{D}^{(i)}\)-algebras._ _Let \(\Lambda\) be a curved \(\mathcal{D}\)-algebra which is complete with respect to the cooperad ladder, that is, such that_ \[\Lambda\cong\lim_{i\in\beta}F^{i}_{\mathcal{D}}\Lambda\.\] _Then the diagram_ \[\cdots\longrightarrow F^{i}_{\mathcal{D}}\Lambda\longrightarrow\cdots \longrightarrow F^{1}_{\mathcal{D}}\Lambda\longrightarrow F^{0}_{\mathcal{D}}\Lambda\] _is a \(\beta\)-indexed coladder of curved \(\mathcal{D}\)-algebras, which are all complete with respect to the cooperad ladder._ Proof.: Let \(\Lambda\) be a curved \(\mathcal{D}\)-algebra that is complete with respect to the cooperad ladder \((\mathcal{D}^{(i)})_{i\in\beta}\). Since \(\Lambda\) is complete with respect to the cooperad ladder, the map \[F^{k}_{\mathcal{D}}\Lambda\longrightarrow\lim_{i<k}F^{i}_{\mathcal{D}}\Lambda\] is an isomorphism for every limit ordinal \(k\in\beta+1\). It remains to show that for every \(i<i+1\in\beta\), the transition map \(F^{i+1}_{\mathcal{D}}\Lambda\longrightarrow F^{i}_{\mathcal{D}}\Lambda\) is an elementary fibration. Let us denote by \(K\) its kernel. 
This map fits in the following pushout diagram of graded \(\Bbbk\)-modules. In particular, it is a degree-wise epimorphism since the map \((F^{i+1}_{\mathcal{D}}\Lambda)^{\mathcal{D}^{(i+1)}_{\mathrm{pl}}}\longrightarrow(F^{i+1}_{\mathcal{D}}\Lambda)^{\mathcal{D}^{(i)}_{\mathrm{pl}}}\) is a degree-wise epimorphism. Thus proving that it is an elementary fibration amounts to proving that the composite map \[\shuffle(F^{i+1}_{\mathcal{D}}\Lambda,K)^{\overline{\mathcal{D}^{(i+1)}_{\mathrm{pl}}}}\longrightarrow(F^{i+1}_{\mathcal{D}}\Lambda)^{\overline{\mathcal{D}^{(i+1)}_{\mathrm{pl}}}}\longrightarrow F^{i+1}_{\mathcal{D}}\Lambda\] is zero, since the kernel of the first map is \((F^{i}_{\mathcal{D}}\Lambda)^{\overline{\mathcal{D}^{(i+1)}_{\mathrm{pl}}}}\). Moreover, the kernel \(K\) is the image of the map \[(F^{i+1}_{\mathcal{D}}\Lambda)^{\mathcal{D}^{(i+1)}_{\mathrm{pl}}/\mathcal{D}^{(i)}_{\mathrm{pl}}}\hookrightarrow(F^{i+1}_{\mathcal{D}}\Lambda)^{\mathcal{D}^{(i+1)}_{\mathrm{pl}}}\longrightarrow F^{i+1}_{\mathcal{D}}\Lambda.\] Thus, it suffices to prove that the composite map \[\shuffle\big(F^{i+1}_{\mathcal{D}}\Lambda,(F^{i+1}_{\mathcal{D}}\Lambda)^{\mathcal{D}^{(i+1)}_{\mathrm{pl}}/\mathcal{D}^{(i)}_{\mathrm{pl}}}\big)^{\overline{\mathcal{D}^{(i+1)}_{\mathrm{pl}}}}\longrightarrow\big((F^{i+1}_{\mathcal{D}}\Lambda)^{\mathcal{D}^{(i+1)}_{\mathrm{pl}}}\big)^{\overline{\mathcal{D}^{(i+1)}_{\mathrm{pl}}}}\longrightarrow F^{i+1}_{\mathcal{D}}\Lambda\] is zero. Since degree-wise injections are preserved by the tensor products and by pullbacks, this map is a degree-wise injection. Thus it remains to show that for every \(p,q,j\), with \(p\geq 1\) and \(1\leq j\leq p\), the composite map \[F^{i+1}_{\mathcal{D}}\Lambda\longrightarrow\overline{\mathcal{D}^{(i+1)}_{\mathrm{pl}}}(p)\otimes(F^{i+1}_{\mathcal{D}}\Lambda)^{\otimes p}\longrightarrow\overline{\mathcal{D}^{(i+1)}_{\mathrm{pl}}}(p)\otimes(F^{i+1}_{\mathcal{D}}\Lambda)^{\otimes j-1}\otimes\overline{\mathcal{D}^{(i+1)}_{\mathrm{pl}}}(q)\otimes(F^{i+1}_{\mathcal{D}}\Lambda)^{\otimes q}\otimes(F^{i+1}_{\mathcal{D}}\Lambda)^{\otimes p-j}\] factors through the sub-object \[\overline{\mathcal{D}^{(i+1)}_{\mathrm{pl}}}(p)\otimes(F^{i+1}_{\mathcal{D}}\Lambda)^{\otimes j-1}\otimes\overline{\mathcal{D}^{(i)}_{\mathrm{pl}}}(q)\otimes(F^{i+1}_{\mathcal{D}}\Lambda)^{\otimes q}\otimes(F^{i+1}_{\mathcal{D}}\Lambda)^{\otimes p-j}\.\] Using coassociativity, such a map rewrites as follows \[F^{i+1}_{\mathcal{D}}\Lambda\longrightarrow\overline{\mathcal{D}^{(i+1)}_{\mathrm{pl}}}(p+q-1)\otimes(F^{i+1}_{\mathcal{D}}\Lambda)^{\otimes p+q-1}\longrightarrow\big(\overline{\mathcal{D}^{(i+1)}_{\mathrm{pl}}}(p)\otimes\overline{\mathcal{D}^{(i+1)}_{\mathrm{pl}}}(q)\big)\otimes(F^{i+1}_{\mathcal{D}}\Lambda)^{\otimes p+q-1}\.\] Since the sequence \((\mathcal{D}^{(i)})_{i\in\beta}\) is a cooperad ladder, the map \[\Delta_{j}:\overline{\mathcal{D}^{(i+1)}_{\mathrm{pl}}}(p+q-1)\longrightarrow\overline{\mathcal{D}^{(i+1)}_{\mathrm{pl}}}(p)\otimes\overline{\mathcal{D}^{(i+1)}_{\mathrm{pl}}}(q)\] factors through \(\overline{\mathcal{D}^{(i)}_{\mathrm{pl}}}(p)\otimes\overline{\mathcal{D}^{(i)}_{\mathrm{pl}}}(q)\), which proves the result. **Corollary 8** (Quasi-planar coladder).: _Let \(\mathcal{C}\) be a quasi-planar conilpotent curved cooperad. Recall from Subsection 2.9 that since \(\mathcal{C}\) is quasi-planar, it admits a canonical quasi-planar \(\omega\)-ladder_ \[F_{0}^{\mathrm{qp}}\mathcal{C}\longrightarrow F_{1}^{\mathrm{qp}}\mathcal{C}\longrightarrow\cdots\longrightarrow F_{n}^{\mathrm{qp}}\mathcal{C}\longrightarrow\cdots\] _whose colimit is \(\mathcal{C}\)._
_For every \(i\in\omega\), let \(F^{i}_{\mathrm{qp}}(-)\) be the idempotent monad of curved \(\mathcal{C}\)-algebras that reflects onto the full subcategory of curved \(F_{i}^{\mathrm{qp}}\mathcal{C}\)-algebras._ _Let \(\Lambda\) be a curved \(\mathcal{C}\)-algebra. The diagram_ \[\cdots\longrightarrow F^{n}_{\mathrm{qp}}\Lambda\longrightarrow\cdots\longrightarrow F^{0}_{\mathrm{qp}}\Lambda\] _is an \(\omega\)-coladder of curved \(\mathcal{C}\)-algebras, whose limit is the qp-completion of \(\Lambda\), called the quasi-planar coladder of \(\Lambda\)._ Proof.: Follows directly from Proposition 51. ### Fibrations **Lemma 45**.: _A fibration of dg \(\Omega\mathcal{C}\)-coalgebras is in particular a degree-wise epimorphism._ Proof.: Let \(f:V\longrightarrow V^{\prime}\) be a fibration of dg \(\Omega\mathcal{C}\)-coalgebras. We equip \(D^{0}\) with its canonical counital coassociative cocommutative coalgebra structure. The map \(D^{0}\otimes V^{\prime}\longrightarrow V^{\prime}\), which is degree-wise surjective, factors through \(f\), which is thus also degree-wise surjective. **Proposition 52**.: _A morphism of qp-complete curved \(\mathcal{C}\)-algebras is a fibration if and only if it is a degree-wise epimorphism._ Proof.: Let \(f:\Lambda\longrightarrow\Lambda^{\prime}\) be a morphism of qp-complete curved \(\mathcal{C}\)-algebras. On the one hand, let us suppose that \(f\) is a degree-wise epimorphism and let \(K\) be its kernel. Since \(\Lambda\) is qp-complete, the morphism \(f\) can be recovered as the transfinite backward composition of the sequence \[\cdots\longrightarrow F^{n}_{\mathrm{qp}}\Lambda\times_{F^{n}_{\mathrm{qp}}\Lambda^{\prime}}\Lambda^{\prime}\longrightarrow\cdots\longrightarrow F^{0}_{\mathrm{qp}}\Lambda\times_{F^{0}_{\mathrm{qp}}\Lambda^{\prime}}\Lambda^{\prime}\longrightarrow\Lambda^{\prime}.\] Since every such morphism is an elementary fibration and since fibrations are preserved by backward transfinite compositions, the morphism \(f\) is therefore a fibration. On the other hand, let us suppose that \(f\) is a fibration. Let us consider the following square diagram of graded \(\Bbbk\)-modules. The bottom horizontal map and the left vertical map are degree-wise epimorphisms. Thus, so is the right vertical map. ### Cofiltered quasi-isomorphisms **Definition 71** (Cofiltered quasi-isomorphism of coladders).: A morphism of \(\beta\)-indexed curved \(\mathcal{C}\)-coladders \(f:\Lambda\longrightarrow\Lambda^{\prime}\) is a _cofiltered quasi-isomorphism_ if \[\mathrm{gr}^{i}(f):\mathrm{gr}^{i}\Lambda\longrightarrow\mathrm{gr}^{i}\Lambda^{\prime}\] is a quasi-isomorphism for all \(i\in\beta\). **Proposition 53**.: _Let \(f:V\longrightarrow V^{\prime}\) be a quasi-isomorphism of dg \(\Omega\mathcal{C}\)-coalgebras. The morphism of coladders_ \[F^{n}_{\mathrm{qp}}\widehat{\Omega}_{\mathcal{C}}V\longrightarrow F^{n}_{\mathrm{qp}}\widehat{\Omega}_{\mathcal{C}}V^{\prime}\] _is a cofiltered quasi-isomorphism._ Proof.: For every \(i\in\omega\), the map \[\mathrm{gr}^{i}_{\mathrm{qp}}(\widehat{\Omega}_{\mathcal{C}}V)\longrightarrow\mathrm{gr}^{i}_{\mathrm{qp}}(\widehat{\Omega}_{\mathcal{C}}V^{\prime})\] rewrites as the morphism of dg modules \[V^{\mathrm{gr}^{i}_{\mathrm{qp}}\mathcal{C}_{\mathrm{pl}}}\longrightarrow(V^{\prime})^{\mathrm{gr}^{i}_{\mathrm{qp}}\mathcal{C}_{\mathrm{pl}}}\] which is a quasi-isomorphism.
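For concreteness, the following unwinding of Definitions 70 and 71 may be helpful; it only uses the conventions of Definition 69. One has \[\mathrm{gr}^{0}\Lambda=\Lambda(0),\qquad\mathrm{gr}^{i+1}\Lambda=\operatorname{Ker}\big(\Lambda(i+1)\longrightarrow\Lambda(i)\big)\,\] while \(\mathrm{gr}^{k}\Lambda=0\) at every limit ordinal \(k>0\), since \(\Lambda(k)\cong\lim_{j<k}\Lambda(j)\). Thus a morphism of coladders \(f:\Lambda\longrightarrow\Lambda^{\prime}\) is a cofiltered quasi-isomorphism precisely when \(f(0)\) and every induced map \[\operatorname{Ker}\big(\Lambda(i+1)\longrightarrow\Lambda(i)\big)\longrightarrow\operatorname{Ker}\big(\Lambda^{\prime}(i+1)\longrightarrow\Lambda^{\prime}(i)\big)\] are quasi-isomorphisms of dg modules; this is the form in which the notion appears for the \(\omega\)-indexed quasi-planar coladders of Corollary 8.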
**Proposition 54**.: _Let \(f:\Lambda\longrightarrow\Lambda^{\prime}\) be a cofiltered quasi-isomorphism of curved \(\mathcal{C}\)-coladders indexed by an ordinal \(\beta\). The map \(f(\beta):\Lambda(\beta)\longrightarrow\Lambda^{\prime}(\beta)\) is a weak-equivalence._ Proof.: Notice that the following holds. 1. The map \(f(0)\) is a weak-equivalence, since it is the identity of the zero object \(0\). 2. If \(i\in\beta+1\) is a limit ordinal such that \(f(j)\) is a weak-equivalence for every \(j<i\), then the limits \[\widehat{\mathrm{B}}_{\mathcal{C}}\Lambda(i)\cong\lim_{j<i}\widehat{\mathrm{B}}_{\mathcal{C}}\Lambda(j),\quad\widehat{\mathrm{B}}_{\mathcal{C}}\Lambda^{\prime}(i)\cong\lim_{j<i}\widehat{\mathrm{B}}_{\mathcal{C}}\Lambda^{\prime}(j)\] are homotopy limits, and therefore the map \(\widehat{\mathrm{B}}_{\mathcal{C}}f(i)\) is a quasi-isomorphism. Thus \(f(i)\) is a weak-equivalence. 3. By Proposition 50, \(f(i+1)\) is a weak-equivalence whenever \(f(i)\) is a weak-equivalence. We conclude by an ordinal induction. ### Path object Let \(\Lambda\) be a qp-complete curved \(\mathcal{C}\)-algebra. Let \(P\) be a path object of \(\widehat{\mathrm{B}}_{\mathcal{C}}\Lambda\) in the category of dg \(\Omega\mathcal{C}\)-coalgebras. This means that there exists a factorization of the diagonal \[\widehat{\mathrm{B}}_{\mathcal{C}}\Lambda\xrightarrow{\ \iota\ }P\xrightarrow{\ p\ }\widehat{\mathrm{B}}_{\mathcal{C}}\Lambda\times\widehat{\mathrm{B}}_{\mathcal{C}}\Lambda\] where \(\iota\) is an acyclic cofibration and \(p\) a fibration in the category of dg \(\Omega\mathcal{C}\)-coalgebras. Let \(D\) be the following pushout \[D\coloneqq\widehat{\Omega}_{\mathcal{C}}P\amalg_{\widehat{\Omega}_{\mathcal{C}}\widehat{\mathrm{B}}_{\mathcal{C}}\Lambda}\Lambda\] in the category of curved \(\mathcal{C}\)-algebras. We have that, for every natural integer \(i\), the canonical map \[F^{i}_{\mathrm{qp}}D\longrightarrow F^{i}_{\mathrm{qp}}\widehat{\Omega}_{\mathcal{C}}P\amalg_{F^{i}_{\mathrm{qp}}\widehat{\Omega}_{\mathcal{C}}\widehat{\mathrm{B}}_{\mathcal{C}}\Lambda}F^{i}_{\mathrm{qp}}\Lambda\] is an isomorphism. Moreover, the qp-completion \(\widehat{D}\) of \(D\) is the limit of the diagram \[\cdots\longrightarrow F^{i}_{\mathrm{qp}}D\longrightarrow\cdots\longrightarrow F^{0}_{\mathrm{qp}}D\.\] It can be computed as the pushout \(\widehat{\Omega}_{\mathcal{C}}P\amalg_{\widehat{\Omega}_{\mathcal{C}}\widehat{\mathrm{B}}_{\mathcal{C}}\Lambda}\Lambda\) in the category of qp-complete curved \(\mathcal{C}\)-algebras. Our goal is to show that \(\widehat{D}\) is a natural path object in the category of qp-complete curved \(\mathcal{C}\)-algebras. These objects fit in the following commutative diagram of curved \(\mathcal{C}\)-algebras. Here the morphism \(j\) is given by the component \(\Lambda\) of the pushout. The morphism \(p^{\dagger}\amalg\Lambda\) is induced by the transpose of \(p\) and by the diagonal \(\Delta:\Lambda\longrightarrow\Lambda\oplus\Lambda\). The morphisms \(\epsilon\) are given by the counit of the complete bar-cobar adjunction \(\widehat{\Omega}_{\mathcal{C}}\dashv\widehat{\mathrm{B}}_{\mathcal{C}}\). They are degree-wise epimorphisms and thus fibrations by Proposition 52. Notice that \(\widehat{\Omega}_{\mathcal{C}}(\iota)\) induces a cofiltered quasi-isomorphism of the associated quasi-planar coladders by Proposition 53. The first step is to show that \(j\) is a cofiltered quasi-isomorphism. Let us choose a particular summand \(\Lambda\) in the product \(\Lambda\oplus\Lambda\). This way, we choose one of the two projections \(t:D\longrightarrow\Lambda\) and one of the two projections \(r:P\longrightarrow\widehat{\mathbb{B}}_{\mathcal{C}}\Lambda\) in such a way that the following diagram commutes.
The dg module \(P\) decomposes into \(r:P\cong\widehat{\mathbb{B}}_{\mathcal{C}}\Lambda\oplus K\) where \(K\) is the kernel of the map \(P\longrightarrow\widehat{\mathbb{B}}_{\mathcal{C}}\Lambda\). The dg module \(K\) is acyclic since \(r\) is a quasi-isomorphism. Let us take a contracting homotopy \(h\) of \(K\), that is, a degree \(1\) endomorphism of \(K\) such that \[\partial(h)=d_{K}h+hd_{K}=\operatorname{Id}_{K}.\] We can extend \(h\) to \(P\) by zero on \(\widehat{\mathbb{B}}_{\mathcal{C}}\Lambda\). Then \(\partial(h)=\pi_{K}\), where \(\pi_{K}\) is the projection onto \(K\). Now let \(H\) be the degree \(1\) endomorphism of the graded module \[\widehat{\Omega}_{\mathcal{C}}P=P^{\mathcal{C}_{\mathrm{pl}}}=\prod_{n\geq 0 }[\mathcal{C}_{\mathrm{pl}}(n),P^{\otimes n}]\] defined as \[H\coloneqq\prod_{n\geq 0}\left[\operatorname{Id}_{\mathcal{C}_{\mathrm{pl}}(n) },\sum_{k=0}^{n-1}\pi_{\widehat{\mathbb{B}}_{\mathcal{C}}\Lambda}^{\otimes k} \otimes h\otimes\operatorname{Id}_{V}^{\otimes n-k-1}\right]\.\] One can extend \(H\) to \(\left(P^{\mathcal{C}_{\mathrm{pl}}}\right)^{\mathcal{C}_{\mathrm{pl}}}\) using the same formula \[H\coloneqq\prod_{n\geq 0}\left[\operatorname{Id}_{\mathcal{C}_{\mathrm{pl}}(n) },\sum_{k=0}^{n-1}\pi_{\widehat{\mathbb{B}}_{\mathcal{C}}\Lambda}^{\otimes k} \otimes H\otimes\operatorname{Id}_{V}^{\otimes n-k-1}\right]\.\] The same formula mutatis mutandis allows us to extend \(H\) to \(\left(\left(P^{\mathcal{C}_{\mathrm{pl}}}\right)^{\mathcal{C}_{\mathrm{pl}}}\right) ^{\mathcal{C}_{\mathrm{pl}}}\). One can notice then that \(H\) commutes with the maps \[\left(\left(P^{\mathcal{C}_{\mathrm{pl}}}\right)^{\mathcal{C}_{\mathrm{pl}}} \right)^{\mathcal{C}_{\mathrm{pl}}}\Rightarrow\left(P^{\mathcal{C}_{\mathrm{pl} }}\right)^{\mathcal{C}_{\mathrm{pl}}}\longrightarrow P^{\mathcal{C}_{\mathrm{ pl}}}\.\] **Lemma 46**.: _The degree \(1\) graded endomorphism \(H\) of \(\widehat{\Omega}_{\mathcal{C}}P\) projects onto \(D\), in the sense that the map_ \[\widehat{\Omega}_{\mathcal{C}}P\xrightarrow{H}\widehat{\Omega}_{\mathcal{C}}P \xrightarrow{q}D\] _factors through the projection \(q:\widehat{\Omega}_{\mathcal{C}}P\longrightarrow D\). We also denote by \(H\) the resulting unique degree \(1\) graded endomorphism of \(D\)._ Proof.: We set 1. \(X\) to be the pushout of the cospan \(\Lambda\longleftarrow\widehat{\Omega}_{\mathcal{C}}\widehat{\mathbb{B}}_{ \mathcal{C}}\Lambda\longrightarrow\widehat{\Omega}_{\mathcal{C}}P\) in the category of graded \(\Bbbk\)-modules; 2. and \(Y\) to be the pushout of the cospan \(\Lambda^{\mathcal{C}_{\mathrm{alg}}}\longleftarrow(\widehat{\Omega}_{ \mathcal{C}}\widehat{\mathbb{B}}_{\mathcal{C}}\Lambda)^{\mathcal{C}_{\mathrm{alg} }}\longrightarrow(\widehat{\Omega}_{\mathcal{C}}P)^{\mathcal{C}_{\mathrm{alg}}}\) in the category of graded \(\Bbbk\)-modules. The diagram of graded \(\mathcal{C}\)-algebras is colimiting both in the category complete graded \(\mathcal{C}\)-algebras and in the category of graded \(\Bbbk\)-modules. Indeed, the forgetful functor from graded \(\mathcal{C}\)-algebras to graded \(\Bbbk\)-modules commutes with reflexive coequalisers since those are preserved by the comomad \((-)^{\mathcal{C}_{\mathrm{alg}}}\). Notice the following: 1. the degree \(1\) endomorphism \(H\) of \(\left(P^{\mathcal{C}_{\mathrm{alg}}}\right)^{\mathcal{C}_{\mathrm{alg}}}\) projects onto \(X^{\mathcal{C}_{\mathrm{alg}}}\); 2. 
the degree \(1\) endomorphism \(H\) of \(\left(\left(P^{\mathcal{C}_{\mathrm{alg}}}\right)^{\mathcal{C}_{\mathrm{alg} }}\right)^{\mathcal{C}_{\mathrm{alg}}}\) projects onto \(Y^{\mathcal{C}_{\mathrm{alg}}}\); since their restriction to, respectively, \(\widehat{\Omega}_{\mathcal{C}}\widehat{\mathbb{B}}_{\mathcal{C}}\Lambda\) and \((\widehat{\Omega}_{\mathcal{C}}\widehat{\mathbb{B}}_{\mathcal{C}}\Lambda)^{ \mathcal{C}_{\mathrm{alg}}}\) is zero. Therefore, \(H\) projects onto the of the coequalizer, which is given by \(D\). **Lemma 47**.: _The map \(H\) projects onto the quotients \(F^{i}_{\mathrm{alg}}\widehat{\Omega}_{\mathcal{C}}P\) and \(F^{i}_{\mathrm{alg}}D\) for every \(i\geq 0\)._ Proof.: It is clear that, by definition, \(H\) that it projects onto \(F^{i}_{\mathrm{alg}}\widehat{\Omega}_{\mathcal{C}}P=P^{\mathcal{C}_{\mathrm{alg }}^{\mathrm{alg}}\mathcal{C}}\). Thus \(H\) also projects onto \(F^{i}_{\mathrm{alg}}D\) since the following square diagram is a pushout in the category of graded \(\Bbbk\)-modules and since we already know that \(H\) projects onto \(D\) and onto \(F^{i}_{\mathrm{alg}}\widehat{\Omega}_{\mathcal{C}}P\). **Lemma 48**.: _For every \(i\geq 0\), the endomorphism \(\partial(H)\) of \(\mathrm{gr}^{i}_{\mathrm{alg}}\widehat{\Omega}_{\mathcal{C}}(P)\) is equal to the identity minus the projection onto \(\mathrm{gr}^{i}_{\mathrm{alg}}\widehat{\Omega}_{\mathcal{C}}\widehat{\mathbb{ B}}_{\mathcal{C}}\Lambda\)._ Proof.: Both summands \(\mathrm{gr}^{i}_{\mathrm{alg}}\widehat{\Omega}_{\mathcal{C}}\widehat{\mathbb{ B}}_{\mathcal{C}}\widehat{\mathbb{B}}\) and \[\prod_{n\geq 0}\left[\mathrm{gr}^{i}_{i}\mathcal{C}_{\mathrm{alg}}(n), \bigoplus_{1\leq k<n}(\widehat{\mathbb{B}}_{\mathcal{C}}\Lambda)^{\otimes k} \otimes K\otimes V^{n-k-1}\right]\] of \(\mathrm{gr}^{i}_{\mathrm{alg}}\widehat{\Omega}_{\mathcal{C}}P\) are stable through the differential and \(H\). Then, a straightforward check shows that \(\partial(H)\) is zero on the first summand and the identity on the second one. **Proposition 55**.: _For every \(i\geq 0\), the endomorphism \(\partial(H)\) of \(\mathrm{gr}^{i}_{\mathrm{alg}}D\) is equal to the identity minus the projection onto \(\mathrm{gr}^{n}_{\mathrm{alg}}\Lambda\). Subsequently, the maps_ _are quasi-isomorphisms._ Proof.: Let us consider the following commutative diagram of dg modules where the maps are denoted by the same letter as before, omitting the functor \(\operatorname{gr}_{\operatorname{qp}}^{i}(-)\) applied on them for simplicity. Since \(q\) commutes with the differential \(d\) and with \(H\), and since the squares above are commutative, we have \[\partial(h)q=q\partial(h)=q(\operatorname{Id}-\widehat{\Omega}_{\mathcal{C}}( \iota r))=q-q\widehat{\Omega}_{\mathcal{C}}(\iota r)=q-jtq=(\operatorname{Id} -jt)q=(\operatorname{Id}-\pi_{\operatorname{gr}_{\operatorname{qr}}^{n}_{ \operatorname{qr}}\widehat{\Omega}_{\mathcal{C}}\widehat{\mathbb{B}}_{\mathcal{ C}}\Lambda})q.\] Since \(q:\operatorname{gr}_{\operatorname{qp}}^{i}\widehat{\Omega}_{\mathcal{C}}P \longrightarrow\operatorname{gr}_{\operatorname{qp}}^{j}D\) is a degree-wise epimorphism, \(\partial(h)=\operatorname{Id}-\pi_{\operatorname{gr}_{\operatorname{qp}}^{i} \Lambda}\) on \(\operatorname{gr}_{\operatorname{qp}}^{i}D\). Therefore, \(\operatorname{gr}_{\operatorname{qp}}^{i}\Lambda\) is a deformation retract of \(\operatorname{gr}_{\operatorname{qp}}^{i}D\). **Proposition 56**.: _Let \(\Lambda\) be a qp-complete curved \(\mathcal{C}\)-algebra. 
The factorisation \[\Lambda\xrightarrow{\ j\ }\widehat{D}\xrightarrow{\ p^{\dagger}\amalg\Lambda\ }\Lambda\oplus\Lambda\] _makes \(\widehat{D}\) a good path object of \(\Lambda\), in the sense that_ 1. _the map_ \(p^{\dagger}\amalg\Lambda:\widehat{D}\longrightarrow\Lambda\oplus\Lambda\) _is a fibration,_ 2. _the map_ \(j:\Lambda\longrightarrow\widehat{D}\) _is a weak-equivalence._ Proof.: The map \(\widehat{D}\longrightarrow\Lambda\oplus\Lambda\) is a degree-wise epimorphism since the maps \(\widehat{\Omega}_{\mathcal{C}}(p):\widehat{\Omega}_{\mathcal{C}}P\twoheadrightarrow\widehat{\Omega}_{\mathcal{C}}\widehat{\mathbb{B}}_{\mathcal{C}}(\Lambda\oplus\Lambda)\) and \(\epsilon_{\Lambda\oplus\Lambda}:\widehat{\Omega}_{\mathcal{C}}\widehat{\mathbb{B}}_{\mathcal{C}}(\Lambda\oplus\Lambda)\twoheadrightarrow\Lambda\oplus\Lambda\) are degree-wise epimorphisms. Therefore it is a fibration, since both \(\widehat{D}\) and \(\Lambda\oplus\Lambda\) are qp-complete. To conclude, Proposition 55 tells us that the map \(j:\Lambda\longrightarrow\widehat{D}\) is a cofiltered quasi-isomorphism. Thus it is a weak-equivalence. Remark 54.: Actually, the map \(j:\Lambda\longrightarrow\widehat{D}\) is also a cofibration, as the pushout of a cofibration. ## 7. A Quillen equivalence, \(\infty\)-morphisms and homotopy transfer theorems for coalgebras We show that the complete bar-cobar adjunction induces a Quillen equivalence. This allows us to give another presentation of the homotopy category of dg \(\Omega\mathcal{C}\)-coalgebras in terms of complete curved \(\mathcal{C}\)-algebras. We introduce \(\infty\)-morphisms and show that they are invertible. This allows us to prove a homotopy transfer theorem for dg \(\Omega\mathcal{C}\)-coalgebras. Finally, we show how another model category structure on the category of complete curved \(\mathcal{C}\)-algebras can be obtained by a right Bousfield localization. ### The Quillen equivalence The goal of this subsection is to show the following theorem. **Theorem 11** (After [11, Section 11]).: _Let \(\mathcal{C}\) be a quasi-planar conilpotent curved cooperad. The complete bar-cobar adjunction_ _is a Quillen equivalence._ Proof.: The theorem follows directly from Lemma 49. **Corollary 9**.: _Let \(\mathcal{P}\) be a cofibrant dg operad. The quasi-planar complete bar-cobar adjunction_ _is a Quillen equivalence._ Proof.: It suffices to notice that this Quillen adjunction factors into two Quillen equivalences where the first adjunction is induced by the quasi-isomorphism \(\psi:\Omega\mathrm{B}(\mathcal{E}\otimes\mathcal{P})\xrightarrow{\sim}\mathcal{P}\). **Lemma 49**.: _For every dg \(\Omega\mathcal{C}\)-coalgebra \(V\), the unit map \(\eta_{V}:V\xrightarrow{\sim}\widehat{\mathrm{B}}_{\mathcal{C}}\widehat{\Omega}_{\mathcal{C}}V\) is a quasi-isomorphism._ The result was already proven in [11, Section 11]. The proof here is almost the same. The main difference lies in the fact that leveraging the qp-filtration of \(\mathcal{C}\) in Lemma 55 makes the cofiltration arguments simpler than in [11, Lemma 11.28]. In the following paragraphs, our strategy to show Lemma 49 is the following. First, we notice that the unit map \(\eta_{V}:V\longrightarrow\widehat{\mathrm{B}}_{\mathcal{C}}\widehat{\Omega}_{\mathcal{C}}V\) has a canonical left-inverse \(\xi_{V}\) in the category of dg modules. Let \(\pi_{V}=\eta_{V}\xi_{V}\). We define a degree \(1\) map \(H\) on \(\widehat{\mathrm{B}}_{\mathcal{C}}\widehat{\Omega}_{\mathcal{C}}V\) and show that \(\partial(H)+\pi_{V}\) is an isomorphism in Proposition 57. Therefore \(\pi_{V}\) is a quasi-isomorphism since it is homotopic to an isomorphism.
This implies that \(\eta_{V}\) has both a left-inverse \(\xi_{V}\) and a right-inverse \(\xi_{V}(\partial(H)+\pi_{V})^{-1}\) in the homotopy category of dg modules. Therefore \(\eta_{V}\) is a quasi-isomorphism and this concludes the proof. Notation. We abbreviate the dg operad \(\Omega\mathcal{C}\) by \(\mathcal{P}\) for the rest of this subsection. We denote: 1. by \(z:\mathcal{P}\longrightarrow\mathcal{I}\) the canonical augmentation of the underlying graded operad. 2. the pre-differential \(d_{\mathcal{C}}\) on \(\mathcal{P}\) that is induced by the pre-differential of \(\mathcal{C}\), 3. the pre-differential \(d_{\mathcal{B}}\) that results from the curvature of \(\mathcal{C}\), 4. the pre-differential \(d_{\mathcal{B}}\) that results from the cooperad structure on \(\mathcal{C}\). **Definition 72** (Operad-cooperad diagrams).: The _planar operad-cooperad diagram_ is the following one arrow diagram of graded N-modules The _symmetric operad-cooperad diagram_ is the following one arrow diagram of graded S-modules This is actually the image through the free S-module functor \(-\otimes\mathbb{S}\) of the planar operad-cooperad diagram. One has the following pre-differentials on the planar operad-cooperad diagram: 1. the pre-differential \(d_{\theta}\) that results from the eponymous derivation on \(\mathcal{P}\), 2. the pre-differential \(d_{CP}\) that is induced by the canonical twisting morphism \(\iota:\mathcal{C}\longrightarrow\Omega\mathcal{C}=\mathcal{P}\), 3. the pre-differential \(d_{\text{A}}\) that result from the eponymous derivation on \(\mathcal{P}\). These pre-differentials induce pre-differentials the symmetric operad-cooperad diagram. There is an additional pre-differential \(d_{C}\) that again results from the eponymous derivation on \(\mathcal{P}\), which is a priori non-planar. **The restriction-extension diagram** The restriction-extension diagram is a tool that allows one to check whether derivations and homotopies can be restricted to cofree coalgebras, which are very hard to describe in general. **Definition 73** (Restriction-extension diagram).: Let \(V\) be a graded k-module. The _restriction-extension diagram_ RE is the following diagram of graded k-modules **Lemma 50** (After [1, Lemma 11.8]).: _The restriction-extension diagram_ RE _is a pullback square._ Proof.: Let us consider the following diagram of graded k-modules Both the top square and the bottom square are pullback squares. Thus, the restriction-extension diagram is a pullback square. **Definition 74** (Extension diagram).: We define the _extension diagram_ E as the sub-diagram of the restriction-extension diagram that only contains \(\text{E}_{1}\) and \(\text{E}_{2}\). Remark 55.: Notice that the extension diagram is the image of the planar operad-cooperad diagram and of the symmetric operad-cooperad diagram by the functor \(V^{(-)}\), for any graded k-module \(V\). Let \(f\) be a degree \(k\) endomorphism of the extension diagram. If \(f\) extends to the whole restriction-extension diagram, this extension is necessarily unique since \(\mathsf{R}_{1},\mathsf{R}_{2},W\) are sub-graded \(\Bbbk\)-modules of \(\mathsf{E}_{2}\). Moreover, the existence of such an extension amounts to the fact that the restriction of \(f(\mathsf{E}_{2})\) to \(\mathsf{R}_{1}\), \(\mathsf{R}_{2}\) and \(W\) has its image in respectively \(\mathsf{R}_{1}\), \(\mathsf{R}_{2}\) and \(W\). 
**Definition 75** (Cocone of the restriction-extension diagram).: The restriction-extension diagram RE has a canonical cocone towards \(V\) induced by the unit of \(\mathcal{P}\) and the coaugmentation of \(\mathcal{C}\) \[V^{\mathcal{P}}\circ\mathcal{P}\circ\mathcal{C}\longrightarrow V^{\mathcal{I} }\circ\mathcal{I}\cong V.\] This cocone restricts to a morphism \(\xi_{V}:L^{\mathcal{P}}V^{\mathcal{C}}\longrightarrow V\). Notice that the graded map \(\xi_{V}:\widehat{\mathsf{B}}_{\mathcal{C}}\widehat{\Omega}_{\mathcal{C}}V \longrightarrow V\) is in fact a morphism of dg modules. The morphism \(\xi_{V}\) admits two sections: 1. the unit map \(\eta_{V}:V\longrightarrow\widehat{\mathsf{B}}_{\mathcal{C}}\widehat{\Omega}_ {\mathcal{C}}V\), which is also a morphism of dg modules, 2. the map induced by the counit on \(\mathcal{C}\) and the graded augmentation map on the underlying graded operad of \(\mathcal{P}\) \[\zeta_{V}:V\simeq L^{\mathcal{I}}V^{\mathcal{I}}\longrightarrow L^{\mathcal{ P}}V^{\mathcal{C}}.\] which is a section of \(\xi_{V}\) in graded \(\Bbbk\)-modules. By the universal property of the pullback, such a map \(\zeta_{V}\) extends to a cone of the restriction-extension diagram with source \(V\). Let us denote \(\pi_{V}\) the degree \(0\) endomorphism of the restriction-extension diagram defined as \(\pi_{V}=\eta_{V}\circ\xi_{V}\). We can notice that the restriction of \(\pi_{V}\) onto \(\widehat{\mathsf{B}}_{\mathcal{C}}\widehat{\Omega}_{\mathcal{C}}V\) is a morphism of dg modules. Let us denote \(\rho_{V}\) the degree \(0\) endomorphism of the restriction-extension diagram defined as \(\pi_{V}=\zeta_{V}\circ\xi_{V}\). **The coderivation on \(W\)** Let \(W:=\widehat{\mathsf{B}}_{\mathcal{C}}\widehat{\Omega}_{\mathcal{C}}V\). The coderivation \(D\) on \(W\) may be decomposed as \[D=D_{in}+D_{ex}.\] On the one hand, \(D_{ex}\) is the degree \(-1\) endomorphism of the extension diagram given by the formula \[D_{ex}=V^{d_{Cp}};\] it restricts to \(W\) but not to \(\mathsf{R}_{1}\) nor \(\mathsf{R}_{2}\). On the other hand, \(D_{in}\) is the degree \(-1\) endomorphism of the restriction-extension diagram that is the sum of the following pre-differentials: 1. the pre-differential \(D_{V}\) which results from the differential on \(V\) and whose restriction to the extension diagram is \(\sqcup(\mathsf{Id},d_{V})^{\mathsf{E}}\), 2. the pre-differential \(D_{CV}\) induced by the structural map of \(V\) composed with the curved twisting morphism \(\iota\), given by \(V\longrightarrow V^{\mathcal{P}_{in}}\longrightarrow V^{\mathcal{C}_{\mu}}\), 3. the pre-differential \(D_{C}\) which results from the pre-differential on \(\mathcal{C}\). Its restriction to the extension diagram is \(V^{d_{C}}\), 4. the pre-differential \(D_{\theta}\) which results from the curvature of \(\mathcal{C}\). Its restriction to the extension diagram is \(V^{d_{\theta}}\), 5. the pre-differential \(D_{\mathsf{A}}\) which results from the structure of a cooperad on \(\mathcal{C}\). Its restriction to the extension diagram is \(V^{d_{\mathsf{A}}}\). **The contracting homotopy \(H\).** Let \(h\) be the degree \(1\) endomorphism of \(\mathcal{P}_{\mathsf{pl}}\circ_{\mathsf{pl}}\mathcal{C}_{\mathsf{pl}}=(\mathsf{ T}_{\mathsf{pl}}(s^{-1}\overline{\mathcal{C}}_{\mathsf{pl}}))\circ_{ \mathsf{pl}}\mathcal{C}_{\mathsf{pl}}\) defined as follows by induction on the height of the planar trees that make \(\mathcal{P}_{\mathsf{pl}}\): 1. On \(\mathcal{I}\circ_{\mathsf{pl}}\mathcal{C}\), \(h\) is zero. 2. 
On \(s^{-1}\overline{\mathcal{C}}_{\mathsf{pl}}\circ_{\mathsf{pl}}\mathcal{C}\), \(h\) is given by \[s^{-1}\overline{\mathcal{C}}_{\mathsf{pl}}\circ_{\mathsf{pl}}\mathcal{C}\twoheadrightarrow s^{-1}\overline{\mathcal{C}}_{\mathsf{pl}}\circ_{\mathsf{pl}}\mathcal{I}\longrightarrow\mathcal{I}\circ_{\mathsf{pl}}\overline{\mathcal{C}}_{\mathsf{pl}}\hookrightarrow\mathcal{P}_{\mathsf{pl}}\circ_{\mathsf{pl}}\mathcal{C}_{\mathsf{pl}}\] \[s^{-1}x\mapsto x\.\] 3. Then on \[(\overline{\mathbb{T}}_{\mathsf{pl},\leq n+1}(s^{-1}\overline{\mathcal{C}}_{\mathsf{pl}}))\circ_{\mathsf{pl}}\mathcal{C}_{\mathsf{pl}}\simeq s^{-1}\overline{\mathcal{C}}_{\mathsf{pl}}\circ_{\mathsf{pl}}\left(\overline{\mathbb{T}}_{\mathsf{pl},\leq n}(s^{-1}\overline{\mathcal{C}}_{\mathsf{pl}})\right)\circ_{\mathsf{pl}}\mathcal{C}_{\mathsf{pl}}\] the map \(h\) is defined inductively as the sum of the map induced by the previous case on the root vertex with the map \[\left(\overline{\mathbb{T}}_{\mathsf{pl},\leq n+1}(s^{-1}\overline{\mathcal{C}}_{\mathsf{pl}})\right)\otimes_{\mathsf{pl}}\left(\sum_{k=0}^{n-1}\pi_{\mathcal{I}\circ_{\mathsf{pl}}\mathcal{I}}^{\otimes k}\otimes h\otimes\mathsf{Id}_{\mathcal{P}_{\mathsf{pl}}\circ_{\mathsf{pl}}\mathcal{C}_{\mathsf{pl}}}^{\otimes n-k-1}\right).\] In other words, it takes the leftmost vertex of the planar tree labelled by an element of \(s^{-1}\overline{\mathcal{C}}_{\mathsf{pl}}\) and suspends it.
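As a quick check of the degrees involved, which only uses the formulas just given: on the summand \(s^{-1}\overline{\mathcal{C}}_{\mathsf{pl}}\circ_{\mathsf{pl}}\mathcal{C}\) one has \[|h(s^{-1}x)|=|x|=|s^{-1}x|+1\,\] so \(h\) raises the degree by \(1\) there, and in the inductive step each summand \(\pi_{\mathcal{I}\circ_{\mathsf{pl}}\mathcal{I}}^{\otimes k}\otimes h\otimes\mathsf{Id}^{\otimes n-k-1}\) again has degree \(1\), since the projections and the identities have degree \(0\). This is consistent with \(h\) being a degree \(1\) endomorphism of \(\mathcal{P}_{\mathsf{pl}}\circ_{\mathsf{pl}}\mathcal{C}_{\mathsf{pl}}\).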
**Lemma 51** ([11, Proposition 11.17]).: _The map \(H\) extends to the whole restriction-extension diagram._ **Definition 77** (The garbage map).: Let \(G\) be the degree \(0\) endomorphism of the extension diagram given by the formula \[G=D_{ex}H+HD_{ex}-\operatorname{Id}+\rho_{V}.\] **Lemma 52** ([11, Proposition 11.17]).: _The maps \(\partial_{ex}(H)=D_{ex}H+HD_{ex}\) and \(G\) extend to the whole restriction-extension diagram._ **Definition 78** (The boundary map).: Let \(B\) be the degree \(0\) endomorphism on the restriction-extension diagram RE defined as \[B=\partial_{ex}(H)+D_{in}H+HD_{in}+\pi_{V}=G+\operatorname{Id}+D_{in}H+HD_{in}+\pi_{V}-\rho_{V}.\] **Lemma 53**.: _The restriction of \(B\) to \(W=\widehat{\mathrm{B}}_{\mathcal{C}}\widehat{\Omega}_{\mathcal{C}}V\) is equal to \(\partial(H)+\pi_{V}\)._ Let \(gB\) be the degree \(0\) endomorphism on the restriction-extension diagram RE defined as \[gB=B+\rho_{V}-\pi_{V}=\operatorname{Id}+G+D_{in}H+HD_{in}.\] Moreover, let \(\overline{B}\) be the degree \(0\) endomorphism on the restriction-extension diagram RE defined as \[\overline{B}=B-\pi_{V}=\operatorname{Id}+G+D_{in}H+HD_{in}-\rho_{V}.\] **The map \(H\) is a contracting homotopy.** Finally, we show that \(H\) is a contracting homotopy, which will allow us to conclude the proof of Lemma 49. **Proposition 57**.: _The maps \(B\) and \(gB\) are degree \(0\) automorphisms of the restriction-extension diagram RE._ Proof.: This is a direct consequence of Lemma 54 and Lemma 55. **Lemma 54**.: _The maps \(gB\) and \(B\) are automorphisms of \(\mathbb{E}_{1}\), \(\mathbb{E}_{2}\) and \(\mathbb{R}_{1}\)._ Proof.: Let us filter \(\mathcal{P}\) from the qp-filtration on \(\mathcal{C}\). This induces a cofiltration on \(\mathbb{E}_{1}\) and on \(\mathbb{R}_{1}\). On the graded object related to this cofiltration, \(\operatorname{gr}(H)=0\), \(\operatorname{gr}(G)=0\) and \(\operatorname{gr}(\pi_{V})=\operatorname{gr}(\rho_{V})\). Thus \(\operatorname{gr}(gB)=\operatorname{gr}(B)=\operatorname{Id}\). Subsequently, \(gB\) and \(B\) are automorphisms. For \(\mathbb{E}_{2}\), one can filter \(\mathcal{P}\circ\mathcal{P}\) in a similar fashion and obtain the same result. **Lemma 55**.: _The map \(B\) is an automorphism of \(\mathbb{R}_{2}\)._ Proof.: The proof is similar to that of [11, Lemma 11.28]. The main difference is the use of the qp-filtration. One can filter \(\mathcal{P}\) from the qp-filtration on \(\mathcal{C}\).
This induces a cofiltration on \(\mathbb{R}_{2}=\mathbb{R}_{1}^{\mathcal{P}}\) that is preserved by \(D_{in}\), \(H\), \(G\), and \(\pi_{V}\). Let us prove that on the graded object related to this cofiltration, the map \(\operatorname{gr}(B)\) is an isomorphism. Such a graded object has the form \[\operatorname{gr}(\mathbb{R}_{2})=\mathbb{R}_{1}^{qr\mathcal{P}}=\mathbb{R}_{ 1}^{qr\mathcal{P}_{\#}}=\prod_{t}\left[t(s^{-1}\operatorname{gr}^{\operatorname {qp}}\overline{\mathcal{C}_{\operatorname{pl}}}),\mathbb{R}_{1}^{\otimes l(t )}\right]\enspace,\] where the product is taken over planar trees \(t\) and where \(l(t)\) is the number of leaves of \(t\). On this graded object 1. \(\rho_{V}=\pi_{V}\), thus \(gB=B\); 2. \(G\) acts independently on each part of the product over planar trees \(t\) as \[G_{t}=\left[\sum_{k=0}^{s-1}\rho_{V}^{\otimes k}\otimes G(\mathbb{R}_{1}) \otimes\operatorname{Id}_{\mathbb{R}_{1}}^{\otimes n-1-k}\right]\enspace,\] where \(a\) is the number of the last leaf of the leftest top vertex of \(t\) (\(1\) if it is the first leaf); 3. similarly \(H\) acts independently on each part of the product over planar trees \(t\) as \[H_{t}=\left[\operatorname{Id},\sum_{k=0}^{s-1}\rho_{V}^{\otimes k}\otimes H( \mathbb{R}_{1})\otimes\operatorname{Id}_{\mathbb{R}_{1}}^{\otimes n-1-k}\right]\enspace,\] where again \(a\) is the number of the last leaf of the leftest top vertex of \(t\); 4. \(D_{in}\) is given by the formula \[\shuffle(\operatorname{Id},D_{in}(\mathsf{R}_{1}))^{\mathsf{gr}\mathcal{P}}-[d_{C} +d_{\Delta},\operatorname{Id}]\enspace.\] For every natural integer \(n\), \(\operatorname{gr}_{n}\mathcal{P}_{\text{pl}}\) is made up planar trees labelled by \(s^{-1}\mathsf{gr}^{\mathsf{gp}}\overline{C_{\text{pl}}}\) so that the sum of the grading degree over the whole tree is \(n\); in particular, such a tree has at most \(n\) nodes. One can furthermore filter \(\operatorname{gr}_{n}\mathcal{P}_{\text{pl}}\) by the opposite of the number of nodes in trees. On the associated graded object \(\mathsf{grgr}_{n}\mathcal{P}\), the pre-differential \(d_{\Delta}\) disappears. This filtration of \(\operatorname{gr}_{n}\mathcal{P}\) induces a cofiltration of \(\mathsf{gr}^{n}\mathsf{R}_{2}\) whose associated graded object is given by \[\mathsf{gr}^{-m}\mathsf{gr}^{n}\mathsf{R}_{2} =\mathsf{R}_{1}^{\mathsf{gr}_{n}\mathcal{P}}\] \[\cong\prod_{t}\bigoplus_{i_{k}\dots+i_{k}=n}\left[s^{-1} \mathsf{gr}_{i_{k}}^{\mathsf{gp}}\overline{C}_{\text{pl}}(n_{1}(t))\otimes \dots\otimes s^{-1}\mathsf{gr}_{i_{m}}^{\mathsf{gp}}\overline{C}_{\text{pl}}( n_{m}(t)),\mathsf{R}_{1}^{\otimes(t)}\right]\enspace,\] where \(0\leq m\leq n\) and where the product is taken over the planar trees \(t\) with \(m\)-nodes whose number of inputs are \(n_{1}(t),\dots,n_{m}(t)\). Let us denote by \(l(t)\) the number of leaves of \(t\). On such a multi-graded object, \(D_{in}\), \(H\) and \(G\) are still given by the same formula on each tree as above; but now, \([d_{\Delta},\operatorname{Id}]\) is zero. Besides, \([d_{C},\operatorname{Id}]\) acts independently on each tree and thus commutes with \(H\). 
For every planar tree with \(m\)-nodes \(t\), let us denote \(D_{in,t}\) be the degree \(-1\) endomorphism of the \(t\) part of the above product that is given by the formula \[D_{in,t}=\left[\operatorname{Id},\sum_{k=0}^{l(t)-1}\operatorname{Id}^{k} \otimes D_{in}(\mathsf{R}_{1})\otimes\operatorname{Id}^{\otimes(t)-k-1} \right]\enspace.\] Since \([d_{C},\operatorname{Id}]\) commutes with \(H\) on \(\mathsf{gr}^{n}\mathsf{E}_{2}\), one has on \(\mathsf{gr}^{-m}\mathsf{gr}^{n}\mathsf{R}_{2}\) that \[D_{in}H+HD_{in}=\prod_{t}D_{in,t}H_{t}+H_{t}D_{in,t}\enspace.\] Let us prove that for every planar tree \(t\), the map \(\operatorname{Id}+G_{t}+D_{in,t}H_{t}+H_{t}D_{in,t}\) is an isomorphism of the \(t\) part of \(\mathsf{gr}^{m}\mathsf{gr}^{n}\mathsf{R}_{2}\). Again we denote \(l(t)\) the number of leaves of \(t\) and \(a\) the number of leaves of \(t\) that are before the leftest top node or on top of its node; thus \(l(t)=a+b\) where \(b\) is the number of leaves after those of the leftest top node. The case \(a=0\) is clear as then \(\operatorname{Id}+G_{t}+D_{in,t}H_{t}+H_{t}D_{in,t}=\operatorname{Id}\). Let us tackle the case \(a>0\). We can filtrate \(\mathsf{R}_{1}^{\otimes(t)}\) in a finite way as follows \[F_{0}\mathsf{R}_{1}^{\otimes(t)} =\overline{\mathsf{R}_{1}}\otimes\mathsf{R}_{1}^{l(t)-1}\] \[F_{1}\mathsf{R}_{1}^{\otimes(t)} =(F_{0}\mathsf{R}_{1}^{\otimes(t)})\oplus V\otimes\overline{ \mathsf{R}_{1}}\otimes\mathsf{R}_{1}^{l(t)-2}\] \[\dots\] \[F_{a}\mathsf{R}_{1}^{\otimes(t)} =F_{a-1}\mathsf{R}_{1}^{\otimes(t)}\oplus V^{\otimes a}\otimes \mathsf{R}_{1}^{l(t)-a}.\] The related graded object is \[\mathsf{gr}_{k}\mathsf{R}_{1}^{\otimes(t)} =V^{\otimes k}\otimes\overline{\mathsf{R}_{1}}\otimes\mathsf{R}_{ 1}^{l(t)-k-1},\quad 0\leq k\leq a-1\] \[\mathsf{gr}_{a} =V^{\otimes a}\otimes\mathsf{R}_{1}^{l(t)-a}.\] Such a filtration induces a filtration on the \(t\) part of of \(\mathsf{gr}^{-m}\mathsf{gr}^{n}\mathsf{R}_{2}\) that is preserved by \(D_{in,t}\), \(G_{t}\), \(H_{t}\). On the \(k\)-th layer of the associated graded object \[\mathsf{gr}_{k}(\operatorname{Id}+G_{t}+D_{in,t}H_{t}+H_{t}D_{in,t})=\left[ \operatorname{Id},\operatorname{Id}_{\mathsf{R}_{1}}^{\otimes k}\otimes g \mathcal{B}(\mathsf{R}_{1})\otimes\operatorname{Id}_{\mathsf{R}_{1}}^{\otimes(t )-k-1}\right]\enspace,\] if \(k<a\) and \[\mathsf{gr}_{a}(\operatorname{Id}+G_{t}+D_{in,t}H_{t}+H_{t}D_{in,t})= \operatorname{Id}\enspace,\] otherwise. In both case, it is an isomorphism. Thus \(\mathsf{gr}(\operatorname{Id}+G_{t}+D_{in,t}H_{t}+H_{t}D_{in,t})\) is an isomorphism and so is \(\operatorname{Id}+G_{t}+D_{in,t}H_{t}+H_{t}D_{in,t}\). Subsequently \(\mathcal{B}\) is an isomorphism on \(\mathsf{gr}^{-m}\mathsf{gr}^{n}\mathsf{R}_{2}\) for every \(n\in\mathbb{N}\), \(m\leq n\). Therefore \(\mathcal{B}\) is an isomorphism of \(\mathsf{R}_{2}\) ### Cofibrant objects Cofibrant complete curved \(\mathcal{C}\)-algebras admit a simple description: they are exactly those whose underlying graded \(\mathcal{C}\)-algebra is free. **Lemma 56**.: _The functor \(\widehat{\Omega}_{\mathcal{C}}\) from dg \(\Omega\mathcal{C}\)-coalgebras to curved \(\mathcal{C}\)-algebras commutes with finite cosifted limits._ Proof.: This follows from the fact that cosifted limits in dg \(\Omega\mathcal{C}\)-coalgebras and in curved \(\mathcal{C}\)-algebras are computed in graded \(\Bbbk\)-modules and that the endofunctor of graded \(\Bbbk\)-modules \((-)^{\mathcal{C}}\) preserves cosifted limits. 
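For instance, since coreflexive equalizers are finite cosifted limits, Lemma 56 provides a canonical isomorphism \[\widehat{\Omega}_{\mathcal{C}}\Big(\operatorname{eq}\big(V\rightrightarrows V^{\prime}\big)\Big)\cong\operatorname{eq}\big(\widehat{\Omega}_{\mathcal{C}}V\rightrightarrows\widehat{\Omega}_{\mathcal{C}}V^{\prime}\big)\] for every coreflexive pair of morphisms of dg \(\Omega\mathcal{C}\)-coalgebras \(V\rightrightarrows V^{\prime}\), the right-hand equalizer being computed in curved \(\mathcal{C}\)-algebras. This is precisely the instance of the lemma used in the proof of Proposition 58 below.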
Remark 56.: From the above fact one can deduce that the adjunction \(\widehat{\Omega}_{\mathcal{C}}\dashdot\widehat{\mathbb{B}}_{\mathcal{C}}\) is bimonadic. See the upcoming article [10] for more details. **Proposition 58**.: _Let \(\Lambda\) be a qp-complete curved \(\mathcal{C}\)-algebra. The following assertions are equivalent._ 1. \(\Lambda\) _is cofibrant;_ 2. \(\Lambda\) _is in the essential image of the functor_ \(\widehat{\Omega}_{\mathcal{C}}\)_;_ 3. \(\Lambda\) _is a quasi-free_ \(\mathcal{C}\)_-algebra that is, its underlying graded algebra is free._ Proof.: For every free graded \(\mathcal{C}\)-algebra \(\Lambda\cong X^{\mathcal{C}}\), the data of a degree \(-1\) derivation is equivalent to the data of a degree \(-1\) map \(X\longrightarrow X^{\mathcal{C}}\) by restricting it to the generators \(X\). In turn, any map \(X\longrightarrow X^{\mathcal{C}}\) is equivalent to a degree \(-1\) map \(\mathcal{C}\longrightarrow\mathrm{coEnd}(X)\). Therefore the following data are equivalent. 1. A degree \(-1\) derivation \(d\) of \(\Lambda\). 2. A morphism of graded operads \(\varphi_{d}:\Omega\mathcal{C}\longrightarrow\mathrm{coEnd}(X)\), which induces a graded \(\Omega\mathcal{C}\)-coalgebra structure on \(X\). A straightforward computation then shows that the derivation \(d\) is _curved_, that is, \(d^{2}=-\gamma_{\Lambda}(\Lambda^{\theta})\), if and only if \(\varphi_{d}\) is a morphism of dg operads. This proves the equivalence between assertion 2 and assertion 3. Furthermore, we already know that a qp-complete curved \(\mathcal{C}\)-algebra that is in the essential image of the functor \(\widehat{\Omega}_{\mathcal{C}}\) is cofibrant. Now, let \(\Lambda\) be a cofibrant object in qp-complete curved \(\mathcal{C}\)-algebras. Let us prove that it is quasi-free. Since \(\Lambda\) is cofibrant, the counit map \(\epsilon_{\mathcal{B}}:\widehat{\Omega}_{\mathcal{C}}\widehat{\mathbb{B}}_{ \mathcal{C}}\Lambda\longrightarrow\Lambda\) which is an acyclic fibration has a section \(s\). Let \(V\) be the equaliser of the coreflexive pair of maps in the category of dg \(\Omega\mathcal{C}\)-coalgebras. By Lemma 56, the limit of the diagram of curved \(\mathcal{C}\)-algebras is \(\widehat{\Omega}_{\mathcal{C}}V\). It is also clear that this limit is \(\Lambda\). So one has a canonical isomorphism \(\Lambda\cong\widehat{\Omega}_{\mathcal{C}}V\) ### Infinity morphisms The notion of \(\infty\)-morphism extends the usual notion of morphisms of dg \(\Omega\mathcal{C}\)-coalgebras. Their main advantage is that \(\infty\)-quasi-isomorphisms are invertible, therefore one can replace a zig-zag of quasi-isomorphisms of dg \(\Omega\mathcal{C}\)-coalgebras with two inverse \(\infty\)-quasi-isomorphism. This provides a powerful tool to describe the homotopy category of dg \(\Omega\mathcal{C}\)-coalgebras. Recall that for every dg \(\Omega\mathcal{C}\)-coalgebra \(V\), the unit \(\eta_{V}:V\longrightarrow\widehat{\mathbb{B}}_{\mathbb{C}}\widehat{\Omega}_{ \mathcal{C}}V\) admits a left-inverse \(\xi_{V}:\widehat{\mathbb{B}}_{\mathbb{C}}\widehat{\Omega}_{\mathcal{C}}V \longrightarrow V\) in the category of dg modules. Let \(K\) be the kernel of this map, we get a decomposition of dg modules \[\widehat{\mathbb{B}}_{\mathbb{C}}\widehat{\Omega}_{\mathcal{C}}V=V\oplus K.\] **Definition 79** (\(\infty\)-morphism).: Let \(V,V^{\prime}\) be two dg \(\Omega\mathcal{C}\)-coalgebras. An \(\infty\)_-morphism_\(f:V\rightsquigarrow V^{\prime}\) amounts to the data of, equivalently: 1. 
a morphism \(f:V\longrightarrow\widehat{\mathbb{B}}_{\mathbb{C}}\widehat{\Omega}_{\mathcal{ C}}V^{\prime}\) of dg \(\Omega\mathcal{C}\)-coalgebras, 2. a morphism \(f^{\dagger}:\widehat{\Omega}_{\mathcal{C}}V\longrightarrow\widehat{\Omega}_{ \mathcal{C}}V^{\prime}\) of qp-complete curved \(\mathcal{C}\)-algebras. **Linear part.** Let \(f:V\rightsquigarrow V^{\prime}\) be an \(\infty\)-morphism of dg \(\Omega\mathcal{C}\)-coalgebras. Its _linear part_\(f_{\mathrm{dg}}\) is the morphism of dg modules given by the following composition: Let us denote \(\epsilon:\mathcal{C}\longrightarrow\mathcal{I}\) and \(\mu:\mathcal{I}\longrightarrow\mathcal{C}\) the counit and the coaugmentation of the quasi-planar conilpotent curved cooperad \(\mathcal{C}\), respectively. The linear part of \(f_{\mathrm{dg}}:V\rightsquigarrow V^{\prime}\) is equivalently given by \[V\xrightarrow{(V^{\prime})}\widehat{\Omega}_{\mathcal{C}}V\xrightarrow{f^{ \dagger}}\widehat{\Omega}_{\mathcal{C}}V^{\prime}\xrightarrow{(V^{\prime})^{ \mu}}V^{\prime}\.\] **Homotopy part.** Let \(f:V\rightsquigarrow V^{\prime}\) be an \(\infty\)-morphism of dg \(\Omega\mathcal{C}\)-coalgebras. Its _homotopy part_\(f_{\mathrm{h}}\) is the morphism of dg modules given by the composition where \(K\) is the kernel of the map \(\xi_{V}:\widehat{\mathbb{B}}_{\mathbb{C}}\widehat{\Omega}_{\mathcal{C}}V \longrightarrow V\). Equivalently, it is given by the composition where \(L\) is the kernel of the map \((\mathrm{Id})^{\mu}:\widehat{\Omega}_{\mathcal{C}}V\rightsquigarrow V\) induced by the coaugmentation of \(\mathcal{C}\). **Definition 80** (\(\infty\)-quasi-isomorphism).: An \(\infty\)-morphism \(f:V\rightsquigarrow V^{\prime}\) is called an \(\infty\)_-quasi-isomorphism_ if its linear part \(f_{\mathrm{dg}}\) is a quasi-isomorphism of dg modules. **Definition 81** (\(\infty\)-isotopy).: An \(\infty\)-morphism \(f:V\rightsquigarrow V\) is called an \(\infty\)_-isotopy_ if its linear part \(f_{\mathrm{dg}}\) is an identity map of \(V\). **Lemma 57**.: _Let \(f^{\dagger}:\widehat{\Omega}_{\mathcal{C}}V\longrightarrow\widehat{\Omega}_{ \mathcal{C}}V^{\prime}\) be an \(\infty\)-morphism. If \(f_{\mathrm{dg}}\) is a cofibration, then \(f^{\dagger}\) is the composition of a strict cofibration, that is the image through \(\widehat{\Omega}_{\mathcal{C}}\) of a cofibration between dg \(\Omega\mathcal{C}\)-coalgebras, followed by an \(\infty\)-isotopy._ Proof.: Let \(p:V^{\prime}\longrightarrow V\) be a left inverse of \(f_{\mathrm{dg}}\) in the category of graded \(\Bbbk\)-modules that is, a map so that \(\rho f_{\mathrm{dg}}=\mathrm{Id}\). We consider the following degree \(0\) map \(\tau:V^{\prime}\longrightarrow(V^{\prime})^{\mathcal{C}}\), defined by \(\tau_{\mathrm{dg}}\coloneqq\mathrm{Id}_{V^{\prime}}\) and by It induces a morphism \(t\) of the graded \(\mathcal{C}\)-algebras where \(\Delta:\mathcal{C}\longrightarrow\mathcal{C}\circ\mathcal{C}\) is the decomposition of the cooperad \(\mathcal{C}\). Notice that \(t\) is an isomorphism since \(\mathrm{gr}^{\mathrm{dg}}(t)=\mathrm{Id}\). We have the equality of morphism of graded \(\mathcal{C}\)-algebras \[t(\mathcal{C}\circ f_{\mathrm{dg}})=f^{\dagger}.\] Let us denote \(D\) the derivation of \((V^{\prime})^{c}\) coming from the dg \(\Omega\mathcal{C}\)-coalgebra structure of \(V^{\prime}\) and let \[\bar{D}\coloneqq t^{-1}Dt.\] This is a derivation of \((V^{\prime})^{c}\) that makes it a complete curved \(\mathcal{C}\)-algebra. 
Thus it defines the structure of a dg \(\Omega\mathcal{C}\)-coalgebra on \(V^{\prime}\) such that \(t:V^{\prime}\rightsquigarrow V^{\prime}\) becomes a \(\infty\)-isotopy between \((V^{\prime},\bar{D})\) to \((V^{\prime},D)\) and thus \(\mathcal{C}\circ f_{\mathrm{dg}}\) becomes a strict cofibration from \(V\) to \((V^{\prime},\bar{D})\). **Proposition 59**.: _Let \(f:V\rightsquigarrow V^{\prime}\) be an \(\infty\)-morphism of dg \(\Omega\mathcal{C}\)-coalgebras. The morphism of QP-complete curved \(\mathcal{C}\)-algebras \(f^{\dagger}:\widehat{\Omega}_{\mathcal{C}}V\longrightarrow\widehat{\Omega}_{ \mathcal{C}}W\) is_ 1. _a weak-equivalence if and only if the dg part_ \(f_{\mathrm{dg}}\) _is a quasi-isomorphism;_ 2. _an isomorphism if and only if the dg part_ \(f_{\mathrm{dg}}\) _is an isomorphism;_ 3. _a fibration if and only if the dg part_ \(f_{\mathrm{dg}}\) _is a degree-wise epimorphism;_ 4. _a cofibration if and only if the dg part_ \(f_{\mathrm{dg}}\) _is a degree-wise injection._ Proof.: Let us prove these assertions. 1. Let us consider the following commutative diagram \(\widehat{\mathbb{B}}_{\mathcal{C}}\widehat{\Omega}_{\mathcal{C}}V\)\(\stackrel{{\simeq}}{{\longrightarrow}}\)\(V\)\(\widehat{\mathbb{B}}_{\mathcal{C}}\widehat{\Omega}_{\mathcal{C}}V^{\prime}\)\(\stackrel{{\simeq}}{{\longrightarrow}}\)\(V^{\prime}\). in the category of dg modules. The two horizontal maps, \(\xi_{V}\) and \(\xi_{V^{\prime}}\), are quasi-isomorphisms. Thus, the left vertical map is a quasi-isomorphism if and only if the right vertical map is a quasi-isomorphism. 2. If \(f^{\dagger}\) is an isomorphism, then \(f_{\mathrm{dg}}=gr_{0}^{\mathrm{dg}}(f^{\dagger})\) is an isomorphism. Conversely, if \(f_{\mathrm{dg}}\) is an isomorphism, then \[\mathrm{gr}_{n}^{\mathrm{op}}(f^{\dagger})=(f_{\mathrm{dg}})^{\mathrm{Id}_{n} ^{\mathrm{op}}(\xi_{V})}\] is an isomorphism. Hence, \(f^{\dagger}\) is an isomorphism. 3. Let us suppose that \(f_{\mathrm{dg}}\) is a degree-wise epimorphism. Then, it has a section \(g\) in the category of graded \(\Bbbk\)-modules. Let us consider the endomorphism \(h\) of graded \(\mathcal{C}\)-algebras \[(V^{\prime})^{c}\stackrel{{(g)^{\mathrm{dt}}}}{{\longrightarrow }}(V)^{c}\stackrel{{ f^{\dagger}}}{{\longrightarrow}}(V^{\prime})^{c}\.\] Its linear part is the identity of \(V^{\prime}\). The same arguments as those used to prove point (2) show that \(h\) is a graded isomorphism. In particular, it is a degree-wise epimorphism. So \(f^{\dagger}\) is a degree-wise epimorphism, hence a fibration. Conversely, suppose that \(f^{\dagger}\) is a fibration. Let us consider the same commutative square diagram shown in point (1). The right vertical map \(\widehat{\mathbb{B}}_{\mathcal{C}}(f^{\dagger})\) and the bottom horizontal map of this square are degree-wise epimorphisms. Thus the right vertical map \(f_{\mathrm{dg}}\) is also a degree-wise epimorphism. 4. One has a functor from dg modules to qp-complete curved \(\mathcal{C}\)-algebras that sends a dg module \(X\) to \(X\) equipped with the trivial complete curved \(\mathcal{C}\)-algebra structure. This structure is given by the zero structural map \(0:X^{\overline{c}}\longrightarrow X\). This functor sends acyclic fibrations of dg modules to filtered quasi-isomorphisms that are also degree-wise epimorphisms. They are in particular acyclic fibrations of complete curved \(\mathcal{C}\)-algebras. 
If \(f^{\dagger}\) is a cofibration, then it has the left lifting property with respect to all acyclic fibrations, and in particular with respect to such acyclic fibrations of dg modules. As a consequence of the fact that a square of qp-complete curved \(\mathcal{C}\)-algebras (where \(X,Y\) are chain complexes equipped with the zero structure of a qp-complete curved \(\mathcal{C}\)-algebra) amounts to the data of a square of chain complexes, the map \(f_{\mathrm{dg}}\) has the left lifting property with respect to every acyclic fibration of dg modules. So it is a cofibration of dg modules, that is, a degree-wise injection. Conversely, if \(f_{\mathrm{dg}}\) is a cofibration, then \(f^{\dagger}\) is a cofibration as a direct consequence of Lemma 57. **Proposition 60**.: _Let \(V\) and \(V^{\prime}\) be two dg \(\Omega\mathcal{C}\)-coalgebras. There exists a zig-zag of quasi-isomorphisms of dg \(\Omega\mathcal{C}\)-coalgebras_ \[V\xleftarrow{\ \sim\ }\cdot\xrightarrow{\ \sim\ }\cdots\xleftarrow{\ \sim\ }\cdot\xrightarrow{\ \sim\ }V^{\prime}\] _if and only if there exists an \(\infty\)-quasi-isomorphism \(V\rightsquigarrow V^{\prime}\)._ ### Homotopy transfer theorem In the remainder of this subsection, \(\mathcal{P}\) denotes a dg operad and \(M\) the graded \(\mathbb{S}\)-module such that \(\mathrm{B}\mathcal{P}=\mathbb{T}(sM)\) as a conilpotent graded cooperad, as recalled below. Let \(V\) be a dg module. Let us also consider a direct sum of dg modules \[V=X\oplus K\] where \(K\) is acyclic. Let us choose a contracting homotopy \(h\) of \(K\) that we extend to all of \(V\) by \(0\) on \(X\). Let us denote \(\pi_{X},\pi_{K}\) the projection endomorphisms of \(V\) onto respectively \(X\) and \(K\).
All these arrows satisfy the following equations: \[\left\{\begin{gathered}\pi_{X}\pi_{K}=\pi_{K}\pi_{X}=0;\\ \partial(h)=\pi_{K}=\operatorname{Id}_{V}-\pi_{X};\\ h\pi_{K}=\pi_{K}h=h;\\ h\pi_{X}=\pi_{X}h=0.\end{gathered}\right.\] Let us suppose that the dg module \((V,d_{V})\) is endowed with a dg \(\mathcal{P}\)-coalgebra structure \[\Delta_{V}:V\longrightarrow V^{\mathcal{P}}.\] Notice that it induces a dg \(\Omega\mathcal{B}\mathcal{P}\)-coalgebra structure on \(V\) by pulling back along the map \(\Omega\mathcal{B}\mathcal{P}\longrightarrow\mathcal{P}\). **Definition 82**.: Let \(D_{\delta}\) be the degree \(-1\) derivation on the graded \(\mathcal{B}\mathcal{P}\)-algebra \(V^{\mathcal{B}\mathcal{P}}\) that induced by the \(\Omega\mathcal{B}\mathcal{P}\)-coalgebra structure \(\Delta_{V}\) on the dg module \(V\), in the sense that \[(V^{\mathcal{B}\mathcal{P}},D_{\delta})\cong\widehat{\Omega}_{\mathcal{B} \mathcal{P}}V.\] Moreover, let \(\delta\) be the composition of \(D_{\delta}\) with the inclusion of \(V\) inside \(V^{\mathcal{B}\mathcal{P}}\). One can notice that the projection of \(\delta\) onto \(V\) is the differential \(d_{V}\) and that its projection onto \(V^{\overline{\mathcal{B}\mathcal{P}}}\) is given by \[V\xrightarrow{\Delta}V^{\mathcal{P}}\xrightarrow{-V^{s}}V^{s\mathcal{P}} \hookrightarrow V^{sM}\hookrightarrow V^{\overline{\Gamma}(sM)}=V^{\overline{ \mathcal{B}\mathcal{P}}}\,\] where the second map \(-V^{s}\) sends a sequence to a sequence \[(\phi_{n})_{n\in\mathbb{N}}\in\prod_{n}[\mathcal{P}(n),V^{\otimes n}]^{ \mathbb{S}_{n}}\mapsto(\psi_{n})_{n\in\mathbb{N}}\in\prod_{n}[s\mathcal{P}(n),V^{\otimes n}]^{\mathbb{S}_{n}}\,\] and where each \(\psi_{n}\) is defined as \[s\mathcal{P}(n)\xrightarrow{s\times n\to-x}\mathcal{P}(n)\xrightarrow{\phi_{ n}}V^{\otimes n}.\] The degree \(0\) map \[V\xrightarrow{-h}V\xrightarrow{\overline{\mathfrak{s}}_{\leq 1}}V^{sM}\] yields a morphism of graded \(\mathbb{S}\)-modules \(sM\longrightarrow\operatorname{coEnd}(V)\). Therefore there is a morphism of graded operads \(\mathbb{T}(sM)\longrightarrow\operatorname{coEnd}(V)\), which in turn yields a degree \(0\) map of graded \(\mathbb{k}\)-modules \[\phi:V\longrightarrow V^{\mathbb{T}sM}=V^{\mathcal{B}\mathcal{P}}.\] Let \(f\) be the related morphism of qp-complete graded \(\mathcal{B}\mathcal{P}\)-algebras \[f:V^{\mathcal{B}\mathcal{P}}\xrightarrow{\phi^{\mathcal{B}\mathcal{P}}}(V^{ \mathcal{B}\mathcal{P}})^{\mathcal{B}\mathcal{P}}\hookrightarrow V^{\mathcal{B} \mathcal{P}\circ\mathcal{B}\mathcal{P}}\longrightarrow V^{\mathcal{B} \mathcal{P}}.\] One can notice that the composition of \(\phi\) with the projection onto \(V\) is the identity and that \[\overline{\phi}=f\overline{\phi}_{\leq 1}.\] **Definition 83**.: Let \(\chi\) be the degree \(-1\) morphism from \(V\) to \(V^{\mathbb{T}(sM)}\) defined as \(\chi\coloneqq f\overline{\delta}\). Notice that \(\phi=\operatorname{Id}_{V}-\chi h\). Recall that the conilpotent curved cooperad \(\mathcal{B}\mathcal{P}\) is given by the conilpotent graded cooperad \(\mathbb{T}(sM)\) endowed with the following pre-differentials: 1. the pre-differential \(d_{\gamma}\) which is induced by the operad structure of \(\mathcal{P}\); 2. the pre-differential \(d_{P}\) which is induced by the differential of \(\mathcal{P}\); 3. the pre-differential \(d_{u}\), which maps \(s^{2}\mathcal{I}\) to the unit of \(\mathcal{P}\). We will denote the later two pre-differentials by \(d_{sM}\), as they are defined on the generators of \(\mathcal{B}\mathcal{P}\). 
We refer to Subsection 2.1 for more details. Notation. We denote by \(d_{\gamma}^{\text{root}}\) the degree \(-1\) endomorphism of \(\overline{\Gamma}(sM)\hookrightarrow\text{B}\mathcal{P}\) given by applying \(d_{\gamma}\) only to inner edges of the trees of \(\overline{\Gamma}(sM)\) that touch the root node. We denote by \(d_{\gamma}^{\text{rroot}}\) the other component of \(d_{\gamma}\), given by applying \(d_{\gamma}\) to all the other edges that do not touch the root. Similarly, we denote by \(d_{sM}^{\text{root}}\) and \(d_{sM}^{\text{rroot}}\) the degree \(-1\) endomorphisms given by applying \(d_{sM}\) to the root node of a tree (resp. all the other nodes). **Lemma 58**.: _The following diagram in the category of graded \(\Bbbk\)-modules_ _is commutative._ Proof.: A straightforward check shows these two maps from \(V\) to \(V^{\overline{\Gamma}_{\leq 2}(sM)}\) correspond to the same \(\mathbb{S}_{n}\)-equivariant maps from \(V\otimes\overline{\Gamma}_{\leq 2}(sM)(n)\) to \(V^{\otimes n}\). **Lemma 59**.: _The map \(f\delta=f\delta_{\leq 1}:V\longrightarrow V^{\Gamma(sM)}\) is equal to \((\phi\,d_{\gamma})+\chi\)._ Proof.: This follows from the equation \(\delta=\delta_{\leq 1}=d_{\gamma}+\overline{\delta}_{\leq 1}\). **Definition 84**.: Let \(\zeta\) be the degree \(-1\) map from \(V\) to \(V^{\mathcal{B}\mathcal{P}}\) whose projection onto to \(V\cong V^{\mathcal{I}}\) is \(d_{\gamma}\) and whose projection onto \(V^{\overline{\mathcal{B}\mathcal{P}}}\) is the sum of the two maps \[V\xrightarrow{\pi_{\times}}V\xrightarrow{\chi}V^{\overline{ \mathcal{B}\mathcal{P}}};\] \[V\xrightarrow{h}V\xrightarrow{V^{\Psi}}V^{sM}\xrightarrow{f_{ \times}}V^{\overline{\Gamma}(sM)}.\] Moreover, let \(D_{\zeta}\) be the unique degree \(-1\) derivation on the graded \(\text{B}\mathcal{P}\)-algebra \(V^{\text{B}\mathcal{P}}\) whose restriction to \(V\) is \(\zeta\). **Lemma 60**.: _The degree \(-1\) map \(f\delta-\zeta:V\longrightarrow V^{\Gamma(sM)}\) is equal to the sum of the two maps_ \[V\xrightarrow{\overline{\phi}_{\leq 1}}V^{sM}\xrightarrow{-V^{ \epsilon_{sM}}}V^{sM}\xrightarrow{f}V^{\overline{\Gamma}(sM)}\] \[V\xrightarrow{\overline{\phi}_{\leq 1}}V^{sM}\xrightarrow{ \shuffle(\operatorname{Id},d_{\gamma})^{sM}}V^{sM}\xrightarrow{f_{\times}}V^{ \overline{\Gamma}(sM)}\] _where \(d_{sM}\) denotes the restriction of the coderivation of \(\text{B}\mathcal{P}\) to \(sM\)._ Proof.: The projection of the two maps onto \(V\) are both equal to \(0\). Let us prove that their projection onto \(V^{\overline{\Gamma}(sM)}\) are equal. 
Let us first notice that \[\overline{f}\phi_{\nu}=\overline{\phi}d_{\nu}=\overline{f}\overline{\phi}_{ \leq 1}d_{\nu}=-\overline{f}\overline{\delta}_{\leq 1}hd_{\nu}\] Then, the first map rewrites as \[\overline{f}\delta-\overline{\zeta} =\overline{f}(d_{\nu}+\overline{\delta}_{\leq 1}-\overline{ \delta}_{\leq 1}\pi_{\chi}-V^{\theta}h)\] \[=\overline{f}\overline{\delta}_{\leq 1}(-hd_{\nu}+\operatorname{Id} -\pi_{\chi})-\overline{f}V^{\theta}h\] \[=\overline{f}(\overline{\delta}_{\leq 1}d_{\nu}h-V^{\theta}h).\] Noticing that \[\overline{\delta}_{\leq 1}d_{\nu}=\overline{(D_{\delta})}_{\leq 1}\delta- \left(\shuffle(\operatorname{Id},d_{\nu})^{sM}-V^{d_{sM}}\right)\overline{ \delta}_{\leq 1}=V^{\theta}-\shuffle(\operatorname{Id},d_{\nu})^{sM}\overline{ \delta}_{\leq 1}+V^{d_{sM}}\overline{\delta}_{\leq 1}\] we get \[\overline{f}\delta-\overline{\zeta} =\overline{f}(V^{\theta}-\shuffle(\operatorname{Id},d_{\nu})^{ sM}\overline{\delta}_{\leq 1}+V^{d_{sM}}\overline{\delta}_{\leq 1}-V^{\theta})h\] \[=\overline{f}(\shuffle(\operatorname{Id},d_{\nu})^{sM}\overline{ \phi}_{\leq 1}-V^{d_{sM}}\overline{\phi}_{\leq 1}).\] **Proposition 61**.: _One has an equality of degree \(-1\) maps from \(V\) to \(V^{\text{BP}}\)_ \[D_{\zeta}\phi=f\delta.\] _Thus \(D_{\zeta}f=fD_{\delta}\)._ Proof.: Let us prove the result on the height of the trees that make \(\text{BP}=\mathbb{T}(sM)\). More precisely, let us prove that for every natural integer \(n\) \[D_{\zeta,\leq n}\phi=f_{\leq n}\delta.\] First, for \(n=0\) \[D_{\zeta,\leq 0}\phi=d_{V}=f_{\leq 0}\delta\] Let us assume that the equality is verifies for some natural integer \(n\). The map \(\overline{D_{\zeta,\leq n+1}}\phi\) is given on larger trees \(V^{\overline{\zeta}_{\leq n+1}(sM)}\) as the following sum of maps \[V \xrightarrow{\phi_{\leq 0}=\text{id}_{V}}\vee\xrightarrow{ \overline{\zeta}_{\leq n+1}}V^{\overline{\zeta}_{\leq n+1}(sM)}\,\] \[V \xrightarrow{\overline{\phi}}V^{\overline{\Gamma}(sM)} \xrightarrow{\omega(\text{id}_{V},\zeta)^{\mathbb{T}(sM)}}(V^{ \overline{\Gamma}(sM)})^{\mathbb{T}(sM)}\to V^{\overline{\Gamma}_{\leq n+1}( sM)}\,\] \[V \xrightarrow{\overline{\phi}}V^{\overline{\Gamma}(sM)} \xrightarrow{-V^{\text{detect}}}V^{\overline{\Gamma}(sM)}\to V^{ \overline{\Gamma}_{\leq n+1}(sM)}\,\] \[V \xrightarrow{\overline{\phi}}V^{\overline{\Gamma}(sM)} \xrightarrow{-V^{\text{detect}}}V^{\overline{\Gamma}(sM)}\to V^{ \overline{\Gamma}_{\leq n+1}(sM)}\,\] \[V \xrightarrow{\overline{\phi}}V^{\overline{\Gamma}(sM)} \xrightarrow{-V^{\text{detect}}}V^{\overline{\Gamma}(sM)}\to V^{ \overline{\Gamma}_{\leq n+1}(sM)}\,\] \[V \xrightarrow{\overline{\phi}}V^{\overline{\Gamma}(sM)} \xrightarrow{-V^{\text{detect}}}V^{\overline{\Gamma}(sM)}\to V^{ \overline{\Gamma}_{\leq n+1}(sM)}\,\] \[V \xrightarrow{\overline{\phi}}V^{\overline{\Gamma}(sM)} \xrightarrow{-V^{\text{detect}}}V^{\overline{\Gamma}(sM)}\to V^{ \overline{\Gamma}_{\leq n+1}(sM)}\,\] where the four last map are actually equal to the composition \[V \xrightarrow{\overline{\phi}}V^{\overline{\text{BP}}}\xrightarrow{-V^{ \text{BP}}}V^{\overline{\Gamma}(sM)}\to V^{\overline{\Gamma}_{\leq n+1}(sM)}\.\] We can the notice that the first map is just \(\overline{\zeta}_{\leq n+1}\) and that the sum of the second map, the fourth map and the sixth map is the composition \[V \xrightarrow{\overline{\delta}_{\leq 1}}V^{sM} \xrightarrow{\omega(\phi_{\leq n},D_{\zeta,\leq n}\phi)^{\text{JM}}}(V ^{\overline{\Gamma}_{\leq n}(sM)})^{\text{JM}}.\] By the induction hypothesis, \(D_{\zeta,\leq n}\phi=f_{\leq n}\delta\) 
and by Lemma 59\(f\delta=\phi d_{V}+\mathbf{\chi}\). So this composition is equal to the composition \[V \xrightarrow{\overline{\delta}_{\leq 1}}V^{sM} \xrightarrow{\omega(\phi_{\leq n},\phi_{\leq n}d_{V}+\mathbf{\chi}_{ \leq n})^{\text{JM}}}(V^{\overline{\Gamma}_{\leq n}(sM)})^{\text{JM}}.\] using Lemma 58, its sum with the fifth map is \[V \xrightarrow{\overline{\delta}_{\leq 1}}V^{sM} \xrightarrow{\omega(\phi_{\leq n},\phi_{\leq n}d_{V})^{\text{JM}}}(V ^{\overline{\Gamma}_{\leq n}(sM)})^{\text{JM}}.\] To conclude Lemma 60 tells that the sum of all the six maps is equal to \(\overline{f}_{\leq n+1}\delta\). **Proposition 62**.: _The derivation \(D_{\zeta}\) makes \(V^{\text{BP}}\) a qp-complete curved \(\text{BP}\)-coalgebra._ Proof.: Since \(D_{\zeta}f=fD_{\delta}f\) and since \(f\) is an isomorphism (by a standard filtration argument): \[D_{\zeta}=fD_{\delta}f^{-1}.\] Thus, the derivation \(D_{\zeta}\) makes \(V^{\text{BP}}\) a curved \(\text{BP}\)-algebra because so does the derivation \(D_{\delta}\). **Proposition 63**.: _The derivation \(D_{\zeta}\) projects onto the quotient graded \(\text{BP}\)-algebra \(X^{\text{BP}}\) in the sense that there exists a (necessarily unique) derivation on \(X^{\text{BP}}\), also denoted \(D_{\zeta}\), such that the projection map_ \[V^{\text{BP}}\twoheadrightarrow X^{\text{BP}}\] _commutes with the derivations. In particular, \((V^{\text{BP}},D_{\zeta})\) is a curved \(\text{BP}\)-algebra._ Proof.: Actually, \(X^{\mathsf{BP}}\) is the quotient/kernel in \(\mathsf{BP}\)-algebras of the idempotent endomorphism \(\pi_{X}^{\mathsf{BP}}\) on \(V^{\mathsf{BP}}\). One can notice that the restriction to \(V\) of \(\pi_{X}^{\mathsf{BP}}\)\(D_{\zeta}\)\(\pi_{X}^{\mathsf{BP}}\) and \(\pi_{X}^{\mathsf{BP}}D_{\zeta}\) are equal: \[\pi_{X}^{\mathsf{BP}}\ \zeta\ \pi_{X}=\pi_{X}^{\mathsf{BP}}\ \zeta.\] Thus \[\pi_{X}^{\mathsf{BP}}\ D_{\zeta}\ \pi_{X}^{\mathsf{BP}}=\pi_{X}^{\mathsf{BP}} \ D_{\zeta}\] which proves the result. To conclude, we have a composition of morphisms of qp-complete curved \(\mathsf{BP}\)-algebras \[(V^{\mathsf{BP}},D_{\delta})\xrightarrow{f_{i}}(V^{\mathsf{BP}},D_{\zeta}) \twoheadrightarrow(X^{\mathsf{BP}},D_{\zeta}).\] #### 7.4.2. The cooperad version of the homotopy transfer theorem for coalgebras **Theorem 12**.: _Let \(p:V\to X\) be an acyclic fibration of dg modules and let \(\Delta_{V}:V\longrightarrow V^{\mathsf{AC}}\) be a dg \(\Omega\mathcal{C}\)-coalgebra structure on \(V\). There exists another dg \(\Omega\mathcal{C}\)-coalgebra structure_ \[\zeta_{V}:V\longrightarrow V^{\Omega\mathcal{C}}\] _which projects onto \(X\), together with an \(\infty\)-isotopy_ \[(V,\Delta_{V})\rightsquigarrow(V,\zeta_{V})\] _of dg \(\Omega\mathcal{C}\)-coalgebras._ Proof.: Since \(p\) is an acylic fibration of dg modules, it has a section \(i\) and one can decompose \(V\) as \(X\oplus K\) where \(K\) is the kernel of \(p\). 
The paragraph just above gives us a diagram of qp-complete curved \(\mathsf{B}\Omega\mathcal{C}\)-algebras \[(V^{\mathsf{B}\Omega\mathcal{C}},D_{\delta})\xrightarrow{f_{i}}(V^{\mathsf{ B}\Omega\mathcal{C}},D_{\zeta})\twoheadrightarrow(X^{\mathsf{B}\Omega\mathcal{C}},D_{ \zeta}).\] Applying the left adjoint functor from qp-complete \(\mathsf{B}\Omega\mathcal{C}\)-algebras to qp-complete curved \(\mathcal{C}\)-algebra that results from the unit map \(\mathcal{C}\to\mathsf{B}\Omega\mathcal{C}\), we get diagram of curved \(\mathcal{C}\)-coalgebras \[(V^{\mathcal{C}},D_{\delta})\xrightarrow{f_{i}}(V^{\mathcal{C}},D_{\zeta}) \twoheadrightarrow(X^{\mathcal{C}},D_{\zeta}).\] In that context, \(D_{\zeta}\) is the derivation on \(V^{\mathcal{C}}\) that induces the expected dg \(\Omega\mathcal{C}\)-coalgebra structure \(\zeta_{V}\) on \(V\) and \(\vec{f}\) is the expected \(\infty\)-isotopy. #### 7.4.3. The homotopy transfer theorem for coalgebras **Theorem 13**.: _Let \(\mathcal{Q}\) be a cofibrant dg operad, let \(p:V\twoheadrightarrow X\) be an acyclic fibration of dg modules and let \(\Delta_{V}:V\longrightarrow V^{\mathcal{Q}}\) a dg \(\mathcal{Q}\)-coalgebra structure on \(V\). There exists a dg \(\mathcal{Q}\)-coalgebra structure \(\zeta_{X}\) on \(X\), together with a zig-zag of quasi-isomorphisms_ \[(V,\Delta_{V})\xleftrightarrow{\cdots}\xleftrightarrow{(X,\zeta_{X})}\] _of dg \(\mathcal{Q}\)-coalgebras. Furthermore, the maps in this zig-zag are homotopic to \(p\) in the model category of dg modules._ Proof.: Taking \(\mathcal{C}\) to be the quasi-planar conilpotent dg cooperad \(\mathsf{B}(\mathcal{Q}\otimes\mathcal{E})\), Theorem 12 yields a dg \(\Omega\mathcal{C}\)-coalgebra structure on \(X\) together with a zig-zag of quasi-isomorphisms of dg \(\Omega\mathcal{C}\)-coalgebras \[(V,\Delta_{V})\xleftrightarrow{\widehat{\mathsf{B}}}_{\widehat{\mathcal{C}}} \widehat{\Omega}_{\mathcal{C}}(X,\zeta_{X})\xleftrightarrow{(X,\zeta_{X})}.\] Moreover, the acyclic fibration of dg operads \(\Omega\mathcal{C}\twoheadrightarrow Q\) has a section since \(\mathcal{Q}\) is cofibrant. Applying the related left adjoint forgetful functor from dg \(\Omega\mathcal{C}\)-coalgebras to dg \(\mathcal{Q}\)-coalgebras yields the expected zig-zag. Remark 57.: This last result also follows from model-categorical arguments, as developed in [10]. ### Further localisations and divided powers operations Let \(\mathcal{Q}\) be an coadmissible dg operad and let \(\mathcal{C}\) be a quasi-planar conilpotent curved cooperad. Let us consider a morphism of dg operads \(f:\Omega\mathcal{C}\longrightarrow\mathcal{Q}\). We have two Quillen adjunctions Let us denote \(\widehat{\Omega}_{f}\) the composite left adjoint and \(\widehat{\mathbb{B}}_{f}\) the composite right adjoint. **Proposition 64**.: _There exists a combinatorial model structure on qp-complete curved \(\mathcal{C}\)-algebras, called the \(f\)-model structure, transferred from that of dg \(\mathcal{Q}\)-coalgebras, determined by the following sets of morphisms_ 1. _the set of_ \(f\)_-fibrations is given by morphisms_ \(g\) _such that_ \(\widehat{\mathbb{B}}_{f}(g)\) _is a fibration,_ 2. _the set of_ \(f\)_-weak-equivalences is given by morphisms_ \(g\) _such that_ \(\widehat{\mathbb{B}}_{f}(g)\) _is a weak equivalence._ 3. _the set of_ \(f\)_-cofibrations is determined by left-lifting property against all acyclic fibrations._ _Moreover, this is a right Bousfield localisation of the canonical model structure transferred from dg \(\Omega\mathcal{C}\)-coalgebras. 
Meaning that the identity functor of qp-complete curved \(\mathcal{C}\)-algebra, where at the source they are endowed with the canonical model structure, and at the target with the \(f\)-model structure, is a right Quillen functor._ Proof.: Any fibration and weak-equivalence in model structure transferred from dg \(\Omega\mathcal{C}\)-coalgebras is an \(f\)-fibration and an \(f\)-weak-equivalence. Hence, every object is fibrant and a natural path object is given by Proposition 56. This proves the existence of the transferred model structure. To prove that this is a right Bousfield localisation of that transferred from dg \(\Omega\mathcal{C}\)-coalgebras, it suffices to notice that fibrations are in particular degree-wise epimorphisms, in a similar way as shown in Propostion 52. ### Localizing at quasi-isomorphisms Let \(\mathcal{C}\) be a quasi-planar conilpotent _differential graded_ cooperad. The cobar construction \(\Omega\mathcal{C}\) is augmented since \(\mathcal{C}\) has zero curvature. Let us denote \(\nu:\Omega\mathcal{C}\longrightarrow\mathcal{I}\) the canonical morphism of dg operads given by the augmentation. We have the following adjunctions where the adjunction \(\nu^{*}\dashv\nu_{!}\) is in fact given by the primitive elements functor Prim (which is \(\nu_{!}\)) and by the trivial structure functor Triv (which is \(\nu^{*}\)). Notice that since \(\mathcal{C}\) has zero curvature, curved \(\mathcal{C}\)-algebras in pdg modules are precisely given by dg \(\mathcal{C}\)-algebras. **Proposition 65**.: _The set of \(\nu\)-weak-equivalences is precisely the set of quasi-isomorphisms of qp-complete dg \(\mathcal{C}\)-algebras._ Proof.: The composition Prim \(\widehat{\mathbb{B}}_{\mathcal{C}}\) is isomorphic to the forgetful functor from qp-complete dg \(\mathcal{C}\)-algebras to dg modules. **Corollary 10**.: _Let \(\mathcal{C}\) be a quasi-planar conilpotent dg cooperad. The set of weak-equivalences in the canonical model structure on dg \(\mathcal{C}\)-algebras is contained in the set of quasi-isomorphims._ Proof.: It suffices to apply Proposition 64 to the morphism of dg operads \(\nu:\Omega\mathcal{C}\longrightarrow\mathcal{I}\), combining it with Proposition 65. ### Divided power operations in the homotopical setting Let \(\mathcal{C}\) be a quasi-planar conilpotent dg cooperad. By Proposition 64, the category of qp-complete dg \(\mathcal{C}\)-coalgebras admits a model category structure where 1. the set of fibrations is given by degree-wise epimorphisms; 2. the set of weak-equivalences is given by quasi-isomorphisms; 3. the set of cofibrations is given by maps with left lifting property with respect to acyclic fibrations. Let \((\Lambda,\gamma_{\Lambda},d_{\Lambda})\) be a qp-complete dg \(\mathcal{C}\)-algebra. The structural map \[\gamma_{\Lambda}:\prod_{n\geq 0}[\mathcal{C}(n),\Lambda^{\otimes n}]^{\mathbb{S}_ {n}}\longrightarrow\Lambda\,\] comes from the invariants on the left-hand side, therefore divided power operations should appear. Nevertheless, since \(\mathcal{C}\) is quasi-planar, there is a natural isomorphism \[\prod_{n\geq 0}[\mathcal{C}(n),\Lambda^{\otimes n}]_{\mathbb{S}_{n}}\cong \prod_{n\geq 0}[\mathcal{C}(n),\Lambda^{\otimes n}]^{\mathbb{S}_{n}}\,\] of dg modules induced by the norm map (Proposition 2). Therefore no divided power operations appear at the algebraic level. These divided power operations do not disappear at the \(\infty\)-categorical level. 
The reason is that \(\mathcal{C}(n)\) is a quasi-free dg \(\Bbbk[\mathbb{S}_{n}]\)-module, which is furthermore _projective_ by Proposition 12. Therefore we have that \[\prod_{n\geq 0}[\mathcal{C}(n),\Lambda^{\otimes n}]_{h\mathbb{S}_{n}}\not\simeq\prod_{n\geq 0}[\mathcal{C}(n),\Lambda^{\otimes n}]_{\mathbb{S}_{n}}\cong\prod_{n\geq 0}[\mathcal{C}(n),\Lambda^{\otimes n}]^{\mathbb{S}_{n}}\simeq\prod_{n\geq 0}[\mathcal{C}(n),\Lambda^{\otimes n}]^{h\mathbb{S}_{n}}\,\] where on the upmost left-hand side we consider _homotopy coinvariants_ and on the upmost right-hand side we consider _homotopy invariants_. Indeed, the dg \(\Bbbk[\mathbb{S}_{n}]\)-module \([\mathcal{C}(n),\Lambda^{\otimes n}]\) is automatically cofibrant in the injective model structure by Proposition 3, since \(\mathcal{C}(n)\) is projective. This means that the \(\infty\)-category of qp-complete dg \(\mathcal{C}\)-algebras localized at quasi-isomorphisms behaves like an \(\infty\)-category of algebraic objects with _divided power operations_.

## 8. Linear duality

The two bar-cobar Quillen adjunctions constructed so far are intertwined by linear duality adjunctions: they form a commuting square of Quillen adjunctions called the duality square. This allows us to show that for any cofibrant dg operad \(\mathcal{P}\), the \(\infty\)-category of dg \(\mathcal{P}\)-algebras with degree-wise finite dimensional bounded below (resp. bounded above) homology and the \(\infty\)-category of dg \(\mathcal{P}\)-coalgebras with degree-wise finite dimensional bounded above (resp. bounded below) homology are equivalent. This section is based on the results of [22], which we extend to a positive characteristic setting. They play a key role in the companion paper [23] about formal moduli problems and will be extended into the context of mapping coalgebras in [23].

### Lifting linear duality

Linear duality lifts to (co)algebras over an operad and to (co)algebras over a cooperad, always sending types of coalgebras to types of algebras. The linear duality functor lifts to pdg \(\Bbbk\)-modules and to dg modules.

**The Sweedler dual functor.** The linear duality functor lifts to (co)algebras over an operad and admits an adjoint which generalizes the Sweedler dual functor of [10].

**Lemma 61**.: _Let \(\mathcal{P}\) be a dg operad. The linear dual functor lifts to a functor_ _between dg \(\mathcal{P}\)-coalgebras and dg \(\mathcal{P}\)-algebras._

Proof.: Let \(C\) be a dg \(\mathcal{P}\)-coalgebra. Any \(\mu\) in \(\mathcal{P}(n)\) gives a decomposition map \[\Delta_{\mu}:C\longrightarrow C^{\otimes n}\.\] Any such map induces the following composition map \[\gamma_{\mu}:(C^{*})^{\otimes n}\longrightarrow(C^{\otimes n})^{*}\xrightarrow{(\Delta_{\mu})^{*}}C^{*}\.\] It can be checked that the collection of \(\{\gamma_{\mu}\}\) induces a dg \(\mathcal{P}\)-algebra structure on \(C^{*}\), and that this defines a functor.

**Proposition 66**.: _Let \(\mathcal{P}\) be a dg operad. There is a contravariant adjunction_ _between dg \(\Omega\mathcal{C}\)-coalgebras and dg \(\Omega\mathcal{C}\)-algebras._

Proof.: This follows directly from the adjoint lifting theorems of Appendix A.1.

**Topological dual functor.** The linear dual functor also lifts to (co)algebras over a cooperad. It admits an adjoint, which can be thought of as a _topological dual_ type of functor.

**Lemma 62**.: _Let \(\mathcal{C}\) be a quasi-planar conilpotent curved cooperad. The linear dual functor lifts to a functor_ _between pdg \(\mathcal{C}\)-coalgebras and qp-complete pdg \(\mathcal{C}\)-algebras.
Moreover, such a functor sends curved pdg \(\mathcal{C}\)-coalgebras to qp-complete curved \(\mathcal{C}\)-algebras._

Proof.: Let \(W\) be a pdg \(\mathcal{C}\)-coalgebra. The following map \[\gamma_{W^{*}}:\prod_{n\geq 0}[\mathcal{C}(n),(W^{*})^{\otimes n}]^{\mathbb{S}_{n}}\longrightarrow\prod_{n\geq 0}[\mathcal{C}(n),(W^{\otimes n})^{*}]^{\mathbb{S}_{n}}\xrightarrow{(\Delta_{W})^{*}}W^{*}\] can be shown to induce a pdg \(\mathcal{C}\)-algebra structure on \(W^{*}\). Let us check that \(W^{*}\) is qp-complete. It follows from Corollary 5 that \(W\) can be written as the following colimit \[W\cong\operatorname*{colim}_{i\in\omega}F_{i}^{\mathrm{qp}}W\] where \(F_{i}^{\mathrm{qp}}W\) is the image of \(W\) by the idempotent comonad induced by the inclusion \(F_{i}^{\mathrm{qp}}\mathcal{C}\hookrightarrow\mathcal{C}\). Therefore \[W^{*}\cong\lim_{i\in\omega}\,(F_{i}^{\mathrm{qp}}W)^{*}\.\] If we show that \((F_{i}^{\mathrm{qp}}W)^{*}\) is qp-complete for every \(i\in\omega\), then it is clear that \(W^{*}\) is qp-complete. But this follows from the fact that since \(F_{i}^{\mathrm{qp}}W\) is a curved \(F_{i}^{\mathrm{qp}}\mathcal{C}\)-coalgebra, its linear dual \((F_{i}^{\mathrm{qp}}W)^{*}\) is naturally a curved \(F_{i}^{\mathrm{qp}}\mathcal{C}\)-algebra. Therefore it is qp-complete. Finally, the assertion about curved \(\mathcal{C}\)-coalgebras follows by direct inspection.

**Proposition 67**.: _Let \(\mathcal{C}\) be a quasi-planar conilpotent curved cooperad. There is an adjunction_ \[(-)^{*}\;:\;\text{curv }\mathcal{C}\text{-cog}\;\rightleftarrows\;\big(\text{qp-comp curv }\mathcal{C}\text{-alg}\big)^{\mathrm{op}}\;:\;(-)^{\circ}\] _between curved \(\mathcal{C}\)-coalgebras and qp-complete curved \(\mathcal{C}\)-algebras._

Proof.: It follows from the fact that, for any curved \(\mathcal{C}\)-coalgebra \(W\), there is a natural isomorphism \[(\Omega_{\alpha}W)^{\circ}\cong\widehat{\mathbb{B}}_{\alpha}W^{*}\,\] which comes from the natural isomorphism \[(\mathcal{P}\circ W)^{\circ}\cong L^{\mathcal{P}}(W^{*})\,\] at the level of graded \(\Bbbk\)-modules. This latter isomorphism is a formal consequence of the construction of \((-)^{\circ}\).

### Homotopical duality squares

Let us fix a quasi-planar conilpotent curved cooperad \(\mathcal{C}\). When we consider the duality square which corresponds to the canonical curved twisting morphism \(\iota:\mathcal{C}\longrightarrow\Omega\mathcal{C}\), this square can be promoted in the following way: all the adjunctions in the square are Quillen adjunctions.

**Proposition 69** (After [14, Theorem 2.22]).: _The following square of adjunctions_ _is made of Quillen adjunctions, when one considers the right/left transferred structure from dg modules on the left hand side, and where one considers the transferred structures along the bar-cobar adjunctions on the right hand side._

Proof.: We already know that the left vertical adjunction and the two horizontal adjunctions are all Quillen adjunctions.
The only thing left to check is that the adjunction is also a Quillen adjunction. This follows from the fact that the model structure on curv \(\mathcal{C}\)-alg\({}^{\text{op}}\) is transferred from that on dg \(\Omega\mathcal{C}\)-cog\({}^{\text{op}}\), and therefore it suffices to notice that the composite left adjoint functor is left Quillen. Remark 59.: Let \(\mathcal{P}\) be a cofibrant dg operad. There is an analogue homotopical duality square where the bar-cobar adjunctions are the quasi-planar bar-cobar adjunctions of Subsection 3.14. **Proposition 70**.: _Let \(\mathcal{P}\) be a cofibrant dg operad. The adjunction induced at the level of \(\infty\)-categories on the localisations of the model categories_ _restricts to an equivalence_ _between the full sub-\(\infty\)-category dg \(\mathcal{P}\)-cog\([\text{Q}.\text{iso}^{-1}]^{\text{f.d.}}_{\pm}\) of dg \(\mathcal{P}\)-cog\([\text{Q}.\text{iso}^{-1}]\) spanned by coalgebras with degree-wise finite dimensional and bounded above (resp. bounded below) homology and the full sub-\(\infty\)-category dg \(\mathcal{P}\)-alg\([\text{Q}.\text{iso}^{-1}]^{\text{f.d.}}_{\pm}\) of dg \(\mathcal{P}\)-alg\([\text{Q}.\text{iso}^{-1}]\) spanned by algebras with degree-wise finite dimensional and bounded below (resp. bounded above) homology._ Proof.: Since the weak-equivalence of cofibrant dg operads \(\Omega B(\mathcal{E}\otimes\mathcal{P})\xrightarrow{\ \ }\mathcal{P}\) yields Quillen equivalences between their respective the categories of algebras and coalgebras by Theorem 31, we can restrict to the case where \(\mathcal{P}=\Omega\mathcal{C}\) for \(\mathcal{C}\) a quasi-planar conilpotent curved cooperad, without any loss of generality. It suffices then to show that the two derived functors associated to the Quillen adjunction \((-)^{*}\dashrightarrow(-)^{\circ}\) interchange objects with degree-wise finite dimensional and bounded above homology with objects with degree-wise finite dimensional and bounded below homology. One the one hand, it is clear that the (derived) functor \((-)^{*}\) does so. On the other hand, let \(A\) be a dg \(\mathcal{P}\)-algebra with degree-wise finite dimensional and bounded below homology (the bounded above case is analogue). Let us first assume that \(A\) is degree-wise finite dimensional and bounded below. Let us denote dg \(\mathcal{P}\)-alg\({}_{+}^{\mathrm{f.d.}}\) and dg \(\mathcal{P}\)-cog\({}_{-}^{\mathrm{f.d.}}\) respectively the full subcategory of dg \(\mathcal{P}\)-algebras who are degree-wise finite dimensional and bounded below, and the full subcategory of dg \(\mathcal{P}\)-coalgebras spanned by objects who are degree-wise finite dimensional and bounded above. The linear duality functor from dg \(\mathcal{P}\)-coalgebras to dg \(\mathcal{P}\)-algebras restricts to an equivalence of categories \[\text{dg }\mathcal{P}\text{-cog}_{-}^{\mathrm{f.d.}}\xrightarrow{\ \ (-)^{*}\ }\text{dg } \mathcal{P}\text{-alg}_{+}^{\mathrm{f.d.}\mathrm{op}}\] whose pseudo-inverse is also a lifting \((-)^{*}\) of the linear duality functor of dg modules. 
The following square diagram is commutative This gives a sequence of natural isomorphisms \[(\Omega_{\mathcal{C}}B_{\mathcal{C}}A)^{\circ}\xrightarrow{\ \ \ \ \cong\ }\widehat{B}_{\mathcal{C}}((B_{\mathcal{C}}A)^{*})\xrightarrow{\ \ \ \cong\ }\widehat{B}_{\mathcal{C}}\widehat{\Omega}_{\mathcal{C}}(A^{*})\.\] Notice that \(\eta_{A^{*}}:A^{*}\xrightarrow{\ \ \ }\widehat{B}_{\mathcal{C}}\widehat{\Omega}_{ \mathcal{C}}A^{*}\) is a quasi-isomorphism of dg \(\mathcal{P}\)-coalgebras. Since \((\Omega_{\mathcal{C}}B_{\mathcal{C}}A)^{\circ}\) is weakly-equivalent to the value of the left derived functor of \((-)^{\circ}\) taken on \(A\), then the derived unit of adjunction is a quasi-isomorphism for any \(A\) which is degree-wise finite dimensional and bounded below. In the general case where \(A\) has degree-wise finite dimensional and bounded below homology, the homotopy transfer theorem endows the dg module \(\mathsf{H}(A)\) (with zero differential) with the structure of a dg \(\mathcal{P}\)-algebra and provides a zig-zag of weak-equivalences of dg \(\mathcal{P}\)-algebras relating \(A\) to \(\mathsf{H}(A)\). Thus the image of \(A\) through the derived functor of \((-)^{\circ}\) is equivalent to that of \(\mathsf{H}(A)\). ## Appendix A Adjoint lifting theorems, right and left transferred structures ### Adjoint lifting theorem The goal of this appendix is to give recollections on the adjoint lifting theorem. We mainly follow the work of P. T. Johnstone in [14]. Let us consider two categories \(\mathsf{C},\mathsf{D}\). Let \(M\) be a monad on \(\mathsf{C}\) and let \(N\) be a monad on \(\mathsf{D}\). Moreover, let us consider a commutative diagrams of functors where \(U^{N}\) and \(U^{M}\) are the monadic forgetful functors and where \(R\) is a right adjoint functor between the underlying categories. Notice that the commutativity of this diagram induces a natural transformation \(\lambda:NR\longrightarrow RM\) that satisfies some coherence conditions. We define the natural transformation \(\xi:MLN\longrightarrow ML\) as follows. First we consider the composition where \(\eta_{RL}:\operatorname{Id}\longrightarrow R\ L\) is the unit of adjunction and \(\epsilon_{LR}:L\ R\longrightarrow\operatorname{Id}\) the counit. Now simply notice that \(M\cong U_{M}\ F_{M}\), where \(F_{M}\) is the free \(M\)-algebra functor. Therefore by adjunction, we get a natural transformation \[\xi:F_{M}\ L\ N\longrightarrow F_{M}\ L\,\] by taking the transpose of \(\varphi\). **Theorem 14** (Adjoint lifting theorem).: _Let us suppose that \(R\) has a left adjoint \(L\). Then \(R_{m}\) has a left adjoint \(L_{m}\) if and only if for every \(N\)-algebra \((A,\gamma_{A})\), the reflexive pair_ \[F_{M}\ L\ N\left(A\right)\xrightarrow[F_{M}\ L\,\gamma_{A}]{\xi_{A}}F_{M}\ L \left(A\right)\,\] _has a coequaliser in \(\operatorname{Alg}_{\mathsf{C}}(M)\). If this is the case, then the image of the functor \(L_{M}\) on \(A\) is given by the above coequalizer._ Proof.: If such coequalisers exists, then the natural construction that sends \(A\) to this coequaliser can be checked to be the right adjoint of \(R_{m}\). Conversely, if \(R_{m}\) has a left adjoint \(L_{m}\), then the universal property satisfied by \(L_{m}(A)\) makes it the expected coequaliser. 
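To fix ideas, here is a standard special case of Theorem 14 (an illustration, not taken from the text above; the \(\Bbbk\)-algebras \(A\) and \(B\) below are generic placeholders). Take \(\mathsf{C}=\mathsf{D}\) to be the category of \(\Bbbk\)-modules, \(L=R=\operatorname{Id}\), and let the monads be \(N=A\otimes_{\Bbbk}-\) and \(M=B\otimes_{\Bbbk}-\) for a morphism of \(\Bbbk\)-algebras \(A\longrightarrow B\), so that \(R_{m}\) is restriction of scalars from \(B\)-modules to \(A\)-modules. For an \(A\)-module \(V\), the coequaliser of Theorem 14 is

\[L_{m}(V)\;\cong\;\operatorname{coeq}\Big(B\otimes_{\Bbbk}A\otimes_{\Bbbk}V\rightrightarrows B\otimes_{\Bbbk}V\Big)\;\cong\;B\otimes_{A}V\,\]

where one of the two maps uses the multiplication \(B\otimes_{\Bbbk}A\longrightarrow B\) and the other uses the action \(A\otimes_{\Bbbk}V\longrightarrow V\); the common section making the pair reflexive is induced by the unit of \(A\). One recovers extension of scalars, as expected.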
Now, let \(M,N\) be two monads on a category \(\mathsf{C}\) and let us consider a morphism of monads \(f:N\longrightarrow M\) which leads to a functor \(U_{f}:\operatorname{Alg}\left(M\right)\longrightarrow\operatorname{Alg} \left(N\right)\) above the identity functor of the category \(\mathsf{C}\). **Proposition 71**.: _Let us suppose that \(\mathsf{C}\) has reflexive coequalisers and these coequalisers are preserved by \(M\), thus also by the forgetful functor \(U_{M}:\operatorname{Alg}\left(M\right)\longrightarrow\mathsf{C}\). Let us also suppose that the map \(f(X):N(X)\to M(X)\) is an epimorphism for every object \(X\) in \(\mathsf{C}\). Then the functor \(U^{f}:\operatorname{Alg}\left(M\right)\longrightarrow\operatorname{Alg} \left(N\right)\) above \(\mathsf{C}\) has a left adjoint \(T_{f}\) and the following diagram_ _is a pushout diagram in \(\mathsf{C}\), for every \(N\)-algebra \((A,\gamma_{A})\)._ Proof.: We can notice that this pushout in \(\mathsf{C}\) is canonically isomorphic to the image the coequaliser of the reflexive pair of Theorem 14 by the forgetful functor \(U_{M}\), which creates and reflects reflexive coequilisers. ### Right and left transfers of model structures The goal of this appendix is to review the different results that allow one to transfer a model category structure along an adjunction. For the rest of this appendix, we consider an adjunction between presentable categories #### a.2.1. Right transfer Let us suppose that \(\mathsf{C}\) is endowed with a cofibrantly generated (thus combinatorial) model structure. The sets of maps in \(\mathsf{C}\) that form its model structure induce a sets of maps in \(\mathsf{D}\) via the adjunction \(L\dashv R\). **Definition 85** (Sets of maps in \(\mathsf{D}\)).: Let \(f:X\longrightarrow Y\) be a morphism in \(\mathsf{D}\). Let us call it * a _fibration_ if \(R(f)\) is a fibration; * a _weak-equivalence_ if \(R(f)\) is a weak equivalence; * an _acyclic fibration_ if \(R(f)\) is an acyclic fibration (equivalently if \(f\) is both a fibration and a weak-equivalence); * a _cofibration_ if it has the left lifting property with respect to acyclic fibrations; * an _acyclic cofibration_ if it is both a cofibration and a weak equivalence; * a _generating cofibration_ if it is the image through \(L\) of a generating cofibration of \(\mathsf{C}\); * a _generating acyclic cofibration_ if it is the image through \(L\) of a generating acyclic cofibration of \(\mathsf{C}\); * a _left fibration-lifting map_ is it has the left lifting property with respect to fibrations. We will refer to them as _the sets of maps in \(\mathsf{D}\) induced by the adjunction \(L\dashv R\)_. **Lemma 63**.: _The sets of maps in \(\mathsf{D}\) induced by the adjunction \(L\dashv R\) satisfy the following properties._ 1. _The weak-equivalences, the cofibrations and the fibrations are stable through composition and retracts. Furthermore, they contain all isomorphisms._ 2. _The weak-equivalences follows the 2-out-of-3 rule and the 2-out-of-6 rule._ 3. _Every commutative square in_ \(\mathsf{D}\)__ \[\tikzfig{height=1.5}{\includegraphics[height=1.5}]{images/2-out-of-3}\] _admits a lifting whenever_ 1. \(f\) _is a cofibration and_ \(g\) _is an acylic fibration,_ 2. _or_ \(f\) _is a left fibration-lifting map and_ \(g\) _is a fibration._ 4. _Every map_ \(f:X\longrightarrow Y\) _may be factored in a natural way as either_ 1. _the composition of a cofibration followed by an acyclic fibration,_ 2. 
_or the composition of a left fibration-lifting map followed by a fibration._ 5. _Cofibrations are retracts of transfinite compositions of pushouts of generating cofibrations._ 6. _Left fibration-lifting maps are retracts of transfinite compositions of pushouts of generating acyclic cofibrations._ Proof.: It many follows from a straightforward check and from the small object argument. **Theorem 15** (Right acyclicity condition, [10]).: _The following assertions about the sets of maps in \(\mathsf{D}\) induced by the adjunction \(L\dashv R\) are equivalent._ 1. _These maps define a combinatorial model category structure on_ \(\mathsf{D}\)_._ 2. _The set of left fibration-lifting maps of_ \(\mathsf{D}\) _is equal to the set of acyclic cofibrations._ 3. _The set of left fibration-lifting maps of_ \(\mathsf{D}\) _is contained in the set of acyclic cofibrations of_ \(\mathsf{D}\)_._ 4. _The set of left fibration-lifting maps of_ \(\mathsf{D}\) _is contained in the set of weak-equivalences of_ \(\mathsf{D}\)_._ Proof.: Clearly, (1) implies (2). Conversely, if (2) is satisfied, then by Lemma 63, the sets of maps in \(\mathsf{D}\) induced by the adjunction \(L\dashv R\) form a model category structure. Therefore (1) and (2) are equivalent. It is clear that (2) implies (3) and that (3) implies (4). Let us assume (4) and prove (2). Let \(f:U\to Y\) be an acylic cofibration. Let us decompose \(f:X\longrightarrow Y\) as a left fibration-lifting map \(a:X\longrightarrow U\) followed by a fibration \(b:U\longrightarrow Y\). By the 2-out-of-3 rule and since \(f\) and \(a\) are weak equivalences, \(b\) is an acyclic fibration. Thus the square has a lifting \(i\), which yields the following retract commutative diagram Since \(f\) is a retract of \(a\), it is a left fibration-lifting map. **Proposition 72** ([11]).: _Let us suppose that_ 1. _every object in_ \(\mathsf{D}\) _has a natural fibrant replacement functor, that is, there exists an endofunctor_ \(F\) _of_ \(\mathsf{D}\) _together with a natural transformation_ \(e:\mathsf{Id}\longrightarrow F\) _so that for every object_ \(X\)_,_ \(F(X)\) _is fibrant and the map_ \(X\longrightarrow F(X)\) _is a weak-equivalence;_ 2. _every fibrant object_ \(X\) _has a path object, that is, the diagonal map_ \(X\longrightarrow X\times X\) _can be factored by a weak-equivalence followed by a fibration._ _Then the sets of maps in \(\mathsf{D}\) induced by the adjunction \(L\dashv R\) endow \(\mathsf{D}\) with a model category structure._ Proof.: Let \(f:X\longrightarrow Y\) be a left fibration-lifting map. Its lifting property gives us a map \(p:Y\longrightarrow F(X)\) such that \(pf=e(X)\). Let \(P\) be a path object of \(Y\). It fits in the following commutative diagram This square has a lifting \(g:Y\longrightarrow P\). Since each of the two projections \(P\longrightarrow F(Y)\) are weak-equivalences and since the map \(e(Y):Y\longrightarrow F(Y)\) is a weak-equivalence, the 2-out-of-3 rule tells us that \(g\) and \(F(f)p\) are also weak equivalences. Then \(pf=e(X)\) is also a weak-equivalence. The 2-out-of-6 rule implies that the three maps \(f,p,F(f)\) are also weak-equivalences. We conclude by Theorem 15. #### a.2.2. Left transfer Let us suppose that \(\mathsf{D}\) is endowed with a cofibrantly generated model structure. The sets of maps in \(\mathsf{D}\) that form a model structure induce sets of maps in \(\mathsf{C}\) via the adjunction \(L\dashv R\). 
**Definition 86** (Sets of maps in \(\mathsf{C}\)).: Let \(f:X\longrightarrow Y\) be a morphism in \(\mathsf{C}\). Let us call it

* a _cofibration_ if \(L(f)\) is a cofibration;
* a _weak-equivalence_ if \(L(f)\) is a weak-equivalence;
* an _acyclic cofibration_ if \(L(f)\) is an acyclic cofibration (equivalently if \(f\) is both a cofibration and a weak-equivalence);
* a _fibration_ if it has the right lifting property with respect to acyclic cofibrations;
* an _acyclic fibration_ if it is both a fibration and a weak-equivalence;
* a _right cofibration-lifting map_ if it has the right lifting property with respect to cofibrations.

We will refer to them as _the sets of maps in \(\mathsf{C}\) induced by the adjunction \(L\dashv R\)_.

**Proposition 73** ([10]).: _There exists a small set \(\mathcal{I}\) of cofibrations of \(\mathsf{C}\) so that cofibrations are retracts of transfinite compositions of pushouts of maps in \(\mathcal{I}\). Similarly, there exists a small set \(\mathcal{J}\) of acyclic cofibrations of \(\mathsf{C}\) so that acyclic cofibrations are retracts of transfinite compositions of pushouts of maps in \(\mathcal{J}\)._

_We call \(\mathcal{I}\) and \(\mathcal{J}\) respectively the set of generating cofibrations and the set of generating acyclic cofibrations of \(\mathsf{C}\)._

**Lemma 64**.: _The sets of maps in \(\mathsf{C}\) induced by the adjunction \(L\dashv R\) satisfy the following properties._

1. _The weak-equivalences, the cofibrations and the fibrations are stable through composition and retracts, and they contain all isomorphisms._
2. _The weak-equivalences follow the 2-out-of-3 rule and the 2-out-of-6 rule._
3. _Every commutative square in_ \(\mathsf{C}\) _admits a lifting whenever_ (a) \(f\) _is an acyclic cofibration and_ \(g\) _is a fibration,_ (b) _or_ \(f\) _is a cofibration and_ \(g\) _is a right cofibration-lifting map._
This square has a lifting \(p\), which yields a retract commutative diagram exhibiting the map \(f\) as a retract of \(b\); hence \(f\) is a right cofibration-lifting map.

**Proposition 74** ([14]).: _Let us suppose that_

1. _every object in_ \(\mathsf{C}\) _has a natural cofibrant replacement functor, that is, there exists an endofunctor_ \(Q\) _of_ \(\mathsf{C}\) _together with a natural transformation_ \(c:Q\longrightarrow\operatorname{Id}\) _so that for every object_ \(X\)_,_ \(Q(X)\) _is cofibrant and the map_ \(Q(X)\longrightarrow X\) _is a weak-equivalence;_
2. _every cofibrant object_ \(X\) _has a cylinder object, that is, the codiagonal map_ \(X\sqcup X\longrightarrow X\) _may be factored by a cofibration followed by a weak-equivalence._

_Then the sets of maps in \(\mathsf{C}\) induced by the adjunction \(L\dashv R\) endow \(\mathsf{C}\) with a model category structure._

Proof.: This follows from arguments dual to those used to prove Proposition 72, using Theorem 16.
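As a concrete illustration of the right-transfer criterion of Proposition 72 (a minimal sketch, not discussed above; the dg algebra \(N^{*}(\Delta^{1})\) of normalized simplicial cochains on the \(1\)-simplex is an ingredient assumed for this illustration), consider the free–forgetful adjunction between dg modules and dg associative algebras over \(\Bbbk\). Every dg algebra is fibrant for the induced sets of maps, so \(F=\operatorname{Id}\) is a natural fibrant replacement, and for a dg algebra \(A\) the diagonal factors as

\[A\xrightarrow{\ \sim\ }A\otimes_{\Bbbk}N^{*}(\Delta^{1})\twoheadrightarrow A\times A\,\]

where the first map is induced by the unit \(\Bbbk\longrightarrow N^{*}(\Delta^{1})\), a quasi-isomorphism between bounded complexes of free \(\Bbbk\)-modules, hence still a quasi-isomorphism after tensoring with \(A\), and the second map is evaluation at the two vertices, which is degree-wise surjective. This provides path objects in every characteristic, so the criterion applies and the model structure transfers.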
2304.00610
Ruling Out Short Proofs of Unprovable Sentences is Hard
If no optimal propositional proof system exists, we (and independently Pudl\'ak) prove that ruling out length $t$ proofs of any unprovable sentence is hard. This mapping from unprovable to hard-to-prove sentences powerfully translates facts about noncomputability into complexity theory. For instance, because proving string $x$ is Kolmogorov random ($x{\in}R$) is typically impossible, it is typically hard to prove "no length $t$ proof shows $x{\in}R$", or tautologies encoding this. Therefore, a proof system with one family of hard tautologies has these densely in an enumeration of families. The assumption also implies that a natural language is $\textbf{NP}$-intermediate: with $R$ redefined to have a sparse complement, the complement of the language $\{\langle x,1^t\rangle|$ no length $t$ proof exists of $x{\in}R\}$ is also sparse. Efficiently ruling out length $t$ proofs of $x{\in}R$ might violate the constraint on using the fact of $x{\in}R$'s unprovability. We conjecture: any computable predicate on $R$ that might be used in if-then statements (or case-based proofs) does no better than branching at random, because $R$ appears random by any effective test. This constraint could also inhibit the usefulness in circuits and propositional proofs of NOT gates and cancellation -- needed to encode if-then statements. If $R$ defeats if-then logic, exhaustive search is necessary.
Hunter Monroe
2023-04-02T19:58:15Z
http://arxiv.org/abs/2304.00610v1
# Ruling Out Short Proofs of Unprovable Sentences is Hard ###### Abstract If no optimal propositional proof system exists, we (and independently Pudlak) prove that ruling out length \(t\) proofs of any unprovable sentence is hard. This mapping from unprovable to hard-to-prove sentences powerfully translates facts about noncomputability into complexity theory. For instance, because proving string \(x\) is Kolmogorov random (\(x{\in}R\)) is typically impossible, it is typically hard to prove "no length \(t\) proof shows \(x{\in}R\)", or tautologies encoding this. Therefore, a proof system with one family of hard tautologies has these densely in an enumeration of families. The assumption also implies that a natural language is **NP**-intermediate: with \(R\) redefined to have a sparse complement, the complement of the language \(\{\langle x,1^{t}\rangle|\) no length \(t\) proof exists of \(x{\in}R\}\) is also sparse. Efficiently ruling out length \(t\) proofs of \(x{\in}R\) might violate the constraint on using the fact of \(x{\in}R\)'s unprovability. We conjecture: any computable predicate on \(R\) that might be used in if-then statements (or case-based proofs) does no better than branching at random, because \(R\) appears random by any effective test. This constraint could also inhibit the usefulness in circuits and propositional proofs of NOT gates and cancellation--needed to encode if-then statements. If \(R\) defeats if-then logic, exhaustive search is necessary. ## 1 Introduction We prove a deep linkage between noncomputability and complexity under a widely believed conjecture--that there is no optimal propositional proof system for tautologies.1 That conjecture originated as an assertion that a noncomputability result also holds with a resource bound. Godel's Second Incompleteness Theorem states that no consistent sufficiently powerful theory can prove its own consistency. Pudlak[19] and Friedman independently formulated a feasible consistency conjecture: it is hard to rule out any length \(t\) proof in a theory of its own inconsistency.2 Krajicek and Pudlak[12] proved the lack of efficient proofs (in a weaker theory) of inconsistency is equivalent to the nonexistence of an optimal proof system, which remains a key conjecture in proof complexity theory.3 Footnote 1: This paper was prepared in honor of past and present faculty of Davidson College, including Hansford Epes, L. Richardson King, Benjamin Klein, and Clark Ross. Comments are appreciated from Pavel Pudlák and Bill Gasarch. The ideas in this paper and earlier versions have benefited from discussions with the following: Scott Aaronson, Eric Allender, Olaf Beyersdorff, Ilario Bonacina, Maria Luisa Bonet, Cristian Calude, Marco Carmosino, Yuval Filmus, Vijay Ganesh, Bill Gasarch, Valentina Harizonov, Pavel Hrubès, Rahul Ilango, Russell Impagliazzo, Valentine Kabanets, Mehmet Kayaalp, Yanyi Liu, Ian Mertz, Daniel Monroe, Igor Oliveira, Toniann Pitassi, Hanlin Ren, Rahul Santhanam, Till Tantau, Neil Thapen, Luca Trevisan, Avi Wigderson, Ryan Williams, Marius Zimand, and other participants in seminars at George Washington University and Davidson College, the Simons Institute 2023 Meta-Complexity Program, the Computational Complexity Conference 2022, the Workshop on Proof Complexity 2022, and the Conference on Complexity with a Human Face 2022. Remaining errors are my own. Footnote 2: See Pudlák[21] Section 6.4 and [22]. 
Pudlák[19] shows the initial conjecture was incorrect—a theory \(\mathcal{T}\) can efficiently prove that \(\mathcal{T}\) lacks a length \(t\) proof of ‘0=1’. The 1989 reformulation refers to the lack of efficient proofs in a weaker theory. See also Theorem 59 of Pudlák[21]. Footnote 3: See also Krajicek[11] Section 21.3.

We show: if it is possible to efficiently rule out length \(t\) proofs of some unprovable sentence \(\phi\), it is also possible to efficiently rule out a slightly shorter proof of inconsistency, which could be used in a length \(t\) proof of \(\phi\) by contradiction. This implies a powerful generalization--if it is hard to rule out length \(t\) proofs of inconsistency, it is hard to rule out length \(t\) proofs of any unprovable sentence. This in turn implies that facts about unprovability and noncomputability, which are well understood, can be imported into complexity theory. This has wide ramifications--diverse types of unprovable sentences translate into assertions that open questions in complexity theory have the expected answers. For instance, unprovable sentences of the form \(x{\in}R\) are dense, so hard families of tautologies encoding "no length \(t\) proof shows \(x{\in}R\)" are also dense. With \(R\) redefined to have a sparse complement--a string is in \(R\) unless exponentially compressible--the complement of the language \(\{\langle x,1^{t}\rangle|\) no length \(t\) proof exists of \(x{\in}R\}\) is neither in \({\bf P}\) nor \({\bf NP}\)-complete, but is \({\bf NP}\)-intermediate.

The hardness of ruling out length \(t\) proofs of any unprovable sentence implies a deep linkage between noncomputability and complexity. We show that the implicit mapping from unprovable sentences to families of hard-to-prove sentences in a theory is an isomorphism. This would be a significant previously unnoticed structural feature of theories such as ZFC. Formalizing the intuition "ruling out length \(t\) proofs is hard" requires specifying which theory lacks length \(t\) proofs and which theory has difficulty ruling them out. These theories must be different, as a theory that proves it lacks short proofs of some \(\phi\) would prove its own consistency. Our main result is:

**Theorem 1.1**: _The following are equivalent:4_ Footnote 4: Monroe[17] shows another equivalent condition: For any \(M\) accepting coBHP \(=\{\langle N,y,1^{t}\rangle|\) there is no accepting path of nondeterministic TM (NTM) \(N\) on input \(y\) with \(t\) or fewer steps\(\}\), there exists some \(\langle N^{\prime},y^{\prime}\rangle\) where \(N^{\prime}\) does not halt on \(y^{\prime}\) such that \(\langle N^{\prime},y^{\prime},1^{t}\rangle\) is a hard family of inputs.

_(i) No optimal propositional proof system exists._

_(ii) For consistent theory \({\cal S}\), for some stronger theory \({\cal T}\), \({\cal S}\) cannot efficiently rule out length \(t\) proofs in \({\cal T}\) of \(0{=}1\) (that is, \({\cal S}\) cannot efficiently prove the sentence '\({\cal T}\nvdash_{\leq t}0{=}1\)')._
_(iii) For consistent theory \({\cal S}\), for some stronger theory \({\cal T}\), for every sentence \(\phi\) unprovable in \({\cal T}\), \({\cal S}\) cannot efficiently rule out length \(t\) proofs in \({\cal T}\) of \(\phi\) (that is, \({\cal S}\) cannot efficiently prove '\({\cal T}\nvdash_{\leq t}\phi\)')._
In the notation above in parentheses, write \({\cal T}\nvdash_{\leq t}\phi\) to state that there is no proof of \(\phi\) in \({\cal T}\) of length at most \(t\), and write \({\cal S}\nvdash^{*}\) applied to a family of such sentences indexed by \(t\) to state that \({\cal S}\) has no proofs of them of length bounded by a polynomial in \(t\). Implications include the following: * A family of tautologies encoding "there is no length \(t\) proof of \(x{\in}R\)" is hard with positive density. There is no optimal proof system for tautologies, with a dense set of hard \({\bf P}\)-uniform families witnessing the nonoptimality. * A natural language is \({\bf NP}\)-intermediate: the sparse complement of the language "\(x{\in}R\) lacks a length \(t\) proof" (where \(R\) is redefined, by requiring logarithmic incompressibility, to have a sparse complement). This language is not in \({\bf P}\) but has \({\bf P/poly}\) circuits. * The implicit mapping from unprovable to hard-to-prove sentences is an isomorphism. However, it is incomplete--for instance, stronger conjectures are required to imply that the polynomial hierarchy (\({\bf PH}\)) does not collapse--and substantial work may be needed to identify conjectures related to other open complexity questions and the associated isomorphisms. The paper is organized as follows. Section 2 provides preliminaries. Section 3 shows that unprovable sentences '\(x{\in}R\)' are dense among length \(n\) sentences. Section 4 discusses implications for tautologies and proof systems. Section 5 shows that a natural language is \({\bf NP}\)-intermediate. Section 6 shows that the mapping from unprovable to hard-to-prove sentences is an isomorphism and discusses open questions. Section 7 concludes. ## 2 Preliminaries _Strings_: With a binary alphabet \(\{0,1\}\), let \(S^{n}\) be the set of length \(n\) strings, which are ordered \(n\)-tuples. Let \(|x|\) be the length of a string and \(|S|\) be the cardinality of set \(S\). A language \(L\) is a subset of \(\cup_{n\geq 0}S^{n}\). _Density_: Say the share of length \(n\) strings in \(L\) is bounded above zero if there exists \(c>0\) such that \(|L\cap S^{n}|/2^{n}\geq c\) for sufficiently large \(n\). This implies the weaker condition that \(L\) has positive upper density, i.e., that \(\limsup_{n\to\infty}\frac{|L\cap\{1,2,\ldots,n\}|}{n}>0\). If an event depending on \(n\) occurs with probability that tends to one as \(n\) tends to infinity, such as \(x{\in}R\) where \(|x|{=}n\), say that it occurs with high probability (w.h.p.). _Theories_: Theories are assumed to be Peano arithmetic (PA) or an extension of PA.9 To allow for average-case analysis, the standard definition of PA is modified so binary strings are encoded in arithmetic sentences as natural numbers, in binary not unary, adding a leading "1" to avoid losing leading zeros. _Proof Systems_: A propositional proof system is a polynomial time function \(h\in{\bf FP}\) with range TAUT (Cook and Reckhow[6]).
For tautology \(\tau\), any string \(w\) such that \(h(w)=\tau\) is a proof of \(\tau\). The proof system \(h\) is _optimal_ if, for every proof system \(f\), there exists \(c\geq 1\) such that the length of minimal \(h\) proofs of \(\tau\) is polynomially bounded, with exponent \(c\), by the length of minimal \(f\) proofs of \(\tau\) (Krajicek and Pudlak[12]). A proof system is not optimal if and only if there is a \({\bf P}\)-uniform family of tautologies for which it requires superpolynomial proof length. ## 3 Density of Unprovable Sentences Calude and Jurgensen[3] show that the share of length \(n\) arithmetic sentences that are true and unprovable is bounded above zero. The result relies on two facts: '\(x{\in}R\)' is typically unprovable, and length \(n\) strings are in \(R\) w.h.p.10 With that context, Theorem 1.1 implies that a similar result holds for coTHEOREMS\({}_{\leq t}=\{\langle\phi,1^{t}\rangle\,|\,{\cal T}\nvdash_{\leq t}\phi\}\), the pairs for which \({\cal T}\) has no proof of \(\phi\) of length at most \(t\). The set \(R\) is the set of incompressible strings, \(R=\{x\,|\,\forall p\): if \(|p|<|x|/2\), then \(p{\nearrow}\) or \(p{\downarrow}\) with \(U(p){\neq}x\}\), where \(U\) is a universal Turing machine, \(p{\nearrow}\) means that \(p\) does not halt, and \(p{\downarrow}\) means that it halts. **Theorem 3.1** (Chaitin's theorem): _For any theory \({\cal T}\), there is a constant \(k\), based on the length of the description of a TM that enumerates the theorems of \({\cal T}\), such that \({\cal T}\) proves '\(x{\in}R\)' for no \(x\) with \(|x|>k\)._ **Lemma 3.2**: _Length \(n\) strings are in \(R\) w.h.p._ **Proof:** Each program \(p\) with \(|p|<n/2\) describes at most one string, and there are fewer than \(2^{n/2}\) such programs, so there are fewer than \(2^{n/2}\) strings not in \(R\). Therefore, \(R\)'s share of length \(n\) strings is at least \(1-2^{-n/2}\), so \(x{\in}R\) w.h.p. Calude and Jurgensen's result implies: **Theorem 3.3**: _For every theory \({\cal T}\), the share of sentences \(\{\)'\(x{\in}R\)' \(|\ x{\in}R\) and \({\cal T}\nvdash\)'\(x{\in}R\)'\(\}\) in length \(n\) arithmetic sentences is bounded above zero, for \(n\) sufficiently large. In an enumeration of sentences, for instance in lexicographic order, unprovable sentences have positive upper density._ **Proof:** Theory \({\cal T}\) cannot typically prove sentences '\(x{\in}R\)' where \(x{\in}R\), by Theorem 3.1. The sentences '\(x{\in}R\)' satisfy \(|\)'\(x{\in}R\)'\(|=|x|+c\), where \(c\) is a constant not depending on \(|x|\), giving the overhead of encoding '\(x{\in}R\)' net of \(|x|\). The share of length \(n\) sentences of form '\(x{\in}R\)' is exactly \(2^{-c}\) and these satisfy \(x{\in}R\) w.h.p. Therefore, for \({\epsilon}{>}0\), this share is bounded below by \(2^{-c}{-}\epsilon\) for \(n\) sufficiently large. Therefore, in an enumeration of sentences, unprovable sentences have positive upper density. The fact that a sentence '\(x{\in}R\)' needs only a constant \(c\) bits of overhead, net of \(|x|\), to encode \(x{\in}R\) is needed in the next section. ## 4 Tautologies and Proof Systems A tautology can encode the sentence \({\cal T}\nvdash_{\leq t}\)'\(x{\in}R\)' as follows. For a given \(x\), \({\cal T}\nvdash_{\leq t}\)'\(x{\in}R\)' is equivalent to \(\langle\)'\(x{\in}R\)', \(1^{t}\rangle{\in}{\tt coTHEOREMS}_{\leq t}\).
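As an aside, the quantifier structure of this membership question can be made concrete with a minimal brute-force sketch in Python. The proof checker `is_valid_proof` below is a hypothetical stand-in for \({\cal T}\)'s polynomial-time proof-verification procedure (it is not something defined in this paper), and the enumeration is of course exponential in \(t\).

```python
from itertools import product
from typing import Callable


def in_coTHEOREMS(phi: str, t: int,
                  is_valid_proof: Callable[[str, str], bool]) -> bool:
    """Naive check of <phi, 1^t> membership in coTHEOREMS_{<=t}.

    `is_valid_proof(candidate, phi)` is a caller-supplied (polynomial-time)
    proof checker for the theory T.  The loop simply rules out every
    candidate proof of length at most t, so it runs in time exponential in t.
    """
    for length in range(t + 1):
        for bits in product("01", repeat=length):
            if is_valid_proof("".join(bits), phi):
                return False  # T has a proof of phi of length <= t
    return True  # no string of length <= t is a valid T-proof of phi
```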
\({\tt coTHEOREMS}_{\leq t}\) and TAUT are both \({\bf coNP}\)-complete languages, so some polynomial-time reduction \(r\) from \({\tt coTHEOREMS}_{\leq t}\) to TAUT maps \(\langle\phi,1^{t}\rangle\) to tautology \(r(\langle\phi,1^{t}\rangle)\). Tautologies produced by the reduction \(r\) confirm that every candidate length \(t\) proof of '\(x{\in}R\)' in \({\cal T}\) is not a valid proof. The reduction \(r\) translates a family of sentences stating that no length \(t\) proof exists to a family of tautologies. It should not be confused with propositional translations, which translate sentences with a single universal bounded quantifier that are easy to prove in a weak fragment of arithmetic into easy-to-prove tautologies.11 Footnote 11: See Krajícek[11] and Cook and Nguyen[5]. With this encoding, two implications immediately follow: families of tautologies that are hard to prove have positive upper density in an enumeration of families, and there are dense witnesses to the nonoptimality of proof systems. ### Proving Tautologies is Hard with Positive Density \(R\)'s density immediately implies families of tautologies hard to prove have positive upper density in an enumeration of such families. Consider an enumeration of families of Boolean formulas encoding "no length \(t\) proof of \(\phi\) exists", with each family for \(\phi\) indexed by \(t\), with families enumerated in lexicographic order by \(\phi\). Some formulas will not be tautologies, when \(\phi\) is provable within length \(t\). In this enumeration, families with \(\phi\) of the form '\(x{\in}R\)' where \(x{\in}R\) have positive upper density, and these families are typically hard-to-prove tautologies. This definition does not necessarily imply that length \(n\) elements of TAUT are average-case hard to accept. For instance, an algorithm allowed to make errors with small probability can accept for any \(\phi\) of the form '\(x{\in}R\)' and be correct w.h.p. An error-free probabilistic polynomial time algorithm would necessarily fail with non-zero probability. ### Dense Witnesses to Nonoptimality If there is no optimal proof system, then for any proof system \(P\), there is a dense set of hard families of tautologies \(r(\langle\)'\(x{\in}R\)'\(,1^{t}\rangle)\), letting \(x\) range over all \(x{\in}R\). A probabilistic, polynomial-time computable procedure to produce such a family w.h.p. is to choose a sufficiently long random string \(x\). Then, \(x{\in}R\) w.h.p. by Lemma 3.2, so tautologies \(r(\langle\)'\(x{\in}R\)'\(,1^{t}\rangle)\) are hard for \(P\) w.h.p. Tautologies that are hard for ZFC to prove are also hard for any other known proof system, as their soundness is proved by ZFC. "Sufficiently long" is the same as \(k\) in Chaitin's theorem, based on the length of the description of a TM that enumerates the theorems of a theory. ## 5 From Turing Intermediate to NP Intermediate The set \(R\) is Turing intermediate--it is not computable, and its complement is c.e. but not complete under many-one computable reductions (Rogers[10] Theorem 8.I(a) and (c)). This raises the question whether Theorem 1.1 implies that some related language is **NP**-intermediate--that is, in **NP**, not in **P**, and not **NP**-complete under polynomial time many-one reductions. The final paragraph provides context on **NP**-intermediate languages. We show that the complement of the language "has no proof of '\(x{\in}R\)' within length \(t\)" is **NP**-intermediate, after relaxing \(R\)'s definition to make its complement sparse.
This relaxed definition counts strings as random unless they can be compressed exponentially, not just by half. This makes the set of possible short descriptions sparse, growing polynomially in \(|x|\), so the set of non-random strings is also sparse. Define this sparse version of \(R\) as \(R^{sp}=\{x\,|\,\forall p\): if \(|p|{\leq}\log|x|\), then \(p{\nearrow}\) or \(p{\downarrow}\) with \(U(p){\neq}x\}\). \(R^{sp}\), like \(R\), is noncomputable. Chaitin's Theorem still holds, but the parameter \(k\) is exponentially larger. Fix \({\cal S}\) and \({\cal T}\) per Theorem 1.1. If no optimal proof system exists, then **NEXP\(\neq\)coNEXP** (Krajicek and Pudlak[12]), and therefore there are sparse languages in **NP** but not in **P** (Hartmanis et al[7]). Our example differs by providing an explicit natural language. ## 6 Isomorphisms and Open Questions If there is no optimal proof system, there is an implicit mapping from unprovable sentences \(\phi\) to families of hard-to-prove sentences "no length \(t\) proof exists of \(\phi\)". This mapping can be extended to map provable sentences to families of sentences with a length \(t\) proof. If this mapping were onto, it would be an isomorphism. This is an elegant picture--an unnoticed symmetry within mathematics. However, there are several loose ends. First, the mapping is not onto within the set of all families of hard-to-prove sentences. Suppose theory \({\cal S}\) cannot efficiently prove some family of sentences not of the form "no length \(t\) proof of \(\phi\) in \({\cal T}\) exists" and that this family is **P**-uniform. We can make the mapping onto as follows. For each such family hard for \({\cal S}\) not in the range of the mapping, there is a sentence unprovable in \({\cal S}\) which states "\({\cal S}\) cannot efficiently prove the family". This is unprovable since \({\cal S}\) is consistent by assumption, and \({\cal S}\) cannot prove that it has a hard family, as it would prove its own consistency. Therefore, map this unprovable sentence onto the hard family. This extended mapping is onto. A similar solution can address the fact that a mapping from unprovable sentences to families of tautologies encoding "no length \(t\) proof exists" is not onto.12 A curious interpretation is that the role of hard families of tautologies in proof complexity, with a powerful theory such as ZFC as a proof system, can be fully understood by focusing solely on the role of unprovable sentences in ZFC. Thus, one can understand proof complexity without reference to tautologies.
Footnote 12: Suppose the **P**-uniform family of tautologies \(\tau_{n}\) is hard for proof system \(P\) proven sound by theory \({\cal S}\) such that the family \(\tau_{n}\) is also hard for \({\cal S}\). Then there are unprovable sentences in \({\cal S}\): “\({\cal S}\) cannot efficiently prove \(\tau_{n}\)” and “\(P\) cannot efficiently prove \(\tau_{n}\)”. Second, additional conjectures are needed to extend this question to other open questions. For instance, the conjecture "no optimal proof system exists for TAUT", a \(\Pi_{1}^{p}\)-complete language, is not strong enough to imply that **PH** does not collapse. The stronger conjecture "no optimal proof system exists for a \(\Pi_{2}^{p}\)-complete language, even for a proof system with an oracle for TAUT" implies that \(\Pi_{2}^{p}{\neq}\Pi_{1}^{p}\).13 A version of Theorem 1.1(iii) would hold for \(\mathcal{S}\) with a predicate for membership in \(\Pi_{1}\) in the arithmetic hierarchy (**AH**), setting up an isomorphism for sentences with a higher degree of unsolvability.14 A set of such conjectures for each level of **PH** would assert: **PH** does not collapse due to the existence of unprovable sentences at each level of **AH**. These would assert, elegantly, that **PH** does not collapse because **AH** does not collapse. Footnote 13: Chen et al[4] show that a \(\Pi_{2}^{p}\)-complete language does not have an optimal proof system if and only if TAUT does not have an optimal proof system, so the reference to an oracle is necessary to separate \(\Pi_{2}^{p}\) and \(\Pi_{1}^{p}\). Footnote 14: See Pudlák[21] p. 569 for the construction for TAUT. This suggests a research program could identify a conjecture and implied isomorphism associated with each open question in complexity theory, or identify obstacles to doing so. For instance, the recent flurry of results by Liu and Pass[15] and others suggests that asserting the hardness of showing that \(\mathcal{T}\) cannot prove sentences of this kind may be connected to the existence of one-way functions. The set \(R\) passes all known and conceivable effective tests of randomness (Li and Vitanyi[14] Section 2.4). It is possible that if-then statements and case-based proofs might appear to behave in a purely random manner in ruling out length \(t\) proofs of \(x{\in}R\). If so, a program or proof can do no better than loops that exhaustively check all cases. This constraint might also bind non-uniformly. Boolean circuits and propositional proofs require NOT gates and cancellation to implement conditional logic, such as encoding if-then statements and case-based reasoning.
Such circuits and proofs may therefore gain limited benefit from their use of NOT gates and cancellation, in line with an old conjecture. It is known that for some monotone Boolean functions, the gap between their non-monotone and monotone circuit complexity (the number of gates in minimal circuits with and without NOT gates respectively) is exponential (Razborov[23], Tardos[27]), and it is hoped that the gap is small for some other monotone Boolean functions such as CLIQUE (Razborov[24], Alon and Boppana[2]). This conjecture, generalized to non-monotone Boolean functions, is that for certain functions the gap between their cancellative and non-cancellative circuit complexity is small, where a non-cancellative circuit has a formal polynomial in which no monomial includes both a literal and its negation (Sengupta and Venkateswaran[26]).15 This argument might support a claim that computational tasks such as decryption of small messages are hard in practice and not just asymptotically. Footnote 15: Since Shannon’s counting argument shows that most Boolean functions require \(2^{n}/n\) gates, the gap between cancellative and non-cancellative circuits for a random Boolean function cannot be so large as to reduce circuits to polynomial size, as with Tardos’ example.
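For intuition, a standard way to see the bound quoted in the footnote, with constants glossed over, is the following counting sketch. A circuit with \(s\) binary gates over \(n\) inputs is specified by choosing, for each gate, its operation and its two inputs, so the number of size-\(s\) circuits is at most
\[
\big(c\,(s+n)^{2}\big)^{s}=2^{O(s\log(s+n))},
\]
while there are \(2^{2^{n}}\) Boolean functions on \(n\) variables. If circuits of size \(s\leq 2^{n}\) computed every such function, then \(2^{O(s\log(s+n))}\geq 2^{2^{n}}\), which forces \(s=\Omega(2^{n}/n)\).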
2307.16244
A Review of Media Copyright Management using Blockchain Technologies from the Academic and Business Perspectives
Blockchain technologies open new opportunities for media copyright management. To provide an overview of the main initiatives in this blockchain application area, we have first reviewed the existing academic literature. The review shows literature is still scarce and immature in many aspects, which is more evident when comparing it to initiatives coming from the industry. Blockchain has been receiving significant inflows of venture capital and crowdfunding, which have boosted its progress in many fields, including its application to media management. Consequently, we have complemented the review with a business perspective. Existing reports about blockchain and media have been studied and consolidated into four prominent use cases. Moreover, each one has been illustrated through existing businesses already exploring them. Combining the academic and industry perspectives, we provide a more general and complete overview of current trends in media copyright management using blockchain technologies.
Roberto García, Ana Cediel, Mercè Teixidó, Rosa Gil
2023-07-30T14:35:26Z
http://arxiv.org/abs/2307.16244v1
A Review of Media Copyright Management using Blockchain Technologies from the Academic and Business Perspectives ###### Abstract Blockchain technologies open new opportunities for media copyright management. To provide an overview of the main initiatives in this blockchain application area, we have first reviewed the existing academic literature. The review shows literature is still scarce and immature in many aspects, which is more evident when comparing it to initiatives coming from the industry. Blockchain has been receiving significant inflows of venture capital and crowdfunding, which have boosted its progress in many fields, including its application to media management. Consequently, we have complemented the review with a business perspective. Existing reports about blockchain and media have been studied and consolidated into four prominent use cases. Moreover, each one has been illustrated through existing businesses already exploring them. Combining the academic and industry perspectives, we provide a more general and complete overview of current trends in media copyright management using blockchain technologies. copyright, media, blockchain, digital rights management, social media, business, review ## 1 Introduction Blockchain technologies have opened new opportunities for media copyright management after previous shifts caused by digitization or communication networks Serrao et al. (2010). In some cases, blockchain promises solutions to problems resulting from those previous shifts like the ease of copying or uncontrolled digital distribution Li (2020). We aim to provide an overview of recent contributions addressing media copyright management using blockchain technologies. Considering just an academic perspective, our contribution goes beyond the state of the art as it is the first review paper about blockchain for copyright management, as the literature overview in Section 2.1 shows. Moreover, the contribution goes beyond just analyzing the topic from an academic perspective and shows that reducing the study to just that point of view is not enough to build a clear picture of the domain. On the contrary, our results show that it is crucial to also consider the business perspective as it is where most of the advancements regarding the use of blockchain for copyright management are being generated. By complementing the academic with the business perspective, it should be possible to provide a more complete overview of the most relevant contributions and trends. To summarise the aim of this review, these are the research questions being addressed: * **RQ1**: is the application of blockchain technologies to media copyright management a mature academic research area? * **RQ2**: which are the main areas of academic research dealing with media copyright management using blockchain? * **RQ3**: which are the main business use cases for media copyright management using blockchain? * **RQ4**: where do most "blockchain for media copyright management" originate, academia or industry? The rest of the paper is organized as follows. Next, Section 1.1 presents copyright management and the contributions it can receive from blockchain technologies. Then, Section 2 overviews the state of the art of academic research through an analysis of the Scopus database and tries to answer to research questions RQ1 and RQ2. We complete the review with the business perspective in Section 3, where the main business cases and examples of initiatives in each of them are presented, addressing RQ3 and RQ4. 
Finally, Section 4 presents the conclusions regarding the research questions drawn from these reviews, from both the academic and business points of view. ### Copyright Management and Blockchain The full copyright lifecycle spans from its generation, when a creator first manifests a new work, to its consumption through different embodiments, from physical or digital objects to performances or media streams Garcia and Gil (2010). The copyright lifecycle view provided by the Copyright Ontology summarises it as shown in **Figure 1**. The ontology model includes the different "stages" creations can go through (Creation Model), the actions that move creations along their lifecycle (Actions Model) and the rights that restrict these actions (Rights Model). The first step in the copyright value chain is when the creator embodies the creation into something tangible (a Manifestation). That manifestation can be used to claim authorship if it is the first time the underlying work (the abstract idea behind the creation) has been manifested. The way to decide who the original creator is in case of dispute is to determine who first manifested the creation. The other creators might then have had access to it and just made an unoriginal copy. Alternatively, it might be considered a derivation if it is not an exact copy and sufficiently original. In that case, manifesting this derivation is regulated by the Transformation Right. The motivation to use blockchain technologies to support this part of the copyright life cycle is that they facilitate time-stamping those manifestations and linking them to the claimed creator in a decentralized and trustless way, i.e., one that does not require trusted third parties and centralized registries. From this initial step of setting authorship and associating all copyright to the original creator, the whole copyright value chain emerges, regulated by different rights, through actions like performing a creation (e.g. a music composition), recording or streaming it. Blockchain also contributes by transparently tracking all these actions along the copyright value chain, facilitating splitting royalties' payments to all the involved actors (for instance: composer, performer, lyricist, label, etc.) using smart contracts in a more timely manner than current systems, where artists might need to wait years to receive their payments. Additionally, blockchain can control rights themselves, bookkeeping who owns the different kinds of rights on a particular creation, including their temporal and territorial dimensions. This control includes who holds the rights, the percentage held, tracking rights transfers, calculating royalties' splits based on those rights, etc. ## 2 Literature Review The literature review explores existing academic publications addressing media copyright management using blockchain technologies. It starts with an overview of the literature based on bibliometric analysis, in Section 2.1, and then conducts a more detailed literature review by first clustering the papers based on their content and then studying some of the most representative publications per cluster in Section 2.2. The analysis is based on statistical and visualization tools applied to the set of papers resulting from queries to academic literature databases, concretely the tools provided by Scopus to analyse query results.
We have used the Scopus database, which includes not only high-quality journals but, contrary to other databases like Web of Science, also conferences, where most of the research about blockchain is currently being published as shown later. The query to retrieve the relevant documents about media copyright management using blockchain from Scopus is shown in **Table 1**.

Figure 1: The copyright life cycle as represented by the Copyright Ontology Garcia and Gil (2010). From Roberto García (2022) with permission

\begin{table} \begin{tabular}{|c|} \hline ( TITLE(right OR copyright) OR KEY(right OR copyright) ) \\ AND TITLE-ABS-KEY ( media AND blockchain ) \\ AND ( LIMIT-TO ( LANGUAGE, “English” ) ) \\ \hline \end{tabular} \end{table} Table 1: Scopus query for media copyright management and blockchain documents.

The query is more complex than expected because the "right" or "copyright" terms are common in publication abstracts or as part of the paper text, even when the paper has nothing to do with these topics. They appear in the abstract or at the end of the paper body as part of the typical copyright statements added by publishers, e.g. "(c) Copyright 2020" or "all rights reserved". This fact introduces a lot of noise in the results, going from almost 2,000 results, if we look for "copyright" or "right" in the abstract, to 38 using the final version of the query on October 20th 2022, which only looks for "right" or "copyright" in the title or the keywords of documents in English. We used an equivalent search with the Web of Science database. However, it produced less than half of the Scopus results, and all the relevant ones were already in Scopus. It is also important to note that the query is for any publication without restricting it to a predefined time span. Next, we provide an overview of the Scopus results using different points of view (publication years, subject areas and publication types) and then perform a more detailed analysis based on their content. ### An Overview Blockchain-based copyright management is a very young topic and, as shown in **Figure 2**, we have been able to retrieve publications based on Table 1 starting from 2017. It is also important to note that despite the significant increase in outputs during 2020, moving from 5 publications in 2019 to 15, the results for 2021 and 2022 are back down to 7 and 6 papers respectively. Though it is too early to draw long-term conclusions, it seems that for the moment, this is not a topic drawing a lot of attention from the academic research community, especially, as we will see later in Section 3, if we compare it to the amount of activity in the business domain.

Figure 2: Number of publications per year in Scopus about media copyright and blockchain

The documents retrieved from Scopus can be grouped by different subject areas as shown in **Figure 3**. The three most common subject areas are Computer Science (32.3%), Engineering (17.2%) and Mathematics (11.8%). These three subject areas alone account for more than 60% of the retrieved literature, showing that most focus is on the technological foundations of blockchain applied to media copyright. On the other hand, contributions in other areas like Social Sciences or Business and Management are still scarce, representing 8.6% and 5.4% respectively. To complete this literature overview, **Figure 4** shows results based on the type of document. The most common one is the Conference Paper accounting for slightly more than two-thirds of the documents.
The other third is mainly articles in journals. The prevalence of documents in conferences usually signals that most research is still in the early stages Kim (2019). In this case, the publication ratio, computed as the number of conference papers minus the number of journal articles divided by the total number of documents, is 0.4. For comparison, the average ratio in Computer Science, the discipline with the highest ratio of documents published in conferences, is 0.15; a ratio of 1 would mean complete dominance of conferences. Another indicator of the lack of maturity of this research area in academia is that there is just one review document, and it is a conference review providing an overview of just the proceedings of a conference Katsikas and Zorkadis (2020). Moreover, the conference does not include any of the selected documents, just one about social media and another about e-voting using blockchain that, combined, made the query match the conference review. ### Analysis of the Relevant Literature We have partially automated the detailed analysis of the literature using Bibliometrix Aria and Cuccurullo (2017), which has received as input the selected 37 documents after excluding the conference review paper discarded in the previous section. This tool makes it possible to cluster the analyzed documents based on their abstract and keywords as detailed by Aria and Cuccurullo (2022). Following this approach, we have identified four topics that we can use to categorize the literature about media copyright management using blockchain: _Digital Rights Management_, _Copyright Protection_, _Social Media_, and _Intellectual Property Rights_. Next, we detail each of these topics and the documents corresponding to each of them are listed in **Table 2**.

Figure 3: Main subject areas of the publications in Scopus for the query about media copyright and blockchain

#### 2.2.1 Digital Rights Management This topic includes all documents addressing the use of blockchain technologies for managing the media lifecycle taking into account its copyright. They range from those about media registration to prove ownership to copyright transfer, licensing or controlled access by consumers. All of them explore the use of blockchain technologies to enhance systems that support different parts of this lifecycle. For instance, Chen et al. (2018) focuses on improving over-the-top (OTT) media services, which offer media content directly to viewers using Internet technologies. Similarly, Kuo and Shieh (2019) applies blockchain technologies for access control, though in this case for medical content. Other papers, like Garba et al. (2021), address not only content distribution but also registration. In contrast, Holland et al. (2017) or Engelmann et al. (2018) focus on the exchange of certification and license data, in this case, about 3D models between owner and print service providers. Finally, Garcia et al. (2021), in addition to registration, applies blockchain technologies like Non-Fungible Tokens (NFTs) to copyright transfer and licensing. #### 2.2.2 Copyright Protection This topic includes papers about blockchain-based mechanisms to improve media protection against piracy or fake content. Most focus on content identification mechanisms to help creators register their content and detect near-duplicates potentially infringing their rights. Likewise, Dobre et al. (2020) contributes an algorithm that can extract a signature that is resistant to different levels of JPEG compression.
In both cases, the hash is stored on the blockchain along with the identification data of the copyright owner. Then, they can use it to detect copies when someone tries to register the same or a similar image, as determined by the algorithm (a rough sketch of this hash-and-register pattern is given at the end of this section). Other examples are Stallin et al. (2022), which uses watermarks for copyright protection, or Igarashi et al. (2021), enabling photo traceability through certified digital cameras.

\begin{table} \begin{tabular}{l l} **Main Topics** & **Documents** \\ \hline Digital Rights Management & Holland et al. (2017), Xu et al. (2017), Chen et al. (2018), Engelmann et al. (2018), Holland et al. (2018), Kuo and Shieh (2019), Xin and Zhang (2020), Schneider (2020), Garba et al. (2021), Garcia et al. (2021), Ramani et al. (2022), Geethanjal et al. (2022) \\ Copyright Protection & Bhowmik et al. (2018), Qureshi and Megias (2019), Agyekum et al. (2019), Zhao et al. (2020), Dobre et al. (2020), Jiang et al. (2020), Yang et al. (2020), Temmermans et al. (2020), Cui and Pang (2021), Igarashi et al. (2021), Stallin et al. (2022), Yang and Yu (2022b), Yang and Yu (2022a) \\ Social Media & Tripathi (2019), Garcia and Gil (2019), Daskal et al. (2020), Milkovic et al. (2020), Fu and Fan (2021), Liu et al. (2021), Kripa et al. (2021), Guerar and Migliardi (2022) \\ Intellectual Property Rights & Zeilinger (2018), Li (2020), Konashevych (2020), Kudumakis et al. (2020) \\ \hline \end{tabular} \end{table} Table 2: Literature clustering into main topics based on document content.

Figure 4: Document types for the Scopus query about media copyright and blockchain

#### 2.2.3 Social Media All papers classified under this topic are those that, among other aspects, place their focus on social media copyright management. For instance, Kripa et al. (2021) also addresses copyright protection using a method of hashing images that is resistant to modification, rotation, and colour alteration. However, unlike papers on the previous topic, it focuses on applying it in the context of social media. Another example is Fu and Fan (2021), which also explores copyright protection but in the context of a particular social media platform and its business model. On the other hand, Daskal et al. (2020) focuses on a regulatory perspective, the creation of a blockchain-enabled network of ombudspersons that help to deal with malicious content like fake news in social media. Finally, also connected with fake content in social media, Garcia and Gil (2019) and Guerar and Migliardi (2022) report about the application of blockchain technologies for the management of verified social media to facilitate its re-use for journalistic purposes. #### 2.2.4 Intellectual Property Rights This topic collects all the papers dealing with the legal aspects of media copyright management and the opportunities offered by blockchain technologies in this context. For instance, Li (2020) proposes to add a remix right implemented using blockchain technologies. This new right would include some elements of compulsory licensing and Creative Commons, allowing remixers to remix without permission but requiring proper attribution and remuneration. Finally, another paper addressing legal issues is the one about applying distributed ledger technologies to real estate, property rights and public registries Konashevych (2020). Though the main topic deviates from media copyright, the legal implications analyzed regarding legal identity and privacy are also interesting from the copyright perspective.
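The hash-and-register pattern referenced above can be sketched roughly as follows. The average hash below is a generic perceptual-hashing baseline, not the specific algorithms proposed by Dobre et al. (2020) or Kripa et al. (2021), and the step that would anchor the resulting hash and the owner's identification data on a blockchain is left out; only the near-duplicate test is shown.

```python
from PIL import Image  # Pillow


def average_hash(path: str, size: int = 8) -> int:
    """Tiny perceptual hash: downscale, convert to grayscale, threshold at the mean."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p >= mean else 0)
    return bits


def hamming(a: int, b: int) -> int:
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")


def near_duplicate(path_a: str, path_b: str, threshold: int = 5) -> bool:
    """Flag two images whose perceptual hashes differ in only a few bits,
    e.g. a re-encoded or slightly recoloured copy of a registered work."""
    return hamming(average_hash(path_a), average_hash(path_b)) <= threshold
```

Unlike an exact cryptographic digest, such a hash changes only slightly under re-compression or small edits, which is the property these registration schemes rely on when comparing new submissions against already registered works.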
## 3 Business Review Complementing the review of media copyright management using blockchain from the academic perspective in Section 2, this section addresses activities in the business sector. The focus is placed on the potential of blockchain technologies to disrupt existing business models and generate new ones. The impact of blockchain in the media industry is even more relevant due to the profound changes that digitization and the Internet have caused. The issues caused by digitization and the Internet are still there even after the widespread adoption of new business models like streaming. In fact, though streaming might have generated new opportunities for digital service providers, it has made things even worse for other media industry actors, especially creators Ben Sisario (2021). Existing initiatives trying to apply blockchain technologies in the media industry are analyzed. First, to better provide an overview of the market, the primary use cases these initiatives try to address are identified. We have considered existing reports about media and blockchain use cases to provide a relevant and diverse set. The reports under consideration are Deloitte's from 2017 Sallaba et al. (2017), Protokol's from 2020 Pro (2020), JP Morgan's also from 2020 JPM (2020) and The Capital's from 2021 Shilina (2021). For convenience, the full list of use cases proposed by each report is shown in **Table 3**. The table also shows a consolidated set of business use cases we propose, as detailed later in this section. Most of the reports consider the media industry in general, though the report from The Capital Shilina (2021) focuses on music. It is also relevant as music is one of the most active media domains and a reference for the others due to its complexity and the wide range of actors involved. All reports consider the whole media value chain, from content creators, including aggregators, platform providers and, when relevant, collecting societies handling royalty payments. The first impression after analyzing the previous reports is that most of them seem to follow the path established by the oldest one, Deloitte's report from 2017 Sallaba et al. (2017). The only report that is clearly outside this trend is the most recent one by The Capital Shilina (2021). The first use case identified in Deloitte's report is _Use Case 1: New pricing options for paid content_. It focuses on micro-payments as a mechanism to make new pricing opportunities arise. As mentioned, Protokol's and JP Morgan's reports follow the path set by this report and also include micropayments, also as _Use Case 1_ in the case of JP Morgan's and as _Use Case 2_, also including usage-based payment models, in Protokol's report. On the other hand, the newest report by The Capital does not consider micropayments as a separate use case. As shown in Table 3, the set of use cases we propose after analyzing the previous reports does not include micropayments. Our view is that this is not a separate use case any longer. Micropayments have not become a primary driver in the media industry and, in any case, they are used in combination with other use cases. The second use case in Deloitte's report is _Use Case 2: Content bypassing aggregators_. Its focus is mainly on bypassing aggregators from a media marketing perspective, but the use case also includes distributors. A similar use case is also present in Protokol's report, though focusing just on advertising as stated in its title _Use Case 3: Immutable Advertising Engagement Metrics_. 
Protokol's report includes a separate use case about disintermediation from the distribution perspective. Similarly, both JP Morgan's and The Capital's reports feature use cases about disintermediation mainly from the content distribution perspective. Additionally, The Capital's report includes one use case related to engagement but as _Monetary incentives for listeners_. Since most reports separate the aggregation and distribution dimensions when talking about disintermediation use cases made possible by blockchain technologies, we propose to consider them as two separate use cases, as shown in Table 3. The proposed ones are _Use Case 3: Marketing, Fan Engagement and Fundraising_ and _Use Case 4: Disintermediated Distribution_. The third use case proposed in Deloitte's report is _Use Case 3: Distribution of royalty payments_, which is also present in Protokol's and JP Morgan's reports. The Capital proposes more detailed use cases related to copyright management, dealing with specific aspects that allow implementing royalty distribution using blockchain technologies. These are associated with a digital rights database, tokenized rights management and data transparency regarding revenue streams. Our proposal is a more general _Use Case 1: Copyright Management_, shown at the top of Table 3. It goes beyond royalty distribution and includes other aspects required for properly splitting royalties, thus accommodating a broader view of copyright management facilitated by blockchain technologies. This use case includes using smart contracts to carry out royalties' splits and content registration to associate creators and their content. Moreover, they facilitate linking creations to use conditions that determine how royalties are generated and distributed. The fourth use case in Deloitte's report is _Use Case 4: Secure and transparent C2C sales_, which is also present in JP Morgan's report. Protokol's report proposes a broader perspective on consumer-to-consumer sales focusing on fraud and piracy prevention. Our view is that all these use cases try to address one of the main weaknesses of digital content from the copyright perspective, which is the lack of the scarcity constraint that drove many business models before the digitization revolution, especially those connected with art and collection. Blockchain technologies make scarcity possible in the digital world, and this is why we propose to consider a more generic use case called _Use Case 2: Digital Content Scarcity_. This use case includes the previous use cases by addressing the scarcity issue, but also the related one in the report by The Capital called _New revenue sources for artists_. The latter mostly corresponds to new revenue streams that digital scarcity makes possible, though it also overlaps with our proposed _Use Case 3_, as shown in Table 3. This overlap is because some aspects regarding the connection with consumers, like fundraising, might also become new sources of revenue for creators. Finally, Deloitte's report also proposes a fifth use case, _Use Case 5: Consumption of paid content without boundaries_. We have also included this use case in our proposed _Use Case 2: Digital Content Scarcity_ because the mechanisms introduced by blockchain technologies regarding scarcity are not constrained, at least technically, by country or regional boundaries. Thus, they can also be used to address this use case. 
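A minimal sketch of what such a time-stamped registration amounts to is shown below; the record layout and field names are illustrative only, not the schema of any particular service, and in a real deployment the timestamp and the anchoring of the record would be provided by the blockchain itself rather than by the local clock.

```python
import hashlib
import json
import time
from pathlib import Path


def fingerprint(path: str) -> str:
    """SHA-256 digest of the manifestation file: its digital fingerprint."""
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()


def registration_record(path: str, creator_id: str) -> str:
    """Build the record that would be anchored on-chain.

    Only the fingerprint and the authorship claim are stored, never the work
    itself; whoever can show the earliest anchored record for a fingerprint
    has evidence of having manifested the work first.
    """
    record = {
        "fingerprint": fingerprint(path),
        "creator": creator_id,
        "timestamp": int(time.time()),
    }
    return json.dumps(record, sort_keys=True)


# Example: register a manifestation, then check an alleged later copy.
# original = registration_record("track_master.wav", "did:example:creator")
# is_same_file = fingerprint("track_master.wav") == fingerprint("reupload.wav")
```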
The result of our review of media and copyright business use cases involving blockchain technologies, consolidating the different reports that we have considered, is the following list of main use cases: * _Use Case 1: Copyright Management_. * _Use Case 2: Digital Content Scarcity_. * _Use Case 3: Marketing, Fan Engagement and Fundraising_. * _Use Case 4: Disintermediated Distribution_. The summary of how the proposed use cases relate to all the considered ones is summarised in Table 3, which tries to highlight using ranges of similar colours those use cases with some similarities across the different reports and our proposed set of use cases. In the following subsections, we present each use case and illustrate them through existing initiatives trying to address each using blockchain technologies in the media industry context. The final objective is to have a clearer picture of the domain from the business perspective. ### Use Case 1: Copyright Management This use case considers the full copyright life cycle as presented in Section 1.1. It starts from copyright inception when a creator first manifests a new work into something tangible (a Manifestation). As detailed in the next subsections, there are many initiatives addressing that part of this use case because blockchain technologies facilitate time-stamping those manifestations and linking them to the claimed creator. Another relevant part of the copyright life cycle considered by initiatives addressing this use case is to track all the actions along the copyright value chain once authorship has been set. This includes consumption by end users or facilitating splitting of royalties' payments to all the involved actors. The following subsections also illustrate that part of the use case through different business initiatives. #### 3.1.1 Wipo Proof2 Footnote 2: [https://wipoproof.wipo.int/wdts](https://wipoproof.wipo.int/wdts) is an example of a business initiative addressing this use case, particularly the first step on the value chain. It is a digital service that provides a time-stamped digital fingerprint of any file, proving its existence at a specific time. These records can be then used as trusted digital evidence. Other similar services are FileProtected3 or Binded4. Footnote 3: [https://www.fileprotected.com](https://www.fileprotected.com) Footnote 4: [https://binded.com](https://binded.com) #### 3.1.2 Kelp Digital5 Footnote 5: [https://kelp.digital](https://kelp.digital) aims to make photography copyrights easy to check and prove by creating verifiable digital statements associated with the image and rendered with it, together with all the associated licenses and copyright transfer. To do so, Kelp Digital first verifies ownership over the physical equipment used to generate the creation. Currently, ownership validation and copyright claims are available only for professional photo equipment, called Proof of Camera & Lens ownership. The copyright statements and transaction records are stored on Kep's blockchain. #### 3.1.3 Unison6 Footnote 6: [https://www.unisonrights.es/en/](https://www.unisonrights.es/en/) aims to facilitate managing, collecting and distributing royalties in a simple, fair and efficient way using blockchain technology. External services are used to track the use of music, which is then analyzed to pay creators timely. Unison provides access to a broad, high-quality music catalogue for music users such as TV channels, radio stations, hotels, gyms, or store chains. 
Users will pay exclusively for the music they use without approximations or estimations. Similar or related initiatives, also focusing on the music industry, are Blokur7 and Verifi Media8. Footnote 7: [https://www.blokur.com](https://www.blokur.com) Footnote 8: [https://www.verifi.media](https://www.verifi.media) Footnote 9: [https://revelator.com](https://revelator.com) #### 3.1.4 Revelator9 focuses on later steps in the value chain, to ease the management of digital rights and royalties. It can simplify the complex calculation of multi-licensor and multi-territory rights administration. This copyright platform is designed to track and capture the value of digital music for all rights owners in the copyright chain. Revelator uses this information to speed up royalties operations, including splits with collaborators. Other similar initiatives are Vevue 10 or FilmChain11, which focuses on the film and TV industries. Footnote 10: [https://www.vevue.com](https://www.vevue.com) Footnote 11: [https://filmchain.co](https://filmchain.co) #### 3.1.5 The Creative Passport12 Footnote 12: [https://www.creativecpassport.net](https://www.creativecpassport.net) is a verified digital identifier that allows music creators to update, manage and control all information about them and their works. It can push updated profile information into other music services and pull relevant information from them or music representatives. This digital identity aims to become a unique login solution for music services. Moreover, the creator's identity can be verified by linking it to a government identifier or other industry identifiers like IPI, IPN, ISNI. Footnote 12: [https://www.creativecpassport.net](https://www.creativecpassport.net) ### Use Case 2: Digital Content Scarcity This use case includes many topics in the analyzed use case reports, including consumer-to-consumer sales or fraud and piracy prevention. Moreover, part of the use case is about new revenue sources for artists. All the previous have in common benefit from a feature evident in the physical world and traditionally the basis of copyright law. This feature is scarcity, something missing for a long time in the digital space due to the ease of copying the same bits repeatedly. Though digital copies are a feature in many senses, which eases scale economies on top of the Internet, it introduces issues like piracy or pressing down the value of content in digital form. Cryptographic mechanisms can be used on top of blockchains to introduce scarcity of digital assets, using unique tokens that can be owned, traded and verified to prevent piracy. The solution is Non-Fungible Tokens (NFT). Unlike fungible tokens that are interchangeable, like cryptocurrencies or fiat money, they present some unique properties that make them non-interchangeable, i.e. non-fungible. This uniqueness can be tied to digital content like a song or a picture, making it ownable and scarce. The only weak point is the link between the NFT and the digital content, especially if it points to a file in centralized storage. Alternatively, to strengthen this link, digital content can be stored on-chain, usually just if it means a small amount of data or code that generated the content, or off-chain but using decentralized storage. NFTs are also being used to represent ownership of many other assets, from stocks to houses. In these cases, mechanisms are also required to provide trustful ties between the token and the asset. It is usually helpful to think about NFTs as some kind of "receipt". 
You own a piece of digital crypto art by proving control of the "receipt" NFT, but the content file for the work might be replicated many times across the Internet. That piece might be even a meme, copied thousands of times across social media. However, you can prove that you hold the unique NFT linked to its ownership. At this point, the real issue is if the person who mints the token, from the point of view of copyright law, holds the copyright supposedly transferred through NFT ownership. It is necessary to combine NFTs with systems capable of managing copyright, like those described for _Use Case 1_ in the previous section. Thus, it becomes essential to have a way to prove authorship and enable tracing it from the NFT. #### 3.2.1 Valuables by Cent13 Footnote 13: [https://v.cent.co](https://v.cent.co) is one of the easiest ways to create NFTs. It allows minting an NFT for any publicly available tweet. It is also possible to buy tweets from other users, which should be publicly available. The NFT metadata pointing to the referenced tweet is signed using the creator's private key, so we can say that they autograph the NFTs. The process is integrated into the social network, Twitter in this case, as the media is initially available there, and the NFT metadata points to the corresponding tweet. Even if the original tweet is erased by its creator, the metadata included in the NFT will remain as it is available on-chain. Moreover, a screenshot of the tweet is also stored in Cent's servers. However, it is important to note that just the image corresponding to the tweet is stored, not the full content if the tweet includes an animated GIF or a video. Additionally, if Cent's servers go down or the service is discontinued, that screenshot will be lost as it is just available in centralized storage. #### 3.2.2 Zora14 Footnote 14: [https://zora.co](https://zora.co) is an NFTs marketplace that allows creators to define a configurable percentage of future sales of their NFTs. This percentage of sales beyond the first one implements a mechanism like royalties, though it is a proprietary solution and only works for sales on the Zora marketplace. Zora is also developing the Catalog platform on top of the Zora Protocol, allowing artists to mint their music as one-of-one NFTs, i.e. artists can just press one edition of their music works. Songs are free to listen to everyone and individually ownable by collectors. In addition to the royalties-like feature provided by the Zora Protocol, the plans include that Catalog also supports revenue splits for collaborators. #### 3.2.3 Heni Nft15 Footnote 15: [https://nft.heni.com](https://nft.heni.com) provides a NFT marketplace for digital art. Through limited editions, HENI shows how blockchain technologies are used to introduce scarcity into digital art and provide new revenue streams for digital artists. ### Use Case 3: Marketing, Fan Engagement and Fundraising This use case includes all mechanisms to manage and improve the communication between creators and consumers, and it aims to create a much more direct connection between them. Nowadays, the emergence of aggregators or streaming services makes creators unaware of how their creations are being consumed. A clear example of this is streaming data. All happens through big platforms like Spotify, which have access to all the aggregated data while it is hard for the artist to get feedback beyond overviews and no way to get it promptly. 
Blockchain technologies might help to build these channels for direct communication with fans. And this goes beyond usage information, which might also be used for royalties' payments as described for _Use Case 1_ in Section 3.1. New opportunities include using tokens for fan engagement, i.e. a kind of "Proof of Fandom". These tokens can provide additional incentives like ticket discounts or verifiable merchandise. Or they can be accompanied by loyalty badges or reward tokens. Another interesting approach is to engage fans to play the role of "Curators" of different kinds of media registries using incentivized strategies like Token Curated Registries Kaur and Visveswaraiah (2021). For instance, fans can be rewarded for curating personalized playlists, or receive a native token for contributing to a database of artists, venues or events. Finally, blockchain facilitates artists going into fundraising campaigns that help to align artists' and fans' interests. This kind of crowdfunding helps creators get more independent from centralized sources of funds and makes it possible for consumers to invest and trade in the creators they like. #### 3.3.1 DaOrecords16 Footnote 16: [https://www.daorecords.org](https://www.daorecords.org) is both a record label and a platform to connect musicians and artists to their fans using blockchain technologies. Artists have complete control over their music and their relationship with their fans and community. Additionally, DAOrecords is experimenting with the Crypto Art space, minting on-chain Audio NFTs and hosting The Popup, an art and music event series in the Cryptovoxels metaverse. #### 3.3.2 Rac17 Footnote 17: [https://rac.fm](https://rac.fm) is the first Portuguese artist to win a Grammy and one of the first musicians to sell his music using blockchain technologies in 2017. The album purchase is represented on-chain by the EGO token. RAC has recently rewarded his fans, for instance, those holding an EGO token, with his community token called RAC. A RAC holder can access a private Discord server or receive exclusive early access to future merchandise. Future plans include tokenized advertisement space on RAC's Twitch channel, discounts on merchandise or access to unique crypto-artwork. #### 3.3.3 Steemit18 Footnote 18: [https://steemit.com](https://steemit.com) stores content in an immutable blockchain and rewards users for their contributions with a digital token called STEEM. The Steem blockchain mints new STEEM tokens every day and adds them to a community's rewards pool. These tokens are then awarded to users for their contributions, based on their content's votes. Users who hold more tokens in their account will decide where a larger portion of the rewards pool goes. Up to 50% of a post's payout is awarded to curators, who upvoted the post first, as a reward for discovering relevant content. The other 50% is awarded to the author. A similar initiative is Cent19. Footnote 19: [https://beta.cent.co](https://beta.cent.co) #### 3.3.4 YellowHeart20 Footnote 20: [https://yh.io](https://yh.io) is a blockchain-powered ticketing company whose mission is to eradicate scalping and bad players in the secondary ticketing market, thus putting the power back into the hands of fans and artists. Moreover, they consider the rest of the ticketing ecosystem by rewarding venue promoters and the resellers themselves. YellowHeart uses blockchain technologies, particularly smart contracts, to set concert ticket rules.
These rules cover how many seats there are and how much they cost, what tickets can be resold for, how many times, and even where the resale money goes, for instance, split among artists and promoters or given entirely to charity. ### Use Case 4: Disintermediated Distribution This use case includes all disintermediation actions facilitated by blockchain technologies that allow creators to distribute their content to consumers without intermediaries. These intermediaries control distribution channels, including music streaming platforms, and thus can easily influence what content is consumed. Efforts to change this situation include different kinds of utility tokens that provide access to alternative and decentralized content platforms, for instance, bandwidth tokens for music consumers to compensate creators. Consumers contribute part of their bandwidth so that creators can reach more consumers without having to rely on other centralized distribution channels. #### 3.4.1 Livepeer21 Footnote 21: [https://livepeer.org](https://livepeer.org) Livepeer is looking to build a decentralized infrastructure for video transcoding. Developers can use it to add live video to their projects using the Livepeer public network. Video miners run a Livepeer node and transcode video on their GPUs for token rewards. The network is secured by token holders, who help improve and secure the Livepeer network by acquiring and staking the reward token on video miners. They are also rewarded if they stake on productive video miners. Other examples of initiatives using blockchain to facilitate media distribution are Audius22 and D.Tube23. Footnote 22: [https://audius.org](https://audius.org) Footnote 23: [https://d.tube](https://d.tube) #### 3.4.2 Resonate24 Footnote 24: [https://resonate.is](https://resonate.is) Resonate is a music streaming cooperative that allows listeners to "pay as you stream" until they own the song. It is a new listening model called "stream to own": listeners only pay for what they play, making a seamless transition from casual listening into becoming dedicated fans. Resonate is a cooperative owned by the musicians, indie labels, fans and workers that build it. Footnote 25: [https://www.contentos.io](https://www.contentos.io) #### 3.4.3 Contentos25 Contentos uses a dedicated blockchain to build a decentralized digital content community that allows content to be freely produced, distributed, rewarded and traded while protecting author rights. With a decentralized revenue system, the value of creation is open, transparent, and returns rewards directly to users. Through rewards, users are encouraged to share and promote content to the right audience. Users are responsible for their credit score, calculated based on every contribution they make. Blockchain technology enables copyright authentication and transactions to be trackable. Joystream26 makes a similar proposal, materialized as a decentralized platform for streaming and sharing video content. Footnote 26: [https://www.joystream.org](https://www.joystream.org) ### Evaluation To evaluate the completeness of the proposed use cases, we have considered 31 scenarios applying blockchain to the media industry. These scenarios were collected by Jack Spallone, who was involved from 2017 to 2020 in Ujo Music27, one of the first initiatives applying blockchain to the music industry, and is currently Head of Crypto at HIFI Labs. Jack describes them in a set of tweets28.
Each use case has been classified into one of the proposed use cases: * _Use Case 1: Copyright Management_: Payment Splits, Right Registry (TCR), Artist Identity, Per-stream Payments, Usage and Reporting, On-Chain Licensing for Off-Chain Rights, NFTs as Synch Licenses. * _Use Case 2: Digital Content Scarcity_: Non-transferable Token as Access, Bonding Curves to Price Music, NFT as License, Scarce Sounds Marketplace, Scarce Music Releases, 1 of 1 Digital Records. * _Use Case 3: Marketing, Fan Engagement and Fundraising_: Tipping, NFT as Recording Advance, NFTs as Proof-of-Patronage, Tickets w/ Secondary Market Price Capture, Music Chart Curation with Token Rewards, Non-Copyright Record Deals, Music Crypto Community Token, Community Token Fan Club, Streaming Payment Advances using DeFi, Retroactive Airdrop of Social Tokens, Social Token Community Fund, Stake Social Tokens to Earn Song NFTs, Physical goods redeemed by tokens sold on a bonding curve. * _Use Case 4: Disintermediated Distribution_: Programmatic Licensing, Streaming Co-op, Publishing DAO, Label DAO, Market-making Distribution Models. As can be observed, the proposed used cases are complete as they accommodated all of the scenarios. ## 4 Discussion From the literature analysis about media copyright management using blockchain, we highlight that the number of publications indexed by Scopus or Web of Science, including journals and conferences, is relatively low compared to other topics. Just 31 papers have been retrieved. From an overview of this literature, analyzing aspects like publications per year, subject area or type, we can conclude that this is a very young and still immature area from an academic perspective. In addition to the small number of documents available, their time span is very narrow, starting in 2017 and with almost half of the papers originating in 2020 plus a decline to just six documents in 2021. Considering their subject areas, Computer Science accumulates one-third of the documents and, together with Engineering and Mathematics, they account for more than 60%. On the other hand, Social Sciences or Business and Management represent 9% and 5.1% respectively. It is also important to note the absence of published reviews when considering the types of documents. Beyond this overview, this academic literature has also been analyzed in detail. First, we used an automated approach to cluster the documents based on their content. As a result of this analysis, we identified the following four main topics: * _Digital Rights Management_: this topic clusters all the documents focusing on the management of media copyright lifecycle using blockchain technologies. From registration to licensing or controlled consumption. * _Copyright Protection_: the documents classified under this topic propose blockchain-based mechanisms for media protection to fight copyright infringement. * _Social Media_: though papers under this topic also address copyright management and protection issues, their focus is on the specificities of social media. * _Intellectual Property Rights_: this topic includes the documents dealing with the legal aspects of media copyright management, focusing on the opportunities that blockchain technologies bring from a legal standpoint. We contextualized each topic by providing details about some of the corresponding documents, as detailed in Section 2.2. The complete list of all the documents in each topic is presented in Table 2. 
The academic review of media copyright management using blockchain technologies has been complemented from the business perspective. The starting point has been analyzing existing reports and identifying the most relevant use cases to apply blockchain to the media industry. Four relevant reports have been identified, by Deloitte Sallaba et al. (2017), Prottokol Pro (2020), JP Morgan JPM (2020) and The Capital Shilina (2021). The analysis has consolidated all the use cases proposed by these reports into four: _Use Case 1_: Copyright Management, _Use Case 2_: Digital Content Scarcity, _Use Case 3_: Marketing, Fan Engagement and Fundraising and _Use Case 4_: Disintermediated Distribution. Table 3 provides an overview of the consolidation process. To evaluate the completeness of the proposed use cases, we have successfully classified 31 scenarios applying blockchain to the media industry into one of them, as detailed in Section 3.5. The evaluation shows that the proposed use cases are complete as they accommodated all of the scenarios. Detailed descriptions of each use case are provided in Section 3, together with 14 representative examples of business initiatives and 11 similar additional ones. Altogether, 25 initiatives illustrate the scope of each use case and show a very active business ecosystem, in many cases far beyond the state of the art in academic literature. For instance, the initiatives highlighted for _Use Case 1: Copyright Management_ implement solutions that are beyond those depicted in the papers related to the topic _Digital Rights Management_, which in all cases are at most just proofs of concept. It is fair to note that there has been a lot of funding from venture capital, Initial Coin Offerings (ICOs), and other crowdfunding mechanisms for blockchain-related initiatives. For instance, more than USD 31 billion was raised via ICOs (03/2020) since 2016 Schuckes and Gutmann (2021). This economic inflow seems to have boosted the blockchain industry beyond the state of the art in academia. Overall, most of the literature is related to _Use Case 1: Copyright Management_, which is related to the main literature topics _Digital Rights Management_ and _Copyright Protection_. On the other hand, little literature addresses the other use cases, especially Use Case 3 and 4, which are the most related to new business models emerging from applying blockchain technologies to the media industry. Combined, these facts highlight the importance of taking both the academic and business perspective when reviewing emerging and very business-oriented domains like blockchain and media. ## 5 Conclusions Based on the previous analysis and discussion about the situation regarding the use of blockchain technologies for media copyright management, from both academic and business perspectives, it is possible to address the research questions highlighted in the introduction as detailed next. **RQ1: is the application of blockchain technologies to media copyright management a mature academic research area?** Many results point in the direction of a lack of maturity. First of all, the small number of publications in quality journals and conferences indexed by Scopus or Web of Science. Additionally, there is a downtrend since 2020 in the number of publications and most of them are in conferences, less than a third of them in journals and a complete absence of review papers on this topic. 
**RQ2: which are the main areas of academic research dealing with media copyright management using blockchain?** The analysis of the retrieved literature dealing with blockchain for media copyright management highlights four main topics under which it can be classified. They are _Digital Rights Management_, _Copyright Protection_, _Social Media_ and _Intellectual Property Rights_. **RQ3: which are the main business use cases for media copyright management using blockchain?** The main use cases that have been identified are _Use Case 1: Copyright Management_, _Use Case 2: Digital Content Scarcity_, _Use Case 3: Marketing, Fan Engagement and Fundraising_ and _Use Case 4: Disintermediated Distribution_. RQ4: where do most "blockchain for media copyright management" initiatives originate, academia or industry? The bigger number of initiatives emerging from industry compared to those from the academic world shows that the former is much more active in this topic. It has been possible to identify more than 25 business initiatives addressing all business use cases. Moreover, it has been possible to also classify into the identified use cases 31 scenarios applying blockchain to the media industry. As a final takeaway from the previous conclusions, we think the results make clear that any kind of research work on this particular topic coming from academia has to pay special attention to what is being done in the industry. Consequently, to keep our review on the use of blockchain technologies for media copyright management updated, our future plans include to continue monitoring academic publications while also analysing grey literature, like white papers or business reports, using the business use cases identified as the analysis framework. ## Funding Supported by project ONTOCHAIN, which has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 957338.
2305.18410
Understanding Breast Cancer Survival: Using Causality and Language Models on Multi-omics Data
The need for more usable and explainable machine learning models in healthcare increases the importance of developing and utilizing causal discovery algorithms, which aim to discover causal relations by analyzing observational data. Explainable approaches aid clinicians and biologists in predicting the prognosis of diseases and suggesting proper treatments. However, very little research has been conducted at the crossroads between causal discovery, genomics, and breast cancer, and we aim to bridge this gap. Moreover, evaluation of causal discovery methods on real data is in general notoriously difficult because ground-truth causal relations are usually unknown, and accordingly, in this paper, we also propose to address the evaluation problem with large language models. In particular, we exploit suitable causal discovery algorithms to investigate how various perturbations in the genome can affect the survival of patients diagnosed with breast cancer. We used three main causal discovery algorithms: PC, Greedy Equivalence Search (GES), and a Generalized Precision Matrix-based one. We experiment with a subset of The Cancer Genome Atlas, which contains information about mutations, copy number variations, protein levels, and gene expressions for 705 breast cancer patients. Our findings reveal important factors related to the vital status of patients using causal discovery algorithms. However, the reliability of these results remains a concern in the medical domain. Accordingly, as another contribution of the work, the results are validated through language models trained on biomedical literature, such as BlueBERT and other large language models trained on medical corpora. Our results profess proper utilization of causal discovery algorithms and language models for revealing reliable causal relations for clinical applications.
Mugariya Farooq, Shahad Hardan, Aigerim Zhumbhayeva, Yujia Zheng, Preslav Nakov, Kun Zhang
2023-05-28T17:07:46Z
http://arxiv.org/abs/2305.18410v1
# Understanding Breast Cancer Survival: Using Causality and Language Models on Multi-omics Data ###### Abstract The need for more usable and explainable machine learning models in healthcare increases the importance of developing and utilizing causal discovery algorithms, which aim to discover causal relations by analyzing observational data. Explainable approaches aid clinicians and biologists in predicting the prognosis of diseases and suggesting proper treatments. However, very little research has been conducted at the crossroads between causal discovery, genomics, and breast cancer, and we aim to bridge this gap. Moreover, evaluation of causal discovery methods on real data is in general notoriously difficult because ground-truth causal relations are usually unknown, and accordingly, in this paper, we also propose to address the evaluation problem with large language models. In particular, we exploit suitable causal discovery algorithms to investigate how various perturbations in the genome can affect the survival of patients diagnosed with breast cancer. We used three main causal discovery algorithms: PC, Greedy Equivalence Search (GES), and a Generalized Precision Matrix-based one. We experiment with a subset of The Cancer Genome Atlas, which contains information about mutations, copy number variations, protein levels, and gene expressions for 705 breast cancer patients. Our findings reveal important factors related to the vital status of patients using causal discovery algorithms. However, the reliability of these results remains a concern in the medical domain. Accordingly, as another contribution of the work, the results are validated through language models trained on biomedical literature, such as BlueBERT and other large language models trained on medical corpora. Our results profess proper utilization of causal discovery algorithms and language models for revealing reliable causal relations for clinical applications. 1-242023 Understanding Breast Cancer Survival: Using Causality and Language Models on Multi-omics Data ## 1 Introduction The application of deep learning (DL) and machine learning (ML) in biomedical sciences paves a new way of understanding the underlying causes of various fatal diseases, including cancer. Since cancer is a multi-factorial disease, different types of data are used to predict or understand its outcomes using various ML approaches. Despite the vast amount of research on the diagnosis and prognosis of breast cancer, it remains the second leading cause of cancer death for women.1 Throughout the disease progression, alterations in the genes, such as perturbations in the gene structure, function, or expression, have a significant impact. These perturbations or mutations tend to skew the normal cellular pathways, consequently changing the normal functioning cell to a cancer cell (Hanahan and Weinberg, 2000). Footnote 1: [https://www.cdc.gov/cancer/dcpc](https://www.cdc.gov/cancer/dcpc) Our work focuses on two different types of breast cancer: invasive lobular carcinoma (ILC) and invasive ductal carcinoma (IDC). There is a lack of genomic studies that investigate the underlying biological causes of ILC, as the focus is higher on IDC. However, in clinical practice, ILC patients may not show any symptoms at first, and the cancerous areas are difficult to spot on mammograms (Ciriello et al., 2015). Multi-omics data can be used to better understand the progression of ILC and the underlying causes of oncogenesis. 
Aiming to understand the factors affecting the survival of breast cancer patients, ML approaches can be applied to discover the underlying patterns in gene alterations. Causality is a fundamental notion in science and plays an important role in explanation, prediction under interventions, and decision-making (Pearl, 2009). In contrast to plenty of other ML areas, causal discovery (CD) (Spirtes et al., 2000) aims to estimate causal relations among the variables, often represented by Directed Acyclic Graph (DAG) rather than directly making passive predictions.2 The high interpretability that it provides is especially beneficial in the biomedical field as it aids decision-making and the comprehension of the analysis given by ML models. There are two major search strategies in causal discovery: score-based and constraint-based. Score-based methods, such as Greedy Equivalence Search (GES) (Chickering, 2002) and Fast GES (Ramsey et al., 2017), select the causal graph based on the score assigned to each candidate graph. Constraint-based methods find causal relationships based on conditional independence constraints discovered from data. Two examples of constraint-based methods are the PC algorithm and the Fast Causal Inference (FCI) (Spirtes et al., 2000). Another recent approach could readily deal with mixed continuous and discrete data types without strong assumptions on the functional relations between the variables (Zheng et al., 2023), producing a Generalized Precision Matrix (GPM) to analyze the conditional independence structure in the data, which can then be refined to produce information about the causal structure. The outcome of the aforementioned methods is a DAG or a set of DAGs (often known as an equivalence class) that shows the existence and the nature of the relationship between any two variables. Footnote 2: In this work, we assume the causal relationship follows a DAG, i.e., there is no feedback loop. Estimating cyclic causal models is more complicated and figuring out whether cyclic models are more appropriate is one line of our future research. Moreover, generally speaking, there is a challenge to the authenticity of the claims made by CD methods due to the absence of ground truth to validate the results obtained from these methods. The ground-truth causal relations are often unknown in real problems (Tu et al., 2019). Researchers that use CD in application fields often rely on domain expertise to examine the outcomes. Interestingly, thanks to the developments of Large Language Models (LLMs), which are pretrained on large medical corpora, can actually help in the task of validation of the results from CD methods: they automatically extract relevant information from the literature. Our approach uses state-of-the-art Natural Language Processing (NLP) architectures that help in validating the claims for further authentication by biologists. The usage of NLP methods reduces the cost and the effort spent on annotating data by specialists by giving them a smaller refined set to work with or verifying novel discoveries in the medical field. Several approaches, such as perplexity and masked language modeling, can be leveraged for validation tasks. To the best of our knowledge, our work is the first that leverages CD methods to understand the factors affecting the survival of breast cancer patients, with LLMs to verify the findings. Our method aims to find and validate the influences of different alterations in genes on vital status. 
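To make this setting concrete, the following minimal sketch shows what such a causal discovery call looks like with the causal-learn Python package (which we use later for the generalized score); the data matrix, sample sizes, and parameter choices here are placeholders for illustration only, not our actual pipeline:

```python
import numpy as np
from causallearn.search.ConstraintBased.PC import pc
from causallearn.search.ScoreBased.GES import ges

# Placeholder data matrix: rows are patients, columns are selected features plus vital status.
data = np.random.rand(705, 10)

# Constraint-based search: PC with a conditional independence test ("chisq" would be used for discrete data).
cg = pc(data, alpha=0.05, indep_test="fisherz")
print(cg.G)  # estimated equivalence class (CPDAG)

# Score-based search: GES with the generalized, cross-validated score.
record = ges(data, score_func="local_score_CV_general")
print(record["G"])
```

Both calls return an equivalence class rather than a single DAG, which is why some edges in the resulting graphs remain undirected.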
The contributions of this work can be summarized as follows: * Unlike plenty of breast cancer ML studies, we leverage multi-omics data to unravel patterns concerning the survival of patients through suitable causal discovery methods. The usage of multi-omics data in research is a relatively new and powerful approach that considers multiple levels of biology. * We strategically dealt with a dataset with mixed data types, which is challenging in the case of causality while being cognizant of the established mathematical assumptions. * We propose a novel approach to validate the claims made by CD methods using state-of-the-art NLP models as a way to filter the most relevant claims out of numerous claims made by the models. We believe this validation approach will have direct implications in other application domains. #### Generalizable Insights about Machine Learning in the Context of Healthcare For machine learning models to be embedded into the healthcare system, we should make them explainable enough to allow effective deployment by clinical practitioners. Unlike deep learning models, causality enables the investigation of causes and effects of the different variables in the multi-omics data. In our approach, we leverage CD methods properly, complemented with validation approaches based on LLMs, for understanding the factors affecting the survival of breast cancer patients. Our research supports the usage of data with mixed types that are available in real-life scenarios. Our evaluation approach helps medical practitioners to re-verify smaller subsets of filtered claims hence expediting the verification process. Overall, our study facilitates the adoption of computational causal approaches for healthcare data for both clinical practitioners and machine learning experts. ## 2 Related Work The causal discovery field gains more focus from a theoretical perspective and less so from applications. The potential to apply CD methods in the biomedical sciences is large, but it is still not highly leveraged. The high interpretability provided by causality leads to more reliable decision-making approaches and high-quality intervention procedures for precision medicine. In genetics, Amar et al. (2021) used a dataset from the UK Biobank to develop their framework. The approach combined both causal analysis and Mendelian randomization, which attained the true relationships between features and reduced the false positive rate by 30%. Another medical application discovered the leading features of Alzheimer's disease using FGES and FCI (the Alzheimer's Disease Neuroimaging Initiative et al., 2020). The study included a "gold standard" graph that was used as ground truth to evaluate the accuracy of the CD graphs. The results showed the robustness of FGES compared to FCI. At the same time, there are relatively few investigations attempting to discover the underlying causes of cancer with CD. Budd et al. (2021) developed a causal research and inference platform to be implemented on multi-omics data for oncology problems. The study combined six causal discovery approaches into one to provide a more accurate insight into the biological problem. Moreover, Cai et al. (2019) examined somatic genome alterations (SAGs) to build a causal procedure that found the alterations that are closely related to the tumors. Their approach used The Cancer Genome Atlas and explained the effect of SAGs on the adoption of disease mechanisms in a patient's body. 
Another study applied CD to understand gene regulation behind head-and-neck carcinoma, but with a model that accounts for missing data (Foraita et al., 2020). The motivation came from the incorrect assumption of CD methods presuming complete data, and thus, authors connected CD with multiple imputations according to Rubin's rule. They attempted to put less weight on some strong assumptions that CD takes into consideration as they found it led to higher robustness in the outcomes. To the best of our knowledge, perplexity and masked language modeling have not been used to verify complex biomedical claims, leading to a dearth of literature in this domain. However, recent developments in NLP, especially in language models, motivate using these methods. Measuring the perplexity score of a hypothesis can give us a basic understanding of the authenticity of the claim. As postulated by Lee et al. (2020), compared to the verified claims, any unverified claims or misinformation would have higher perplexity (degree of falseness). Moreover, much research has been done on fine-tuning language models like BERT or its variants, such as SciBERT (Beltagy et al., 2019) or Bio-BERT (Lee et al., 2019) for various NLP tasks in the bio-medical domain. Additionally, Petroni et al. (2019) proposed that language models can be used as knowledge bases with the help of underlying relational information intrinsic to the training data. The recent improvement in many challenging NLP tasks can be credited to the focused research on developing task-agnostic architectures. However, there is still the need for task-specific datasets, which was the motivation behind the research by Brown et al. (2020). Recent research in training GPT models on biomedical data has proven to be successful for various tasks. So far, language models have been used for inference in NLP tasks; however, interestingly, as shown in this paper, they can be leveraged to verify causality claims or hypotheses. ## 3 Methods and Materials In this work, we started with dataset exploration, followed by applying feature selection methods to reduce the number of features as it was initially large in the raw data. Once we had the selected subset, we applied causal discovery methods using the appropriate statistical tests. The overall flow of the method is illustrated in Figure 1. ### Dataset The dataset used is a subset of The Cancer Genome Atlas (TCGA) Breast Cancer dataset (Ciriello et al., 2015). It contains records of 705 patients, of which 490 have IDC, 127 have ILC, and 88 have both types.3 It includes 1,936 features belonging to four main types of variables: copy number variations (represented as _cn_), somatic mutations (represented as _mu_), gene expression (represented as _rs_), and protein levels (represented as _pp_). There are 860 genes with records of copy number variations, which refer to the number of copies in each gene cell. They are represented as categorical variables calculated using the Gistic score. There are records of somatic mutations for 249 genes. Mutation variables are categorical and refer to whether a gene has been mutated or not. Additionally, the dataset contains gene expressions measured by RNA sequencing for 604 genes. The protein levels are also quantified for 223 genes. The gene expression and protein level variables are continuous. All genes in the dataset are represented by their Hugo symbols. Finally, the target variable is the vital status, which refers to whether the breast cancer patient survived or not. 
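To illustrate how such a mixed-type table can be organized before analysis, the sketch below (the file name and column labels are hypothetical, following the prefix convention described above) separates the four omics blocks and tabulates the target variable:

```python
import pandas as pd

# Hypothetical flat export of the TCGA breast cancer subset described above.
df = pd.read_csv("tcga_brca_multiomics.csv")

# Group columns into the four omics blocks by their prefixes.
blocks = {p: [c for c in df.columns if c.startswith(p + "_")] for p in ("cn", "mu", "rs", "pp")}
discrete_cols = blocks["cn"] + blocks["mu"]      # categorical: copy numbers and mutations
continuous_cols = blocks["rs"] + blocks["pp"]    # continuous: expression and protein levels

print({p: len(cols) for p, cols in blocks.items()})
print(df["vital_status"].value_counts())         # class balance of the target variable
```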
Out of the 705 patients, 611 survived and 94 died. We will examine the relationship between the vital status and the four aforementioned types of features. The aim is to understand the features causing the vital status and how the different features affect each other. Figure 1: Overall flow of our approach. Due to the size of the relevant TCGA multi-omics data, feature selection needs to be applied first to acquire a smaller subset. Then, four CD methods: PC, FCI, GPM, and GES (or its variant, FGES) were utilized. The CD methods produced causality claims related to the factors that affect the survival of patients. These relationships were then validated using language models through two approaches: perplexity and masked language modeling. ### Data Exploration Since the main outcome of our study is to understand the factors affecting the survival of patients, we visualized the distribution changes of specific variables across two categories of patients: survived and deceased. Regarding copy number variations, Figure 2 shows their distribution in the gene IDO1, where there is a difference in the distribution and the number of categories present for surviving and deceased patients. Another example of the case of the gene TGFRB3 is shown in Figure 8 in Appendix A. Also, the examination of the distribution of gene mutations and the two types of continuous variables can be found in Appendix A. ### Feature Selection As the dataset includes over 1,900 variables, there is a need to perform feature selection to reduce the complexity of the causal graphs and to improve the visualization. Our feature selection approach is based on two methods: max-min Markov blanket (MMMB) (Tsamardinos et al., 2003) and mutual information (MI) (Cover and Thomas, 1991), chosen depending on the type of data used. MMMB discovers the local structure by finding the minimal feature subset that consists of parents, children, and parents of the children of a target variable. In (Tsamardinos et al., 2003), the authors explain that MMMB is based on another CD method, Max-Min parents and children (MMPC), which is able to only find parents and children of a target variable. Therefore, MMPC should be applied before adding the spouses of the target variable. The parents and the children of the target variable are combined with the parents and children of the children's parents and children, building a set called candidate Markov blanket (CMB). After this, a filtering approach is adopted to remove the false positives detected by the additional combinations of variables coming from the different CMBs. It has been demonstrated that MMMB performs well with large datasets and outperforms other Markov blanket methods such as incremental association Markov blanket (IAMB) (Tsamardinos et al., 2003). For our case, we used MMMB for discrete data with the independence test being multinomial logistic regression, setting the significance level at 0.05. The Markov blanket was derived using the R package MXM. Figure 2: The patient distribution difference across the two vital status categories regarding the copy number variations in gene IDO1. Due to the limitations of applying MMMB to mixed data, we applied supervised feature selection to choose variables that are related to the target variable (vital status). In this regard, MI feature selection finds a value that is zero if the variables are independent and non-negative if they are dependent, with higher values referring to higher dependency. 
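A minimal sketch of this MI-based ranking with scikit-learn is shown below; the file and column names are hypothetical, and the categorical (cn_/mu_) columns are assumed to be integer-coded:

```python
import numpy as np
import pandas as pd
from sklearn.feature_selection import mutual_info_classif

df = pd.read_csv("tcga_brca_multiomics.csv")          # hypothetical export, as before
y = (df["vital_status"] == "DECEASED").astype(int)    # label encoding is an assumption
X = df.drop(columns=["vital_status"])

# Copy number (cn_) and mutation (mu_) columns are discrete; expression (rs_) and
# protein level (pp_) columns are continuous.
discrete_mask = np.array([c.startswith(("cn_", "mu_")) for c in X.columns])

mi = mutual_info_classif(X, y, discrete_features=discrete_mask, random_state=0)
top10 = pd.Series(mi, index=X.columns).sort_values(ascending=False).head(10)
print(top10)  # candidate features passed, together with vital status, to the CD methods
```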
For this method, we used the Python library sklearn, which allows specifying the discrete variables to obtain accurate results. Upon implementation, we noticed that the variables extracted from MI feature selection were all continuous, which is beneficial to study the relationship between continuous variables in the dataset using a smaller sample size. Aiming to produce informative and clear graphs, we only extract 10 variables (including vital status) from the outcome of both feature selection methods. We will then use this subset of features, instead of the whole set of available features, for causal discovery. It is interesting and important that applying suitable causal discovery methods to the Markov blanket of the target variable, together with the target variable (vital status), is able to identify the parents and children of the target variable (Gao and Ji, 2015). The different CD methods applied to such a subset of the features are introduced in the following section. ### Causal Discovery Methods In our analysis, we used the methods that are broadly applied in learning causal graphs from the data. They can be divided into constraint-based and score-based algorithms. Constraint-based methods find the conditional independencies in the dataset and consequently produce a DAG or a set of DAGs, corresponding to a Markov equivalence class and represented by a pattern, to satisfy those conditional independence constraints (Spirtes et al., 2000). On the other hand, score-based methods rely on finding the optimal graph \(\mathcal{G}\) that maximizes a properly defined score of the data given this graph \(\mathcal{G}\). The graphs were obtained using Tetrad 6.9.0. In this section, we explain the theory behind four CD approaches: PC, GES, FGES, and GPM. GPM is a nonparametric method that estimates the Markov network which can be used to create a causal structure. #### 3.4.1 Pc The PC algorithm (Spirtes et al., 2000) assumes the Markov condition and the faithfulness assumption, and its causal discovery results have been shown to be correct in the large sample limit if there are no latent confounders (an unobserved direct common cause of two measured variables). PC starts with a completely undirected graph and deletes the edges based on conditional independence tests by: * Testing all pairs of nodes for marginal independence and deleting the edge if they are marginally independent. * Testing conditional independence between the remaining adjacent pairs of nodes (\(A\), \(B\)), given any other single node with an edge connected to either of them. If there is any node (\(C\)) such that \(A\rotatebox[origin={c}]{$\models$}B|C\), the edge between \(A\) and \(B\) is removed and node \(C\) is saved as a separation set. * Repeating the same procedure of the independence test by increasing the size of the conditioning set one at a time until there are no more adjacent pairs \((A,B)\) such that all variables in the conditioning set are adjacent to \(A\) or all adjacent to \(B\). In the end, we get the skeleton, where all edges are undirected. Once we have the undirected graph, we start by finding V-structures. For the set of three variables \((A,B,C)\), where only one pair \((A,C)\) is not adjacent and other pairs \((A,B)\), \((B,C)\) are adjacent, orient the edges \(A\) - \(B\) - \(C\) as \(A\to B\gets C\), based on the information saved in the conditioning sets. The next step is orientation propagation. 
If there is an adjacent node \((D)\) to node \((B)\) in the V-structure \((A\to B\gets C)\), we form a Y-structure by directing the edge from \(B\) to \(D\). Finally, we produce the equivalence class that describes the conditional independence information in the data. The edges in the graph can be either undirected or directed (Glymour et al., 2019; Kalisch et al., 2012). If all DAGs in the equivalence class have the same direction for a particular edge, that edge is directed; otherwise, it is undirected. #### 3.4.2 Ges and Fges GES is one of the score-based methods that starts with an empty graph and adds one edge at a time according to improvements in the score. Then, the resulting Markov equivalence class is formed. This is done until no more improvements in the score can be made when adding an edge. In the second stage, the algorithm goes backward by removing edges until no more improvements in the score can be made. Fast GES (FGES) is a modification of GES that uses parallelization to make the algorithm faster. It assumes a penalty for the score and a weaker version of the faithfulness assumption (Ramsey et al., 2017). #### 3.4.3 Generalized Precision Matrix The work in (Zheng et al., 2023) proposes using a generalized precision matrix to construct a Markov network. The method aims to address some of the limitations of the previously mentioned algorithms; for instance, 1) the probability measure is assumed to be from a certain family, 2) a CD method can either handle discrete or continuous variables, and 3) having restrictions on the differentiability and the cardinality of the continuous and discrete variables, respectively. Since we have mixed data types in the dataset and some causal influences can be complex in nature, we experiment with GPM with the features chosen from the feature selection approaches. This approach produces a Markov network, which is further refined by the PC algorithm to produce an equivalence class. ### Statistical Tests An essential part of the causal discovery is the statistical tests used for evaluating the relationships between the data. Constraint-based methods, such as PC and FCI, require conditional independence tests for implementation. On the other hand, score-based methods, such as GES and FGES, use scoring methods for model selection of the entire DAG. The conditional independence tests adopted for the constraint-based methods are the conditional Gaussian likelihood ratio test, the Chi-square test, and the randomized conditional independence test (RCIT) (Strobl et al., 2017). The conditional Gaussian likelihood ratio test uses a mixture of continuous and discrete variables (Andrews et al., 2018). As for the Chi-square test, it is used for testing the independence of categorical variables. RCIT is an approximation for the kernel conditional independence (Zhang et al., 2012) that speeds up the CD methods and imposes fewer assumptions about the data. Since our dataset is of mixed types, we used the Chi-square test when we exclude the continuous variables and the conditional Gaussian likelihood ratio test and RCIT when we consider a mixed subset. The score-based methods use multiple scores including discrete BIC (Andrews et al., 2018) and the conditional Gaussian BIC score (Andrews et al., 2019). The discrete BIC test is used only when all variables in the dataset are categorical and are based on a modification to the original BIC score function. 
The conditional Gaussian BIC score is adopted when the dataset includes discrete and Gaussian variables and is computed under the conditional Gaussian assumption (Andrews et al., 2018). Similar to the constraint-based methods, we used the discrete BIC test when the continuous variables are excluded, and the conditional Gaussian BIC score when both types are included. However, these score-based methods are built on some hard assumptions based on the distribution of data in question and the underlying causal mechanisms. Thus, we delved into other forms of score functions for CD named generalized score functions (Huang et al., 2018). The score function we used is the generalized score with cross-validation (CV) that calculates the local score where the score setting used is negative k-fold cross-validated log-likelihood. In our code, we used the causal-learn package to use the generalized CV score. ## 4 Experiments Given the nature of the dataset, and to conduct fair experimentation, both the continuous and the categorical data types had to be included. With the feature selection methods mentioned above, a subset of the data was obtained. Different statistical tests were conducted according to the type of data and the CD method implemented. CD methods produced directed graphs as is shown in Figures 3, 4, 5, 6, and 7. The nodes in the graphs represent the features, for example, "cn_IDO1" represents the copy number variations in gene IDO1. To demonstrate the efficacy of the CD methods, we backed our results from findings in the biomedical literature. \begin{table} \begin{tabular}{l l l} \hline \hline **Model** & **Claim** & **Perplexity** \\ \hline GPT-2 & “Mutation in gene UBR4 is related to the survival in cancer” & 76.68 \\ GPT-2 & “Mutation in gene UBR4 is not related to the survival in cancer” & 39.65 \\ SciBERT & “Mutation in gene UBR4 is related to the survival in cancer” & 110.87 \\ SciBERT & “Mutation in gene UBR4 is not related to the survival in cancer” & 167.8 \\ BlueBERT & “Mutation in gene UBR4 is related to the survival in cancer” & 31.01 \\ BlueBERT & “Mutation in gene UBR4 is not related to the survival in cancer” & 2502.9 \\ \hline \hline \end{tabular} \end{table} Table 1: Comparison of perplexity scores for different models on the same claim where the claim column represents the direct association of the variables with vital status in the graphs. ### Using Categorical Data For categorical data, we used MMMB as the feature selection method to remove the independent variables given the Markov blanket of the target variable. The independence test between the variables was carried out by testIndMultinom in R. Then, we applied PC, FCI, and GES to the selected set of features. In the following sections, the results of PC and GES are described, while the results of FCI can be found in Appendix C. #### 4.1.1 Using PC We applied the PC algorithm with the Chi-square test, which gave us the results shown in Figure 6. From the causal graph, we found that vital status was connected to the mutation in genes MLL3 and TNXB and copy number variation in TNFRSF11B. Studies have shown that MLL3 is the \(6^{th}\) most frequently mutated gene in ER+ breast cancer patients. In addition, it has been proven that mutations in MLL3 result in endocrine therapy resistance in patients, which can only be analyzed if the tumor samples of patients are genotyped (Stauffer et al., 2021). The other genes also play an important role in the survival of the patients. 
For example, the TNXB gene, being a gene that is not mutated too frequently, is not discovered by mutation frequency-based software. However, the TNXB gene is crucial in the carcinogenesis of breast cancer (Campbell et al., 2016). In terms of survival, the TNXB gene has been validated as a biomarker for early metastasis of breast cancer. Thus, it is beneficial that the CD methods can identify factors affecting survival that are hard to obtain from the commonly used tools. Moreover, studies showed that the dysregulation of the TNFRSF11B gene results in poor prognosis of breast cancer patients (Luo et al., 2017), which ultimately results in distant organ metastasis of breast cancer. Therefore, this result backs the shorter median overall survival of these patients, which is also reflected in our results using the CD methods. #### 4.1.2 Using GES Lastly, we used the generalized CV score (Huang et al., 2018) with the data obtained from MMMB on the GES method. From the graph shown in Figure 4, we observed that the target variable (vital status) was related to many variables including mutations in genes UBR4, TNXB, MXRA5, USH2A, and MLL3. It was also connected to the copy number variations in genes IDO1, TNFRSF11B, and NCOR1. The role of genes TNXB, MLL3, and TNFRSF11B is mentioned above. Apart from that, MXRA5 plays a role in cancer cells in forming metastases, hence affecting survival (Minafra et al., 2014). IDO1 plays a role in the differentiation of monocytes, a cell type that has been found to be associated with tumor progression in breast cancer (Meireson et al., 2020). Figure 7: Graph of the PC algorithm applied on the Markov network produced by GPM using a mixture of features selected from MMMB and MI feature selection. For the scope of this paper, we will not be discussing other complex biological mechanisms and pathways for other genes, but there are studies suggesting the association of these genes with breast cancer and consequently overall survival. ### Using Mixed Data For mixed data, we applied the MI feature selection method. The subset of data obtained included the 10 features that have the highest mutual information with the target variable (vital status). This subset was then used with different CD methods: FCI, FGES, PC, and GPM. We now show and discuss the results derived from the aforementioned CD methods. As mentioned, applying suitable causal discovery methods to the target variable (vital status) and its Markov blanket can find parents (direct causes) and children of the target variable (Gao and Ji, 2015). All details regarding FCI can be found in Appendix C. #### 4.2.1 Using FGES We applied FGES with the conditional Gaussian BIC score. It was observed that the target variable was related to the gene expression of the genes SLC7A2, SLC7A10, and MMRN1, as shown in Figure 3. Research has propounded that increased levels of SLC7A2 in breast cancer patient samples are usually associated with poor overall prognosis and are considered a stand-alone variable for decreased survival. Regarding SLC7A10, it has been found to be expressed less in breast cancer tissues compared to normal breast tissue. However, its relation with overall survival has not been investigated yet. MMRN1 has been recorded as a differentially expressed gene in many cancers and has the capability to be identified as a potential cancer biomarker. Moreover, MMRN1 expression has been observed to be related to the stage of breast cancer (Shi et al., 2017).
\begin{table} \begin{tabular}{l l l l} \hline \hline Claim & Masked Token & Top Predicted Token & Score \\ \hline [MASK] in gene UBR4 is related to the survival in cancer & Mutation & Mutation & 0.773 \\ Changes in protein levels in gene CDK1.PY15 & Survival & Progression & 0.135 \\ is related to the [MASK] in cancer & Patients & Patients & 0.179 \\ Mutation in gene TNXB is related to the survival in [MASK] & Cancer & Cancer & 0.299 \\ \hline \hline \end{tabular} \end{table} Table 2: Prediction for masked tokens with BlueBERT and their respective scores. The masks are placed randomly in a sentence to evaluate the performance of the model. #### 4.2.2 Using PC Using the mixed data from MI feature selection, we implemented the PC algorithm with the conditional Gaussian likelihood ratio test. In this case, only one variable was observed to be related to the target variable, which was SLC7A2, as shown in Figure 5. The justification from the literature review was mentioned in the previous section. #### 4.2.3 Using GPM Lastly, since GPM creates a Markov network using mixed data, we combined the features from MMMB (discrete) and the features from MI (continuous) along with the vital status as input to GPM. A Markov network was obtained using the adjacency matrix obtained from GPM. Then, an equivalence class was obtained using the PC algorithm (with RCIT) from the Markov network. Figure 7 shows the dense graph that is created using GPM. As can be observed, several variables have a relationship with the target variable, vital status, which show us the biological relations between the multi-omics features and a patient's survival. Validation of most of these variables from a biological perspective is mentioned in previous sections. ### Verification with Language Models The graphs produced by different CD methods showcase a wide variety of relationships between the target variable and other multi-omics data variables. Although we relied on the biomedical literature to validate our CD results, a more efficient approach was needed. The authenticity of these claims made by the causality model was validated by language models through different experiments. Various models were used for experimentation, but we only describe those that yielded the best results. #### 4.3.1 Perplexity Score The perplexity of a language model measures the degree of uncertainty when it generates a new token, averaged over all words in the input. It has been used to do fact-checking (Lee et al., 2021), the idea being that claims that are supported by a given text corpus are expected to have a low perplexity, while those that are not supported would have a high perplexity. If we have a claim made by the causality model, e.g, "A is dependent on B", we take its negation "A is not dependent on B", and we calculate the perplexity for both. The one with the lower perplexity is considered to be correct. We evaluated a set of claims using BlueBERT and SciBERT (using the same configuration and size as BERT-base). The results obtained conform with the claims from the causality models as shown in Table 1. To elaborate, the results from the CD models indicate that the mutation in gene UBR4 is associated with the survival status of cancer. In Table 1, the perplexity score for that particular claim with the best performing model (BlueBERT) is less, which represents the truthfulness of the statement. 
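As an illustration of this perplexity check, the sketch below scores a claim and its negation with an off-the-shelf causal language model (GPT-2 is shown; a domain-specific model would be plugged in the same way), treating the lower-perplexity variant as the better-supported one:

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tok = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def perplexity(sentence: str) -> float:
    ids = tok(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean token-level cross-entropy
    return float(torch.exp(loss))

claim = "Mutation in gene UBR4 is related to the survival in cancer"
negation = "Mutation in gene UBR4 is not related to the survival in cancer"
print(perplexity(claim), perplexity(negation))  # lower value -> better-supported statement
```

For masked models such as SciBERT and BlueBERT, an analogous pseudo-perplexity can be computed by masking one token at a time; the claim-versus-negation comparison works the same way.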
#### 4.3.2 Masked Language Modeling One of the interesting uses of LLMs is that they can be used as knowledge bases (Lee et al., 2020; Petroni et al., 2019). Using LLMs for such a purpose remains largely unexplored in the biomedical domain. We first used DistilBERT to assess its performance on the prediction of masked tokens from the claims. The claims from the causality model were masked randomly and fed into the model after they were tokenized. For every sentence, it gave us a computed probability score of different probable tokens for the masked token. As expected, the correct token predictions were less in number as compared to the incorrect predictions. To obtain good performance for this task, a model trained on biomedical data had to be used. Among the three models used for the prediction of masked tokens, BlueBERT produced the best results, with an accuracy of 81.25%. DistilBERT and SciBERT lagged behind with the accuracy of predicting tokens at 56.25% and 68.75%, respectively. The performance of other models was lower compared to BlueBERT mainly because of the datasets they were trained on. We also show the performance of BlueBERT on various claims on different masks and most of the predicted masks were correct as we can see in Table 2. The model was tested on a variety of claims made by CD models varying the placement of the masks. We also employed PubMedBERT (Gu et al., 2020) trained on PubMed Abstracts and full-text articles from PubMedCentral and ClinicalBERT (Huang et al., 2019) trained on clinical notes from MIMIC-III dataset (Johnson et al., 2016) however, the scores for predicted masked tokens were lower than BlueBERT. We also used BioGPT (Luo et al., 2022) to generate the relevant claims by providing a prompt. We observed the relevancy of the generated biomedical text produced by BioGPT when the input to the model consisted of a part of the claim. The generated text matched the claims made by the CD methods. Examples of generated text are in Appendix B. ## 5 Discussion Using causal discovery for biomedical data certainly provides a different perspective on the problem. It can be an efficient approach to discovering various hidden changes in a patient's body. Causal discovery allows us to understand these relationships and to use the results for various downstream tasks, including the effect of newly discovered drug targets and the cause of a normal gene turning into a cancer gene. However, the reliability of the results from CD methods is questionable and the tolerance for incorrect results is particularly low in the biomedical domain. As a result, validation through research or professionals is required. In our paper, we chose to validate our findings using biomedical corpora. LLMs have largely helped the research body in several fields. The usage of language models for verifying the claims made by the causality model is efficient and reliable as it aids domain experts in further verification of the claims. Our experimental results give an in-depth understanding of the different causal discovery methods used for multi-omics data. The data being mixed required us to delve deeper into the CD methods, their underlying assumptions (e.g., the faithfulness assumption for the PC algorithm), and technical assumptions (e.g., the linear-Gaussian assumption in the original GES for continuous data) in order for the results to be trustable. As can be deduced from our results, there are overlaps between the results of different methods. 
However, the differences suggest a need for increased consistency between the findings to assist decision-making in a medical setting. Overall, FGES and GES produced denser graphs compared to PC, but they are less efficient at handling mixed data in the current implementation. GPM helped in leveraging beneficial information from mixed data and in modeling the distribution of our data, which did not necessarily follow the extensively studied families of distributions. It also adapts to our categorical variables with different cardinalities. Causal graphs derived from different CD methods yield several important insights about the target variable. Out of all the graphs, the one produced by GPM followed by PC seems to be the best in terms of explainability from a medical lens. Vital status was found to be affected by several important variables. In some cases across the graphs, the vital status was observed to be the effect of a number of variables, for example, the mutations in gene MXRA5, and the cause for several specific variables such as the mutations in NCOR1. The latter observation, i.e., that vital status is the cause of changes in genes, might be unexpected at first glance. However, one example from the biomedical literature suggests that chemotherapy given to patients can sometimes result in alterations in the genome. Mutations in a patient's body can be related to resistance to therapy, dysregulation of cellular pathways, and metastasis. ## 6 Conclusion and Limitations Our work 1) exploited causal discovery approaches to unravel relationships between the survival of breast cancer patients and the multi-omics variables in a subset of the TCGA dataset for breast cancer and 2) made use of language models for verification of the results discovered by CD. The CD approaches provide more interpretable ways to analyze data in the biomedical domain. Among the various CD methods implemented, we found that the GPM-based method yields the most comprehensive result. Due to the lack of methods that can validate the reliability of CD graphs, we leveraged existing pre-trained language models to evaluate the claims made by the CD models. We conclude that models trained on relevant medical corpora, like BlueBERT, demonstrated superiority over other LLMs for the validation of biomedical claims. **Limitations.** For CD models to be used for various complex applications, more efficient methods that can handle mixed types of data are needed. Until now, the available software used for generating graphs has not been scalable to large datasets, which led us to reduce the number of features in our dataset drastically. This process might have affected the efficacy of our findings, especially since some related biological information can be considered crucial. In addition, more focus needs to be directed towards exploring validation methods for CD models. Furthermore, language models have been trained on data up to a certain point in time. For them to be used as a source of evaluation for a continuously evolving field of biology, they need to be trained on up-to-date biomedical data to ensure accurate evaluation. Moreover, one should use language models with caution and explore a variety of them for higher reliability, especially in the medical context.
2305.17723
SAP HANA Data Volume Management
Today information technology is a data-driven environment. The role of data is to empower business leaders to make decisions based on facts, trends, and statistical numbers. SAP is no exception. In modern days many companies use business suites like SAP on HANA S/4 or ERP or SAP Business Warehouse and other non-SAP applications and run those on HANA databases for faster processing. While HANA is an extremely powerful in-memory database, growing business data has an impact on the overall performance and budget of the organization. This paper presents best practices to reduce the overall data footprint of HANA databases for three use cases like SAP Business Suite on HANA, SAP Business Warehouse, and Native HANA database.
Subhadip Kumar
2023-05-28T13:42:34Z
http://arxiv.org/abs/2305.17723v1
# SAP HANA Data Volume Management ###### Abstract Today's information technology is a data-driven environment. The role of data is to empower business leaders to make decisions based on facts, trends, and statistical numbers. SAP is no exception. Nowadays, many companies use business suites like SAP on HANA (S/4 or ERP) or SAP Business Warehouse and other non-SAP applications and run them on HANA databases for faster processing. While HANA is an extremely powerful in-memory database, growing business data has an impact on the overall performance and budget of the organization. This paper presents best practices to reduce the overall data footprint of HANA databases for three use cases - SAP Business Suite on HANA, SAP Business Warehouse, and Native HANA database. SAP, HANA, NSE ## 1 Introduction Many organizations adopt HANA (High-performance ANalytic Appliance) as the primary database for SAP and non-SAP applications over traditional databases (Oracle/SQL Server) because of its in-memory computing and real-time results[(3)]. It retrieves data 3600 times faster than traditional databases and can scan up to 3.5 billion records per second per core. By design, HANA is a multi-model database that stores data in its memory instead of keeping it on disk. HANA utilizes a column-store mechanism to store table data. Memory on an SAP HANA database is divided into two parts - actual table data and working memory. SAP typically recommends maintaining a 1:1 ratio between table data and working memory. Increasing table data will also increase the working memory requirement. SAP HANA runs on certified hardware either on-premises or in the cloud. Typically, memory configurations run from 256GB all the way up to 12TB in a scale-up architecture with matching CPU configuration[(4)]. It is very important to properly size the database to accommodate current data and future growth. Upgrading the memory of existing HANA databases is not only complicated but also very expensive. Most mission-critical applications that run on HANA are usually configured in an HA - high availability (same data center) and DR - disaster recovery (different data center) setup. In a typical example where a mission-critical HANA database grows from 1TB to 2TB, memory and CPU have to be upgraded in three places - two HA nodes and one DR node. In some instances, the entire physical server has to be replaced in order to accommodate the growth. For example, for a database currently running on 6TB/4-socket Cisco C480 hardware that now requires 8TB of memory due to data growth, it is not possible to just upgrade the memory and CPU module; new upgraded hardware, a C890 with 12TB/8 sockets, has to be procured. Setting up new hardware not only takes up space in a data center; storage, power supply, networking, OS installation, maintenance and setup also need to be completed before HANA can be installed and the existing database can be migrated to the new hardware. All these activities have a direct impact on Total Cost of Ownership (TCO). This also has a direct impact on the overall green initiatives of the organization[(8)]. Therefore, it is extremely important to control database growth in order to control TCO. Fig 1 shows a pie chart of the TCO distribution. ## 2 Data Tiering - Basics This section gives an overview of multi-temperature data tiering and the methods to implement it. ### Multi Temperature Data and Data Tiering HANA database data can be managed according to how frequently it is accessed.
This is sometimes referred to as multi-temperature database management. It gives you the ability to keep mission-critical data in the HOT layer, i.e. in memory, move infrequently accessed data to the WARM layer, i.e. to disk, and move rarely used data to an inexpensive storage solution like Hadoop in the COLD layer. For example, a finance team that has to generate a daily revenue report for C-level executives accesses the last 48 hours of data very frequently, so this data can reside in the HOT layer (0-48 hrs). The team also needs to generate monthly, quarterly and yearly reports and therefore occasionally pulls the last 30, 90 and 365 days of financial data, which can reside in the WARM layer (48 hrs to 365 days). Finally, the finance team needs to retain 7 years of data for audit purposes; since this data is rarely accessed, it can be stored in the COLD layer (1 year to 7 years). Fig 2 below represents the significance of each layer.

### Multi-temperature Data

Figure 1: Total Cost of Ownership

Hot layer:
* Very fast data retrieval
* Frequently accessed data
* High cost due to its in-memory nature
* Technology: DRAM, PMEM

Warm layer:
* Very fast data retrieval
* Infrequently accessed data
* Low cost compared to the hot layer as it resides on disk
* Technology: HANA NSE (Native Storage Extension), Dynamic Tiering, Extension Node, Data Aging

Cold layer:
* Slow retrieval of data
* Rarely accessed data
* Lowest cost
* Technology: NLS (Near-Line Storage), SAP IQ, SAP ILM, SAP DWF (Data Warehousing Foundation)

Fig 2: Multi Temperature Data-Tiering

### Data Tiering Methodology
It can be confusing which option to choose for data tiering in HANA[10]. Unfortunately, there is no one-size-fits-all in HANA - it finally boils down to whether, and which, business suite runs on top of HANA, or whether it is a standalone native HANA database. Figure 3 explains the different methodologies that can be used for each layer of data.

## 3 Data Tiering - Native SAP HANA

### Hot Data
For every HANA database, whether it is a native HANA DB or an SAP Business Suite on HANA, the hot layer will always be DRAM (Dynamic Memory) or PMEM (Persistent Memory) [7]. HANA stores columnar data in memory for faster processing. The use of Optane, a.k.a. PMEM, is decreasing over time both on premise and in the cloud. Except for Microsoft Azure, no other cloud provider offers PMEM-based hardware, and Intel recently announced that it is winding down its Optane business.

Figure 3: Data Tiering Technologies

### Warm Data

#### 3.2.1 Extension Node
The HANA extension node was introduced with HANA 2.0 SP03 for native HANA databases. The extension node is based on the HANA scale-out feature: one worker node (slave node) in the scale-out landscape is reserved for warm-data storage and processing. An extension node allows a larger data footprint, by default 100% of the node's DRAM size (optionally 200%), and has a relaxed core/memory ratio under HANA TDI 5. To accommodate the larger data footprint of up to 200% of DRAM, reconfiguration at the storage I/O level and table partitioning on the extension node are required. This degrades the performance[6] of the extension node because of increased disk access and unload/reload activity.
Advantages:
* Easy to implement and manage as it is based on the HANA scale-out mechanism
* It gives almost the same performance as in-memory data
* Full functional parity with the HANA database
* It stores warm data up to 100% of DRAM and optionally 200%
* Multiple extension nodes are possible but do not come by default

Disadvantages:
* Higher TCO
* It can only store warm data up to 100% of DRAM and optionally 200%
* It requires HANA TDI 5 certified hardware

#### 3.2.2 SAP HANA Dynamic Tiering
HANA Dynamic Tiering is another option for warm data storage for managing less frequently used data. It is based on a disk-centric technology where columnar data resides on disk. SAP HANA Dynamic Tiering exists within the SAP HANA system architecture as a dedicated database process named esserver. Like the indexserver process, which stores and processes in-memory data, the esserver process stores data in columnar, disk-based structures and offers disk-optimized data processing. In non-production HANA environments, esserver can be co-deployed on the same host as SAP HANA in a scale-up architecture. For production environments, SAP recommends a dedicated host for esserver, and for scale-out systems esserver should likewise be installed on its own machine. With multiple tenant databases, a dedicated esserver process and a dynamic tiering extended store are required for each tenant database using dynamic tiering. Currently, dynamic tiering does not support the high isolation mode for tenant databases. Even though it is possible to add HANA Dynamic Tiering to small databases, SAP recommends it for databases of 512GB or larger, where large data volumes begin to necessitate a data lifecycle management solution. The recommended ratios of SAP HANA memory to SAP HANA Dynamic Tiering extended storage are:
* SAP HANA memory <= 2.5TB: the size of dynamic tiering storage should not exceed 4x the size of SAP HANA memory.
* SAP HANA memory > 2.5TB: the size of dynamic tiering storage should not exceed 8x the size of SAP HANA memory.

The memory requirement of a HANA Dynamic Tiering host is much smaller than that of a HANA in-memory host. Fig 4 shows the memory requirements of HANA Dynamic Tiering hosts on GCP (Google Cloud Platform).

Advantages:
* Applies to both SAP scale-out and scale-up architectures
* Low TCO
* No requirement for TDI-certified hardware
* No separate license required to implement HANA Dynamic Tiering

Disadvantages:
* In certain circumstances, HANA Dynamic Tiering cannot support an operation in which the entire dataset has to be transferred from dynamic tiering to SAP HANA and the HANA host does not have sufficient memory to perform it.
* HANA Dynamic Tiering is slow compared to the Extension Node

#### 3.2.3 Native Storage Extension
SAP HANA Native Storage Extension (NSE) [2] is a general-purpose, built-in warm data store in SAP HANA that lets you manage less-frequently accessed data without fully loading it into memory. It integrates disk-based or flash-drive-based database technology with the SAP HANA in-memory database for an improved price-performance ratio. This solution is available from HANA 2.0 SP04 onwards. By default, all HANA columnar tables are column loadable, which means they are loaded into memory for better performance. Using NSE you can convert a table to page loadable, which means data from the table is loaded into memory in granular units of pages for query processing while the remaining pages reside on disk. With NSE you can add warm storage up to a 1:4 ratio of HANA hot data in memory to warm data on disk, and the NSE disk should be no larger than 10TB.
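Purely as an illustration of the page-loadable conversion just described, the sketch below issues the corresponding SQL through SAP's hdbcli Python client. The host, credentials, schema and table names are placeholders rather than values from this paper, and the exact statement syntax may vary between HANA 2.0 revisions.

```python
# Illustrative sketch only: host, credentials, schema and table names are placeholders.
from hdbcli import dbapi  # SAP HANA Python client

conn = dbapi.connect(address="hana-host", port=30015,
                     user="DBADMIN", password="***")
cur = conn.cursor()

# Move an entire column table (all columns and partitions) to NSE warm storage.
cur.execute('ALTER TABLE "SAPABAP1"."ZSALES_HISTORY" PAGE LOADABLE CASCADE')

# To revert the table to the default fully in-memory behaviour:
# cur.execute('ALTER TABLE "SAPABAP1"."ZSALES_HISTORY" COLUMN LOADABLE CASCADE')

cur.close()
conn.close()
```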
Dynamic Tiering offers a much larger capacity. The SAP HANA Native Storage Extension (NSE) Advisor can be used to get suggestions on load units for tables, partitions, or columns based on how frequently they are accessed. It determines the temperature of the data and uses rule-based heuristics to identify hot and warm objects as candidates to be either page-loadable or column-loadable. A rule-based algorithm (data temperature threshold, access pattern and frequency, and data density) is used to derive these recommendations.

Figure 4: Dynamic Tiering Memory GCP

The main component of NSE is the buffer cache, which is required for performant access to pages on disk. With the current NSE feature, 10% of main memory is reserved for the buffer cache by default but is not pre-allocated. The buffer cache uses LRU (Least Recently Used) and HBL (Hot Buffer List) strategies and reuses pages from its internal pools instead of allocating/deallocating pages via HANA memory management.

Advantages:
* No separate component needs to be installed
* Low TCO
* NSE can be enabled at table, column, or partition level
* No separate license required to implement HANA NSE
* NSE can be used in both scale-up and scale-out architectures
* No separate hardware is needed

Disadvantages:
* Using the NSE Advisor requires setting up the buffer cache, and a few rounds of iteration with the advisor are needed to determine the candidates for NSE
* NSE is slow compared to the Extension Node

In contrast to Dynamic Tiering, query execution in the HANA service that stores the NSE data creates transient data and interim results in memory only. Thus, the memory requirement for a comparable workload can be higher with NSE. A solution to migrate data from Dynamic Tiering to NSE is on the road map for SAP HANA.

#### 3.2.4 SAP Data Aging
Data aging offers the option of moving large amounts of data within a database in order to free up working memory. The SAP application helps you move data from the current area to the historical area and controls the move by assigning a data temperature to the data. You can influence the move through aging-object-specific customizing, usually by defining a residence time. Moving the data influences the visibility of the data during data access [1]. The data aging exercise has to be performed at the ABAP layer. The high-level steps are:
- Create and manage partitions
- Activate the Data Aging object
- Define the residence time for Data Aging
- Create and manage the data aging group
- Schedule Data Aging runs

Advantages:
* No separate license is required
* Low TCO
* No separate hardware is needed

Disadvantages:
* Only applicable to certain ABAP business suites
* Only applicable to predefined data aging objects - any new object that is not part of the standard data aging objects needs custom development, which is time consuming

### Cold Data
Cold data refers to data that is seldom or sporadically accessed. Separating cold data from the SAP HANA database reduces the database footprint: tables or partitions are moved from SAP HANA to external storage with mostly read-only data access and separate high availability, disaster recovery, encryption, and administration functionality.

#### 3.3.1 DLM with DataHub/Spark Controller
There are two approaches to accessing SAP HANA cold storage: SAP Data Hub and SAP Spark Controller. With SAP Data Hub deployed in a Kubernetes cluster, the SAP Data Hub distributed runtime engine (also known as Vora) can persist cold data in disk-based streaming tables. Technically, these streaming tables are viewed as virtual tables by SAP HANA.
SAP HANA queries those virtual tables from SAP Data Hub using the Vora ODBC adapter with SAP HANA Smart Data Access (SDA). The Data Lifecycle Management (DLM) tool of the SAP Data Warehousing Foundation (DWF) facilitates the bi-directional movement of data between the hot, warm and cold layers. For external storage in this option we can utilize either on-premise HDFS or cloud-based S3, Azure Data Lake Storage (ADLS) or GCP storage. Another option is to use the HANA Spark Controller, which allows SAP HANA to access Hadoop [9] data through a SQL interface and primarily works with Spark SQL to connect to an existing Hive metastore. It uses the SparkSQL Smart Data Access (SDA) adapter, which moderates query execution and data transfer by enabling SAP HANA to fetch data in a compressed columnar format.

Advantages:
* Faster access to data
* Any underlying external storage can be used (on-premise and cloud)

Disadvantages:
* Requires a separate license
* Main use case is a native HANA DB
* Complex architecture that requires early planning

#### 3.3.2 Data Archiving with Near-Line Storage (NLS)
In an operative SAP BW system, the volume of data increases constantly because of business and legal requirements. The large volume of data can affect the performance of the system and increase the administration effort, which results in the need to implement a data-aging strategy. If you want to reduce the amount of data held in your SAP BW system without losing it, you can use data archiving. The data is first moved to archive or near-line storage and then deleted from the SAP BW system. You can either access the data directly or load it back as required, depending on how the data has been archived. Near-line storage (NLS) is used to archive Business Warehouse (BW) data, which then remains available for reporting using BEx queries. Once data is archived to NLS, BW still needs some adjustments in order to effectively report data from NLS. The solution mainly uses SAP IQ as the database for NLS.

Advantages:
* SAP NLS based on SAP IQ works with all traditional databases as well as SAP HANA
* SAP IQ being an SAP product, support for SAP NLS is robust

Disadvantages:
* Requires a separate license
* Main use case is SAP BW or BW/4HANA
* Slower retrieval

#### 3.3.3 Data Archiving
This is the traditional data archiving process at the ABAP layer using tcodes SARA, SARI, TANA[5]. It is only applicable to standard archiving objects for OLTP systems like SAP ECC and SAP S/4HANA. In this process, the data being archived is stored as a flat file either at the application server layer, in HSM (Hierarchical Storage Management) systems, or in a third-party provider's storage system such as OpenText via SAP ArchiveLink. This storage system stores the processed archive file after a delete program has successfully completed.

Advantages:
* This is traditional data archiving and valid for all traditional databases including SAP HANA
* No separate license required

Disadvantages:
* Only applicable to standard objects; otherwise it requires custom development
* Main use case is SAP ECC or S/4HANA
* Slower retrieval

## 4 Summary
SAP HANA data tiering is complex in nature, and there is no 'one size fits all'. Different mechanisms are used to move data to warm and cold storage depending on the HANA suite in use (Native, ECC or BW), as well as on price and complexity. This document summarizes each option and gives a holistic overview of the processes.
## 5 Acknowledgments
I would like to thank the anonymous reviewers for their comments and suggestions.
2307.01738
Mitigating Calibration Bias Without Fixed Attribute Grouping for Improved Fairness in Medical Imaging Analysis
Trustworthy deployment of deep learning medical imaging models into real-world clinical practice requires that they be calibrated. However, models that are well calibrated overall can still be poorly calibrated for a sub-population, potentially resulting in a clinician unwittingly making poor decisions for this group based on the recommendations of the model. Although methods have been shown to successfully mitigate biases across subgroups in terms of model accuracy, this work focuses on the open problem of mitigating calibration biases in the context of medical image analysis. Our method does not require subgroup attributes during training, permitting the flexibility to mitigate biases for different choices of sensitive attributes without re-training. To this end, we propose a novel two-stage method: Cluster-Focal to first identify poorly calibrated samples, cluster them into groups, and then introduce group-wise focal loss to improve calibration bias. We evaluate our method on skin lesion classification with the public HAM10000 dataset, and on predicting future lesional activity for multiple sclerosis (MS) patients. In addition to considering traditional sensitive attributes (e.g. age, sex) with demographic subgroups, we also consider biases among groups with different image-derived attributes, such as lesion load, which are required in medical image analysis. Our results demonstrate that our method effectively controls calibration error in the worst-performing subgroups while preserving prediction performance, and outperforming recent baselines.
Changjian Shui, Justin Szeto, Raghav Mehta, Douglas L. Arnold, Tal Arbel
2023-07-04T14:14:12Z
http://arxiv.org/abs/2307.01738v2
Mitigating Calibration Bias Without Fixed Attribute Grouping for Improved Fairness in Medical Imaging Analysis ###### Abstract Trustworthy deployment of deep learning medical imaging models into real-world clinical practice requires that they be calibrated. However, models that are well calibrated overall can still be poorly calibrated for a sub-population, potentially resulting in a clinician unwittingly making poor decisions for this group based on the recommendations of the model. Although methods have been shown to successfully mitigate biases across subgroups in terms of model accuracy, this work focuses on the open problem of mitigating calibration biases in the context of medical image analysis. Our method does not require subgroup attributes during training, permitting the flexibility to mitigate biases for different choices of sensitive attributes without re-training. To this end, we propose a novel two-stage method: Cluster-Focal to first identify poorly calibrated samples, cluster them into groups, and then introduce group-wise focal loss to improve calibration bias. We evaluate our method on skin lesion classification with the public HAM10000 dataset, and on predicting future lesional activity for multiple sclerosis (MS) patients. In addition to considering traditional sensitive attributes (e.g. age, sex) with demographic subgroups, we also consider biases among groups with different image-derived attributes, such as lesion load, which are required in medical image analysis. Our results demonstrate that our method effectively controls calibration error in the worst-performing subgroups while preserving prediction performance, and outperforming recent baselines. Keywords:Fairness Bias Calibration Uncertainty Multiple Sclerosis Skin Lesion Disease activity prediction ## 1 Introduction Deep learning models have shown high prediction performance on many medical imaging tasks (e.g.,[3, 15, 21, 24]). However, deep learning models can indeed make errors, leading to distrust and hesitation by clinicians to integrate them into their workflows. In particular, models that show a tendency for overconfident incorrect predictions present real risk to patient care if deployed in real clinical practice. One way to improve the trustworthiness of a model is to ensure that it is well-calibrated, in that the predicted probabilities of the outcomes align with the probability of making a correct prediction [8]. While several methods have been shown to successfully improve calibration on the _overall_ population [8, 16], they cannot guarantee a small calibration error on _sub-populations_. This can lead to a lack of fairness and equity in the resulting diagnostic decisions for a subset of the population. Figure 1(a) illustrates how a deep learning model can achieve good calibration for the overall population and for younger patients, but produces significantly overconfident and incorrect predictions for older patients. Although various methods have been shown to successfully mitigate biases by improving prediction performance (e.g. accuracy) in the worst-performing subgroup [28, 13, 18, 1, 27], improved prediction performance does not necessarily imply better calibration. As such, this paper focuses on the open problem of mitigating calibration bias in medical image analysis. Moreover, our method does not require subgroup attributes during the training, which permits the flexibility to mitigate biases for different choices of sensitive attributes without re-training. 
This paper proposes a novel two-stage method: Cluster-Focal. In the first stage, a model \(f_{id}\) is trained to identify poorly calibrated samples. The samples are then clustered according to their calibration gap. In the next stage, a prediction model \(f_{\text{pred}}\) is trained via group-wise focal loss. Extensive experiments are performed on (a) skin lesion classification, based on the public HAM10000 dataset [3], and (b) predicting future new lesional activity for multiple sclerosis (MS) patients on a proprietary, federated dataset of MRI acquired during different clinical trials [26, 2, 7]. At test time, calibration bias mitigation is examined on subgroups based on sensitive demographic attributes (e.g. age, sex). In addition, we consider subgroups with different image-derived attributes, such as lesion load. We further compare Cluster-Focal with recent debiasing methods that do not need subgroup annotations, such as EIIL (Environment Inference for Invariant Learning) [4], ARL (Adversarially Reweighted Learning) [10], and JTT (Just Train Twice) [14]. Results demonstrate that Cluster-Focal can effectively reduce calibration error in the worst-performing subgroup, while preserving good prediction performance, when split into different subgroups based on a variety of attributes.

Figure 1: Illustration of calibration bias for a model that predicts future new lesional activity for multiple sclerosis (MS) patients. (a) Reliability diagram: ERM (training without considering any fairness) exhibits good calibration overall and also for younger patients, whereas it produces significantly overconfident and incorrect predictions for older patients. (b) Two MS patients depicting highly confident predictions, with incorrect results on the older patient and correct results on the younger patient. Poorer calibration for older patients results in older patients being more likely to be incorrect with high confidence.

## 2 Methodology

We propose a two-stage training strategy, Cluster-Focal. The first stage consists of _identifying different levels of poorly calibrated samples_. In the second stage, we introduce a group-wise focal loss to mitigate the calibration bias. At test time, our model can mitigate biases for a variety of relevant subgroups of interest. We denote \(D=\{(\mathbf{x}_{i},y_{i})\}_{i=1}^{N}\) as a dataset, where \(\mathbf{x}_{i}\) represents multi-modal medical images and \(y_{i}\in\{1,2,\dots\}\) is the corresponding ground-truth class label. A neural network \(f\) produces \(\hat{p}_{i,y}=f(y|\mathbf{x}_{i})\), the predicted probability for a class \(y\) given \(\mathbf{x}_{i}\). The predicted class for an \(\mathbf{x}_{i}\) is defined as \(\hat{y}_{i}=\operatorname*{argmax}_{y}\,\hat{p}_{i,y}\), with the corresponding prediction confidence \(\hat{p}_{i}=\hat{p}_{i,\hat{y}_{i}}\).

Figure 2: Cluster-Focal framework. The training procedure is a two-stage method, poorly calibrated sample identification (clustering) and group-wise focal loss. At test time, the trained model \(f_{\text{pred}}\) is deployed, then calibration bias and prediction performance are evaluated across various subgroup splittings such as sex or age. (Female/male patients are visualized as an example.)

### Training procedure: two-stage method

Stage 1: Identifying poorly calibrated samples (Clustering). In this stage, we first train a model \(f_{\mathrm{id}}\) via ERM [25], which implies training a model by minimizing the average training cross entropy loss, without any fairness considerations.
\(f_{\mathrm{id}}\) is then used to identify samples that have potentially different calibration properties. Concretely, we compute the gap between prediction confidence \(\hat{p}_{i}\) and correctness via \(f_{\mathrm{id}}\): \[\mathrm{gap}(\mathbf{x}_{i})=|\hat{p}_{i}-\mathbf{1}\{\hat{y}_{i}=y_{i}\}|, \tag{1}\] where \(\hat{p}_{i}\) is the confidence score of the predicted class. Intuitively, if \(\mathrm{gap}(\mathbf{x}_{i})\) is small, the model made a correct and confident prediction. When \(\mathrm{gap}(\mathbf{x}_{i})\) is large, the model is poorly calibrated (i.e. incorrect but confident) for this sample. When the model makes a relatively under-confident prediction, \(\mathrm{gap}(\mathbf{x}_{i})\) is generally in between the two values. We apply _K-means_ clustering on the gap values, \(\mathrm{gap}(\mathbf{x}_{i})\), to identify \(K\) clusters \((C_{1},\ldots,C_{K})\) with different calibration properties. Stage 2: Group-wise focal lossWe then train a prediction model \(f_{\mathrm{pred}}\) with a group-wise focal loss on the clusters \(C_{1},\ldots,C_{K}\) identified in the first stage. Formally, the following loss is used: \[\mathcal{L}_{\mathrm{g-focal}}=\frac{1}{K}\sum_{k=1}^{K}\mathcal{L}_{C_{k}}(f_ {\mathrm{pred}}),\] where \(\mathcal{L}_{C_{k}}(f_{\mathrm{pred}})=-\mathbb{E}_{(\mathbf{x}_{i},y_{i}) \sim C_{k}}\left[(1-f_{\mathrm{pred}}(y_{i}|\mathbf{x}_{i}))^{\gamma}\log(f_{ \mathrm{pred}}(y_{i}|\mathbf{x}_{i}))\right]\) with \(\gamma>0\). Intuitively, the focal loss penalizes confident predictions with an exponential term \((1-f_{\mathrm{pred}}(y_{i}|\mathbf{x}_{i}))^{\gamma}\), thereby reducing the chances of poor calibration [16]. Additionally, due to clustering based on \(\mathrm{gap}(\mathbf{x}_{i})\), poorly calibrated samples will end up in the same cluster. The number of samples in this cluster will be small compared to other clusters for any model with good overall performance. As such, doing focal loss separately on each cluster instead of on all samples will implicitly increase the weight of poorly calibrated samples and help reduce bias. ### Test time evaluation on subgroups of interest At test time, we aim to mitigate the calibration error for the **worst-performing subgroup** for various subgroups of interest [6]. For example, if we consider sex (M/F) as the sensitive attribute and denote \(\mathrm{ECE}_{A=M}\) as the expected calibration error (ECE) on male patients, then the worst-performing subgroup ECE is denoted as \(\max(\mathrm{ECE}_{A=F},\mathrm{ECE}_{A=M})\). Following the strategy proposed in [17, 19], we use Q(uantile)-ECE to estimate the calibration error, an improved estimator for ECE that partitions prediction confidence into discrete bins with an _equal number of instances_ and computes the average difference between each bin's accuracy and confidence. In practice, calibration performance cannot be considered in isolation, as there always exists a _shortcut_ model that can mitigate calibration bias but have poor prediction performance, e.g, consider a purely random (under-confident) prediction with low accuracy. As such, there is an inherent **trade-off** between calibration bias and prediction error. When measuring the effectiveness of the proposed method, the objective is to ensure that calibration bias is mitigated without a substantial increase in the prediction error. ## 3 Experiments and Results Experiments are performed on two different medical image analysis tasks. 
We evaluate the performance of the proposed method against popular debiasing methods. We examine whether these methods can mitigate calibration bias without severely sacrificing performance on the worst-performing subgroups.

**Task 1: Skin lesion multi-class (n=7) classification.** HAM10000 is a public skin lesion classification dataset containing 10,000 photographic 2D images of skin lesions. We utilize the recent MedFair pipeline [27] to pre-process the dataset into train (80%), validation (10%) and test (10%) sets. Based on the dataset and evaluation protocol in [27], we test two demographic subgroups of interest: age (age \(\leq 60\), age \(>60\)) and sex (male, female).

**Task 2: Future new multiple sclerosis (MS) lesional activity prediction (binary classification).** We leverage a large multi-centre, multi-scanner proprietary dataset comprising MRI scans from 602 RRMS (Relapsing-Remitting MS) patients during clinical trials for new treatments [2, 7, 26]. The task is to predict the (binary) presence of new or enlarging T2 lesions or Gadolinium-enhancing lesions two years from their current MRI. The dataset was divided into training (70%) and test (30%) sets; validation is conducted through 4-fold cross-validation within the training set. We test model performance on four different subgroups established in the MS literature [11, 12, 22, 5, 23]. These include: age (age \(<50\), age \(\geq 50\)), sex (male, female), T2 lesion volume (vol \(\leq 2.0\)ml, vol \(>2.0\)ml) and Gad lesion count (count = 0, count \(>0\)). Age and sex are sensitive demographic attributes that are common for subgroup analysis. The image-derived attributes were chosen because high T2 lesion volume, or the presence of Gad-enhancing lesions, in baseline MRI is generally predictive of the appearance of new and enlarging lesions in future images. However, given the heterogeneity of the population with MS, subgroups _without_ these predictive markers can still show future lesional activity. That being said, these patients can form a subgroup with poorer calibration performance.

Figure 3: HAM10000: worst performing subgroup results. Cluster-Focal: Proposed method; ERM: Vanilla model; EIIL, ARL, JTT: Bias mitigation methods. Cluster-Focal demonstrates a better trade-off, significantly improving worst-performing calibration with only a small degradation in prediction performance.

**Implementation Details:** We adopt 2D/3D ResNet-18 [9] for Task 1 and Task 2 respectively. All models are trained with the Adam optimizer. The Stage 1 model \(f_{\text{id}}\) is trained for 10 (Task 1) and 300 (Task 2) epochs and the Stage 2 prediction model \(f_{\text{pred}}\) for 60 (Task 1) and 600 (Task 2) epochs. We set the number of clusters to 4 and \(\gamma=3\) in the group-wise focal loss. Averaged results across 5 runs are reported.

**Comparisons and Evaluations:** Macro-F1 is used to measure the performance for Task 1 (7 classes), and F1-score is used for Task 2 (binary). Q-ECE [16] is used to measure the calibration performance for both tasks. The performance of the proposed method is compared against several recent bias mitigation methods that do not require training with subgroup annotations: ARL [10], which applies a min-max objective to reweigh poorly performing samples; EIIL [4], which proposes an adversarial approach to learn invariant representations; and JTT [14], which up-weights challenging samples. Comparisons are also made against ERM, which trains the model without any bias mitigation strategy.
For all methods, we evaluate the trade-off between the prediction performance and the reduction in Q-ECE error for the **worst-performing subgroups** on both datasets.

Figure 4: MS: worst performing subgroup results. Cluster-Focal: proposed method; ERM: Vanilla Model; EIIL, ARL, JTT: bias mitigation methods.

### Results, ablations, and analysis

**Results:** The resulting performance vs. Q-ECE error trade-off plots for the worst-performing subgroups are shown in Figs. 3 and 4. The proposed method (Cluster-Focal) consistently outperforms the other methods on Q-ECE while having minimal loss in performance, if any. For instance, when testing on sex (male/female) for the MS dataset, Cluster-Focal loses around 2% prediction performance relative to ERM but has around 8% improvement in calibration error. When testing on sex in the HAM10000 dataset, we only observe a 2% performance degradation with a 4% improvement in Q-ECE. In addition to subgroups based on sensitive demographic attributes, we investigate how the methods perform on subgroups defined by medical image-derived features. In the context of MS, results for subgroups based on lesion load or Gad-enhancing lesion count are shown in Fig. 4(c-d). The proposed method performs best, with results that are consistent with the demographic-based subgroups. For Gad-enhancing lesion count, when compared with JTT, Cluster-Focal improves Q-ECE by more than 20% with a reduction in the prediction performance on the worst-performing subgroup of 2%. Detailed numeric values for the results can be found in the Supplemental Materials.

**Ablation Experiments:** Further experiments are performed to analyze the different components of our method. The following variant methods are considered: (1) Focal: removing stage 1 and using regular focal loss for the entire training set; (2) Cluster-ERM: group-wise focal loss in stage 2 is replaced by standard cross entropy; (3) Cluster-GroupDRO: group-wise focal loss in stage 2 is replaced by GroupDRO [20]; (4) Oracle-Focal: in stage 1, the identified cluster is replaced by the true subgroups evaluated at test time (oracle); (5) Oracle-GroupDRO: we use GroupDRO with the true subgroups used at test time. Results for MS, shown in Fig. 5, illustrate that each stage of our proposed model is required to ensure improved calibration while avoiding performance degradation for the worst-performing subgroups.

Figure 5: Ablation Experiments for MS. Focal: regular focal loss without stage 1; Cluster-ERM: In stage 2, cross entropy loss is used; Cluster-GroupDRO: In stage 2, GroupDRO loss is used; Oracle-Focal: identified cluster in stage 1 is replaced by the subgroup of interest (oracle); Oracle-GroupDRO: GroupDRO method applied on the subgroups of interest.

**Calibration Curves:** Fig. 6 shows the reliability diagram for competing methods on Task 2: predicting future new MS lesional activity, with age being the chosen subgroup of interest (also see Fig. 1(a) for ERM results). Results indicate that popular fairness mitigation methods are not able to correct for the calibration bias in older patients (i.e. the worst-performing subgroup). With ARL, for example, most of the predictions were over-confident, resulting in a large calibration error. In contrast, our proposed method (Cluster-Focal) could effectively mitigate the calibration error in the worst-performing subgroup.
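For concreteness, a minimal sketch of the two training stages described in Section 2 - confidence-gap computation, K-means clustering, and the group-wise focal loss - is given below. It assumes PyTorch and scikit-learn; all function names, tensor shapes and hyper-parameter defaults are illustrative and are not taken from the authors' implementation.

```python
# Illustrative sketch of the Cluster-Focal stages (not the authors' code).
# Assumes f_id has already been trained with plain cross-entropy (ERM).
import torch
import torch.nn.functional as F
from sklearn.cluster import KMeans


def calibration_gap(f_id, x, y):
    """Eq. (1): gap_i = |p_hat_i - 1{y_hat_i = y_i}| under the stage-1 model."""
    with torch.no_grad():
        probs = F.softmax(f_id(x), dim=1)
        conf, pred = probs.max(dim=1)
        return (conf - (pred == y).float()).abs()


def cluster_by_gap(gaps, k=4):
    """Stage 1: K-means on the scalar gap values -> cluster labels C_1..C_K."""
    labels = KMeans(n_clusters=k, n_init=10).fit_predict(
        gaps.cpu().numpy().reshape(-1, 1))
    return torch.as_tensor(labels)


def group_wise_focal_loss(logits, y, cluster_id, gamma=3.0, k=4):
    """Stage 2: mean over clusters of the per-cluster focal loss."""
    log_p = F.log_softmax(logits, dim=1)
    log_p_true = log_p.gather(1, y.unsqueeze(1)).squeeze(1)
    focal = -((1.0 - log_p_true.exp()) ** gamma) * log_p_true
    # Average within each (non-empty) cluster, then across clusters,
    # which implicitly up-weights the small, poorly calibrated cluster.
    per_cluster = [focal[cluster_id == c].mean()
                   for c in range(k) if (cluster_id == c).any()]
    return torch.stack(per_cluster).mean()
```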
## 4 Conclusions

In this paper, we present a novel two-stage calibration bias mitigation framework (Cluster-Focal) for medical image analysis that (1) successfully controls the trade-off between calibration error and prediction performance, and (2) flexibly overcomes calibration bias at test time without requiring pre-labeled subgroups during training. We further compared our proposed approach against different debiasing methods and under different subgroup splittings such as demographic subgroups and image-derived attributes. Our proposed framework demonstrates smaller calibration error in the worst-performing subgroups without a severe degradation in prediction performance.

Figure 6: MS: Reliability diagram for bias mitigation methods with age-based subgroups: (a) EIIL, (b) ARL, (c) JTT, and (d) Cluster-Focal.

**Acknowledgements** This paper was supported by the Canada Institute for Advanced Research (CIFAR) AI Chairs program and the Natural Sciences and Engineering Research Council of Canada (NSERC). The MS portion of this paper was supported by the International Progressive Multiple Sclerosis Alliance (PA-1412-02420), the companies who generously provided the MS data: Biogen, BioMS, MedDay, Novartis, Roche/Genentech, and Teva, Multiple Sclerosis Society of Canada, Calcul Quebec, and the Digital Research Alliance of Canada.
2306.11602
Chemodynamical models of our Galaxy
A chemodynamical model of our galaxy is fitted to data from DR17 of the APOGEE survey supplemented with data from the StarHorse catalogue and gaia DR3. Dynamically, the model is defined by action-based distribution functions for dark matter and six stellar components plus a gas disc. The gravitational potential jointly generated by the model's components is used to examine the galaxy's chemical composition within action space. The observational data probably cover all parts of action space that are populated by stars. The overwhelming majority of stars have angular momentum J_\phi>0 implying that they were born in the Galactic disc. High-alpha stars dominate in a region that is sharply bounded by J_\phi \la J_\phi(solar). Chemically the model is defined by giving each stellar component a Gaussian distribution in ([Fe/H],[Mg/Fe]) space about a mean that is a linear function of the actions. The model's 47 dynamical parameters are chosen to maximise the likelihood of the data given the model in 72 three-dimensional velocity spaces while its 70 chemical parameters are similarly chosen in five-dimensional chemo-dynamical space. The circular speed falls steadily from 237\kms at R=4\kpc to 218\kms at R=20\kpc. Dark matter contributes half the radial force on the Sun and has local density 0.011\msun\pc^{-3}, there being 24.5\msun\pc^{-2} in dark matter and 26.5\msun\pc^{-2} in stars within 1.1\kpc of the plane.
James Binney, Eugene Vasiliev
2023-06-20T15:29:26Z
http://arxiv.org/abs/2306.11602v2
# Chemodynamical models of our Galaxy ###### Abstract A chemodynamical model of our galaxy is fitted to data from DR17 of the APOGEE survey supplemented with data from the StarHorse catalogue and gaia DR3. Dynamically, the model is defined by action-based distribution functions for dark matter and six stellar components plus a gas disc. The gravitational potential jointly generated by the model's components is used to examine the galaxy's chemical composition within action space. The observational data probably cover all parts of action space that are populated by stars. The overwhelming majority of stars have angular momentum \(J_{\phi}>0\) implying that they were born in the Galactic disc. High-\(\alpha\) stars dominate in a region that is sharply bounded by \(J_{\phi}\lesssim J_{\phi}\)(solar). Chemically the model is defined by giving each stellar component a Gaussian distribution in ([Fe/H],[Mg/Fe]) space about a mean that is a linear function of the actions. The model's 47 dynamical and 70 chemical parameters are chosen to maximise the likelihood of the data given the model in 72 three-dimensional velocity spaces and 30 two-dimensional chemical spaces. The circular speed falls steadily from \(237\,{\rm km\,s}^{-1}\) at \(R=4\,{\rm kpc}\) to \(218\,{\rm km\,s}^{-1}\) at \(R=20\,{\rm kpc}\). Dark matter contributes half the radial force on the Sun and has local density \(0.011\,{\rm M}_{\odot}\,{\rm pc}^{-3}\), there being \(24.5\,{\rm M}_{\odot}\,{\rm pc}^{-2}\) in dark matter and \(26.5\,{\rm M}_{\odot}\,{\rm pc}^{-2}\) in stars within \(1.1\,{\rm kpc}\) of the plane. keywords: Galaxies, stars: kinematics and dynamics - star, Galaxy: abundances - The Galaxy, Galaxy: disc - The Galaxy, Galaxy: fundamental parameters - The Galaxy, Galaxy: structure ## 1 Introduction ESA's Gaia satellite provides locations and space velocities for tens of millions of stars (Gaia Collaboration et al. 2021, 2022). In anticipation of the arrival of Gaia astrometry, several teams around the world have been accumulating the spectra of millions of stars at higher resolution than Gaia can achieve and using these spectra to derive the stars' chemical compositions, which are expected to yield insight into our Galaxy's history. The APOGEE survey (Majewski et al. 2017) is particularly powerful in this respect because, being based on the mid-infrared H band, it can probe the disc nearer the plane and over a wider radial range than other surveys, which are more strongly restricted by dust. Since the middle of the 20th century we have known that the ages and chemical compositions of stars vary systematically with their locations and velocities (Roman 1950, 1999; Eggen et al. 1962; Gilmore & Wyse 1998; Fuhrmann 2011). With the emergence of the theory of nucleosynthesis (Burbidge et al. 1957) and models of the chemical evolution of the ISM (Tinsley 1980; Pagel 1997) conviction grew that by studying the chemodynamical structure of our Galaxy we should be able to trace its history (Freeman & Bland-Hawthorn 2002). Over the last two decades efforts to realise this goal have taken two lines of attack. One line centres on simulations of galaxy formation that include gas, stars and dark matter, usually in some sort of cosmological context (e.g. Brook et al. 2004; Grand et al. 2017). Another line of attack models the Galaxy as a series of annuli within which stars form from gas that they simultaneously enrich (Matteucci & Francois 1989; Chiappini et al. 1997; Schonrich & Binney 2009a; Schonrich & McMillan 2017; Sharma et al. 
2021; Chen et al. 2022). Strengths of the latter line of attack include the ability to fit models to the very detailed data now available for our Galaxy and to develop understanding of how specific physical processes, such as radial migration and the late arrival of type Ia supernovae, manifest themselves in observational data. The central premise of Schonrich & Binney (2009a, hereafter SB09) and much prior work is that all disc stars were born on nearly circular orbits in the plane from gas that is azimuthally well mixed, so its metallicity [Fe/H] and \(\alpha\)-abundance [Mg/Fe] are functions of Galactocentric radius \(R\) and look-back time \(\tau\). Since the gross chemistry of stellar atmospheres evolves little, to a good approximation it then follows that the location ([Fe/H],[Mg/Fe]) of a star in the chemical plane is an (a priori unknown) function of its birth radius \(R_{b}\) and age. SB09 modelled the functions [Fe/H](\(R_{b},\tau\)) and [Mg/Fe](\(R_{b},\tau\)) by adopting a radial profile of star formation and following the production of heavy elements and the dispersal of these elements by radial flows and winds. This effort led to predictions for the number and chemistry of stars born at each radius and time. Fluctuations in the Galaxy's gravitational potential cause the orbits of stars to drift (e.g. Binney & Lacey, 1988). Sellwood & Binney (2002) divided this drift into (i) 'blurring', which is the drift away from circular orbits to more eccentric and inclined orbits, and (ii) 'churning', by which stars change their angular momenta without increasing their random velocities. By adjusting the intensity of blurring and churning, SB09 were able to match the distribution of solar-neighbourhood stars in chemical space. Remarkably, the observed bimodality of the chemical distribution emerged naturally in a model in which the star-formation rate declined monotonically with time and radius. The bimodality was a consequence of the rapid decline in [Mg/Fe] about a gigayear after the start of star formation as type Ia supernovae set in. The methodology of SB09 has been extended in various directions. Chen et al. (2022) updated their work by (a) using recent nucleosynthetic yields, and (b) comparing the model predictions at 24 locations (\(R,|z|\)) in the Galaxy rather than just in the solar neighbourhood. This important latter step was made possible by DR14 of the APOGEE survey. Sharma et al. (2021) had previously fitted models to these data but instead of deriving chemical compositions from a model of star formation and nucleosynthetic yields, they specified a functional form for [Fe/H](\(R_{b},\tau\)) that contains parameters to be fitted to the data. They further assumed that [Mg/Fe] is a function of [Fe/H] and age that has a specified functional form, and fitted the form's parameters to the data. Both Sharma et al. (2021) and Chen et al. (2022) followed SB09 in adopting a Schwarzschild DF \(f(E_{R},J_{\phi},E_{z})\) (e.g. Binney & Tremaine, 2008, §4.4.3). Several powerful arguments favour use of DFs that are functions \(f({\bf J})\) of the action integrals rather than the approximate energies \(E_{R}\) and \(E_{z}\) (e.g. Binney & McMillan, 2016), so Sanders & Binney (2015) reformulated SB09 in terms of action-based DFs. Specifically, they assumed that in the absence of churning, the DF of each coeval cohort of disc stars would have the form of the quasi-isothermal DF that was introduced by Binney & McMillan (2011).
Blurring was represented by the radial and vertical velocity dispersion parameters of the DF being functions of age, and the cohort's current DF was obtained by convolution of this quasi-isothermal DF with a kernel that represented diffusion in \(J_{\phi}\). Unfortunately at that time model-data comparisons were only possible in the solar neighbourhood, but the work of Sharma et al. (2021), in which a model was successfully fitted to the wide-ranging APOGEE data, is methodologically similar to Sanders & Binney (2015). These studies approach the data with a clear preconception of our Galaxy's history. Our aim here is to be more data-driven: regardless of history, how are the Galaxy's stars distributed in 'chemodynamical space' - the five-dimensional space spanned by the actions, [Fe/H] and [Mg/Fe]? Logically this question should precedes the question of _why_ stars are distributed as they are. A map of this distribution would be an invaluable descriptor of our Galaxy that would transcend theories of galaxy formation. Binney & Vasiliev (2023, hereafter BV23) used the _AGAMA_ software package (Vasiliev, 2019) to fit to Gaia DR2 data a model in which both stars and dark matter were represented by DFs of the form \(f({\bf J})\) and moved in the potential \(\Phi({\bf x})\) that they and interstellar gas jointly generate. Here we revise and extend this model: we revise it by modelling the Galaxy's bulge as a fat disc rather than a spheroid; we update it by fitting to data from APOGEE and the third rather than the second Gaia data release, and we extend it by assigning a chemical model to each of its stellar components. In Section 2 we describe the stellar sample that we have analysed. Section 3 first examines how the means of [Mg/Fe] and [Fe/H] vary with location in action space and then studies the stellar density and mean values of the actions in four regions of action space within which particular stellar components are expected to dominate. Section 4 introduces a scheme for modelling the Galaxy's chemodynamical structure: Section 4.1 explains how we extend a model based on standard DFs \(f({\bf J})\) to a chemodynamical model; Section 4.2 defines the functional forms we have assumed for \(f({\bf J})\); Sections 4.3 to 4.6 specify the values of the model parameters from which data-fitting commenced, explain our strategy for dealing with the survey's selection function, and define the likelihood that the search maximises and the maximising procedure. Section 5 describes the model fitted to the data and discusses the quality of the fit it provides. Section 6 compares the scope of the present model to that of the widely used Besancon model, and Section 7 sums up and identifies next steps. Appendix A validates our procedure for maximising a likelihood. ## 2 The data From the 17th data release of the Sloan Digital Sky Survey (Abdurro'uf et al., 2022) we selected data for stars that have Gaia DR3 astrometry (Gaia Collaboration et al., 2021, 2022) and parameters from the StarHorse catalogue (Queiroz et al., 2018; Anders et al., 2022). We removed stars with a probability \(>0.5\) of belonging to a globular cluster according to Vasiliev & Baumgardt (2021), stars with the STAR_WARN bit (7) of the ASPCAP_FLAG set, and stars with a StarHorse distance with an uncertainty larger than 0.75 kpc. We further required the astrometric 'fidelity' parameter of Rybizki et al. (2022) to exceed 0.5. Finally, the sample was restricted to giants by requiring \(\log g<3.5\) and \(T_{\rm eff}<5500\). 
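Purely to illustrate the quality cuts listed above, a selection of this kind could be expressed as follows with pandas; the file and column names are hypothetical and will differ in the real APOGEE/StarHorse/Gaia cross-match.

```python
# Illustrative selection cuts; the file and column names are hypothetical.
import pandas as pd

df = pd.read_csv("apogee_starhorse_gaia_xmatch.csv")  # placeholder catalogue

STAR_WARN_BIT = 7
star_warn_clear = ((df["ASPCAP_FLAG"].astype("int64") >> STAR_WARN_BIT) & 1) == 0

mask = (
    (df["p_globular_cluster"] <= 0.5)   # not a likely globular-cluster member
    & star_warn_clear                   # STAR_WARN bit of ASPCAP_FLAG not set
    & (df["dist_err_kpc"] <= 0.75)      # StarHorse distance uncertainty cut
    & (df["fidelity"] > 0.5)            # astrometric fidelity (Rybizki et al. 2022)
    & (df["logg"] < 3.5)                # giants only ...
    & (df["teff"] < 5500.0)             # ... and cool enough
)
sample = df[mask]
print(len(sample), "stars retained")
```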
We convert heliocentric data to galactocentric coordinates assuming the Sun's phase-space coordinates are \((R,z)=(8.27,0.025)\) kpc (GRAVITY Collaboration et al., 2022) and \((V_{R},V_{z},V_{\phi})=(14,7,251)\) km s\({}^{-1}\) (Schonrich, 2012; Reid & Brunthaler, 2020). Fig. 1 shows the spatial distribution of the sample's stars projected onto the \(xy\) and \(xz\) planes in the upper and lower panels, respectively. The upper panel shows very clearly the rapid variation of the selection function inevitable in a pencil-beam spectroscopic survey. Also evident is the strong bias towards the Sun inevitable in a magnitude-limited survey. Notwithstanding these regrettable characteristics, the figure shows that the survey provides good coverage of the Sun's side of the Galaxy below \(|z|\sim 4\,\)kpc in the radial range \(1\,\)kpc \(\leq R\leq 14\,\)kpc.

## 3 Action-space chemistry

We computed the giants' phase-space coordinates and from them computed the actions \(J_{r}\), \(J_{z}\) and \(J_{\phi}\) in the gravitational potential of the self-consistent Galaxy model that is presented in Section 5 below. In this potential 217 863 stars are bound and 97 unbound; actions cannot be computed for unbound stars, so these stars were eliminated from the sample. The panels of Fig. 2 show the mean value of [Mg/Fe] in cells in action space, while Fig. 3 shows mean values of [Fe/H]. The left columns show projections onto the \((J_{\phi},J_{z})\) plane grouped by their values of \(J_{r}\), while the right columns show projections onto the \((J_{\phi},J_{r})\) plane grouped by values of \(J_{z}\).

Figure 1: The spatial distribution of the stellar sample: projections along the \(z\) axis (upper panel) and the \(y\) axis (lower panel). The colour scale is (base 10) logarithmic.

Figure 2: Mean values of [Mg/Fe] of stars as a function of position in action space. Values are shown for four bands in \(J_{r}\) (left column) and in \(J_{z}\) (right column), as marked in the upper right of each panel.

Figure 3: Mean values of [Fe/H] of stars as a function of position in action space. Values are shown for four bands in \(J_{r}\) (left column) and in \(J_{z}\) (right column), as marked in the upper right of each panel.

Figure 4: The number of stars in each cell shown in the top and bottom panels of the left columns of Figs. 2 and 3.

In the left columns the values of \(J_{r}\) increase from the bottom panel upwards, while in the right columns the values of \(J_{z}\) increase upwards - the relevant range in \(J_{r}\) or \(J_{z}\) is shown at top-right of each panel. Stars on orbits that are either nearly circular or in the plane contribute, respectively, to the bottom left or right pair of panels in each figure, while stars on orbits that are either highly eccentric or highly inclined contribute to the top pair of panels. Fig. 4 shows the number of stars contributing to the bottom and top left panels of Figs. 2 and 3. These numbers are strongly influenced by APOGEE's selection function. In particular, the lower panel for low \(J_{r}\) shows a strong concentration around the Sun's location. The plots of mean [Mg/Fe] and [Fe/H] in Figs. 2 and 3 show no sign of this bias. Note the extraordinarily wide coverage in \(J_{\phi}\) - at low \(J_{z}\) there are stars with \(J_{\phi}\) down to zero from a maximum value that exceeds twice the Sun's value of \(J_{\phi}\) (\(\sim 2000\,{\rm kpc\,km\,s}^{-1}\)).
This wide coverage of \(J_{\phi}\) at small \(J_{r},J_{z}\) is possible because APOGEE extends to low Galactic latitude \(b\) and covers an unprecedentedly wide range in Galactic radius \(R\), so it includes stars on near circular orbits at a wide range of radii. In every panel of Figs. 2 to 4 the populated region has a sharp left edge that lies just to the left of the line \(J_{\phi}=0\). If stars were in fact strictly confined to \(J_{\phi}>0\), observational errors would still cause some stars to scatter to \(J_{\phi}<0\). Indeed, a small over-estimation of the distance \(s\) to a star at \(\ell\simeq 0\) and \(R\ll R_{0}\) can move a star from the near to the far side of the Galactic centre without much effect on its apparent velocity, and thus reverse the sign of its measured \(J_{\phi}\). The sharpness of the left boundaries of the populated regions in Figs. 2 to 4 attest to the accuracy of the StarHorse distances used to compute \({\bf J}\). The near total confinement of stars to \(J_{\phi}\geq 0\) is a clear indication that the overwhelming majority of stars were born in a disc within our Galaxy - stars accreted from other galaxies would end up on orbits with both signs of \(J_{\phi}\) with roughly equal probability. A significant part of the stellar halo is thought to comprise such accreted stars, and as a consequence the halo shows negligible net rotation. The stellar halo should be dominant at small \(J_{\phi}\) and significant \(J_{r}\) and/or \(J_{z}\). Hence the sharpness of the boundary at \(J_{\phi}=0\) in all the panels of Figs. 2 to 4 implies that the stellar halo contributes rather few stars to the sample. It may well be that halo stars, being metal-poor and thus weak-lined, have trouble passing our spectroscopic quality cuts. There are several remarkable features of Fig. 2: * Along the \(J_{\phi}\) axis of the bottom left panel of Fig. 2 there is a narrow orange region indicative of solar [Mg/Fe]. This is the low-\(\alpha\) disc. Above it a blue-shaded region of high [Mg/Fe] fills the interior of a U centred on \(J_{\phi}\sim 1200\,{\rm kpc\,km\,s}^{-1}\). To left and right as well as below, this region transitions sharply to yellow shades indicative of lower [Mg/Fe]. The blue region is part of the high-\(\alpha\) disc. * In this bottom-left panel, to the right of \(J_{\phi}\simeq 2000\,{\rm kpc\,km\,s}^{-1}\) yellow shades extend to the highest populated values of \(J_{z}\sim 150\,{\rm kpc\,km\,s}^{-1}\). This phenomenon clearly shows that low-\(\alpha\) stars, even ones on significantly inclined orbits, dominate at \(J_{\phi}\gtrsim 2000\,{\rm kpc\,km\,s}^{-1}\). It implies that the high-\(\alpha\) disc has a remarkably sharp outer edge at roughly \(R_{0}\). * As one proceeds up the left column of Fig. 2 through samples of stars with increasing \(J_{r}\), the orange/yellow region of the thin, low-\(\alpha\) disc yields ground to the blue high-\(\alpha\) region. This phenomenon indicates that the high-\(\alpha\) disc extends down to \(J_{z}=0\); at low \(J_{r}\) it is overwhelmed by the low-\(\alpha\) disc, but comes to the fore as \(J_{r}\) increases because its DF decreases less rapidly with increasing \(J_{r}\). * Similar trends are evident as one proceeds up the right column of Fig. 
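As background to how action-space maps such as Figs. 2-4 are constructed, the sketch below shows one way of obtaining \((J_{r},J_{z},J_{\phi})\) for galactocentric phase-space coordinates with the Python interface of the AGAMA package used in this work. The toy potential and the phase-space point are stand-ins, not the paper's self-consistent Galaxy model or data.

```python
# Illustrative only: a toy axisymmetric potential stands in for the paper's
# self-consistent Galaxy model, and the phase-space point is invented.
import numpy as np
import agama

agama.setUnits(mass=1, length=1, velocity=1)        # Msun, kpc, km/s

pot = agama.Potential(type="MiyamotoNagai",
                      mass=1e11, scaleRadius=3.0, scaleHeight=0.3)
act_finder = agama.ActionFinder(pot)

# One star: galactocentric (x, y, z, vx, vy, vz) in kpc and km/s.
posvel = np.array([[8.27, 0.0, 0.025, -14.0, 251.0, 7.0]])
Jr, Jz, Jphi = act_finder(posvel)[0]                # actions in kpc km/s
print(f"Jr={Jr:.1f}  Jz={Jz:.1f}  Jphi={Jphi:.1f}")
```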
2 through samples of stars with increasing \(J_{z}\), except that in the range \(500\,{\rm kpc\,km\,s}^{-1}<J_{\phi}<2000\,{\rm km\,s}^{-1}\,{\rm kpc}\) the yellow shades of low-\(\alpha\) stars recede more rapidly: in fact they have already disappeared from the sample with \(50\,{\rm kpc\,km\,s}^{-1}<J_{z}<100\,{\rm kpc\,km\,s}^{-1}\). This phenomenon indicates that the main body of the low-\(\alpha\) disc covers a wider range in \(J_{r}\) than in \(J_{z}\), as is natural in a 'thin' disc. * The V of green colours that pushes down at \(J_{\phi}\simeq 1500\,{\rm kpc\,km\,s}^{-1}\) in the bottom right panel of Fig. 2 suggests that within the low-\(\alpha\) disc \(\langle J_{r}\rangle\) has a minimum just interior to the Sun. * Whereas in the bottom left panel of Fig. 2 brown shades are confined to a narrow band above the \(J_{\phi}\) axis, in the bottom-right panel at \(J_{\phi}>2000\,{\rm kpc\,km\,s}^{-1}\) they extend to \(J_{r}\gtrsim 50\,{\rm kpc\,km\,s}^{-1}\). This indicates that the outer low-\(\alpha\) disc is radially hot. * In the top two panels of the right column of Fig. 2, blue colours indicative of high-\(\alpha\) extend right down to the \(J_{\phi}\) axis except at very low \(|J_{\phi}|\) and \(J_{\phi}>2200\,{\rm kpc\,km\,s}^{-1}\). This indicates that at \(J_{z}\gtrsim 100\,{\rm kpc\,km\,s}^{-1}\) the high-\(\alpha\) disc dominates outside the bulge and the outer disc. Fig. 3 shows mean values of [Fe/H] in the format used to display [Mg/Fe] in Fig. 2. Blue shades now imply low metallicity, so the high-\(\alpha\) disc with \(\langle{\rm[Fe/H]}\rangle\sim-0.7\) is coloured cyan, while the metal-rich, inner low-\(\alpha\) disc is coloured yellow. With this colour scheme very similar patterns are observed in Fig. 3 to those discussed above for Fig. 2: high-\(\alpha\) often implies low-metallicity. The most striking difference occurs in the region \(J_{\phi}>2000\,{\rm kpc\,km\,s}^{-1}\): in Fig. 3 this region is largely green because the outer low-\(\alpha\) disc is metal poor. However in the bottom right panel of Fig. 3 a tongue of yellow is evident at \(J_{\phi}\sim 2000\,{\rm kpc\,km\,s}^{-1}\) that signals that the low-\(\alpha\) disc becomes radially excited before it turns metal-poor. The bottom-left panel of Fig. 3 (for \(J_{r}<50\,{\rm kpc\,km\,s}^{-1}\)) shows [Fe/H] \(>0\) is confined to small \(J_{z}\) except at small \(|J_{\phi}|\). The bottom-right panel shows that at small \(J_{z}\) and \(|J_{\phi}|\), [Fe/H] \(>0\) occurs predominantly at large \(J_{r}\). These facts are consistent with the idea that metal-rich stars populate the kind of tightly bound, eccentric, low-inclination orbits that form the backbone of the bar. In the bottom right panel of Fig. 3 (for \(0<J_{z}<50\,{\rm kpc\,km\,s}^{-1}\)) in the range \(1400\,{\rm kpc\,km\,s}^{-1}<J_{\phi}<2200\,{\rm kpc\,km\,s}^{-1}\) a yellow-brown ridge reaches upwards: this is the signature of a vertically thin annulus of fairly metal-rich stars on eccentric orbits. ### The ([Fe/H], [Mg/Fe]) plane Figs. 2 and 3 show just mean values of [Mg/Fe] and [Fe/H]. To gain further insight we now explore distributions in the ([Fe/H],[Mg/Fe]) plane for the four regions of the \((J_{\phi},J_{z})\) plane that are marked in Fig. 5 over a plot of the standard deviation in [Mg/Fe] in the slice of action space with smallest \(J_{r}\). The standard deviation tends to be large where components with differing chemistry overlap. 
The first of these regions, labelled A, lies along the \(J_{z}\) axis at \(J_{z}>10\,{\rm kpc\,km\,s^{-1}}\) and \(J_{\phi}<300\,{\rm kpc\,km\,s^{-1}}\). Stars in this region are on highly inclined orbits, so they dominate the stellar density near the \(z\) axis. We call this the bulge/halo region. The second region, B, occupies the heart of the high-\(\alpha\) region of action-space - \(J_{z}>50\,{\rm kpc\,km\,s^{-1}}\) and \(300<J_{\phi}<2000\,{\rm kpc\,km\,s^{-1}}\). The third region, labelled C, lies just above the \(J_{\phi}\) axis at \(J_{\phi}>300\,{\rm kpc\,km\,s^{-1}}\). Stars in this region are on nearly planar orbits, so will be predominantly thin-disc stars. The fourth region, D, at \(J_{\phi}>2500\,{\rm kpc\,km\,s^{-1}}\), is dominated by the outer, flaring disc.

The contours in Fig. 6 show the chemical structure of these four regions: the bulge/halo, high-\(\alpha\) disc, thin-disc and outer disc regions are depicted by rows from top to bottom. In the left, centre and right columns the colours show mean values of \(J_{\phi}\), \(J_{z}\) and \(J_{r}\), respectively. It should be borne in mind that the stellar densities contoured, unlike the mean values plotted in Fig. 2, reflect the APOGEE selection function. Most significantly, stars near the Sun are over-represented.

* The top row of Fig. 6 shows that the bulge/halo region is indeed a superposition of two components, one centred on ([Fe/H],[Mg/Fe]) \(\simeq(-0.65,0.35)\) and the other on ([Fe/H],[Mg/Fe]) \(\simeq(0.36,0.03)\). The first population has a tail that extends to [Fe/H] \(\sim-1\) at slightly lower [Mg/Fe]. It is probably a mixture of the high-\(\alpha\) disc and the stellar halo with the former dominating. The metal-rich, low-\(\alpha\) component is presumably the bulge. The mean values of \(J_{\phi}\) and \(J_{r}\) shown by the colours vary little with chemistry, although there is a marginal tendency for \(\langle J_{\phi}\rangle\) to decrease with [Fe/H] and to be negative at [Fe/H]\(<-1\). The central panel shows that \(\langle J_{z}\rangle\) increases systematically with decreasing [Fe/H]. That is, \(\langle J_{z}\rangle\) is lowest in the bulge, highest in the halo and intermediate in the high-\(\alpha\) disc.
* The second row in Fig. 6 shows the chemistry of the high-\(\alpha\) region. The dominant component is centred on ([Fe/H],[Mg/Fe]) \(\simeq(-0.49,0.33)\), with a tail sloping down to solar chemistry (0,0) and beyond. Within the dominant, high-\(\alpha\) component, the left panel shows that any gradient in \(\langle J_{\phi}\rangle\) is weak, although at [Fe/H]\(<-1\) there is a slight tendency for \(\langle J_{\phi}\rangle\) to decrease with [Fe/H]. The central panel shows a systematic increase in \(\langle J_{z}\rangle\) with decreasing [Fe/H] within the high-\(\alpha\) component but no evidence of an analogous gradient in the low-\(\alpha\) component.
* The third row in Fig. 6 shows the chemistry of the thin-disc region, within which \(\langle J_{r}\rangle\) increases with [Mg/Fe] - this is likely the signature of stochastic heating: stars with larger [Mg/Fe] are older and kinematically hotter. Another effect of age increasing with [Mg/Fe] is the widening spread in [Fe/H] with increasing [Mg/Fe]: older stars have migrated further with the consequence that in the over-represented solar-neighbourhood group an older population displays a wider spread in [Fe/H]. The colours in the left panel of the third row indicate that the dominant component has a clear gradient in \(\langle J_{\phi}\rangle\), which must be a manifestation of the familiar metallicity gradient in the disc (e.g. Mendez-Delgado et al. 2022, and references therein).
There is no sign of an analogous gradient in the high-\(\alpha\) component. Surprisingly, the third row shows a tendency for \(\langle J_{z}\rangle\) to increase with [Mg/Fe] - given that stars contributing to this row have by construction \(J_{z}<10\,{\rm kpc\,km\,s^{-1}}\), it is puzzling to observe significant variation of \(\langle J_{z}\rangle\). The signal is unmistakable nonetheless. Stars with exceptionally low \(J_{z}\) must lie very close to the plane, so they are either very close to the Sun or are significantly extincted. Could high extinction artificially lower their measured values of [Mg/Fe]? Taken together the second and third rows confirm the presence of two populations that coexist at many points in action space. The low-\(\alpha\) population dominates at low \(J_{z}\) and the high-\(\alpha\) population dominates at high \(J_{z}\). The first and second rows show that stars metal-poorer than [Fe/H] \(\simeq-0.8\) are only encountered at [Mg/Fe] \(\gtrsim 0.2\). Moreover, these stars are confined to \(J_{z}\gtrsim 200\,{\rm kpc\,km\,s^{-1}}\) and \(J_{r}\simeq 100\,{\rm kpc\,km\,s^{-1}}\).
* The bottom panels of Fig. 6 show the chemistry of the outer, flaring disc. It is almost entirely accounted for by a single population centred on ([Fe/H],[Mg/Fe]) \(\simeq(-0.4,0.1)\). This low-\(\alpha\) population shows the gradient in \(\langle J_{\phi}\rangle\) that we encountered in the inner low-\(\alpha\) disc, so we have every reason to suppose that this population is just an extension of the dominant component in the third row (the low-\(\alpha\) disc). Moreover, the central and right panels of the fourth row show the same trends in \(\langle J_{z}\rangle\) and \(\langle J_{r}\rangle\) we encountered in the third row, and attributed to increasing age and stochastic heating with [Mg/Fe].

Perhaps the most striking aspects of Fig. 6 are two 'dogs that didn't bark': (i) the absence of stars at low-metallicity and low-\(\alpha\) in the top row (bulge/halo region) and (ii) the complete disappearance of the high-\(\alpha\) population between the second and fourth rows (high-\(\alpha\) and outer disc regions). Another notable result is the contrast between the strong gradients in \(\langle J_{\phi}\rangle\), \(\langle J_{z}\rangle\) and \(\langle J_{r}\rangle\) in the low-\(\alpha\) disc and the extremely weak gradients in mean actions in the high-\(\alpha\) component.

Figure 5: The standard deviation in [Mg/Fe] in a slice of action space. The dotted lines mark the regions for which chemical pdfs are presented in Fig. 6.

Fig. 7 reveals key differences in how low- and high-\(\alpha\) stars are distributed in action space by plotting projections of the sample onto the \((J_{\phi},J_{z})\) plane in the upper, and the \((J_{\phi},J_{r})\) plane in the lower, pairs of panels. The absence of high-\(\alpha\) stars at \(J_{\phi}>3000\,{\rm kpc\,km\,s^{-1}}\) is striking, as is the extent to which the low-\(\alpha\) stars extend to high \(J_{z}\) at both small and large \(J_{\phi}\) while being restricted to lower \(J_{z}\) at intermediate \(J_{\phi}\). In \(J_{r}\), the pattern of the low-\(\alpha\) stars is very different: the populated region reaches highest in \(J_{r}\) at intermediate \(J_{\phi}\) even though stars with large \(|J_{\phi}-J_{\phi\odot}|\) can reach the Sun, and thus boost their chances of entering the sample, only if they have large \(J_{r}\).
The wide spread in \(J_{r}\) at \(J_{\phi}\simeq J_{\phi\odot}\) may be the result of resonant scattering by spirals, while the wide spread in \(J_{z}\) at large \(J_{\phi}\) might be a legacy of tidal interactions with objects in the dark halo.

Figure 6: Contours show the chemistry of the four regions A–D marked by dotted lines in Fig. 5. The top row is for the bulge/halo region A around the \(J_{z}\) axis. The second row is for the high-\(\alpha\) region B. The third row is for the thin-disc region C just above the \(J_{\phi}\) axis, and the bottom row is for the outer, flaring low-\(\alpha\) disc, D. The panels are coloured by mean \(J_{\phi}\), \(J_{z}\) and \(J_{r}\) in the left, centre and right columns, respectively. Note that while the [Fe/H] axis covers the same range \((-1.2,0.5)\) in the lower three rows, in the top row it extends to metal-poorer stars: \({\rm[Fe/H]}=-1.7\).

## 4 Modelling the data

We now turn to the construction of a chemodynamical model of our Galaxy that reproduces as closely as possible the trends discovered in the last section.

### EDF structure

Sanders & Binney (2015) introduced the concept of an _extended distribution function_ (EDF), that is, a density of stars in the five-dimensional space spanned by \(\mathbf{J}\) and chemistry, \[\mathbf{c}\equiv\big{(}[\mathrm{Fe}/\mathrm{H}],[\mathrm{Mg}/\mathrm{Fe}]\big{)}. \tag{1}\] An excellent way of summarising the chemodynamical structure of our Galaxy would be to decompose it into components that individually have simple EDFs. Specifically, we seek components that have analytic dynamical DFs \(f(\mathbf{J})\) that are extended to EDFs by multiplication by an analytic probability density \(P(\mathbf{c}|\mathbf{J})\) that gives the probability that a star with actions \(\mathbf{J}\) has the chemistry \(\mathbf{c}\). _Any_ EDF \(F(\mathbf{c},\mathbf{J})\) can be written as a product \(f(\mathbf{J})P(\mathbf{c}|\mathbf{J})\) - simply define \(f(\mathbf{J})\equiv\int d^{2}\mathbf{c}\,F(\mathbf{c},\mathbf{J})\) and \(P(\mathbf{c}|\mathbf{J})\equiv F(\mathbf{c},\mathbf{J})/f(\mathbf{J})\), and it is trivial to show that \(\int d^{2}\mathbf{c}\,P=1\) - but we want to define our components so \(P(\mathbf{c}|\mathbf{J})\) can be approximated by a Gaussian distribution in \(\mathbf{c}\) with mean and dispersion depending on \(\mathbf{J}\). The general Gaussian two-dimensional probability density can be written \[P(\mathbf{c}|\mathbf{J})=\frac{\sqrt{\det(\mathbf{K})}}{2\pi}\exp\left(-\tfrac{1}{2}(\mathbf{c}-\mathbf{c}_{\mathbf{J}})^{T}\cdot\mathbf{K}\cdot(\mathbf{c}-\mathbf{c}_{\mathbf{J}})\right), \tag{2}\] where \(\mathbf{K}\) is a \(2\times 2\) symmetric matrix and the subscripts imply dependence on \(\mathbf{J}\) - in order to limit the number of parameters to determine, we assume that \(\mathbf{K}\) is independent of \(\mathbf{J}\). A simple assumption is that \(\mathbf{c}_{\mathbf{J}}\) depends linearly on \(\mathbf{J}\): \[\mathbf{c}_{\mathbf{J}}=\mathbf{c}_{0}+\mathbf{C}\cdot(\mathbf{J}-\mathbf{J}_{0}), \tag{3}\] where \(\mathbf{c}_{0}\) is a two-component object and \(\mathbf{C}\) is a \(2\times 3\) matrix. Without loss of generality, we can choose the reference actions \(\mathbf{J}_{0}=(0,0,V_{\rm c}R_{0})\) to be similar to those of the Sun, with the implication that \(\mathbf{c}_{0}\) becomes the mean chemistry of stars in the given component that are on solar-type orbits.
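To make eqns (2) and (3) concrete, the following minimal NumPy sketch evaluates \(P(\mathbf{c}|\mathbf{J})\) for a single star. The function name and argument layout are illustrative only and are not taken from any released code.

```python
import numpy as np

def chemistry_pdf(c, J, c0, C, K, J0):
    """Gaussian pdf P(c|J) of eqn (2) with the linear mean of eqn (3).

    c  : (2,) chemistry ([Fe/H], [Mg/Fe]) in dex
    J  : (3,) actions, e.g. (J_r, J_z, J_phi) in kpc km/s
    c0 : (2,) mean chemistry of stars on solar-type orbits
    C  : (2,3) gradient matrix (dex per kpc km/s)
    K  : (2,2) symmetric inverse-covariance (precision) matrix
    J0 : (3,) reference actions, e.g. (0, 0, V_c*R_0)
    """
    cJ = c0 + C @ (J - J0)                       # eqn (3): mean chemistry at J
    d = c - cJ
    norm = np.sqrt(np.linalg.det(K)) / (2.0 * np.pi)
    return norm * np.exp(-0.5 * d @ K @ d)       # eqn (2)
```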
### DFs of the components

We follow BV23 in modelling the stellar component of our Galaxy with six DFs \(f(\mathbf{J})\). Five of these are instances of a generalisation of the exponential DF introduced by Vasiliev (2019). The overall DF is a product of factors \[f(\mathbf{J})=f_{\phi}(J_{\phi})f_{r}(J_{r},J_{\phi})f_{z}(J_{z},J_{\phi})f_{\mathrm{int}}(J_{\phi})f_{\mathrm{ext}}(J_{\phi}). \tag{4}\] The function \(f_{r}\) controls the breadth of the distribution in \(J_{r}\) and thus the velocity dispersions \(\sigma_{R}\) and \(\sigma_{\phi}\) near the equatorial plane. The function \(f_{z}\) similarly controls the breadth of the distribution in \(J_{z}\) and hence controls both the thickness of the disc and the velocity dispersion \(\sigma_{z}\). The other three factors control the disc's radial profile. On its own, the factor \(f_{\phi}\) generates a roughly exponentially declining surface density \(\Sigma(R)\simeq\exp(-R/R_{\mathrm{d}})\). The factor \(f_{\mathrm{int}}\) tapers this profile towards the centre. At a theoretical level, this factor is motivated by the notion that the central portion of the disc has morphed into the bar/bulge leaving a depression at the centre of the surviving disc. At an empirical level, modelling the young disc through the observed distribution of OB stars, Li & Binney (2022) concluded that an exponential that extends right to the centre predicts too many stars at small radii. The factor \(f_{\mathrm{ext}}\) truncates the disc at some outer radius. This feature is motivated by the sharp transition from blue to yellow at \(J_{\phi}\simeq 2000\,{\rm kpc\,km\,s^{-1}}\) in Fig. 2, which suggests that the high-\(\alpha\) disc has a sharp outer edge. These factors take the form \[f_{i}(J_{\phi})=\frac{E}{E+1/E} \tag{5}\] where \[E=\begin{cases}\exp\big{[}(J_{\phi}-J_{\mathrm{int}})/D_{\mathrm{int}}\big{]}&\text{for $i=\mathrm{int}$,}\\ \exp\big{[}-(J_{\phi}-J_{\mathrm{ext}})/D_{\mathrm{ext}}\big{]}&\text{for $i=\mathrm{ext}$.}\end{cases} \tag{6}\] Here the action \(J_{\mathrm{int}}\) determines the characteristic radius of the central depression and \(D_{\mathrm{int}}\) is a parameter that determines the sharpness of its boundary. Similarly, the action \(J_{\mathrm{ext}}\) determines the disc's outer truncation radius and \(D_{\mathrm{ext}}\) sets the sharpness of the cutoff there. If \(D_{i}\) is set to a negative value, \(f_{i}=1\) is returned, ensuring that there is no central depression or radial truncation.

Figure 7: The action-space distributions of low- and high-\(\alpha\) stars. The colour scale gives the base-10 logarithm of the star density when projected onto the \((J_{\phi},J_{z})\) plane in the upper panels and the \((J_{\phi},J_{r})\) plane in the lower panels. In each pair the upper panel is for \([\mathrm{Mg}/\mathrm{Fe}]>0.2\) and the lower panel is for \([\mathrm{Mg}/\mathrm{Fe}]<0.15\). It is worth noting how much the bottom panel differs from similar plots of stellar density in \((J_{\phi},J_{r})\) for stars near the Sun (e.g. Hunt et al., 2019): here we see no lines associated with resonances. To see these features the sample must be restricted to a narrow band in radius or \(J_{\phi}\).

The functional forms adopted for \(f_{r}\) and \(f_{z}\) are essentially the same as those adopted by BV23: \[f_{i}=x_{i}\mathrm{e}^{-x_{i}J_{i}} \tag{7}\] where \[x_{i}\equiv(J_{\mathrm{v}}/J_{\phi 0})^{p_{i}}\big{/}J_{i0}\quad\mathrm{for}\ i=r,z. \tag{8}\]
Here \(J_{\phi 0}\) is the action that sets the disc's characteristic scale length and the actions \(J_{r0}\) and \(J_{z0}\) set the velocity dispersions \(\sigma_{R}\) and \(\sigma_{z}\). The exponent \(p_{i}\) controls the radial variation of these dispersions: the larger \(p_{i}\) is, the faster the dispersions fall with radius. The action \(J_{\mathrm{v}}\), a surrogate for energy, is taken to be \[J_{\mathrm{v}}\equiv J_{r}+J_{z}+J_{\phi}+J_{\mathrm{v0}}, \tag{9}\] where \(J_{\mathrm{v0}}\) is a constant that controls the way the dispersions vary at small radii as discussed by BV23. The factor \(f_{\phi}\) has the form \[f_{\phi}(\mathbf{J})=\begin{cases}0&\mathrm{when}\ J_{\phi}<0\\ \frac{M}{(2\pi)^{3}}\frac{J_{\mathrm{d}}}{J_{\phi 0}^{2}}\mathrm{e}^{-J_{\mathrm{d}}/J_{\phi 0}}&\mathrm{otherwise,}\end{cases} \tag{10}\] where \[J_{\mathrm{d}}\equiv J_{r}+J_{z}+J_{\phi}+J_{\mathrm{d0}}, \tag{11}\] with \(J_{\mathrm{d0}}\) a constant that controls the way the surface density varies at small radii, as discussed by BV23. These formulae for \(f_{r}\), \(f_{z}\) and \(f_{\phi}\) are the same as those in BV23 except that here \(J_{\mathrm{v}}\) and \(J_{\mathrm{d}}\) depend on \(J_{r}\) and \(J_{z}\) in addition to \(J_{\phi}\). Since disc stars typically have \(J_{\phi}\gg J_{r},J_{z}\), the additional dependence of \(J_{\mathrm{v}}\) and \(J_{\mathrm{d}}\) on \(J_{r}\) and \(J_{z}\) is generally insignificant. It does, however, yield more plausible velocity distributions in the neighbourhood of \(V_{\phi}=0\).

We follow BV23 in modelling the low-\(\alpha\) disc as a superposition of three discs, presumed to be of increasing age and velocity dispersion: the young, middle-aged and old discs. Away from their centres, these discs are simple exponentials, but they may have central depressions in their surface density consistent with the bar/bulge having formed out of them. BV23 modelled the bulge by a spheroidal DF. Figs. 2, 3 and 7 indicate that this was a poor choice by suggesting that the bulge is almost entirely confined to \(J_{\phi}>0\), as is natural in a component that formed from a thin disc. Therefore we model the bulge as a truncated exponential, in which \(J_{\phi 0}\) is small and \(J_{r0}\) and \(J_{z0}\) are large. The high-\(\alpha\) disc is assumed to be a radially truncated exponential. The stellar halo is modelled by the same non-rotating spheroidal DF used by BV23. This probably provides a poor representation of the truth, but since the halo contributes very little to the APOGEE data, we defer improvement of its DF to future work.
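The sketch below is an illustrative transcription of eqns (4)-(11) into Python; it is not the agama implementation used for the fits. The dictionary keys follow the parameter names of Table 3, the taper factors are switched off for non-positive \(D_{i}\) (the text specifies negative values), and the physical units of the returned density are left unspecified.

```python
import numpy as np

def disc_df(Jr, Jz, Jphi, p):
    """Quasi-exponential disc DF of eqns (4)-(11); p holds Table-3-style parameters."""
    if Jphi < 0:
        return 0.0
    Jv = Jr + Jz + Jphi + p['Jv0']                 # eqn (9)
    Jd = Jr + Jz + Jphi + p['Jd0']                 # eqn (11)
    # radial and vertical factors, eqns (7)-(8)
    xr = (Jv / p['Jphi0'])**p['pr'] / p['Jr0']
    xz = (Jv / p['Jphi0'])**p['pz'] / p['Jz0']
    f_r = xr * np.exp(-xr * Jr)
    f_z = xz * np.exp(-xz * Jz)
    # angular-momentum factor, eqn (10)
    f_phi = p['M'] / (2*np.pi)**3 * Jd / p['Jphi0']**2 * np.exp(-Jd / p['Jphi0'])

    def taper(J, Jc, D, sign):
        # inner taper (sign=+1) or outer truncation (sign=-1), eqns (5)-(6)
        if D <= 0:
            return 1.0
        E = np.exp(sign * (J - Jc) / D)
        return E / (E + 1.0/E)

    f_int = taper(Jphi, p['Jint'], p['Dint'], +1.0)
    f_ext = taper(Jphi, p['Jext'], p['Dext'], -1.0)
    return f_phi * f_r * f_z * f_int * f_ext       # eqn (4)
```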
### Initial parameters

The chemodynamical model we are fitting has a large number of parameters. An automated search for a good model through a high-dimensional space is unlikely to succeed if started from a random location. Hence we started by hand-fitting the DFs to the dynamical data in the manner described by BV23. The resulting model differed from that of BV23 principally because the data employed extended from \(R=1\,\mathrm{kpc}\) to \(14\,\mathrm{kpc}\) rather than the narrower range \(R_{0}\pm 3\,\mathrm{kpc}\). The starting parameters of the DFs were largely taken to be those determined by BV23. Experiment showed that central tapers in the surface densities of disc components tend to yield circular-speed curves that rise more slowly than the data imply, so in the final model only the young disc has a central taper. For the high-\(\alpha\) disc, the sharp colour transition in Fig. 2 around \(J_{\phi}=2000\,\mathrm{kpc}\,\mathrm{km}\,\mathrm{s}^{-1}\) motivated the choice \(J_{\mathrm{ext}}=2000\,\mathrm{kpc}\,\mathrm{km}\,\mathrm{s}^{-1}\), \(D_{\mathrm{ext}}=200\,\mathrm{kpc}\,\mathrm{km}\,\mathrm{s}^{-1}\). The other fresh choices required for the DFs were all the parameters of the bulge's DF. The bar/bulge extends out to \(R\simeq 3\,\mathrm{kpc}\), which corresponds to \(J_{\phi}\simeq 800\,\mathrm{kpc}\,\mathrm{km}\,\mathrm{s}^{-1}\), so \(J_{\mathrm{ext}}\) should be a value of this order; we started from \(J_{\mathrm{ext}}=1000\,\mathrm{kpc}\,\mathrm{km}\,\mathrm{s}^{-1}\) and \(D_{\mathrm{ext}}=200\,\mathrm{kpc}\,\mathrm{km}\,\mathrm{s}^{-1}\). The bulge is a hot component, so we started from \(J_{r0}=100\,\mathrm{kpc}\,\mathrm{km}\,\mathrm{s}^{-1}\) and \(J_{z0}=50\,\mathrm{kpc}\,\mathrm{km}\,\mathrm{s}^{-1}\).

Table 1 lists the initial values of the chemical parameters. On the left we have the tilt \(\theta\) of the Gaussian ellipsoids with respect to the chemical axes1 and the means \(x_{0},y_{0}\) and dispersions \(\sigma_{x},\sigma_{y}\) of those Gaussians at the Sun's action-space location. As we proceed from young disc to old and high-\(\alpha\), the mean value of [Fe/H] is presumed to decline while that of [Mg/Fe] is presumed to increase. The dispersions \(\sigma_{x}\) and \(\sigma_{y}\) increase along this sequence consistent with the evidence that stars diffuse from circular orbits as they age. The bulge is assumed to be metal-rich and \(\alpha\)-normal, with a large dispersion \(\sigma_{x}\).

Footnote 1: The angle \(\theta\) is defined such that \(P(\mathbf{c})\propto\mathrm{e}^{-(x^{2}/\sigma_{x}^{2}+y^{2}/\sigma_{y}^{2})/2}\), where \((x,y)\) are the displacements \(\big{(}\delta\mathrm{[Fe/H]},\delta\mathrm{[Mg/Fe]}\big{)}\) measured along the principal axes of the ellipse, i.e. rotated through \(\theta\) with respect to the chemical axes.

The gradients of chemistry in action space are listed on the right of Table 1. Most values are set to zero. Exceptions are the coefficients \(C_{1,J_{\phi}}\) that set the radial metallicity gradients in the thin disc and the stellar halo - these are set to a value similar to that, \(-0.31\pm 0.02\,\mathrm{dex}\) per \(\mathrm{Mpc}\,\mathrm{km}\,\mathrm{s}^{-1}\), that was inferred for d[Fe/H]/d\(J_{\phi}\) by Spina et al. (2022) for open clusters. Other non-zero values are of \(C_{2,J_{r}}\) and \(C_{2,J_{z}}\) for the old disc; these imply that [Mg/Fe] increases with eccentricity and inclination, and a value for the high-\(\alpha\) disc's \(C_{1,J_{z}}\) that implies strongly decreasing [Fe/H] with inclination.

### Managing the APOGEE selection function

As remarked above, the contours in Fig. 6 depend on APOGEE's very complex, dust-dependent, selection function (SF) in addition to the EDF. Our approach to circumventing this problem is as follows. Our basic assumption is that the SF is independent of both velocity and chemistry but depends strongly on position.
The independence of velocity is clear; less so is the independence of \(\mathbf{c}\) since colour criteria were involved in the selection of stars. Nonetheless any dependence of the SF on \(\mathbf{c}\) must be tiny by comparison with the dependence on \(\mathbf{x}\). Since our model is axisymmetric and symmetric in \(z\), velocity distributions are functions of \((R,|z|)\) only. So we bin the real stars in the 72 bins in \((R,|z|)\) space that are specified by Table 2 and determine the barycentre \(\mathbf{X}_{\alpha}\) of the stars in the \(\alpha\)th bin - let \(N_{\alpha}\) be the number of stars in this bin. Then, given a trial set of components and their EDFs, we determine the density \(\rho_{i\alpha}=\int\mathrm{d}^{3}\mathbf{v}\,f_{i}\) contributed by the \(i\)th component at \(\mathbf{X}_{\alpha}\) and let \[k_{i\alpha}\equiv n\frac{\rho_{i\alpha}}{\sum_{j}\rho_{j\alpha}} \tag{12}\] be \(n\sim 5\) times the fraction of real stars predicted to be contributed by the \(i\)th component. Now we create \(k_{i\alpha}N_{\alpha}\) mock stars by sampling velocity space at \(\mathbf{X}_{\alpha}\) under the velocity distribution specified by \(f_{i}(\mathbf{J})\). Each chosen velocity \(\mathbf{V}_{i\alpha}\) corresponds to an action \(\mathbf{J}_{i\alpha}\), and we use this to choose a chemistry \(\mathbf{c}_{i\alpha}\) by sampling the Gaussian \(P(\mathbf{c}|\mathbf{J}_{i\alpha})\). When this is done, each spatial bin contains \(nN_{\alpha}\) mock stars in addition to the \(N_{\alpha}\) real stars. The mock stars, unlike the real stars, have known component memberships.

### Defining the likelihood

Given a model potential \(\Phi(\mathbf{x})\), a set of DFs and a model chemistry, we compute the log likelihood \(\ln L\) of the data as follows. We distribute each bin's mock stars over a grid in velocity space by the cloud-in-cell algorithm (e.g. Binney & Tremaine 2008, Box 2.4). The velocity grid has \(n_{g}^{3}\) cells covering the cuboid with boundaries at \(\pm V_{R\mathrm{max}}\), \(\pm V_{z\mathrm{max}}\) and \(V_{\phi\mathrm{min}}\) to \(V_{\phi\mathrm{max}}\), where \(V_{R\mathrm{max}}=2.5\sigma_{R}\), \(V_{z\mathrm{max}}=2.5\sigma_{z}\), \(V_{\phi\mathrm{min}}=-50\,\mathrm{km}\,\mathrm{s}^{-1}\) and \(V_{\phi\mathrm{max}}=\langle V_{\phi}\rangle+2.5\sigma_{\phi}\). Here the grid size \(n_{g}\) is proportional to the cube root of the number of real stars and \(\sigma_{R}\) etc are the standard deviations of the components of the velocities of the real stars: \(n_{g}\) ranged up to \(23\) with average value \(9.2\). The mass assigned to each grid cell is then divided by the number of mock stars in the relevant spatial bin. Then we use the cloud-in-cell algorithm to determine the mock-star density at the location of each real star and compute the contribution \(\ln L_{\alpha\,\mathrm{dyn}}\) of the \(\alpha\)th spatial bin to the overall dynamical log likelihood as the sum over stars of the logarithms of these densities. In Appendix A we show that the maximum value of \(\ln L_{\alpha\,\mathrm{dyn}}\) under variation of the masses assigned to cells, subject to a fixed total mass on the grid, is achieved when the mass of mock stars in each cell coincides with the mass that would be obtained by distributing the real stars over the grid in the same way that the mock stars are distributed.
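As an illustration of this per-bin computation, the sketch below deposits the mock stars of one spatial bin on a velocity grid and evaluates \(\ln L_{\alpha\,\mathrm{dyn}}\). For brevity it uses nearest-grid-point assignment via np.histogramdd, whereas the fits themselves use the cloud-in-cell scheme; the function name and the guard against empty cells are ours, not part of any released code.

```python
import numpy as np

def bin_log_likelihood(v_real, v_mock, edges):
    """ln L_{alpha,dyn} for one (R,|z|) bin.

    v_real, v_mock : (N,3) arrays of (V_R, V_z, V_phi) for the real and
                     mock stars assigned to the bin.
    edges          : list of three 1-d arrays of cell edges in each velocity.
    """
    H, _ = np.histogramdd(v_mock, bins=edges)     # deposit mock stars on the grid
    H /= len(v_mock)                              # normalise by the number of mock stars
    # grid density of mock stars at the location of each real star
    idx = [np.clip(np.digitize(v_real[:, k], edges[k]) - 1,
                   0, len(edges[k]) - 2) for k in range(3)]
    dens = H[idx[0], idx[1], idx[2]]
    dens = np.where(dens > 0, dens, 1e-300)       # avoid log(0) in empty cells
    return np.sum(np.log(dens))
```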
The dynamical log likelihood is the sum \[\ln L_{\mathrm{dyn}}=\frac{1}{N_{\mathrm{real}}}\sum_{\mathrm{bins}\,\alpha}\ln L_{\alpha\,\mathrm{dyn}} \tag{13}\] over spatial bins normalised by the number of real stars. This normalisation prevents the likelihood being dominated by grid cells that lie close to the Sun and are in consequence heavily populated: sparsely populated cells far from the Sun are powerful probes of the Galaxy's structure.

The computation of the chemical contribution to the log likelihood is similar. We use the cloud-in-cell algorithm to distribute the mock stars over a four-dimensional grid \(([\mathrm{Fe}/\mathrm{H}],[\mathrm{Mg}/\mathrm{Fe}],J_{\phi},J_{z})\). Then the mass assigned to each cell is divided by the number of mock stars, and \(\ln L_{\mathrm{chem}}\) is computed as the mean of the logarithm of the grid density at the location of each real star. The final log likelihood is \[\ln L=\ln L_{\mathrm{dyn}}+\ln L_{\mathrm{chem}}. \tag{14}\]

An indication of how closely a model approximates the data can be obtained by distributing real rather than mock stars over the grids and then computing \(\ln L\) as before by determining the grid density at the locations of the real stars. We call this the perfect-fit value of \(\ln L\). The perfect-fit and actual-fit values of \(\ln L_{\mathrm{dyn}}\) for the chosen model are \(-6.460\) and \(-6.573\), while the corresponding values of \(\ln L_{\mathrm{chem}}\) are \(-8.367\) and \(-8.480\).

We will see below that the algorithm chooses surprisingly large values for some dispersions \(\sigma_{y}\) and gradient coefficients \(C_{ij}\). We tested the robustness of these choices by including in the quantity to be maximised a Gaussian prior term proportional to \(-\sigma_{y}^{2}\) and/or \(-C_{ij}^{2}\). These priors had little effect.

#### 4.5.1 A shortcut

The procedure just described assumes that the mock stars are drawn from the current DFs. It is expedient to be able to compute \(\ln L\) from mock stars obtained by sampling a DF \(f_{0}\) that differs slightly from the current DF, \(f\). To do this we weight each mock star by \(f(\mathbf{J})/f_{0}(\mathbf{J})\), the ratio of the star's current probability density to the probability density when it was randomly drawn. The sum of these weights is the number of 'effective' mock stars and this is the number used to normalise the masses in cells after distributing mock stars on a grid. The need to re-sample was determined by monitoring the largest values of \(f(\mathbf{J})/f_{0}(\mathbf{J})\). There is no need to re-sample after varying the chemical model.

\begin{table} \begin{tabular}{l c c c c c c c c c c c} Component & \(\theta\) & \(x_{0}\) & \(y_{0}\) & \(\sigma_{x}\) & \(\sigma_{y}\) & \(C_{1,J_{r}}\) & \(C_{1,J_{z}}\) & \(C_{1,J_{\phi}}\) & \(C_{2,J_{r}}\) & \(C_{2,J_{z}}\) & \(C_{2,J_{\phi}}\) \\ \hline young disk & \(-7\) & \(-0.06\) & \(0.036\) & \(0.1\) & \(0.035\) & \(0\) & \(0\) & \(-0.29\) & \(0\) & \(0\) & \(0\) \\ middle disk & \(-7\) & \(-0.1\) & \(0.04\) & \(0.12\) & \(0.03\) & \(0\) & \(0\) & \(-0.29\) & \(0\) & \(0\) & \(0\) \\ old disk & \(-8\) & \(-0.4\) & \(0.1\) & \(0.16\) & \(0.04\) & \(0\) & \(0\) & \(-0.29\) & \(0.1\) & \(0.1\) & \(0\) \\ high-\(\alpha\) disk & \(-8\) & \(-0.5\) & \(0.33\) & \(0.2\) & \(0.05\) & \(0\) & \(-10\) & \(0\) & \(0\) & \(0\) & \(0\) \\ stellar halo & \(-3\) & \(-1.1\) & \(0.3\) & \(0.15\) & \(0.05\) & \(0\) & \(0\) & \(-0.29\) & \(0\) & \(0\) & \(0\) \\ bulge & \(-6\) & \(0.4\) & \(0.03\) & \(0.3\) & \(0.04\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) & \(0\) \\ \end{tabular} \end{table} Table 1: Initial values of the chemical pdfs (for fitted values see Table 5 below). The units of \(\theta\) are degrees while \(x_{0}\), \(y_{0}\), \(\sigma_{x}\), \(\sigma_{y}\) are given in dex. The values quoted for the gradient matrices \(\mathbf{C}\) (eqn 3) are in dex per \(\mathrm{Mpc}\,\mathrm{km}\,\mathrm{s}^{-1}\). \(C_{1,J_{r}}\equiv C_{\mathrm{[Fe/H]},J_{r}}\) while \(C_{2,J_{r}}\equiv C_{\mathrm{[Mg/Fe]},J_{r}}\).
\begin{table} \begin{tabular}{l c c c c c c c c c c c c c} \(|z|\) & \(0\) & \(0.3\) & \(0.7\) & \(1\) & \(1.5\) & \(2\) & \(3\) & & & & & & \\ \(R\) & \(0.5\) & \(1.5\) & \(3\) & \(4\) & \(5\) & \(6\) & \(7\) & \(8\) & \(9\) & \(10\) & \(11.5\) & \(13\) & \(14.5\) \\ \end{tabular} \end{table} Table 2: Boundaries (in kiloparsecs) between spatial bins.

### Likelihood maximisation

We have used the Nelder-Mead algorithm encapsulated in the agama function findMinNdim to minimise \(-\ln L\). We alternated a few hundred steps in which the chemical model was fixed while the DFs were varied with a few hundred steps in which the DFs were fixed and the chemical model was varied. The code sought to maximise the sum of \(\ln L_{\rm dyn}\) and \(\ln L_{\rm chem}\) regardless of which parameters were being varied because \(\ln L_{\rm chem}\) is useful when changing the DFs: it is sensitive to the relative masses of components and indicates whether a deficiency of stars at some phase-space location should be remedied by increasing the mass of the high-\(\alpha\) disc or the old disc, for example. When the DFs are varied without updating the potential \(\Phi({\bf x})\), the fit quality is invariant under multiplication of the masses \(m_{\alpha}\) of every component's DF by a common factor: with \(\Phi\) fixed, the fit depends only on the relative masses of components. So the Nelder-Mead algorithm varied the DFs with the total stellar mass held constant. After a few hundred Nelder-Mead adjustments of the DFs, the DF parameters were reviewed and potentially changed by hand before updating \(\Phi\) to self-consistency. Then a new sample of mock stars was drawn.
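The structure of this alternating optimisation is sketched below. The driver uses scipy's Nelder-Mead implementation as a stand-in for the agama findMinNdim routine actually employed, and it omits the fixed-total-mass constraint, the hand adjustments and the periodic updates of \(\Phi\) to self-consistency; the parameter packing is illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def fit_model(theta_df, theta_chem, neg_lnL, n_cycles=4, n_steps=300):
    """Alternate Nelder-Mead passes over the DF and the chemical parameters.

    neg_lnL(theta_df, theta_chem) must return -(lnL_dyn + lnL_chem).
    """
    for _ in range(n_cycles):
        # vary the DF parameters with the chemical model frozen
        res = minimize(lambda p: neg_lnL(p, theta_chem), theta_df,
                       method='Nelder-Mead', options={'maxiter': n_steps})
        theta_df = res.x
        # vary the chemical model with the DFs frozen
        res = minimize(lambda p: neg_lnL(theta_df, p), theta_chem,
                       method='Nelder-Mead', options={'maxiter': n_steps})
        theta_chem = res.x
    return theta_df, theta_chem
```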
## 5 Results

Regardless of whether it is the DFs or the chemical model that is being adjusted, the Nelder-Mead algorithm yields a sequence of \(\ln L\) values that increases rapidly at first and then more and more slowly. So one does not have the impression that it has located a global maximum. Hence the model presented here has the status of fitting the data about as well as any model we have tried rather than being a definitive model.

The full curve in Fig. 8 shows the circular-speed curve \(V_{\rm c}(R)\) of the final potential; also shown are the contributions to \(V_{\rm c}\) from the baryons (blue dotted curve) and dark matter (black dashed curve) along with several previous estimates of \(V_{\rm c}(R)\) derived from tracers presumed to be on near-circular orbits (Mroz et al., 2019; Ablimit et al., 2020), and a study of 23 000 red giants by Eilers et al. (2019).

Figure 8: Circular speed of the potential with prior estimates.

Figure 9: Vertical density profiles at \(R_{0}\).

Figure 10: Full curve: surface density of disc stars. Dotted curve: surface density of all stars. Red dashed curve: exponentially falling surface density with scale length \(R_{\rm d}=3.6\,{\rm kpc}\).

Figure 11: Upper panel: contributions to the density at \((R,0)\). Lower panel: rotation rates of the components at \((R,0)\).

At \(R\lesssim 4\) kpc, where circular orbits are prohibited by the bar, the model curve deviates significantly from the triangular points from Wegg & Gerhard (2013), which describe the axisymmetric component of a non-axisymmetric potential. Clearly we cannot expect an axisymmetric model to reproduce the data for this region, but the extent of the conflict with earlier work suggests that our bulge is too massive and insufficiently compact. At \(R\gtrsim 4\) kpc the model curve in Fig. 8 runs just above most of the points from previous studies. This is a consequence of taking \(V_{\rm\phi\odot}=251\) km s\({}^{-1}\) for the Sun: when \(V_{\rm\phi\odot}\) is increased, the black (data) curves in the right columns of Figs. 12 to 14 move to the right and the potential has to be adjusted to make the red (model) curves follow them. Our value for \(V_{\rm\phi\odot}\) follows directly from the observations of Sgr A* and the assumption that this black hole can define the Galactic rest frame (Reid & Brunthaler, 2020; GRAVITY Collaboration et al., 2022). Since the model predicts \(V_{\rm c}(R_{0})=234\,{\rm km\,s^{-1}}\) the Sun's peculiar velocity is \(V_{\rm\phi\odot}=17\,{\rm km\,s^{-1}}\). These values are consistent with the values \(V_{\rm c}(R_{0})=238\pm 9\,{\rm km\,s^{-1}}\) and \(V_{\rm\phi\odot}=12\pm 9\,{\rm km\,s^{-1}}\) reported by Schonrich (2012).

Figure 12: Velocity distributions within 24 spatial cells. The left column shows the distributions in \(V_{R}\), \(V_{z}\) and \(V_{\rm\phi}\) in the cells that cover the equatorial plane and have Galactocentric radii \(R\) that increase from 1.1 kpc at the bottom to 13.5 kpc at the top. The right column shows the corresponding velocity distributions for the adjacent cells – these have barycentres at \(|z|\sim 0.5\) kpc. The numbers in the panels for \(V_{z}\) give the exact coordinates of the relevant barycentre. In each panel the velocity scale covers \(\pm 3\sigma_{i}\), where \(\sigma_{i}\) is the standard deviation of the data histogram – the numbers along the bottom row of panels.

The full curve in Fig. 9 shows the model density of stars as a function of \(|z|\) at the solar radius, while the points describe the observational estimates of this by Gilmore & Reid (1983) and Juric et al. (2008), which can be moved vertically because they relate to star density rather than mass density. The agreement is satisfactory. The black dashed line shows the almost constant density of dark matter, while the blue curves show the density profiles of the young disc (full curve), the middle disc (dotted curve), the old disc (short-dashed curve) and the high-\(\alpha\) disc (long-dashed curve). The red dotted curve shows the tiny contribution from the stellar halo. The dark halo makes the largest contribution to the density at \(|z|>400\) pc.

Comparison with Figs 14 and 15 in BV23 shows that the differences between the circular-speed curves of the two models are confined to the bar-bulge region \(R\lesssim 5\,{\rm kpc}\) - the curve of the BV23 model rises much more steeply at \(R<1\,{\rm kpc}\). The BV23 discs are less massive, yield a shorter overall radial scale length \(R_{\rm d}\), and have smaller scale heights - this is especially true of the old disc, for which the \(J_{z0}\) parameter has jumped from 5 to \(24\,{\rm kpc\,km\,s^{-1}}\).

Figure 13: As Fig. 12 but for cells with barycentres at \(|z|\sim 0.8\,{\rm kpc}\) (left column) and \(|z|\sim 1.2\,{\rm kpc}\).
At \(R>10\,{\rm kpc}\) the old and high-\(\alpha\) discs of BV23 have much lower \(\sigma_{R}\) and significantly lower \(\sigma_{z}\), changes that are clearly mandated by the new data. The local densities of dark matter are almost identical in the two models.

The full blue curve in Fig. 10 shows the surface density of disc stars as a function of radius while the black dotted curve shows the surface density of all stars. At \(R\gtrsim 5\,{\rm kpc}\) the surface density falls nearly exponentially with scale length \(R_{d}\simeq 3.6\,{\rm kpc}\) (marked by the red dashed curve). This is a significantly longer scale-length than was inferred by Robin et al. (2003) from the 2MASS survey. Recently, Robin et al. (2022) inferred a scale-length parameter \(H_{\rho}=2.9\,{\rm kpc}\) for the DFs of the thin disc components, which is best compared to the values of \(J_{\phi 0}/V_{\rm c}(R_{0})\simeq 4.27\,{\rm kpc}\) for the young and middle discs and \(2.1\,{\rm kpc}\) for the old disc. Bovy et al. (2012) derived scale lengths for mono-abundance populations in APOGEE and found that these increased steadily from \(R_{\rm d}<2\,{\rm kpc}\) to \(R_{\rm d}>4\,{\rm kpc}\) as [Mg/Fe] decreases and [Fe/H] increases. The results in Table 3 are qualitatively consistent with that early work.

The full curve in the upper panel of Fig. 11 shows as a function of radius the density of stars at \(z=0\), while the blue curves show the corresponding densities of the disc components. The contributions of the bulge and the stellar halo are shown by the black and red dotted curves. The density of the dark halo is shown by the black long-dashed curve. The blue curves in the lower panel of Fig. 11 show the mean streaming velocities at \(z=0\) of the three thin-disc components and of the high-\(\alpha\) disc. Asymmetric drift causes the streaming velocity to fall further and further below the circular speed as the velocity dispersions increase. The black dotted curve shows the rising mean streaming velocity in the bulge.

Figure 14: As Fig. 12 but for cells with barycentres at \(|z|\sim 1.7\,{\rm kpc}\) (left column) and \(|z|\sim 2.4\,{\rm kpc}\).

Tables 3 to 5 list the model's parameters, while Figs 12 to 17 show the resulting fits to data. These figures show that this model fits many aspects of the data very well while showing shortcomings in other aspects.

### Fits to kinematics

Figs. 12 to 14 show, from left to right in each column, distributions of \(V_{R}\), \(V_{z}\) and \(V_{\phi}\) marginalised over the other two velocity components for the 72 spatial bins that were used to compute \(\ln L\). In the panels for \(V_{R}\) and \(V_{z}\) the histograms cover \(\pm 3\sigma\) where \(\sigma\) is the standard deviation, while the histograms of \(V_{\phi}\) cover \(-50\,\mathrm{km\,s^{-1}}\) to \(\langle V_{\phi}\rangle+3\sigma\). From bottom to top of each column the radius of the spatial bin increases, while different columns correspond to different values of \(|z|\): the numbers at the bottom of each \(V_{z}\) histogram give the mean values of \(R\) and \(z\) of stars in the relevant bin. The black histograms show the APOGEE DR17 + Gaia DR3 data while the red histograms show the distributions of mock stars. The largest discrepancies between data and model occur in \(V_{\phi}\) at small \(R\) and \(|z|\) (bottom right of each column in Fig. 12). These bins are dominated by the Galactic bar, so it is natural that our axisymmetric model fails to model the data well.
More surprising is how well the model fits the \(V_{R}\) and \(V_{z}\) distributions for these bins, and even provides passable fits to the \(V_{\phi}\) distributions above \(|z|=0.7\,\mathrm{kpc}\) (lower right of the columns of Figs 13 and 14). The model generally provides good fits to the \(V_{\phi}\) histograms at \(R\gtrsim 4\,\mathrm{kpc}\). This result implies that the potential correctly gives the circular speed \(V_{\mathrm{c}}(R)\) because, as BV23 explained, even a small mismatch between a model's circular speed and the true circular speed leads to an unmistakable displacement of the red and black curves in the \(V_{\phi}\) plots. If some \(V_{\phi}\) histograms for \(R>4\,\mathrm{kpc}\) show discrepancies, it is because the model under-predicts stars with \(V_{\phi}\sim 0\). In a few of the histograms for \(V_{R}\) and \(V_{z}\) (see the top of Fig. 12) the red model curve fits the black data curve on one side much better than on the other. Since the red curves are by construction left-right symmetric (up to shot noise), such one-sided fits point to deviations from equilibrium in the data caused by spiral structure or tidal interactions, for example (e.g. McMillan et al., 2022; Khanna et al., 2023).

#### 5.1.1 Parameters of the DFs

Tables 3 and 4 give the parameters of the fitted DFs. The middle disc is the most massive disc component: with \(1.2\times 10^{10}\,\mathrm{M_{\odot}}\) it is 50 percent more massive than the old and high-\(\alpha\) discs and nearly three times as massive as the young disc. The bulge with \(1.27\times 10^{10}\,\mathrm{M_{\odot}}\) is slightly more massive than the middle disc. The mass, \(\sim 4\times 10^{8}\,\mathrm{M_{\odot}}\), of the stellar halo is gravitationally negligible. As discussed in Section 3, our sample may be biased against very metal-poor stars, so the mass we report for the stellar halo is likely an underestimate - for the halo within \(100\,\mathrm{kpc}\) Deason et al. (2019) estimate luminosity \(L=9.4\pm 2.4\times 10^{8}L_{\odot}\) leading to mass \(M=1.4\pm 0.4\times 10^{9}\,\mathrm{M_{\odot}}\), at least twice our value.

The young and middle discs have comparable scale actions \(J_{\phi 0}\sim 1000\,\mathrm{kpc\,km\,s^{-1}}\), while the old and high-\(\alpha\) discs have much smaller scale actions, \(\sim 500\) and \(400\,\mathrm{kpc\,km\,s^{-1}}\), respectively. In the context of inside-out growth of galaxies, this result is to be expected. The parameter values \(J_{\mathrm{int}}\sim 190\,\mathrm{kpc\,km\,s^{-1}}\) and \(D_{\mathrm{int}}\sim 280\,\mathrm{kpc\,km\,s^{-1}}\) for the young disc imply that it does not differ strongly from a pure exponential. The high-\(\alpha\) disc is quite sharply radially truncated near \(R_{0}\): \(J_{\mathrm{ext}}\simeq 2200\), \(D_{\mathrm{ext}}\simeq 210\,\mathrm{kpc\,km\,s^{-1}}\). The scale actions \(J_{r0}\) and \(J_{z0}\) that control the in-plane and vertical velocity dispersions increase, as expected, along the sequence young disc, middle disc, old disc, high-\(\alpha\) disc. The biggest jump in \(J_{r0}\) occurs between the young and the middle disc, while the biggest jump in \(J_{z0}\) occurs between the middle and old disc, so the disc with the largest velocity anisotropy is the middle disc. Such anisotropy is the signature of heating by spiral arms (e.g. Binney & Lacey, 1988).
The value of \(J_{r0}\) for the bulge is very similar to that of the high-\(\alpha\) disc, while the bulge's value for \(J_{z0}\) is only half that of the high-\(\alpha\) disc. The value of the parameter \(p_{r}\) that controls the radial gradient of the in-plane velocity dispersions increases systematically along the sequence young - high-\(\alpha\) disc. This result implies that the radial gradient in \(\sigma_{R}\) steepens along this sequence. The parameter \(p_{z}\) that controls the radial gradient in \(\sigma_{z}\) does not vary systematically along the disc sequence. The middle disc has the most negative value of \(p_{z}\) and therefore the weakest radial gradient, while the high-\(\alpha\) disc has the steepest radial gradient.

The bulge DF's values \(J_{\phi 0}=127\,\mathrm{kpc\,km\,s^{-1}}\) and \(J_{\mathrm{ext}}=611\,\mathrm{kpc\,km\,s^{-1}}\) ensure that the bulge is compact. Its in-plane dispersions are similar to those of the high-\(\alpha\) disc (\(J_{r0}=122\,\mathrm{kpc\,km\,s^{-1}}\)) but it has much smaller vertical dispersions and extent because \(J_{z0}=34\) versus \(64\,\mathrm{kpc\,km\,s^{-1}}\) for the high-\(\alpha\) disc. For some reason \(p_{r}=0.82\) has been set large and positive (causing \(\sigma_{R}\) to decline steeply with \(R\)) while \(p_{z}=-0.13\) causes \(\sigma_{z}\) to decline much more slowly with \(R\). Consequently the bulge is most anisotropic at small radii. The bulge mass, \(1.27\times 10^{10}\,\mathrm{M_{\odot}}\), may be compared with the value \(1.43\pm 0.18\times 10^{10}\,\mathrm{M_{\odot}}\) that Portail et al. (2015) estimated from made-to-measure modelling of data for red-clump giants.

The total stellar mass is \(4.60\times 10^{10}\,\mathrm{M_{\odot}}\), on the lower side of recent estimates. At \(R_{0}\) the stellar density is \(0.036\,\mathrm{M_{\odot}\,pc^{-3}}\) in the plane falling to \(0.0024\,\mathrm{M_{\odot}\,pc^{-3}}\) at \(z=1.1\,\mathrm{kpc}\). At \(R_{0}\) the density of dark matter is \(0.011\,\mathrm{M_{\odot}\,pc^{-3}}\) in the plane falling to \(0.0095\,\mathrm{M_{\odot}\,pc^{-3}}\) at \(z=1.1\,\mathrm{kpc}\) in agreement with most recent estimates (see Fig. 1 of de Salas & Widmark, 2021, for a review).
The vertical component of the gravitational acceleration at \(R_{0}\) satisfies \[K_{z}(R_{0},1.1\,\mathrm{kpc}) =1.84\,\mathrm{km\,s^{-1}\,Myr^{-1}}\] \[\frac{K_{z}(R_{0},1.1\,\mathrm{kpc})}{2\pi G} =66.4\,\mathrm{M_{\odot}\,pc^{-2}} \tag{15}\] \begin{table} \begin{tabular}{l c c c c c c c c c c c} Component & \(\theta\) & \(x_{0}\) & \(y_{0}\) & \(\sigma_{x}\) & \(\sigma_{y}\) & \(C_{1,J_{r}}\) & \(C_{1,J_{z}}\) & \(C_{1,J_{\phi}}\) & \(C_{2,J_{r}}\) & \(C_{2,J_{z}}\) & \(C_{2,J_{\phi}}\) \\ \hline young disk & \(-6.94\) & \(0.00119\) & \(0.0138\) & \(0.132\) & \(0.0586\) & \(1.18\) & \(8.19\) & \(-0.447\) & \(2.27\) & \(24.8\) & \(0.00923\) \\ middle disk & \(-6.93\) & \(-0.0553\) & \(0.0376\) & \(0.117\) & \(0.0104\) & \(0.246\) & \(0.00967\) & \(-0.347\) & \(-0.034\) & \(1.81\) & \(0.0231\) \\ old disk & \(-7.89\) & \(-0.292\) & \(0.129\) & \(0.0316\) & \(1.33\) & \(-0.687\) & \(-0.288\) & \(-0.263\) & \(0.138\) & \(0.0207\) \\ high-\(\alpha\) disk & \(-7.96\) & \(-0.471\) & \(0.323\) & \(0.131\) & \(0.0113\) & \(1.19\) & \(-1.13\) & \(0.071\) & \(-0.394\) & \(0.21\) & \(-0.0196\) \\ stellar halo & \(-3\) & \(-1.1\) & \(0.272\) & \(0.497\) & \(0.114\) & \(-0.444\) & \(0.0862\) & \(-0.0743\) & \(0.159\) & \(-0.171\) & \(0.0562\) \\ bulge & \(-6\) & \(0.409\) & \(0.0598\) & \(0.383\) & \(0.0879\) & \(-1.64\) & \(-17.8\) & \(-0.0464\) & \(2.01\) & \(6.39\) & \(0.204\) \\ \end{tabular} \end{table} Table 5: Parameters of the fitted chemical pdfs. The units of \(\theta\) are degrees while \(x_{0}\), \(y_{0}\), \(\sigma_{x},\sigma_{y}\) are given in dex. The values quoted for the gradient matrices \(\mathbf{C}\) (eqn 3) are in dex per Mpc km s\({}^{-1}\). \(C_{1,J_{r}}\equiv C_{\rm[Fe/H],J_{r}}\) while \(C_{2,J_{r}}\equiv C_{\rm[Mg/Fe],J_{r}}\), etc. The conventional radial metallicity gradient, d[Fe/H]/d\(R\simeq C_{1,J_{\phi}}\times V_{\rm c}\simeq-0.00035\times 234\simeq-0.0 82\) dex kpc\({}^{-1}\) for the middle disc. \begin{table} \begin{tabular}{l c c c c c c c c c c c} Component & \(M\) & \(J_{\phi 0}\) & \(J_{r0}\) & \(J_{z0}\) & \(J_{\rm int}\) & \(D_{\rm int}\) & \(J_{\rm ext}\) & \(D_{\rm ext}\) & \(p_{r}\) & \(p_{z}\) & \(J_{\chi 0}\) & \(J_{d0}\) \\ \hline young disk & \(0.45\) & \(977.9\) & \(2.806\) & \(1.296\) & \(186.9\) & \(278\) & – & – & \(-0.76\) & \(-0.23\) & \(152.1\) & \(102.1\) \\ middle disk & \(1.2\) & \(1030\) & \(22.82\) & \(3.24\) & – & – & – & – & \(-0.23\) & \(-0.7\) & \(146.4\) & \(731.9\) \\ old disk & \(0.87\) & \(508\) & \(47.14\) & \(24.04\) & – & – & – & – & \(0.034\) & \(-0.043\) & \(132.7\) & \(733.9\) \\ high-\(\alpha\) disk & \(0.766\) & \(399\) & \(116.4\) & \(64.6\) & – & – & \(2212\) & \(207.8\) & \(0.1\) & \(0.17\) & \(150\) & \(40\) \\ bulge & \(1.27\) & \(127.5\) & \(122.2\) & \(34.21\) & – & – & \(611.2\) & \(217.5\) & \(0.82\) & \(-0.13\) & \(150\) & \(20\) \\ \end{tabular} \end{table} Table 3: Parameters of the disc DFs. Masses are in units of \(10^{10}\) M\({}_{\odot}\) and actions in kpc km s\({}^{-1}\). Figure 15: The chemical composition of stars that lie in 10 cells in action space. The left column is for cells with \(J_{\phi}\in(-200,200)\) kpc km s\({}^{-1}\) and \(J_{z}\) in intervals that increase from bottom to top; specifically \((0,20)\), \((25,55)\), \((60,90)\), \((95,125)\) and \((130,160)\) kpc km s\({}^{-1}\). The right column is for \(J_{\phi}\in(300,700)\) kpc km s\({}^{-1}\) and the same intervals in \(J_{z}\). Within each column the right panels show the distribution of real stars, while the left panels show mock stars. 
Figure 16: The same as Fig. 15 but for \(J_{\phi}\in(800,1200)\,{\rm kpc\,km\,s^{-1}}\) (left column) and \(J_{\phi}\in(1300,1700)\,{\rm kpc\,km\,s^{-1}}\) (right column).

Figure 17: The same as Fig. 15 but for \(J_{\phi}\in(1800,2200)\,{\rm kpc\,km\,s^{-1}}\) (left column) and \(J_{\phi}\in(2300,2700)\,{\rm kpc\,km\,s^{-1}}\) (right column).

The actual surface density of stars and dark matter within 1.1 kpc of the plane is slightly lower than the naive estimate based on \(K_{z}\) because the gravitational field has a large, position dependent, radial component. It is \[\Sigma(R_{0},1.1\,{\rm kpc})=63.9\,{\rm M}_{\odot}\,{\rm pc}^{-2} \tag{16}\] and comprises \(26.5\,{\rm M}_{\odot}\,{\rm pc}^{-2}\) in stars, \(24.7\,{\rm M}_{\odot}\,{\rm pc}^{-2}\) in dark matter and \(12.6\,{\rm M}_{\odot}\,{\rm pc}^{-2}\) in gas.

In Table 4 the mass of the dark halo is given as \(0.94\times 10^{12}\,{\rm M}_{\odot}\) but the great majority of this mass lies outside the volume for which we have data. The scale radius \(r_{s}\) of the dark halo is set by the parameter \(J_{\phi 0}\), which was arbitrarily set to \(10\,000\,{\rm kpc}\,{\rm km}\,{\rm s}^{-1}\), so the volume modelled lies inside \(r_{s}\), where, in the absence of the stars, the dark-matter density would satisfy \(\rho\sim r^{-2}\). As explained by BV23, the actual density profile of the dark halo is more complex on account of the pull of the stars.

### Fits to chemistry

Figs. 15 to 17 show predicted (left panels) and observed distributions in the ([Fe/H],[Mg/Fe]) plane within 30 bins in the \((J_{\phi},J_{z})\) plane - the actions at the centre of the bin are given at the bottom of the bin's model panel. As one proceeds up each column, the mean value of \(J_{z}\) increases, so the bulge or thin disc dominates the bottom panels of each column. As one proceeds from column to column, the mean value of \(J_{\phi}\) increases, so the typical Galactocentric radius of stars increases from the left column of Fig. 15 to the right column of Fig. 17. Fig. 10 of Eilers et al. (2022) shows similar data in a slightly different representation by plotting \(\sqrt{J_{\phi}^{2}+J_{z}^{2}}\) vertically rather than \(J_{z}\).

In Fig. 15 the largest discrepancies between data and model occur around \(J_{\phi}=0\). This region of action space will be dominated by the stellar halo and the bulge and we have no confidence in the functional forms we have adopted for their DFs, so it is not surprising that the model performs least well there. In fact it is encouraging that the model does capture the main features of the chemistry even there: in particular a strongly populated ridge that slopes from ([Fe/H],[Mg/Fe])\(\simeq(0.4,0.05)\) up towards \((-0.08,0.38)\), and a less well populated ridge \(\sim 0.8\) dex to the left of it. At \(J_{\phi}\sim 500\,{\rm kpc}\,{\rm km}\,{\rm s}^{-1}\) and \(J_{z}<50\,{\rm kpc}\,{\rm km}\,{\rm s}^{-1}\) (bottom of the right column of Fig. 15) the model underestimates the principal ridge at high [Fe/H] and low [Mg/Fe]. Fig. 16, which covers \(750\,{\rm kpc}\,{\rm km}\,{\rm s}^{-1}<J_{\phi}<1750\,{\rm kpc}\,{\rm km}\,{\rm s}^{-1}\), shows good agreement between model and data at all values of \(J_{z}\) except the largest: at the top of each column the model struggles to reproduce a population of metal-rich, low-\(\alpha\) stars. This issue with a deficit of metal-rich, low-\(\alpha\) stars at high \(J_{z}\) is the only significant shortcoming of the fits shown in Fig. 17 for the largest angular momenta.
Data for the stars that the model fails to provide are sparse and most liable to observational error, so it is not clear how significant the deficit in the model is.

#### 5.2.1 Chemical parameters

Table 5 gives the values of the parameters of the chemical model that yields the fits shown in Figs. 15 to 17, while each row of Fig. 18 displays the chemodynamical structure of the component listed along the figure's right-hand edge. Contours show the density of stars with a given chemistry that one obtains by sampling the DF without any selection function and then using the chemical model of Table 5 to assign a chemistry to each selected star. This plot differs from all our other plots in being representative of what the model claims is out there, rather than what APOGEE can see. In the left column each pixel is coloured by the mean of the \(J_{\phi}\) values of the stars in that pixel, while the centre and right columns are coloured by the mean values of \(J_{z}\) and \(J_{r}\), respectively. The numbers in brackets in each panel show the values (in \({\rm kpc}\,{\rm km}\,{\rm s}^{-1}\)) associated with red and blue.

The contours in the top row of Fig. 18 show that the main body of the bulge is modelled as a very metal-rich, roughly \(\alpha\)-normal structure. From this there extends a tail of stars with progressively lower metallicity and higher [Mg/Fe]. The ridge line is similar to the standard evolutionary trajectory of a population (e.g. Schonrich & Binney 2009b). The colours in the top row show that in the bulge the dominant dependence of chemistry upon action is that on \(J_{z}\), though the dependence on \(J_{r}\) is also strong.

The second row in Fig. 18 shows that the stellar halo is modelled as a metal-poor, moderately \(\alpha\)-enhanced structure (\(x_{0}=-1.1\), \(y_{0}=0.27\)) widely scattered over the chemical plane (\(\sigma_{x}=0.47\), \(\sigma_{y}=0.11\)). Its chemistry depends principally on \(J_{z}\) and \(J_{r}\) - high \(J_{z}\) is associated with low values of both [Fe/H] and [Mg/Fe] while high \(J_{r}\) is associated with low [Fe/H] and high [Mg/Fe]. The stellar halo is now believed to owe much to the 'Enceladus' merger event, which formed a mildly counter-rotating, relatively metal-rich, radially anisotropic and flattened component of the halo (Belokurov et al., 2018; Helmi et al., 2018; Myeong et al., 2018). According to this picture the mean value of \(J_{r}\) should increase with [Fe/H] while the mean value of \(J_{z}\) should decrease with [Fe/H]. The trends shown by the colours in Fig. 18 are the opposite.

Figure 18: The components in chemical space. Each row shows the structure of the component listed on the right-hand edge. Contours show the density of stars of the given chemistry while colours show, from left to right, the mean values of \(J_{\phi}\), \(J_{z}\) and \(J_{r}\). The numbers in brackets give the values in \({\rm kpc}\,{\rm km}\,{\rm s}^{-1}\) corresponding to red and blue hues. The density increases by a factor 2 between adjacent contours.

The third row of Fig. 18 shows that the high-\(\alpha\) disc is confined to the upper part of the ridge line of the bulge, as one expects of a component that largely formed before type Ia supernovae became important. [Fe/H] depends only weakly on \(J_{\phi}\) and in the sense that metallicity _increases_ with \(J_{\phi}\) and therefore radius. Schonrich & McMillan (2017) discuss the origin of this 'inverse metallicity gradient'.
In the high-\(\alpha\) disc [Fe/H] depends quite strongly on \(J_{r}\) and \(J_{z}\): it increases with \(J_{r}\) and declines with \(J_{z}\).

The bottom three rows of Fig. 18, for the thin-disc components, show distributions that slope in the same sense as the bulge and thick disc but more gradually, so they do not reach such large values of [Mg/Fe]. In these components the dominant dependence of chemistry on actions is upon \(J_{\phi}\) and the value \(C_{1,J_{\phi}}=-0.3476\times 10^{-3}\,{\rm dex}/\,{\rm kpc}\,{\rm km}\,{\rm s}^{-1}\) given for the middle disc implies \({\rm d}[{\rm Fe}/{\rm H}]/{\rm d}R\simeq-0.081\,{\rm dex}\,{\rm kpc}^{-1}\), consistent with the values \(-0.077\pm 0.013\) to \(-0.059\pm 0.012\,{\rm dex}\,{\rm kpc}^{-1}\) deduced by Mendez-Delgado et al. (2022) for interstellar N and O, respectively. The young and middle discs extend to much lower [Fe/H] than the old disc, which is a priori surprising. The reason is that these discs have larger scale actions \(J_{\phi 0}\) so they are predicted to extend to larger radii and higher values of \(J_{\phi}\). Given the negative value of \({\rm d}[{\rm Fe}/{\rm H}]/{\rm d}J_{\phi}\), the outer (largely unobserved) parts of these discs are predicted to be very metal-poor. It is likely that \(|{\rm d}[{\rm Fe}/{\rm H}]/{\rm d}J_{\phi}|\) diminishes for \(J_{\phi}\) much larger than the solar value, with the consequence that \(\langle{\rm[Fe/H]}\rangle\) does not fall to the low values predicted under our assumption of constant \({\rm d}[{\rm Fe}/{\rm H}]/{\rm d}J_{\phi}\).

All four discs have \(C_{1,J_{r}}>0\) implying that metallicity increases with eccentricity. Radial migration, combined with radial gradients in metallicity and \(J_{r}\), can generate this correlation: at a given value of \(J_{\phi}\) there are stars that migrated outwards and inwards. On average, the former will have higher metallicities and radial actions than the latter, thus inducing a correlation between metallicity and \(J_{r}\) at given \(J_{\phi}\). In the young and middle disc \(C_{1,J_{z}}\) is positive. A positive value implies that [Fe/H] increases with inclination and could also be a consequence of radial migration. The very large value of \(C_{1,J_{z}}\) for the young disc is surprising astrophysically, but the signal for it in the data is evident from the colour gradient in the centre panel of the second row from the top of Fig. 6 (also Fig. 19 below). The high-\(\alpha\) disc has a significantly negative value of \(C_{1,J_{z}}\) so metallicity _decreases_ with inclination. This result is consistent with the small inverse radial metallicity gradient in this disc.

The thin disc components have \(C_{2,J_{\phi}}>0\), implying an outward increase in [Mg/Fe], while for the high-\(\alpha\) disc \(C_{2,J_{\phi}}<0\). Thus the signs of both \(C_{1,J_{\phi}}\) and \(C_{2,J_{\phi}}\) reverse as one passes to the high-\(\alpha\) disc from the thin disc. Given that high [Mg/Fe] is the signature of a high star-formation rate, it is natural for \(C_{2,J_{\phi}}\) to be negative in the absence of radial migration. The discs' values of \(C_{2,J_{r}}\) decrease along the young - high-\(\alpha\) sequence so [Mg/Fe] increases with eccentricity in the young disc and decreases with eccentricity in the old and high-\(\alpha\) discs. \(C_{2,J_{z}}\) is positive but decreasing along the young - old sequence and then increases slightly as one moves to the high-\(\alpha\) disc.
A picture in which [Mg/Fe] increases with age and stars are secularly scattered away from planar, circular orbits implies that [Mg/Fe] should increase with eccentricity and inclination. Table 5 shows that it does increase with inclination but in the older components it is flat or decreasing with eccentricity.

The structure of the young disc, shown by the bottom row of Fig. 18, is surprisingly broad, which leads to significant numbers of stars at [Mg/Fe] \(\gtrsim 0.2\). The breadth is a consequence of the large value recovered for \(\sigma_{y}\). To check the case for this breadth, a new Nelder-Mead search for chemical parameters was started with \(\sigma_{y}\) reduced to the value recovered for the middle disc. After 200 iterations \(\sigma_{y}\) increased four-fold while no other parameter changed significantly, and a further 4000 iterations produced no significant changes. This experiment suggests that the unexpectedly large value of \(\sigma_{y}\) is a genuine response to the data rather than a consequence of becoming trapped in a local maximum of \(\mathcal{L}\).

The left columns of Figs 2 and 3 show the mean values of observed [Mg/Fe] and [Fe/H] split into tranches of \(J_{r}\). The right panels of Fig. 19 show the same data but regardless of \(J_{r}\). The U-shaped boundary to the (blue) region dominated by the high-\(\alpha\) disc is now spectacularly evident, and in the lower plot the metal-rich thin disc is seen to be bounded below by \(J_{\phi}\simeq 200\,{\rm kpc}\,{\rm km}\,{\rm s}^{-1}\) and above by \(J_{\phi}\simeq 2000\,{\rm kpc}\,{\rm km}\,{\rm s}^{-1}\), interestingly coinciding with the right-hand boundary of the high-\(\alpha\) disc. The left panels of Fig. 19 show the model's version of these plots. The principal features are captured but the match is far from perfect. Most notably, in the upper left plot the boundary of the U-shaped region is insufficiently sharp and along \(J_{\phi}\sim 0\) the gradient \(\partial[{\rm Mg}/{\rm Fe}]/\partial J_{\phi}\) is too steep in the model.

## 6 Comparison with the Besançon Model

A natural question is how the present model compares with the 'Besançon Galaxy model' (BGM), which has recently been updated by Robin et al. (2022) to self-consistency in the context of Gaia DR3 astrometry. Major differences include: (i) In the BGM the bulge, the stellar halo and the dark halo were represented by pre-defined density distributions rather than DFs. The BGM bulge is spherical while the two haloes are ellipsoidal; (ii) In the BGM the integrals used are \((E,J_{\phi},I_{3})\) rather than \(\mathbf{J}\); (iii) The BGM was fitted to sky-plane velocities \((V_{\ell},V_{b})\) computed using inverse parallaxes rather than StarHorse distances, and line-of-sight velocities were not used; (iv) The BGM was fitted to (a) the \((V_{\ell},V_{b})\) distributions of stars within 100 pc of the Sun viewed in 36 directions, and (b) the \((V_{\ell},V_{b})\) and parallax distributions of more distant stars viewed along 26 lines of sight. Fits were made to the 0.1, 0.25, 0.5, 0.75 and 0.9 quantiles in \(V_{\ell}\) and \(V_{b}\) rather than to densities in three-dimensional velocity space at 72 locations in \((R,|z|)\); (v) Modelling the parallax distributions required engagement with the Gaia scanning law, and adoption of both a dust model and luminosity functions for each DF; (vi) The Galaxy's circular-speed curve was an input to the BGM model whereas here it follows from the fits to the kinematics.
While there are significant differences in the data employed and how the model parameters were fitted, the two models have similar scope: both are axisymmetric, self-consistent dynamical models from which mock samples can be drawn by specifying selection criteria. The present model does not come with pre-defined luminosity functions but the new version of agama provides for each DF to include a luminosity function appropriate to the component's age and chemistry. The new version defines classes for lines of sight and dust models, so magnitude-limited samples can be drawn for any line of sight. ## 7 Conclusions The APOGEE spectroscopic survey combined with Gaia astrometry casts a brilliant light on the chemodynamical structure of our Galaxy. From DR17 of the Sloan Digital Sky Survey we selected just under 218 000 giant stars that are unlikely to belong to globular clusters and have reliable distances in the Bayesian StarHorse catalogue (Anders et al., 2022). The selected stars cover the radial range \(0.5<R<14.5\) kpc and lie within 4 kpc of the plane. We have avoided engaging with the complex selection function of the APOGEE survey by examining the kinematic and chemical distributions within 72 spatial bins that extend from \(R=1\) kpc to \(R=14\) kpc at \(|z|<4\) kpc. We have used these distributions to update and extend the self-consistent dynamical model of BV23. This model is defined by distribution functions \(f(\mathbf{J})\) for six stellar components and the dark halo, together with a prescribed surface density of gas. The gravitational potential that these eight ingredients jointly generate was computed iteratively and the resulting predictions for the kinematics of stars at 72 spatial locations were compared with the data. The functional forms adopted for the DFs differ from those used by BV23. A major difference is that here the bulge is assumed to be a fat disc rather than a spheroidal component. Another difference is that here the high-\(\alpha\) disc is taken to be radially truncated. A minor difference is in the way the variables \(J_{\rm d}\) and \(J_{\rm v}\) introduced by Vasiliev (2019) are defined. Another difference with BV23 is in how the kinematic data are fitted: whereas BV23 fitted by hand histograms of \(V_{R}\), \(V_{z}\) and \(V_{\phi}\) marginalised over the other two components of velocity, we have used the Nelder-Mead algorithm to fit the predicted densities of stars in 72 three-dimensional velocity spaces. In a major extension of BV23 we have added a chemical dimension to the model by assigning to each stellar DF a probability density in the chemical space \(\mathbf{c}=([\mathrm{Fe}/\mathrm{H}],[\mathrm{Mg}/\mathrm{Fe}])\). These probability densities are Gaussians with mean values that are linear functions of \(\mathbf{J}\). We examined the variation in action space of the mean values of [Mg/Fe] and [Fe/H]. Plots of this distribution (Figs. 2 and 19) show a sharp boundary at \(J_{\phi}=0\) that is a testament to the accuracy of the StarHorse distances we have used. It also implies that within the (wide) spatial region covered by the data, few bulge stars counter-rotate, so the bulge should not be modelled as a spheroidal component but as a hot disc.
Figure 19: Mean values of [Mg/Fe] (upper panels) and [Fe/H] (lower panels) in the \((J_{\phi},J_{z})\) plane from the observations (right panels) and as predicted by the model (left panels).
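To make the structure of this chemical model concrete, the sketch below evaluates a pdf of the form described above: a Gaussian in \(\mathbf{c}=([\mathrm{Fe}/\mathrm{H}],[\mathrm{Mg}/\mathrm{Fe}])\) whose mean is a linear function of the actions. The coefficient names mirror the \(C_{i,J}\) notation of Table 5, but the numerical values and the fixed dispersion matrix used here are placeholders, not the fitted parameters of the paper.

```python
import numpy as np

def chemical_pdf(c, J, C0, C1, cov):
    """Gaussian pdf in chemical space c = ([Fe/H], [Mg/Fe]) whose mean
    is linear in the actions J = (J_r, J_z, J_phi).
    C0 : length-2 array of zero-points for [Fe/H] and [Mg/Fe]
    C1 : 2x3 array of gradient coefficients C_{i,Ja}
    cov: 2x2 covariance matrix (dispersions and any correlation)
    """
    mean = C0 + C1 @ np.asarray(J)
    diff = np.asarray(c) - mean
    norm = 2 * np.pi * np.sqrt(np.linalg.det(cov))
    return np.exp(-0.5 * diff @ np.linalg.solve(cov, diff)) / norm

# Placeholder numbers, for illustration only (not the fitted values):
C0  = np.array([0.1, 0.05])
C1  = np.array([[ 2e-4, -1e-4, -0.35e-3],    # d[Fe/H]/d(J_r, J_z, J_phi)
                [-1e-5,  2e-5,  0.05e-3]])   # d[Mg/Fe]/d(J_r, J_z, J_phi)
cov = np.diag([0.2**2, 0.05**2])
print(chemical_pdf(c=[0.0, 0.1], J=[30.0, 20.0, 1800.0], C0=C0, C1=C1, cov=cov))
```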
In addition to the sharp boundary at \(J_{\phi}=0\), plots of \(\langle\)[Mg/Fe]\(\rangle\) show a sharp transition at \(J_{\phi}\simeq 2000\,\mathrm{kpc\,km\,s^{-1}}\) that we interpret as radial truncation of the high-\(\alpha\) disc. We started this work in the hope that it would prove possible to model the chemical structure of action space satisfactorily by assigning a simple chemical pdf to each stellar component. To an extent the quality of the fits displayed by Figs 15 to 19 dashes this hope. Eleven parameters are required to specify the pdf of each component and yet the model fails to capture some significant aspects of the data. Complete success in our endeavour would have vindicated the reality of the components that we represent with individual DFs. The limited success actually achieved leaves the reality of the standard components open to doubt. It is likely that the replacement of the halo and bulge DFs by better functional forms would make it possible to obtain better fits even without restructuring the chemical model. DFs that are not confined to one sign of \(J_{\phi}\) are subject to subtle constraints, as will be explained elsewhere (Binney in preparation). The parameters of the functional form used for the stellar and dark-matter DFs cannot be freely varied if unphysical kinematics are to be avoided, with the consequence that in the model neither component is as radially biased as it probably should be. The data imply that the bulge, like the discs, lies overwhelmingly at \(J_{\phi}>0\) but its DF should surely not vanish completely at \(J_{\phi}<0\), as it does in the model. This artificial vanishing of the bulge DF may be responsible for the poor match of the model's circular-speed curve to the points from Wegg & Gerhard (2013) in Fig. 8. Including in the modelling some observational bias against low-metallicity stars may also improve fits. So long as we use DFs of the current forms, it seems we must contemplate assigning chemistry in a more sophisticated way than we have here. Fig. 6, which examines the distribution of stars over the ([Fe/H],[Mg/Fe]) plane at locations in action space that are dominated by particular components, indicates the nature of the challenge through the extent to which mean values of the actions (represented by colour) vary with chemistry. \(C_{1,J_{\phi}}\) is the only gradient coefficient conventionally considered, but Table 5 shows that it is by no means the only significant coefficient, even bearing in mind that \(J_{\phi}\) spans a wider range than the other two actions. A model such as ours can be used to make innumerable predictions regarding orbits around the Galaxy and the density of stars of given chemistry at any location in phase-space; here we have shown only a tiny selection of such predictions. For example, it would be interesting to produce plots like Fig. 10 of Eilers et al. (2022) or to fit the functional forms of Lian et al. (2022) to the spatial structures of our low- and high-\(\alpha\) discs and compare with the values Lian et al. obtained. Using the agama package it is easy to make predictions in seconds by drawing from the model samples of stars with known component memberships. In addition to yielding mock observations, such samples can be used as initial conditions for an N-body simulation.
Vasiliev (2019) showed that simulations started in this way form systems that are precisely in equilibrium, and subtle departures from equilibrium can be explored by slightly shifting the phase-space coordinates returned by agama before advancing the N-body model in time. ## Data Availability The code that generates Galaxy models and fits the present chemical models can be downloaded from the agama website [https://github.com/GalacticDynamics-Oxford/Agama](https://github.com/GalacticDynamics-Oxford/Agama).
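For readers who want to experiment with mock samples of the kind referred to above, the fragment below sketches the generic agama workflow: build a potential and a DF, draw a sample, and compute the actions of the sampled stars. The component used here (a simple spherical model) and all parameter values are placeholders for illustration, not the fitted model of this paper, and the exact constructor arguments should be checked against the agama documentation.

```python
import agama

agama.setUnits(mass=1, length=1, velocity=1)    # Msun, kpc, km/s

# Placeholder model: a spherical Plummer potential with its isotropic DF.
pot = agama.Potential(type='Plummer', mass=1e11, scaleRadius=5.0)
df  = agama.DistributionFunction(type='QuasiSpherical', potential=pot)

# Draw a mock sample and compute actions (J_r, J_z, J_phi) for each star.
posvel, mass = agama.GalaxyModel(pot, df).sample(10000)
actions = agama.ActionFinder(pot)(posvel)
print(actions[:3])
```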
2309.01294
AlphaZero Gomoku
In the past few years, AlphaZero's exceptional capability in mastering intricate board games has garnered considerable interest. Initially designed for the game of Go, this revolutionary algorithm merges deep learning techniques with the Monte Carlo tree search (MCTS) to surpass earlier top-tier methods. In our study, we broaden the use of AlphaZero to Gomoku, an age-old tactical board game also referred to as "Five in a Row." Intriguingly, Gomoku has innate challenges due to a bias towards the initial player, who has a theoretical advantage. To add value, we strive for a balanced game-play. Our tests demonstrate AlphaZero's versatility in adapting to games other than Go. MCTS has become a predominant algorithm for decision processes in intricate scenarios, especially board games. MCTS creates a search tree by examining potential future actions and uses random sampling to predict possible results. By leveraging the best of both worlds, the AlphaZero technique fuses deep learning from Reinforcement Learning with the balancing act of MCTS, establishing a fresh standard in game-playing AI. Its triumph is notably evident in board games such as Go, chess, and shogi.
Wen Liang, Chao Yu, Brian Whiteaker, Inyoung Huh, Hua Shao, Youzhi Liang
2023-09-04T00:20:06Z
http://arxiv.org/abs/2309.01294v1
# AlphaZero Gomoku ###### Abstract In the past few years, AlphaZero's exceptional capability in mastering intricate board games has garnered considerable interest. Initially designed for the game of Go, this revolutionary algorithm merges deep learning techniques with the Monte Carlo tree search (MCTS) to surpass earlier top-tier methods. In our study, we broaden the use of AlphaZero to Gomoku, an age-old tactical board game also referred to as "Five in a Row." Intriguingly, Gomoku has innate challenges due to a bias towards the initial player, who has a theoretical advantage. To add value, we strive for a balanced game-play. Our tests demonstrate AlphaZero's versatility in adapting to games other than Go. MCTS has become a predominant algorithm for decision processes in intricate scenarios, especially board games. MCTS creates a search tree by examining potential future actions and uses random sampling to predict possible results. By leveraging the best of both worlds, the AlphaZero technique fuses deep learning from Reinforcement Learning with the balancing act of MCTS, establishing a fresh standard in game-playing AI. Its triumph is notably evident in board games such as Go, chess, and shogi. ## 1 Introduction Reinforcement learning (RL) is a pivotal and rapidly advancing domain within contemporary artificial intelligence research. It offers a distinctive framework wherein agents progressively improve their performance not through explicit instruction but through continual interaction with their surroundings [1; 2]. As these agents take actions in a given environment, they are provided feedback in the form of either rewards for desirable actions or penalties for undesirable ones [3]. This trial-and-error mechanism aids the agents in understanding the consequences of their actions and refining their strategies accordingly. The primary goal of RL is to determine an optimal strategy, often termed as a "policy", which instructs the agent on the best possible action to take in any given situation. The optimal policy is one that, when followed, will lead to the maximization of the cumulative rewards over a period, ensuring that the agent's actions result in the most favorable outcomes in its environment. Board games, with their intricate complexities and well-defined reward structures, make a fitting domain for RL, offering a perspective ripe for academic inquiry. MCTS (Monte Carlo tree search) [4] has emerged as a leading algorithm for decision-making in these complex environments. It constructs a search tree by exploring possible future moves, using statistical sampling to evaluate the potential outcomes. Recently, deep learning has catalyzed groundbreaking developments in diverse research domains, from computer vision and natural language processing to state-of-the-art recommender systems [5; 6; 7]. Bridging the gap between RL and MCTS, the original AlphaGo [8] algorithm showcased a fusion of deep learning and tree search techniques, revolutionizing the game-playing AI landscape. This groundbreaking approach further evolved with the introduction of AlphaZero [9], which uses no human knowledge or experience of the game and removes the supervised learning stage, allowing the algorithm to master the game through self-play alone. Gomoku, often referred to as "Five in a Row," is usually played on a 15x15 grid (though variations can feature larger grids).
The game's objective is straightforward yet captivating: two players, typically designated as black and white, take turns placing stones on the board with the aim to align five of their own stones consecutively in a vertical, horizontal, or diagonal line. The game's seemingly simple rules mask a depth of strategy. Early moves tend to be concentrated around the center of the board, providing players with maximal opportunities to expand and form their sequences. As the game progresses, the board transforms into a complex battleground of potential sequences, blocked attempts, and intricate traps. The nature of the game allows for both defensive and offensive tactics. A player might focus on preventing their opponent from completing a sequence, or strategically placing their stones to create multiple potential winning avenues simultaneously. Its straightforward rules combined with its profound complexity make it an ideal candidate for studying artificial intelligence's progress and potential in mastering classic board games. In recent decades, there have been concerted efforts to solve Gomoku using computational methods. One notable attempt was by Allis [10], who employed proof-number search algorithms to analyze specific game positions and paths, making significant strides in understanding the game's complexities. Another significant contribution came from Chen [11], who utilized pattern recognition and threat-space search techniques to advance AI capabilities in Gomoku, offering a fresh perspective on potential winning strategies. Driven by the recent monumental strides in board game artificial intelligence, especially the unparalleled triumphs of the AlphaZero algorithm, we were compelled to believe that harnessing this cutting-edge approach for the game of Gomoku was not only feasible but imperative. In embarking on this ambitious journey, our contributions to the Gomoku AI research landscape manifest in two significant dimensions: 1. We generalized the AlphaZero approach for the Gomoku game, achieving impressive results. Initiating from a state of random play, and without any domain knowledge apart from the game rules, our model swiftly learned a winning strategy on a 6x6 table after just a few hours of training on an economical GPU. 2. We embarked on an extensive research endeavor, wherein we juxtaposed the efficacy of our refined AlphaZero methodology against a conventional method that exclusively leverages the Monte Carlo tree search (MCTS). Our aim was to critically assess how these two distinct techniques fare in terms of both efficiency and effectiveness under comparable conditions, to shed light on their relative strengths and potential areas of improvement. ## 2 Method ### Value and Policy Network Deep neural network in AlphaZero [9] and AlphaGo [8] often employs two primary neural networks: the Value Network (\(V\)) and the Policy Network (\(\pi\)). * **Value Network (\(V\))**: This network estimates the value of a given state, i.e., the expected outcome from that state. Formally, for a given state \(s\), \(V(s)\) predicts the expected outcome, with values close to +1 indicating favorable outcomes for the player and values close to -1 indicating unfavorable outcomes. \[V(s)\approx\mathbb{E}[r|s]\] (1) where \(\mathbb{E}\) is the expectation and \(r\) is the eventual reward. * **Policy Network (\(\pi\))**: This network provides a probability distribution over all possible moves from a given state. 
For a state \(s\) and an action \(a\), \(\pi(a|s)\) represents the probability of taking action \(a\) as a highly optimized game player. \[\pi(a|s)=P(a\text{ is the best move}\mid s)\] (2) The neural network structure we use is shown in Figure 1.
Figure 1: Value and Policy Networks
### Monte Carlo Tree Search (MCTS) Monte Carlo Tree Search (MCTS) stands out as a revolutionary algorithm, reshaping decision-making processes in intricate environments through the methodical construction of a search tree. At its core, the algorithm meticulously evaluates prospective game moves, striking a harmonious equilibrium between the dual tenets of exploration (unearthing new moves) and exploitation (leveraging known advantageous moves). The integration of Policy and Value networks into this framework bestows it with unparalleled depth and precision: * The Policy Network, through its discerning output, serves as the beacon guiding the expansion of the search tree. Instead of branching out indiscriminately, it casts the spotlight on moves radiating promise and potential, ensuring that the exploration process remains strategic and focused. * On the other hand, the Value Network steps in as an adept evaluator, meticulously scrutinizing leaf nodes within the tree. This network diminishes the traditional reliance on random rollouts for evaluation, infusing the process with a heightened level of precision. This capability not only speeds up the evaluation but also endows it with a more profound insight into the game's dynamics. In essence, the Policy Network acts as a compass, navigating the vast possibilities in the MCTS landscape and directing it towards potentially rewarding paths. Simultaneously, the Value Network functions as an astute analyst, swiftly gauging the potential outcomes of different game scenarios. Their synergistic interplay ensures that the search process within MCTS remains both streamlined and enriched, leading to decisions that are both efficient and strategically sound. ### Environment and Supervised Learning Within the realm of reinforcement learning, our agent actively interacts with a specially-designed Gomoku gaming environment, drawing feedback in the form of rewards or penalties based on its moves. As depicted in Figure 2, we meticulously implemented a Gomoku game board that closely mirrors traditional gameplay. Given the computational overhead associated with larger boards, we strategically focused our experimental investigations on boards sized \(6\times 6\), targeting a 4-in-a-row win condition, and \(8\times 8\), targeting the standard 5-in-a-row. To provide a comprehensive representation of each game state, we innovatively devised four distinct binary feature matrices. These matrices encapsulate essential game facets, including the current player's move, the adversary's move, the most recent move, and the initiating player. Notably, these matrices not only serve as a holistic game state representation but also act as pivotal input layers for our deep learning neural network. In terms of game mechanics, we faithfully incorporated Gomoku's conventional victory criteria. Moves are delineated at board intersections, eschewing placement within board squares. Traditionally, the white player initiates the game, with both players alternating their moves in succession until a conclusive game outcome is achieved. The essence of the game revolves around players positioning a stone of their chosen color on an unoccupied intersection.
Triumph is heralded by the first player to strategically place five of their stones consecutively, irrespective of orientation - horizontal, vertical, or diagonal. In the rare scenario where the board reaches saturation without either contender achieving the coveted five-in-a-row, the game is ceremoniously declared a stalemate. While Gomoku might superficially seem straightforward, it belies a profound strategic depth, characterized by its myriad winning patterns and tactical nuances. Gomoku's strategic intricacies are underscored by the delicate balance and importance of specific board configurations, notably the 'threes' and 'fours'. These patterns play pivotal roles in dictating the pace and outcome of a match. When leveraged effectively, they can swiftly shift the advantage to one player, often pushing the opponent into a corner from which recovery becomes arduous. Diving into the nuances, the 'four' configuration is a fascinating tactical alignment where four stones of identical color stand in unison, beckoning a potential game-winning fifth stone in the subsequent move. The looming threat of this alignment is palpable, sending clear signals of an impending victory. Recognizing this, an opponent is thrust into a defensive mode, compelled to respond instantly. Failure to address this sequence by obstructing the alignment invariably results in a loss, testament to its lethal efficacy. Equally compelling, yet distinct in its strategic implications, is the 'fork' configuration. In this maneuver, a player crafts a masterstroke with a single move, spawning two formidable attack sequences in tandem. The duality of the threat is what sets the 'fork' apart: it presents a dual quandary that the opponent must navigate. The challenge is steep; it's nearly impossible to stymie both threats concurrently. Thus, successfully engineering a fork is often tantamount to clinching the game, rendering the adversary powerless in the face of this double-edged assault. To truly appreciate the visual elegance and tactical profundity of these configurations, one can refer to Figure 3 and Figure 4. These illustrations vividly capture the essence of 'threes', 'fours', and the enigmatic 'fork', underscoring their pivotal roles in the beautiful complexity of Gomoku.
Figure 2: Gomoku game board from our implementation
Figure 3: ’Four’ winning pattern
## 3 Result Our experimentation painted an optimistic picture when applying the AlphaZero methodology to the Gomoku game. Significantly, our rendition not only succeeded but boasted an impeccable 100% victory rate as the initiating player during self-play assessments. Moreover, as the succeeding player, the algorithm manifested a keen aptitude for defense, coupled with a proactive stance towards identifying and capitalizing on counterattack chances. A detailed exemplification of this nuanced behavior is cataloged in Appendix I. In our research, we undertook an in-depth comparative study, juxtaposing the performance of the AlphaZero approach against the traditional Monte Carlo tree search (MCTS) method. To furnish a comprehensive perspective, we analyzed a spectrum of iterations, spanning from 500 to 2500 in number. The ensuing patterns and performance distinctions are graphically represented in Figure 5. The empirical data unambiguously accentuates AlphaZero's superior efficacy, as it continually eclipses the performance benchmarks set by the MCTS method.
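To make the interplay between the two networks and the tree search concrete, the fragment below sketches the PUCT-style selection and backup that AlphaZero-family methods typically use: the policy network's prior biases which children are explored, and the value network's estimate is backed up in place of a random rollout. This is a generic illustration of the standard scheme, not the authors' released code; the constant `c_puct` and all structural choices here are placeholders.

```python
import math

class Node:
    """One node of the search tree."""
    def __init__(self, prior):
        self.prior = prior        # P(a|s) from the policy network
        self.visits = 0           # N(s, a)
        self.value_sum = 0.0      # W(s, a): sum of backed-up value estimates
        self.children = {}        # move -> Node

    def q(self):
        return self.value_sum / self.visits if self.visits else 0.0

def select_child(node, c_puct=1.5):
    """PUCT rule: maximise Q(s,a) + c_puct * P(a|s) * sqrt(N(s)) / (1 + N(s,a))."""
    n_parent = sum(child.visits for child in node.children.values())
    def score(item):
        _, child = item
        u = c_puct * child.prior * math.sqrt(n_parent + 1) / (1 + child.visits)
        return child.q() + u
    return max(node.children.items(), key=score)

def expand(node, move_priors):
    """Expand a leaf using the policy network's output {move: probability}."""
    for move, p in move_priors.items():
        node.children[move] = Node(prior=p)

def backup(path, leaf_value):
    """Back up the value network's estimate along the visited path,
    flipping the sign at each ply (two-player, zero-sum game)."""
    v = leaf_value
    for node in reversed(path):
        node.visits += 1
        node.value_sum += v
        v = -v
```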
Figure 4: ‘Fork’ winning pattern
Figure 5: Compare AlphaZero with MCTS
## 4 Conclusion We meticulously crafted a simulation environment tailored for the game of Gomoku and, within this context, developed a specialized agent. By adopting and integrating the AlphaZero methodology into our platform, we were able to achieve not only functional outcomes but also results that exceeded our initial expectations. These findings underscore the profound efficacy of the AlphaZero technique in mastering intricate board games, such as Gomoku, demonstrating its versatility and robustness in diverse gaming scenarios.
2307.09364
Local Minima Drive Communications in Cooperative Interaction
An important open question in human-robot interaction (HRI) is precisely when an agent should decide to communicate, particularly in a cooperative task. Perceptual Control Theory (PCT) tells us that agents are able to cooperate on a joint task simply by sharing the same 'intention', thereby distributing the effort required to complete the task among the agents. This is even true for agents that do not possess the same abilities, so long as the goal is observable, the combined actions are sufficient to complete the task, and there is no local minimum in the search space. If these conditions hold, then a cooperative task can be accomplished without any communication between the contributing agents. However, for tasks that do contain local minima, the global solution can only be reached if at least one of the agents adapts its intention at the appropriate moments, and this can only be achieved by appropriately timed communication. In other words, it is hypothesised that in cooperative tasks, the function of communication is to coordinate actions in a complex search space that contains local minima. These principles have been verified in a computer-based simulation environment in which two independent one-dimensional agents are obliged to cooperate in order to solve a two-dimensional path-finding task.
Roger K. Moore
2023-07-18T15:48:37Z
http://arxiv.org/abs/2307.09364v1
# Local Minima Drive Communications in Cooperative Interaction ###### Abstract An important open question in human-robot interaction (HRI) is precisely _when_ an agent should decide to communicate, particularly in a cooperative task. Perceptual Control Theory (PCT) tells us that agents are able to cooperate on a joint task simply by sharing the same 'intention', thereby distributing the effort required to complete the task among the agents. This is even true for agents that do not possess the same abilities, so long as the goal is observable, the combined actions are sufficient to complete the task, and there is no local minimum in the search space. If these conditions hold, then a cooperative task can be accomplished _without_ any communication between the contributing agents. However, for tasks that _do_ contain local minima, the global solution can only be reached if at least one of the agents adapts its intention at the appropriate moments, and this can only be achieved by appropriately timed communication. In other words, it is hypothesised that in cooperative tasks, the function of communication is to coordinate actions in a complex search space that contains local minima. These principles have been verified in a computer-based simulation environment in which two independent one-dimensional agents are obliged to cooperate in order to solve a two-dimensional path-finding task. cooperation, communication, interaction, perceptual control theory, search, local minima ## I Introduction An important open question in human-robot interaction (HRI) is precisely _when_ an agent should decide to communicate [1]. Unfortunately, research in human-human interaction has been obsessed with 'turn-taking' as the underlying mechanism [2, 3, 4, 5], somewhat overlooking the observation that conversation can overlap as well as interleave [6, 7], as well as ignoring the question as to _why_ agents should communicate in the first place [8]. Clearly, communication supports information exchange [9, 10] and learning [11], but more importantly it facilitates collaborative problem solving [12] and goal sharing [13], i.e. _cooperation_. However, little research has been conducted into what conditions the timing and structure of communication in continuous cooperative interaction [14]. This paper addresses these issues from the perspective of Perceptual Control Theory (PCT) [15]. Results are presented from a PCT-based simulation of a cooperative task, and it is shown how appropriately timed communication between agents can overcome local minima in a joint problem space. ## II Communication in Cooperation Perceptual Control Theory (PCT) is founded on the mantra "_behaviour is the control of perception_", and agents are modelled as a hierarchy of negative-feedback control loops. Solidly grounded in the tradition of 'cybernetics' [16], PCT has been shown to be capable of accounting for a wide range of 'intelligent' phenomena based on a parsimonious architecture of replicated closed-loop structures [17]. In particular, PCT tells us that agents are able to cooperate on a joint task simply by sharing the same reference signal, i.e. by having the same _intention_[18]. The consequence is that the effort required to complete a task may be distributed among the agents involved. 
However, it is claimed here that successful convergence towards a solution of a joint task is based on three assumptions: * the goal is observable (that is, each agent has an appropriate input function), * the combined actions are sufficient to complete the task (that is, the agents possess complementary output functions), and * the goal is accessible (that is, there are no _local minima_ in the search space). If these three conditions are met, then a cooperative task can be accomplished _without_ any communication between the contributing agents. This means that, for tasks that _do_ have local minima, the global solution can only be reached if at least one of the agents adapts its intention at the appropriate moment(s). That is, an agent may need to abandon its original goal in favour of a temporary alternative that facilitates an escape from a local minimum. Such behaviour requires timely coordination between the agents, and this can only be achieved by appropriately-timed _communication_. In other words, it is hypothesised that, in cooperative tasks, one function of communication is to coordinate actions in a complex search space that contains one or more local minima. From a PCT perspective, this implies that a perceived signal from one agent should trigger a change in a reference signal for another agent. This hypothesis has been verified in a computer-based simulation in which two independent one-dimensional agents are obliged to communicate (that is, actively cooperate) in order to solve a two-dimensional path-finding task [19]. ## III Simulation Environment The simulation environment - implemented in the Pure Data (Pd) dataflow programming language [20, 21] - is illustrated in Fig. 1. Two 1D agents control the X and Y positions of a 'vehicle' in a 2D space, the task being to steer the vehicle towards a 'target' location. Each agent can only 'see' the target in its single dimension, hence cooperation _may_ be required to solve the joint 2D problem. The difficulty of the task is scaled by the introduction of various forms of obstruction (as illustrated in Fig. 1), and the 'solution time' (ST) for each successful run was measured. There are many configurations in which each controlling X and Y agent can move the vehicle towards the target by reducing their individual 'error' in a monotonic fashion (that is, by gradient descent), even if there are barriers present. For example, Fig. 2 shows a configuration with three barriers but _no_ local minimum. Also, some barrier configurations create situations in which it is impossible for the vehicle to reach the target at all - see Fig. 3. However, some configurations (such as the one shown in Fig. 1) create situations which require agents to _increase_ their error momentarily in order for the vehicle to eventually reach the target. For example, if one agent has reached its target (in 1D), but the other is stuck behind a barrier, then the first needs to be requested to abandon its target temporarily in an attempt to free the second agent. Hence, the presence/absence of timely communications is critical in determining whether a run is ultimately successful or not. Since not all configurations are solvable, the simulation environment was set up such that any experimental run lasting longer than 30 seconds was terminated and marked as 'did not finish' (DNF). In such cases, the solution time was ignored in subsequent data analysis.
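As a stripped-down, text-only analogue of the set-up just described (the actual environment is a Pd patch), the sketch below implements two independent 1D negative-feedback controllers that each reduce their own error towards a shared target, while a barrier test can block the combined move. When the barrier lies between vehicle and target, the loop stalls in a local minimum and the run is a DNF, since no communication is implemented here. The gain, step logic and barrier representation are placeholder choices for illustration only, not the values used in the Pd patch.

```python
# Minimal sketch (not the Pd implementation): two 1D negative-feedback
# agents jointly steer a 2D 'vehicle' towards a target. Each agent only
# perceives its own axis; a barrier test blocks the combined move.
def blocked(pos, step, barriers):
    """Placeholder collision test: True if the move would enter a barrier."""
    nxt = (pos[0] + step[0], pos[1] + step[1])
    return any(b(nxt) for b in barriers)

def run(target, start, barriers, gain=0.2, steps=500, tol=0.01):
    pos = list(start)
    for t in range(steps):
        # Each agent controls its perception of one coordinate only.
        step = [gain * (target[i] - pos[i]) for i in (0, 1)]
        if max(abs(s) for s in step) < tol:
            return t, pos                    # target reached
        if blocked(pos, step, barriers):
            continue                         # 'stuck': no communication, no escape
        pos = [pos[i] + step[i] for i in (0, 1)]
    return None, pos                         # did not finish (DNF)

# Example: one horizontal barrier directly between start and target.
barriers = [lambda p: 0.4 < p[1] < 0.5 and p[0] < 0.8]
print(run(target=(0.2, 0.9), start=(0.2, 0.1), barriers=barriers))
```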
Fig. 1: Screenshot of the Pd-based simulation environment showing the target (green square), the vehicle (red circle) and two barriers (yellow lines). The X and Y axes depict the 1D projection of the vehicle, target and barriers (if visible from the agent’s perspective). The number in the top-right corner indicates the elapsed time in the current run, and the number in the top-left corner shows the number of runs completed. Fig. 3: Screenshot of the Pd-based simulation environment showing a configuration with three barriers in which it is impossible for the vehicle to reach the target. Fig. 2: Screenshot of the Pd-based simulation environment showing a configuration with three barriers but _no_ local minimum. This means that the vehicle can reach the target without getting ‘stuck’. Table I lists the variables instantiated in the simulations. Overall, five cooperation modes were implemented _per agent_ (listed at the bottom of Table I), and different combinations were able to be specified by means of an agent-specific 4-bit binary code. This meant that there was a total of 16 possible levels of cooperation available for each agent. Two of these involved no communication at all, but distinguished between just stopping at an obstruction (i.e. no active cooperation) versus moving randomly (i.e. potential cooperation _without_ communicating). Of particular interest are each agent's 'status' parameters that were available to be communicated _for a given level of cooperation_. These are marked with a * in Table I. The first parameter - "_stuck_" - relates to the identification of a potential local minimum. Such a condition arises when one agent has collided with a barrier and the other has arrived at the target, or when both agents have collided with barriers. Crucially, it was realised that just one agent being stuck at a barrier is not sufficient evidence for a local minimum, as the other agent may be making progress which could resolve the problem. The second parameter - "_access_" - relates to whether the target was accessible, i.e. there was no barrier between the agent and the target. However, it is important to appreciate that such a condition does not guarantee a successful approach, as the target may subsequently become inaccessible for one agent due to the activities of the other agent. ## IV Experiments & Results A number of experiments have been conducted, each using multiple simulation runs to investigate different configurations of obstacles and levels of cooperation. For example, Fig. 4 shows the distribution of solution times resulting from 1000 runs in an environment containing two fixed barriers (configured as shown in Fig. 1) for four incremental levels of cooperation. As expected, enabling explicit communication between the agents had a measurable effect in speeding up solution times. However, it was also noted that the low solution times for [1000] were due to the high number of runs that did not finish (DNF). In particular, the results revealed that [1000] gave rise to 64% DNFs, whereas [1100] had 13% DNFs, [1110] had 1% DNFs, and [1111] had only 0.6% DNFs. In attempting to analyse the results of the more complex cooperation experiments, it became clear that an overall 'goodness measure' was needed in order to resolve the compromise between fast solution times and the numbers of runs that did not finish.
This was necessary because, as seen above, a high number of DNFs tends to give rise to a low mean solution time because the runs that succeed have less challenging barrier configurations. Likewise, a low number of DNFs may be associated with higher mean solution times as a consequence of the cooperating agents taking longer to solve more challenging barrier configurations. Hence, an appropriate 'goodness measure' was defined as: \[GM=\log\bigl(ST^{1+\frac{DNF}{nruns}}\bigr), \tag{1}\] where \(GM\) is the goodness measure (low is good), \(ST\) is the mean solution time for a run, \(DNF\) is the number of times a run did not finish, and \(nruns\) is the number of runs. Fig. 5 shows the combined results from the solution times shown in Fig. 4 and the corresponding number of DNFs plotted using the goodness measure. This representation clearly shows that, as expected, increasing the level of cooperation between the agents leads to significant improvements in their ability to solve the designated task. Fig. 4: Distributions of solution times for different levels of cooperation in an environment containing two fixed barriers. ### _Matched Agents_ As mentioned above, the simulation environment allowed the cooperation level to be set for each agent independently. However, due to the combinatorics, the majority of experiments were conducted with _matched_ agents. For example, Fig. 6 shows the impact of all sixteen levels of cooperation ranked by the 'goodness' of the outcome for matched agents in an environment containing three randomly placed barriers with random lengths and orientations. As can be seen in Fig. 6, the relationship between different combinations of cooperation and the goodness measure reveals that the cooperation combination [0111] "_arrived_"+"_stuck_", "_stuck_"+"_stuck_" and "_access_"+"_access_" gives rise to the best overall performance. The second-best is [0110] "_arrived_"+"_stuck_" and "_stuck_"+"_stuck_". Next is [1110] "_random movements_", "_arrived_"+"_stuck_", and "_stuck_"+"_stuck_", and fourth is [0010] "_stuck_"+"_stuck_". The following three combinations [1010], [0011] and [1011] also have relatively high 'goodness', and confirm that the top eight all have [0010] "_stuck_"+"_stuck_" enabled, and performance drops significantly without it. The highest number of DNFs was 854/1000 (for [0001]), the lowest was 284/1000 (for [0111]), and there were 769/1000 DNFs for no cooperation at all ([0000]). These results imply that up to 28% of barrier configurations were unsolvable and \(\sim\)23% were solvable _without_ cooperation, which means that \(\sim\)49% were able to be solved _with_ cooperation. The fact that [0001] resulted in a higher number of DNFs than [0000] implies that enabling the "_access_"+"_access_" strategy was actually detrimental to performance. With regard to the proportion of time agents spent communicating, the results shown in Fig. 7 reveal that there is a clear relationship between the goodness of the cooperation combinations and the proportion of time the agents spent communicating. As noted above, this is a function of whether [0010] "_stuck_"+"_stuck_" is enabled or disabled, and it clearly reflects the frequency with which situations containing local minima arise given the three random barriers. It is also interesting to note that the highest levels of communication occurred for the top two cooperation combinations.
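For reference, Eq. (1) is straightforward to reproduce; the helper below computes the goodness measure from a batch of runs, using the natural logarithm (the base is not specified in the text, so this is an assumption).

```python
import math

def goodness(solution_times, dnf, nruns):
    """Goodness measure of Eq. (1): GM = log(ST^(1 + DNF/nruns)),
    where ST is the mean solution time of the runs that finished."""
    st = sum(solution_times) / len(solution_times)
    return math.log(st ** (1 + dnf / nruns))

# Example: 1000 runs, 130 DNFs, mean solution time ~6.2 s over finished runs.
print(goodness([6.2] * 870, dnf=130, nruns=1000))
```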
Finally, the correlation between the mean % time spent communicating and mean solution times was consistently \(\sim\)0.4 for the best eight cooperation combinations. This shows that harder barrier configurations required proportionally more inter-agent communications, as well as taking longer to solve. Fig. 5: Relationship between the cooperation level and the ‘goodness measure’ (low is good) in an environment containing two fixed barriers. Fig. 6: Relationship between different cooperation combinations ordered by their ‘goodness’ (low is good) for matched agents in an environment containing three randomly placed barriers with random lengths and orientations. Fig. 7: Relationship between different combinations of cooperation and the % of time the X and Y agents were communicating for matched agents in an environment containing three randomly placed barriers with random lengths and orientations. The results are ordered by ‘goodness’ from left to right. ### _Mismatched Agents_ As an example of the consequences of allowing the cooperation level to be set independently for each agent, Fig. 8 shows results for all combinations of matched and _mismatched_ agents in an environment containing three randomly placed barriers with random lengths and orientations. The results of this experiment showed that up to 26% of barrier configurations were unsolvable and \(\sim\)30% were solvable _without_ cooperation. This meant that \(\sim\)44% were able to be solved _with_ cooperation. What is particularly interesting in the results shown in Fig. 8 is that the good solutions are not confined to the diagonal, i.e. not restricted to the matched agents conditions. In fact the outcomes resulting from the best mismatched agents are comparable to those for the best matched agents. For example, the best performance over all cooperation combinations was obtained for [1111]+[0010] (i.e. where one agent had all cooperation modes enabled, and the other agent was only responding to "_stuck_"), and this result was slightly better than the best matched agents at [0110]+[0110]. Further investigation into the consequences of allowing the cooperation level to be set independently for each agent was made by comparing the performance of the best matched and mismatched combinations mentioned above (i.e. [1111]+[0010] versus [0110]+[0110]) with varying numbers of barriers. The results (shown in Fig. 9) reveal that the matched agents performed slightly better than the mismatched agents. However, as can be seen in Fig. 10, the mismatched agents communicated less than the matched agents, with the difference being proportionally larger for the more difficult barrier configurations. ## V Discussion Clearly, the task posed here is related to finding an optimal route on a map. As such, two scenarios are possible: (i) the distance between the vehicle's current position and the target is known, but gradient descent may lead to a local minimum, or (ii) the distance between the vehicle's current position and the target is unknown due to an obstruction. The first of these may be solved by _planning_ (assuming that the map is known), or by recognising that a local minimum has occurred and trying to jump out stochastically. In the second scenario, only random search is possible. However, this map-based analysis is based on the privileged perspective of a 2D agent. In the task posed in this paper, the agents were purposefully designed to be 1D, precisely so that they did _not_ have access to a 2D map.
Fig. 8: Heat map of the goodness measure for all combinations of X and Y cooperation for mismatched agents in an environment containing three randomly placed barriers with random lengths and orientations. Blue corresponds to the best, and red to the worst. Fig. 10: Relationship between the number of random barriers and the total % of time the X and Y agents were communicating for matched and mismatched agents. Fig. 9: Relationship between the goodness measure and the number of random barriers for matched and mismatched agents. This meant that planning was not possible, and the recognition of arriving at a local minimum (or of simply not being able to see the target) required message-passing between the agents, i.e. explicit cooperation by communication. Another insight to emerge from this work is the realisation that communication may be achieved by signalling (i.e. a 'push' from a sending agent) or by observation (i.e. a 'pull' from a receiving agent). Clearly, the latter is less efficient due to the need for continuous monitoring. Hence it can be said that, while an attention mechanism may be important, raising alerts in a timely manner is critical to success in a cooperative task. It is also interesting to note that the overall paradigm is not specifically concerned with explicit message passing. Given that the _meanings_ of the particular messages have implications for the subsequent behaviour, the scenario may also be viewed as one agent needing to appreciate the other's situation. In other words, timely communications to overcome local minima in a cooperative problem space may be viewed as instantiating a primitive 'theory-of-mind' [22]. ## VI Summary and Conclusion This paper has addressed the question as to what conditions the timing and structure of communication in continuous cooperative interaction. Experiments have been conducted using a PCT-based simulation of a cooperative task in which two independent one-dimensional agents are obliged to communicate in order to solve a two-dimensional path-finding problem. Results from a number of simulation experiments have confirmed the hypothesis that appropriately timed communication between agents can overcome local minima in a joint problem space. It has also been shown that asymmetric levels of cooperative communication can be as effective as equally matched partners, and can even reduce the level of communications required to achieve the same level of performance. Finally, although this study was aimed at _extrinsic_ communication between multiple agents, it is interesting to note that the results also apply to _intrinsic_ communications within a single agent.
2308.08827
Factuality Detection using Machine Translation -- a Use Case for German Clinical Text
Factuality can play an important role when automatically processing clinical text, as it makes a difference if particular symptoms are explicitly not present, possibly present, not mentioned, or affirmed. In most cases, a sufficient number of examples is necessary to handle such phenomena in a supervised machine learning setting. However, as clinical text might contain sensitive information, data cannot be easily shared. In the context of factuality detection, this work presents a simple solution using machine translation to translate English data to German to train a transformer-based factuality detection model.
Mohammed Bin Sumait, Aleksandra Gabryszak, Leonhard Hennig, Roland Roller
2023-08-17T07:24:06Z
http://arxiv.org/abs/2308.08827v1
# Factuality Detection using Machine Translation - a Use Case for German Clinical Text ###### Abstract Factuality can play an important role when automatically processing clinical text, as it makes a difference if particular symptoms are explicitly not present, possibly present, not mentioned, or affirmed. In most cases, a sufficient number of examples is necessary to handle such phenomena in a supervised machine learning setting. However, as clinical text might contain sensitive information, data cannot be easily shared. In the context of factuality detection, this work presents a simple solution using machine translation to translate English data to German to train a transformer-based factuality detection model. ## 1 Introduction Factuality refers to the concept that a speaker can present statements about world events with varying degrees of uncertainty as to whether they happened. Factuality reflects, for instance, if an event is affirmed, negated, or uncertain. In the medical domain, detecting if symptoms or diseases are signaled as present, not present, possibly or doubtfully present, and therefore uncertain is essential. Detecting factuality is challenging since it can be expressed by very different linguistic categories (e.g. verbs, nouns, adjectives, adverbs), plus it must be taken into account how they are embedded in a sentence Rudinger et al. (2018). Additionally, linguistic factuality cues can be very domain-specific, so the availability of relevant datasets is essential. Classical supervised machine learning requires training data, and, at the same time, most existing datasets are published in English. In addition, clinical text contains sensitive patient data, which often makes it difficult to share due to ethical and legal aspects. Although the situation has slowly changed regarding the availability of German clinical text resources Modersohn et al. (2022), many other languages suffer a similar situation. Conversely, the quality of machine translation has significantly improved in the last decade, also regarding the translation of biomedical text/publications, including clinical case reports Neves et al. (2022). For this reason, this work explores the usage of machine translation to create (translated) text resources for factuality detection in German clinical text. Clinical notes are short text documents written by physicians during or shortly after the treatment of a patient. In general, this kind of text contains much valuable information about the current health condition, as well as treatment, of the patient. They differ from biomedical publications and clinical case reports, as notes are often written under time pressure with a high information density, a telegraphic writing style, non-standardized abbreviations, colloquial errors, and misspellings. Therefore, it is unclear if current machine translation systems can handle this text, considering that data might contain sensitive information and should not be shared with a third party outside the hospital. This work makes the following contributions: 1) We successfully use a local machine translation to train a model for factuality detection on German clinical text. 2) Our model outperforms the only 'competitor' NegEx, and 3) will be published as open access model1. Finally, 4) for those interested in NegEx, we release it as a modular PyPI package with a few important fixes2 and also propose improvement suggestions to the used trigger sets. 
Footnote 1: [https://huggingface.co/binsumait/factual-med-bert-de](https://huggingface.co/binsumait/factual-med-bert-de) Footnote 2: [https://github.com/DFKI-NLP/pynegex](https://github.com/DFKI-NLP/pynegex) ## 2 Methods and Data The idea of this work is based on the usage of machine translation to generate a German corpus to train a classifier dealing with factuality in clinical text. In the following, we outline the approach, the necessary methods, and the dataset used. ### Factuality Detection In literature, (medical) factuality detection is often reduced to a simple classification. Given a sentence and an entity, the task is to define the factuality of the entity in the given context. In most cases, the entity of interest is a symptom or medical condition. Most related work targets the three classes **affirmed**, **negated** and **possible**. However, as simple as this sounds, factuality cannot always be easily mapped to those few classes. One of the most prominent tools to deal with factuality in the medical text is NegEx Chapman et al. (2001), a rule-based approach with pre-defined regular expressions, so-called triggers, and can detect the three aforementioned factuality classes. It achieves, particularly in the context of negations, quite good results on clinical text. Hedges instead offer more possibilities for how they are described, therefore achieving a much lower performance. Initially, it was developed for English, but over the years, it has also been translated into other languages, such as Spanish or Swedish Cotik et al. (2016); Chapman et al. (2013). In addition, many alternative (machine learning) solutions have been published in the last two decades. We refer to the overview by Khandelwal and Sawant (2019) for more details. For German, however, only one negation detection exists, which relies on the NegEx solution and uses a set of translated trigger words (English to German) Cotik et al. (2016). ### Data In the following, we briefly introduce the data used for this work. First, we present i2b2, which has been used for machine translation and to train our model. In addition, we later test our model on additional German data, namely Ex4CDS and NegEx-Ger, and in the appendix also BRONCO150. The **2010 i2b2/VA** data Uzuner et al. (2011) consists of English medical text and includes three tasks - extraction of concepts, assertions identification, and relation detection. In this work, we focus on the assertion task. Overall a total of six assertion types were considered, namely present, absent, possible, conditional, hypothetical and not associated with the patient. However, this work focused only on the first three labels, as only those are considered within NegEx. i2b2 data is translated to German to train a German machine learning model. **Ex4CDS** Roller et al. (2022) is a small dataset of physicians' notes containing explanations in the context of clinical decision support. The notes are written in German and include various annotation layers, including factuality. As the data includes multiple factuality labels, we reduced the labels to our three target labels, mapping _possible-future_ and _unlikely_ to _possible_, and _minor_ to _affirmed_. As target entities, we consider only sentences containing _medical-conditions_. **NegEx-Ger** is a small dataset consisting of sentences taken from clinical notes and discharge summaries and has been used initially to evaluate the German NegEx version in Cotik et al. (2016). 
For our use case, the data has been used for testing, and for this, we merged the sentences of both clinical text types. However, the number of sentences containing the possible label is small (22 for discharge summaries and 4 for clinical notes). ### Translation Approach For our proposed idea, two aspects need to be considered: First, we aim at a solution that could be applied to sensitive data. Therefore, the machine translation component must run locally. This means we cannot rely on the variety of existing state-of-the-art online approaches. Second, as we define factuality as a classification problem with a given sentence (context) and an entity, our translations need to keep track of the target entity within a sentence. A simple example is given in Table 1, which shows an English sentence with a target entity 'headache' and the label 'negation'. The German translation needs to keep the focus on the target entity. In this work, we rely on TransIns Steffen and van Genabith (2021), an open-source machine translation that can be installed locally. TransIns is built on MarianNMT Junczys-Dowmunt et al. (2018) framework and enables translating texts with an embedded markup language. Specifically, we translate sentences with tagged entities, as shown in Table 1. A manual inspection revealed multiple problems \begin{table} \begin{tabular}{c|l|l} **Factuality** & **English** & **German translation** \\ \hline affirmed & Clinically, a \textless{}E\textgreater{}severe neuropsychological syndrome\textless{}E\textgreater{} was found when the patient \textless{}E\textgreater{}schweres neuropsychological\textless{} Symptom\textless{}/E\textgreater{}. \\ & was taken over. \\ \hline negation & Patient denies \textless{}E\textgreater{}headache\textless{}/E\textgreater{}. & Patient vernenit \textless{}E\textgreater{}Kopfschmerzen\textless{}/E\textgreater{}. \\ \hline possible & Thus, a \textless{}E\textgreater{}tumour\textless{}/E\textgreater{} cannot be ruled out. & Ein \textless{}E\textgreater{}Tumor\textless{}/E\textgreater{} kann daher nicht ausgeschlossen werden. \\ \hline \end{tabular} \end{table} Table 1: Example sentences with target entities, factuality label, and possible translations. with the translations: In some cases (roughly 40% of the issues), translations were corrupt as they contained cryptic and/or repetitive text sequences that were foreign from the original text. Such noise patterns could partially or entirely affect the target texts' context. Or, in very few cases (only 4%), no translation output could be produced. In the rest of the cases, the markup no longer included the target entity. In any way, such output has been discarded from the data, and we resulted in 18,297 data points (initially 18,397), which we used to train and evaluate our machine learning model. ## 3 Experiments and Results We conduct three different experiments - starting with the English i2b2 data, we use Bio+Discharge Summary BERT Alsentzer et al. (2019) and compare the results to NegEx. Similar experiments have also been conducted in other papers. However, in our case, those results serve as a comparison. Thus, the model is not optimized to achieve the best possible performance. Next, we train German-MedBERT Shrestha (2021) on the translated i2b2 data and compare the results to the performance of the German NegEx implementation. Finally, we apply both German factuality approaches to different German medical texts to determine how well the models perform in a more realistic setup. 
The results of the first two experiments are presented in Table 2 and show various interesting findings: Firstly, NegEx provides impressive results on the affirmed label, good results for negations, and unsatisfying results for the possible label. Moreover, on both datasets, English and German, the BERT-based model outperforms NegEx, on all scores. Additionally, results on the English dataset are always higher than those on the translated dataset. This might be unsurprising as data quality decreases. Finally, the table shows that BERT-based models show a substantial increase in performance for the possible label. Table 3 presents the performance of the NegEx and the BERT-based model on two German datasets. In the upper part of the table, the results on NegEx-Ger are presented and the results on Ex4CDS are in the lower part. Similarly, as on the translated i2b2 dataset in Table 2, the machine learning model outperforms NegEx. However, this time the performance gain is not so strong anymore. The NegEx-Ger is small and relatively homogeneous (regarding the variety of negations), and NegEx already performs well on the negations. Therefore the machine learning model achieves only a performance boost of two points in F1. In case of possible, the number of examples might be too small to see the benefit of the ML model. On Ex4CDS data, NegEx already struggles with _negated_ (0.76) and performs low in the case of _possible_ (0.26) - although the results are much better in comparison to the results on i2b2 (English and German). Here, the machine learning model leads to a performance boost of 14 points for _negated_ and 21 points for _possible_. ## 4 Analysis and Discussion Our results indicate that we can successfully apply machine translation to generate a German clinical dataset to train a machine learning model with. Most notably, this model can outperform NegEx, which partially already provides satisfying results. While it is important that a negation detection tool for German clinical text needs to run within a hospital infrastructure, it might be questionable if BERT-based approaches might be the right solution, as it requires much more hardware resources than the simple NegEx solution. This is supported by the results on NegEx-Ger, in which the BERT achieves only a minor performance gain. However, as this data is small and homogeneous, the results on Ex4CDS affirm the usage of machine learning, \begin{table} \begin{tabular}{l|c c c|c c c} & \multicolumn{3}{c|}{NegEx} & \multicolumn{3}{c}{BERT-based} \\ \hline Label & Prec & Rec & F1 & Prec & Rec & F1 \\ \hline E Affirmed & 0.88 & 0.97 & 0.93 & **0.97** & **0.99** & **0.98** \\ N Negated & 0.89 & 0.79 & 0.84 & **0.98** & **0.97** & **0.97** \\ G Possible & 0.79 & 0.04 & 0.08 & **0.85** & **0.64** & **0.73** \\ \hline G Affirmed & 0.84 & 0.96 & 0.90 & **0.96** & **0.98** & **0.97** \\ E Negated & 0.83 & 0.65 & 0.73 & **0.95** & **0.93** & **0.94** \\ R Possible & 0.28 & 0.02 & 0.04 & **0.80** & **0.64** & **0.71** \\ \hline \end{tabular} \end{table} Table 2: Performance results between NegEx baselines and BERT-based models on the original English i2b2 dataset (upper part) and German translation (lower part). 
\begin{table} \begin{tabular}{l l|c c c|c c c} & & \multicolumn{3}{c|}{NegEx} & \multicolumn{3}{c}{BERT-based} \\ \hline Data & Label & Prec & Rec & F1 & Prec & Rec & F1 \\ \hline NegEx-Ger & Affirmed & 0.96 & 0.94 & 0.95 & **0.97** & **0.96** & **0.96** \\ & Negated & 0.93 & 0.96 & 0.95 & **0.97** & **0.98** & **0.97** \\ & Possible & 0.46 & 0.50 & 0.48 & **0.50** & 0.50 & **0.50** \\ \hline Ex4CDS & Affirmed & 0.85 & 0.88 & 0.86 & **0.88** & **0.92** & **0.90** \\ & Negated & 0.66 & 0.89 & 0.76 & **0.86** & **0.95** & **0.90** \\ & Possible & 0.50 & 0.18 & 0.26 & **0.61** & **0.38** & **0.47** \\ \hline \end{tabular} \end{table} Table 3: Performance results on different German medical text sources, namely the NegEx-Ger dataset (upper part) and the Ex4CDS dataset (lower part). as we achieve a notable performance gain. Note that information about the frequency of each label in the test data is provided in the appendix. As our BERT model was trained on potentially suboptimal translations, we analyse some errors in more detail in the following. ### Linguistic Error Analysis Our analysis focuses on the prediction errors caused by the translation or by differences in the features of the German and English languages. Table 7 contains full-text examples illustrating the issues described below. In various cases, a factuality cue was completely missing in the translation, or the sense of the cue was not preserved (e.g., _to rule out_ was translated with _Vorschriften_ instead of _ausschliessen_). In those cases, NegEx and BERT labeled the instances wrongly as affirmations. In other cases, we observe that the factuality cues lie outside of the entities in the original data, but in the translation they are placed within the entity markup. That is often correlated with the prediction changing from negation or possible to affirmation. For example, both NegEx and BERT correctly recognized the negated assertion of the original phrase _did not notice [any blood]_, whereas both German models consider the translation _bemerkte [kein Blut]_ as affirmed, in which the negation cue (_not / kein_) became part of the entity. For NegEx, a further problem is missing factuality cues in the trigger list. For example, it systematically does not recognize the cue _verleugnen_ (one of the possible translations of the word _deny_, which is included in the English NegEx). Additionally, some problems with factuality cues are specific to the German language and require additional handling: (a) German compounds must be written as one word; unfortunately, German NegEx cannot handle cases when a compound consists of words referring to a medical problem and its negation (e.g. _schmerzfrei / pain free_), since it seems not to recognize a factuality cue if it is not written as a separate phrase, (b) cues with umlauts in text such as _aufgelöst_ seem not to be recognized, because the umlauts are encoded as _oe_ in the German trigger list, (c) missing possible word orders of factuality phrases (e.g. word order might depend on the embedding syntactic structure; e.g. _wurde ausgeschlossen_ vs. _ausgeschlossen wurde_ in a main vs. subordinate clause).
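One of the issues above, (b), could plausibly be mitigated by indexing each trigger under both its ASCII-encoded and its umlaut spelling. The sketch below is only illustrative: the trigger-list format and the helper names are ours, not those of the German NegEx implementation, and the naive substring replacement can over-fire on genuine "oe"/"ue" sequences, which is harmless here because the original spelling is kept as well.

```python
# Expand a German NegEx-style trigger list so that cues written with real
# umlauts in clinical text (e.g. "aufgelöst") are matched even though the list
# encodes them as "ae/oe/ue".
UMLAUTS = {"ae": "ä", "oe": "ö", "ue": "ü", "Ae": "Ä", "Oe": "Ö", "Ue": "Ü"}

def variants(trigger: str) -> set:
    restored = trigger
    for ascii_form, umlaut in UMLAUTS.items():
        restored = restored.replace(ascii_form, umlaut)
    return {trigger, restored}      # keep both spellings

trigger_list = ["aufgeloest", "ausgeschlossen", "verneint"]
expanded = set().union(*(variants(t) for t in trigger_list))
print(sorted(expanded))             # includes both 'aufgeloest' and 'aufgelöst'
```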
(2019), or translating source language training data to the target language, while also projecting any annotations required for training, and then training a model in the target language Khalil et al. (2019); Kolluru et al. (2022); Frei and Kramer (2023). Both approaches depend on the quality of the MT system, with translated data potentially suffering from translation or alignment errors Aminian et al. (2017); Ozaki et al. (2021). While the quality of machine translation for health-related texts has significantly improved Neves et al. (2022), using MT in the clinical domain remains underexplored, with very few exceptions Frei and Kramer (2023). **Factuality Detection** Previous research focused mainly on assigning factuality values to events and often framed this task as a multiclass classification problem over a fixed set of uncertainty categories Rudinger et al. (2018); Zerva (2019); Pouran Ben Veyseh et al. (2019); Qian et al. (2019); Bijl de Vroe et al. (2021); Vasilakes et al. (2022). In the biomedical/clinical domain, Uzuner et al. (2011) present the i2b2 dataset for assertion classification, and Thompson et al. (2011) introduce the Genia-MK corpus, where biomedical relations have been annotated with uncertainty values. van Aken et al. (2021) release factuality annotation of 5000 data points sourced from MIMIC. Kilicoglu et al. (2017) introduce a dataset of PubMed abstracts with seven factuality values, and find that a rule-based model is more effective than a supervised machine learning model on this dataset. ## 6 Conclusion This work presented a machine learning-based factuality detection for German clinical text. The model was trained on translated i2b2 data and tested, first on the translations and then on other German datasets and outperformed an existing method for German, NegEx. The simple machine translation approach might interest the Non-English clinical text processing community. The model will be made publicly available. ### Ethical Considerations We use the original datasets "as is". Our translations of i2b2 thus reflect any biases of the original dataset and its construction process, as well as biases of the MT models (e.g., rendering gender-neutral English nouns to gendered nouns in German). We use BERT-based PLMs in our experiments, which were pretrained on a large variety of medical source data. Our models may have inherited biases from these pretraining corpora. Since medical data is highly sensitive with respect to patient-related information, all datasets used in our work are anonymized. The authors of the original datasets Uzuner et al. (2011); Roller et al. (2022) have stated various measures that prevent collecting sensitive, patient-related data. Therefore, we rule out the possible risk of sensitive content in the data. ### Limitations A key limitation of this work is the dependence on a machine translation system to get high-quality translations and annotation projections of the source language dataset. Depending on the availability of language resources and the quality of the MT model, the translations we use for training and evaluation may be inaccurate, or be affected by translation noise, possibly leading to overly optimistic estimates of model performance. In addition, since the annotation projection is completely automatic, any alignment errors of the MT system will yield inaccurate instances in the target language. 
## Acknowledgements This research was supported by the German Research Foundation (DFG, Deutsche Forschungsgemeinschaft) through the project KEEPHA (442445488) and the German Federal Ministry of Education and Research (BMBF) through the projects KIBATIN (16SV9040) and CORA4NLP (01IW20010).
2310.15382
Topological constraints on general relativistic galaxies: Exploring novel conical singularity networks
The van Stockum-Bonner class of spacetimes can be interpreted as fully general relativistic models for rigidly rotating disc galaxies. Frame-dragging effects in these geometries demand a recalibration of the dark matter content relative to models based on Newtonian gravity. We investigate the previously overlooked topological structure of these spacetimes, in relation to the viability of fully general relativistic galaxy toy models. We discuss the appropriate boundary conditions for these solutions to model disc galaxies. For this class of spacetimes, we show the existence of a network of quasi-regular singularities along the rotation axis of the galaxies. The existence of such novel conical defect structures further restricts the physical viability of the van Stockum-Bonner class. Unwinding these issues is key to avoiding pathologies in future fully general relativistic modelling of alternatives to dark matter.
Marco Galoppo
2023-10-23T21:59:33Z
http://arxiv.org/abs/2310.15382v1
Topological constraints on general relativistic galaxies: Exploring novel conical singularity networks ###### Abstract The van Stockum-Bonner class of spacetimes can be interpreted as fully general relativistic models for rigidly rotating disc galaxies. Frame-dragging effects in these geometries demand a recalibration of the dark matter content relative to models based on Newtonian gravity. We investigate the previously overlooked topological structure of these spacetimes, in relation to the viability of fully general relativistic galaxy toy models. We discuss the appropriate boundary conditions for these solutions to model disc galaxies. For this class of spacetimes, we show the existence of a network of quasi-regular singularities along the rotation axis of the galaxies. The existence of such novel conical defect structures further restricts the physical viability of the van Stockum-Bonner class. Unwinding these issues is key to avoiding pathologies in future fully general relativistic modelling of alternative to dark matter. _Keywords--_ Quasi-Regular Singularities, Galaxy Models, General Relativity. ## 1 Introduction The description of galactic dynamics is the subject of ongoing debate in astrophysics. Indeed, a purely Newtonian description of galaxies is in irreconcilable conflict with the observed flat rotation curves [1, 2, 3, 4, 5]. To resolve this, many approaches have been developed including MOdified Newtonian Dynamics (MOND) [6, 7, 8, 9, 10, 11, 12, 13, 14], MOdified Gravity (MOG) theories [15] and the Dark Matter (DM) hypothesis [16]. Of these, the latter, namely the assumption of the existence of a significant non-baryonic component of matter, is arguably the most widely accepted. The DM hypothesis is indeed one of the foundations of the standard \(\Lambda\) Cold Dark Matter (\(\Lambda\)CDM) cosmological model [17]. It has been highly successful in interpreting a plethora of different astrophysical observations, such as: rotation curves of disc galaxies [1, 2, 3, 4, 5]; velocity distribution of galaxies in Galaxy Clusters (GCs) [18, 19]; thermodynamic properties of X-ray emitting gas in GCs [20]; gravitational lensing produced by GC mass distributions [21]; features of the two Bullet Clusters [22, 23, 24] ; the growth of cosmic structures from inhomogeneities in the matter density content of the early universe [25, 26]. Nonetheless, despite such remarkable success, the results of the many experiments aimed at the direct detection of DM particles are, to date, inconclusive [27, 28, 29, 30, 31, 32, 33]. In addition, several observations have challenged the validity of the \(\Lambda\)CDM model [34, 35, 36, 37]. In particular, if we only focus on galaxies, several recent independent observations appear to conflict with the standard DM paradigm [38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55]. Given the challenges to the DM hypothesis, as well as to MOND and MOG theories [56], a new approach is gaining traction, namely the use of exact solutions of Einstein's equations to model cosmological structures. Thus far, this approach has been applied only to model features of individual galaxies [57, 58, 59, 60, 61, 62, 63, 64, 65, 66]. Ultimately, it is important to extend this approach to the dynamics of galaxies within clusters. However, important topological issues have been overlooked in refs. [57, 58, 59, 60, 61], which need to be addressed in order to consistently model complex superposition of such structures. 
The new approach rests on the highly nonlinear nature of General Relativity (GR). The nonlinearities of GR introduce novel ingredients as compared to both Newtonian physics and special relativity. In those theories the transition between particles and an effective fluid description is well understood in terms of coarse-graining and averaging. In GR any nongravitational binding energy is supplemented by quasilocal sources of gravitational energy: spacetime itself carries dynamical energy and angular momentum. Neglecting these quasilocal terms in galactic modelling is usually justified by invoking the weak field limit - the typical velocities of stars and gas are nonrelativistic, \(\beta=v/c\approx 10^{-3}\). Nonetheless, the weak field presupposes a background, conventionally Minkowski spacetime, \(g_{\mu\nu}=\eta_{\mu\nu}+h_{\mu\nu}\), \(|h_{\mu\nu}|\ll 1\), and the process of calibrating the Minkowski background from one system to another in which it is embedded can be highly nontrivial [67, 68, 69, 70]. This occurs already in the transition from the few body systems in which GR is directly tested (stars, black holes...) to many body systems (star clusters, galaxies, GCs...) [71]. Thus, the typical weak field limit will fail when naively applied to large scale systems such as galaxies. Furthermore, Ciotti and collaborators [72, 73, 74] proved that even a perturbative implementation of the standard gravitomagnetic limit is not feasible as a solution for galactic modelling without DM. Consequently, if the DM phenomenon is to be resolved within a GR framework, it will be in a nonlinear/nonperturbative regime. Balasin and Grumiller presented a full GR galaxy model to address the DM phenomenon [60]. Crosta and coworkers [61, 62] showed that the Balasin-Grumiller model (BG) fits the Milky Way (MW) rotation curve reconstructed from the GAIA satellite's kinematic data without any need for DM 1. Moreover, the BG model was shown to be preferred to MOND or DM-driven galactic dynamics, on account of a reduced set of free parameters and similar goodness of fit to the rotation curve. Nevertheless, despite its clear successes, the BG solution fails both to model the galaxy bulge [60, 61, 75] and to match gravitational lensing observations [66]. In particular: (i) average stellar motion in the galactic bulge is not circular; and (ii) close encounters of stars are frequent enough to invalidate treating matter as a pressureless fluid. Furthermore, the rigid rotation assumption [60] leads to unphysical time delay differences which rule out inferences about strong lensing [66]. Footnote 1: We note that in ref. [60] the authors claimed a reduction of only 30% of DM. However, this can be traced to a naive comparison of the densities of BG and classical models. To bypass the deficiencies of rigid rotation [76, 77], Cacciatori and collaborators [63, 64] have investigated the full class of stationary, axisymmetric dust solutions of Einstein equations with boundary conditions appropriate for disc galaxies. Naturally, this begs the question of what "appropriate" boundary conditions are. To better appreciate this, in the present paper we investigate the presence and physical interpretation of topological defects and conical singularities for the entire rigidly rotating van Stockum-Bonner (vSB) class2[78, 79, 80, 81, 82]. In our work, we draw on well-established mathematical results [83, 84, 85, 86, 87] which have played a role in interpreting the physical nature of topological defects in various settings, i.e.
formation in early universe phase-transitions [88, 89], gravitational lensing [90, 91] and shifts of atomic spectra [92, 93]. Footnote 2: This includes the BG [60] and Cooperstock-Tieu [57, 58, 59] models. The structure of this paper is as follows: in section 2 we introduce the concept of quasi-regular singularities and conical singularities; in section 3 we define the vSB models, we discuss the observers through which we read their physics, and we specialise to BG; in section 4 we discuss appropriate boundary conditions for galaxy models, prove the existence of nonisolated quasi-regular singularities in the vSB class and describe the resulting topological features; section 5 is dedicated to a brief overview of the results and the discussion of future perspectives. ## 2 Quasi-regular singularities A point \(q\) of a spacetime \((M,\mathbf{g})\) is defined as a quasi-regular singularity if it is the end point of an incomplete geodesic \(\gamma(\lambda)\), where \(\lambda\) is a generalised affine parameter, and it is not a curvature singularity [83, 84]. Namely, the curvature components \(R_{\mu\nu\rho\sigma}\) measured in an orthonormal frame are continuous in \(q\). The critical property of all quasi-regular singularities is their undetectability from local considerations. Indeed, local quantities are well-behaved when evaluated in any open set containing a quasi-regular singularity. Instead, their presence is imprinted on the global topological structure of the spacetime. We are interested in the quasi-regular singularities named conical singularities, such as the one identified by the tip of a cone. To understand their nature, let us consider Minkowski spacetime in cylindrical coordinates \[ds^{2}=-dt^{2}+dr^{2}+r^{2}d\phi^{2}+dz^{2}\,, \tag{1}\] where \(t,z\in(-\infty,\infty)\), \(r\in[0,+\infty)\) and \(\phi\in[0,2\pi]\). To obtain from this spacetime a conical singularity, we may proceed by identifying points related by the translation [83, 84] \[\phi\sim\phi+\alpha, \tag{2}\] where \(\alpha\neq 2\pi\). This results in a new spacetime with the same metric as (1) but for which \(t,z\in(-\infty,\infty)\), \(r\in[0,+\infty)\) and \(\phi\in[0,\alpha]\). The points on the z-axis in this new spacetime are conical singularities. To gauge their singular nature, we compute the circumference-to-radius ratio for any circle drawn around the z-axis on a 2-surface \(\{t=const,z=const\}\) in the limit \(r\longrightarrow 0\). Doing so gives us a ratio exactly equal to \(\alpha\) instead of the Euclidean \(2\pi\). These singularities are focusing (attractive) if \(\alpha<2\pi\) and defocusing (repulsive) if \(\alpha>2\pi\). We point out that the presence of conical singularities in a spacetime can be immediately inferred whenever its line element can be cast in the form \[ds^{2}=-dt^{2}+dr^{2}+b^{2}r^{2}d\phi^{2}+dz^{2}. \tag{3}\] Indeed, (3) is equivalent to (1) but with \(0<\phi<2\pi b\), as can be seen by applying the change of coordinates \(\phi^{\prime}=b\phi\). The identification of conical singularities is thus generally achieved by writing the respective line element in a form equivalent to (3), as in the case of cosmic strings [85, 87]. More complex procedures exist to identify conical singularities, e.g., calculating the holonomy group of the manifold for the suspected singular points [86]. However, these have proven unnecessary in our current work.
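A minimal symbolic check of the circumference-to-radius statement above, written in Python with sympy; it is ours and only restates the elementary computation behind (1)-(3).

```python
import sympy as sp

# On a surface {t = const, z = const} of the metric (3), sqrt(g_phiphi) = b*r and
# g_rr = 1, so a circle of coordinate radius r has circumference 2*pi*b*r while
# its proper radius is r: the ratio is 2*pi*b, i.e. the identification angle
# alpha = 2*pi*b instead of the Euclidean 2*pi.
r, b, phi = sp.symbols("r b phi", positive=True)

circumference = sp.integrate(b * r, (phi, 0, 2 * sp.pi))   # ∮ sqrt(g_phiphi) dphi
proper_radius = r                                           # ∫_0^r sqrt(g_rr) dr'
print(sp.simplify(circumference / proper_radius))           # 2*pi*b
```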
## 3 Galaxy Models The vSB spacetimes are a subclass of the galaxy models class which models disc galaxy dynamics using a stationary, axisymmetric metric expressed in standard cylindrical coordinates \[ds^{2}= g_{tt}(r,z)dt^{2}+2g_{t\phi}(r,z)dtd\phi\] \[+g_{\phi\phi}(r,z)d\phi^{2}+e^{\mu(r,z)}\left(dr^{2}+dz^{2}\right), \tag{4}\] where we use the convention c = 1 and the metric is coupled to a dust energy-momentum tensor of the form \[T_{\mu\nu}=\rho(r,z)u_{\mu}u_{\nu}. \tag{5}\] The coupling has been worked out in [76, 77]. Thus, we have \[u^{\mu}\partial_{\mu}=\sqrt{-H}\left(\partial_{t}+\Omega\partial_{\phi}\right), \tag{6}\] \[g_{tt}=\frac{\left(H-\eta\Omega\right)^{2}-r^{2}\Omega^{2}}{H}, \tag{7}\] \[g_{t\phi}=\frac{r^{2}-\eta^{2}}{H}\ \Omega+\eta, \tag{8}\] \[g_{\phi\phi}=\frac{\eta^{2}-r^{2}}{H}, \tag{9}\] \[\mu_{,r}=\frac{1}{2r}\left[g_{tt,r}g_{\phi\phi,r}-g_{tt,z}g_{\phi\phi,z}-(g_{t \phi,r})^{2}+(g_{t\phi,z})^{2}\right], \tag{10}\] \[\mu_{,z}=\frac{1}{2r}\left[g_{tt,z}g_{\phi\phi,r}-g_{tt,r}g_{\phi\phi,z}-2g_{t \phi,z}g_{t\phi,r}\right], \tag{11}\] \[8\pi G\rho=\frac{\left(\eta_{,r}^{2}+\eta_{,a}^{2}\right)\left[\eta^{2}r^{-2} \left(2-\ell\eta\right)^{2}-r^{2}\ell^{2}\right]}{4\eta^{2}e^{\mu}}, \tag{12}\] where \(\eta\) is a function of \(r\) and \(z\), \(H\) is an arbitrary negative function of \(\eta\), \(\ell=H^{\prime}/H\) is the logarithmic derivative of \(H\) and \(\Omega\) is defined as \[\Omega\coloneqq\frac{1}{2}\int\frac{H^{\prime}}{\eta}d\eta. \tag{13}\] The parameter \(\Omega(r,z)\) describes the angular velocity of the dust referred to the coordinates in use \[\Omega=\frac{d\phi}{dt} \tag{14}\] The function \(\eta(r,z)\) can be implicitly retrieved through \[\mathcal{F}=2\eta+r^{2}\int\ell(\eta)\left(\frac{1}{\eta}-\eta\right)d\eta, \tag{15}\] where, as a consequence of Einstein's equations, \(\mathcal{F}\) satisfies the harmonic equation \[\mathcal{F}_{,rr}-\frac{1}{r}\mathcal{F}_{,r}+\mathcal{F}_{,zz}=0. \tag{16}\] Equation (16) corresponds to the differential equation for \(\eta(r,z)\) \[\left(\eta_{,rr}-\frac{1}{r}\eta_{,r}+\eta_{,zz}\right)\left(2-\eta\ell(\eta) \right)+\left(\eta_{,r}^{2}-\eta_{,z}^{2}\right)\left[\ell^{\prime}(\eta) \left(\frac{r^{2}}{\eta}+\eta\right)-\ell(\eta)\left(1+\frac{r^{2}}{\eta^{2}} \right)\right]+r^{2}\frac{\ell(\eta)}{\eta}\left(\eta_{,rr}+-\frac{3}{r}\eta_{, r}+\eta_{,zz}\right)=0. \tag{17}\] (17) uniquely determines \(\eta(r,z)\) once \(H(\eta)\) and \(\eta(r,0)\) are arbitrarily assigned. From here on out, we refer to this class of galaxy models as the \((\eta,H)\) class. ### The ZAMO observers A physical interpretation of the field \(\eta(r,z)\) is achieved once we choose the appropriate class of observers for the galaxy. In the case of stationary, axisymmetric metrics, there exists a natural class of observers in terms of which to read the physics of the system: the Zero Angular Momentum Observers (ZAMO) [94, 95]. These are defined by the tetrad \[\mathbf{e^{0}}=\frac{r}{\sqrt{g_{\phi\phi}}}dt, \tag{18}\] \[\mathbf{e^{1}}=e^{\mu/2}dr, \tag{19}\] \[\mathbf{e^{2}}=e^{\mu/2}dz, \tag{20}\] \[\mathbf{e^{3}}=\sqrt{g_{\phi\phi}}\left(d\phi-\chi dt\right). \tag{21}\] We define the velocity of the dust in the galaxy as measured by the reference frame formed by the ZAMO, \(v(r,z)\), through \[-e_{\mu}^{0}u^{\mu}=:\frac{1}{\sqrt{1-v^{2}}} \tag{22}\] where \(u^{\mu}\) is the four-velocity of the dust. On the other hand, we also have \[-e_{\mu}^{0}u^{\mu}=\frac{\sqrt{-H}r}{\sqrt{g_{\phi\phi}}}=\frac{1}{\sqrt{1- \left(\eta/r\right)^{2}}}. 
\tag{23}\] Thus, we can identify \(\eta(r,z)\) as \[\eta(r,z)=rv(r,z). \tag{24}\] Therefore, the field \(\eta(r,z)\) is henceforth understood as the product of the velocity field of the dust, measured in the reference frame built by ZAMO, times the radial coordinate. Naturally, the question arises of whether the velocity profile measured by ZAMO can be interpreted as the one measured in astronomical observations. This view, held in refs. [57, 58, 59, 60, 61, 62], has been challenged by Costa and collaborators [75]. However, we believe the original point of view to be correct, at least when implemented in the analysis of the rotation curve obtained by GAIA for the MW (see ref. [96, 97]). As such, we will consider ZAMO to be the fiducial observers henceforth. ### van Stockum-Bonner Galaxies To obtain the equations defining the vSB class we must make the mutually inclusive choices \[H(\eta)=-1, \tag{25}\] \[\Omega(\eta)=0, \tag{26}\] which imply that vSB galaxies undergo rigid rotation, a clear drawback of this class of models. From (7),(8),(9),(25) and (26), we obtain the metric components \[g_{tt}=-1, \tag{27}\] \[g_{t\phi}=\eta,\] (28) \[g_{\phi\phi}=r^{2}-\eta^{2}, \tag{29}\] so that \[ds^{2}=-dt^{2}+2\eta(r,z)dtd\phi+(r^{2}-\eta^{2})d\phi^{2}+e^{\mu(r,z)}(dr^{2}+dz^{2}). \tag{30}\] From (10), (11), (27), (28) and (29) we get \[\mu_{,r}=-\frac{1}{2r}\left(\eta_{,r}^{2}-\eta_{,z}^{2}\right), \tag{31}\] \[\mu_{,z}=-\frac{1}{2r}\eta_{,r}\eta_{,z}. \tag{32}\] Through the use of (25) and (26), (17) reduces to \[\eta_{,rr}-\frac{\eta_{,r}}{r}+\eta_{,zz}=0. \tag{33}\] (31), (32) and (33) completely determine the vSB spacetime class. The general solution to (33) is given by 3 Footnote 3: To see this, it is sufficient to substitute \(\eta(r,z)=rf(r,z)\) in (33). The equation reduces to \(-\triangle f+\frac{f}{r^{2}}=0\), where \(\triangle\) is the Laplacian in cylindrical coordinates. The resulting equation is the static Schrödinger equation for \(E=0\), for a particle with mass \(m=\hbar^{2}/2\) in a central potential of the form \(V(r)=1/r^{2}\). It is a well-known result [98, 99, 100] that any solution of this equation can be written as an integral over the eigenvalues of its separable solutions. \[\eta(r,z)=\int_{0}^{+\infty}\left[A(\lambda)\cos(\lambda z)+B(\lambda)\sin(\lambda z)\right]\lambda rK_{1}(r\lambda)d\lambda+\eta_{c}=\hat{\eta}(r,z)+\eta_{c}, \tag{34}\] where \(K_{1}(r\lambda)\) is the MacDonald function of the first order, \(A(\lambda)\) and \(B(\lambda)\) are, respectively, the spectral densities for the even and odd modes of the solution and \(\eta_{c}\) is the constant of integration. Finally, from (12), (25) and (26), the dust density is given by \[8\pi G\rho=\frac{\eta_{,r}^{2}+\eta_{,z}^{2}}{r^{2}e^{\mu}}. \tag{35}\] ### Balasin-Grumiller Galaxy The BG model is a specific solution of the vSB class obtained by choosing [60] \[A(\lambda)=\frac{2}{\pi}\int_{0}^{+\infty}C(x)\cos{(\lambda x)}\,dx, \tag{36}\] \[B(\lambda)=0,\] (37) \[\eta_{c}=-\int_{0}^{+\infty}A(\lambda)d\lambda, \tag{38}\] where (37) corresponds to choosing a galaxy symmetrical with respect to the equatorial plane and \(C(x)\) is given by \[C(x)= V_{0}\left[(x-r_{0})\left(\theta(x-r_{0})-\theta(x-R)\right)\right]\] \[+V_{0}(R-r_{0})\theta(x-R), \tag{39}\] where \(V_{0}\) is the asymptotic velocity measured by ZAMO, \(R\) is the galaxy radius and \(r_{0}\) is the bulge radius.
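Before assembling the BG profile from these ingredients, a quick numerical sanity check (ours, not from the paper) that a single \(K_{1}\) mode of the general solution (34) indeed satisfies the field equation (33); the mode wavenumber and the test point below are arbitrary illustrative values.

```python
import numpy as np
from scipy.special import k1

# eta(r, z) = lam * r * K1(lam * r) * cos(lam * z) is one even mode of Eq. (34);
# check by finite differences that it solves eta_rr - eta_r / r + eta_zz = 0.
lam = 0.7                       # illustrative mode wavenumber
eta = lambda r, z: lam * r * k1(lam * r) * np.cos(lam * z)

rp, zp, h = 2.3, 0.9, 1e-3      # arbitrary test point and finite-difference step
eta_rr = (eta(rp + h, zp) - 2 * eta(rp, zp) + eta(rp - h, zp)) / h**2
eta_r  = (eta(rp + h, zp) - eta(rp - h, zp)) / (2 * h)
eta_zz = (eta(rp, zp + h) - 2 * eta(rp, zp) + eta(rp, zp - h)) / h**2

print(eta_rr - eta_r / rp + eta_zz)   # ~0, up to finite-difference error
```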
(36), (37), (38) and (39) lead to the analytic expression for \(\eta(r,z)\) \[\eta(r,z)= \frac{V_{0}}{2}\sum_{\pm}\left(\sqrt{(z\pm r_{0})^{2}+r^{2}}- \sqrt{(z\pm R)^{2}+r^{2}}\right)\] \[+V_{0}(R-r_{0}). \tag{40}\] The other relevant quantities of the system are obtained through the equations previously discussed for the entire class of models. ## 4 Conical Singularities in van Stockum-Bonner Galaxies To investigate the topological structure of the vSB class, we start by considering the asymptotic line element at radial infinity \[ds^{2}\simeq-dt^{2}+2\eta_{c}dtd\phi+r^{2}d\phi^{2}+e^{\mu(+\infty,z)}\left(dr^{2}+dz^{2}\right), \tag{41}\] where \(\mu(+\infty,z)=\lim_{r\longrightarrow+\infty}\mu(r,z)\) and we have used that \(\lim_{r\longrightarrow+\infty}\hat{\eta}(r,z)=0\) (see (34)). As we are interested in using vSB spacetimes as galaxy models, we notice that the persistence of the off-diagonal term at spatial infinity is undesirable. Full GR galaxy models must be smoothly matched at large distances from the matter bulk with a void Kerr-like solution of Einstein's equations. Therefore, the term \(g_{t\phi}(r,z)\) must present an asymptotic behaviour of the type \(\propto A/r\), where \(A\) is a given constant. However, this is not true if \(\eta_{c}\) is non-null. In particular, the clocks of the fiducial observers placed at radial infinity will be desynchronised4, hindering their very role as reliable asymptotic inertial observers. Therefore, we have found the first boundary condition for the vSB galaxy models Footnote 4: This is precisely what happens in the BG model, as shown in [66]. \[\eta_{c}=0. \tag{42}\] We can further specialise the writing of \(\mu(+\infty,z)\). From (31) and (32) we have \[\mu(r,z)=-\int_{0}^{z}\frac{\eta_{,r}(r,z^{\prime})\eta_{,z}(r,z^{\prime})}{2r}dz^{\prime}+F(r)+\mu_{c}, \tag{43}\] where \(F(r)\) must be chosen so that (31) is satisfied and \(\mu_{c}\) is the integration constant. Since from (33) we get \(\lim_{r\longrightarrow+\infty}\eta_{,r}(r,z)=\lim_{r\longrightarrow+\infty}\eta_{,z}(r,z)=0\), we have \[\mu(+\infty,z)=\lim_{r\longrightarrow+\infty}F(r)+\mu_{c}. \tag{44}\] Therefore, (44) shows that \(\mu(+\infty,z)=\mu_{\infty}\). Moreover, we have the freedom to choose \[\mu_{c}=-\lim_{r\longrightarrow+\infty}F(r). \tag{45}\] Imposing (45) and (42) is equivalent to requiring an asymptotic Minkowskian structure at infinity. This is a fair boundary condition for would-be galaxy models, given their necessary matching to void Kerr-like solutions. However, the vanishing of the Riemann tensor, \(R_{\mu\nu\rho\sigma}\), furnishes twenty independent relationships and, yet, only ten curvature components, \(R_{\mu\nu}\), enter into the laws of the gravitational field. Therefore, even the less strict boundary condition of local flatness would still be a physically sound requirement for these spacetimes. Henceforth, we impose (45) in conjunction with (42) to facilitate calculations. Nonetheless, we hold the view that a simply locally flat spacetime, such as the one generated by a cosmic string (see (3)), would still be physically acceptable. Having discussed the appropriate boundary conditions at spatial infinity, we can shift our focus to investigating the presence of quasi-regular singularities. To gauge the presence of a conical singularity, we must consider the behaviour of \(g_{\phi\phi}(r,z)\) and \(g_{rr}(r,z)\) next to the rotation axis at a fixed value of \(z\).
This is equivalent to studying, respectively, the limiting behaviour of \(\eta(r,z)\) and \(\mu(r,z)\). To the leading order, the MacDonald function \(K_{1}(x)\) reads [101] \[K_{1}(x)\simeq\frac{1}{x}+o(x\log(x)). \tag{46}\] Therefore, we get \[\eta(r,z)\big{|}_{r\ll 1}=\int_{0}^{\infty}\left[A(\lambda)\cos\left(\lambda z\right)+B(\lambda)\sin\left(\lambda z\right)\right]d\lambda. \tag{47}\] For global galaxy models (47) implies \[\int_{0}^{\infty}\left[A(\lambda)\cos\left(\lambda z\right)+B(\lambda)\sin\left(\lambda z\right)\right]d\lambda=0\ \forall\ z. \tag{48}\] Indeed, if (48) were not satisfied, \(g_{\phi\phi}(r,z)\) would necessarily become negative close to the galaxy centre. Though it is true that most galaxies possess a central supermassive black hole, we are investigating the possibility of globally modelling a galaxy using vSB models. Hence, we believe (48) to be a reasonable requirement on the metric function5. Furthermore, by applying the same reasoning, we require Footnote 5: This condition, even though reasonable, is not entirely necessary. Indeed, it cannot be realised, save for a singular plane, for any galaxy model possessing reflection symmetry with respect to the equatorial plane (for which \(B(\lambda)=0\)), such as the BG solution. Thus, the subclass of symmetric vSB galaxy models is forced to be considered as producing viable solutions only in a well-defined domain which excludes the bulge of the galaxy. However, this does not disqualify these models as effective outside the bulge of a disc galaxy. Indeed, even the Newtonian description of gravity is haunted by the presence of singularities, i.e. in the Newtonian potential for a point particle. Nonetheless, it is clearly a perfectly valid physical description on its scales of applicability. \[\lim_{r\longrightarrow 0}\mu_{,r}(r,z)\neq\pm\infty \tag{49}\] \[\lim_{r\longrightarrow 0}\mu_{,z}(r,z)\neq\pm\infty \tag{50}\] (49) and (50) are equivalent to \[\lim_{r\longrightarrow 0}\mu(r,z)=f(z)\ s.t.\ |f(z)|<+\infty. \tag{51}\] To study the condition (51), we must investigate the behaviour of \(\eta_{,r}(r,z)\) and \(\eta_{,z}(r,z)\) for small \(r\). From (33) we get \[\eta_{,r}(r,z)_{|r\ll 1}=o(r\log(r))\Rightarrow\lim_{r\longrightarrow 0}\eta_{,r}(r,z)=0, \tag{52}\] \[\lim_{r\longrightarrow 0}\eta_{,z}(r,z)=\int_{0}^{+\infty}\left[B(\lambda)\cos(\lambda z)-A(\lambda)\sin(\lambda z)\right]\lambda d\lambda, \tag{53}\] where we have used the expansion of \(K_{0}(x)\)[101] in (52) and (46) to obtain (53). Thus, (31) and (51) give a new condition on the spectral densities \[\int_{0}^{+\infty}\left[B(\lambda)\cos(\lambda z)-A(\lambda)\sin(\lambda z)\right]\lambda d\lambda=0. \tag{54}\] We notice that to satisfy (50), given (32) and (52), it is sufficient for the spectral densities to meet the condition (54). We can prove that if (48) is satisfied, so is (49). Let us define \[T(\lambda)=\theta(\lambda)\left[A(\lambda)+iB(\lambda)\right]. \tag{55}\] (55) allows us to write (48) as \[\mathcal{F}\left(T(\lambda)\right)(z)=-\left[\mathcal{F}\left(T(\lambda)\right)(z)\right]^{*}, \tag{56}\] where \(\mathcal{F}\) indicates the Fourier-Plancherel transform and \({}^{*}\) denotes complex conjugation. By using the basic properties of the Fourier-Plancherel transform, (54) becomes \[\frac{d}{dz}\left[\mathcal{F}\left(T(\lambda)\right)(z)\right]=-\frac{d}{dz}\left[\left(\mathcal{F}\left(T(\lambda)\right)(z)\right)^{*}\right], \tag{57}\] which follows directly from (56).
Therefore, a vSB model satisfying (42), (45) and (48) represents a global full GR galaxy model as it produces an asymptotically globally Minkowskian spacetime at spatial infinity and the metric functions are well-behaved over the whole coordinate domain. However, these models are found to still harbour quasi-regular singularities. Let us look at the asymptotic form of the line element near the rotation axis \[ds^{2}\simeq-dt^{2}+r^{2}d\phi^{2}+e^{f(z)}\left(dr^{2}+dz^{2}\right)\,, \tag{58}\] where we have considered (51). Let us specialise to any 2D surface \(\{t=const,z=const=\hat{z}\}\). The line element reads \[ds^{2}_{2D}=r^{2}d\phi^{2}+e^{f(\hat{z})}dr^{2}. \tag{59}\] On the chosen 2D surface, we define \(\tilde{r}=e^{f(\hat{z})/2}r\) so that the line element in (59) becomes \[ds^{2}_{2D}=e^{-f(\hat{z})}\tilde{r}^{2}d\phi^{2}+d\tilde{r}^{2}=b^{2}(\hat{z})\tilde{r}^{2}d\phi^{2}+d\tilde{r}^{2}. \tag{60}\] (60) is exactly equivalent to the 2D surface line element which signals the presence of conical singularities (see (3)); a symbolic check of this coordinate change is given in the short sketch below. Moreover, given (32), \(f(z)\) could be null only for models cylindrically symmetric or invariant under radial translations, clearly unphysical conditions. Thus, any vSB global galaxy model defines a spacetime with a highly nontrivial topology. Indeed, each 2D spacetime slice of the type \(\{t=const,z=const\}\) harbours a conical singularity at \(r=0\). However, unlike better-known cases such as cosmic strings, the conical structure is more complex, and it is entirely defined by the function \(f(z)\). We can interpret the topological structure of these vSB spacetimes 6 as obtained by slicing the spacetime near the rotation axis along 2D surfaces of the type \(z=const\) and folding each one into a cone with a varying angle of identification \(\alpha\). Fig. 1 shows a tentative visualisation of this topology. Figure 1: Topology around the z-axis of a vSB galaxy. The slicing and identification procedure for different \(z=const\) is shown. Footnote 6: Notice that the proof of the presence of quasi-regular singularities still holds even if (45) is not satisfied. We must stress that not all the points along the rotation axis will be quasi-regular singularities. Indeed, for asymptotically flat vSB spacetimes, a mixture of quasi-regular and curvature singularities should be expected [81]. In particular, for the BG solution it was shown that two limited disconnected regions of the z-axis harbour curvature singularities [75]. Naturally, the presence of these singularities begs the question about their ultimate cause in the vSB class. A possible explanation may come from the unphysical condition of rigid rotation engrained in these spacetimes. If so, these problematic features could be cured by considering differentially rotating models in the larger \((\eta,H)\) class. However, the singularities might also be the result of the absence of pressure in the dust or even of enforcing axial symmetry on the spacetimes. All the aforementioned causes are worthy of consideration and should be thoroughly researched. As it stands, the presence of a nontrivial structure of quasi-regular singularities in vSB global galaxy solutions stands in stark contrast with the very possibility of describing globally a galaxy with such models. Therefore, any vSB solution with reasonable asymptotic properties can be considered a viable galaxy model only for a limited portion of the galaxy. In particular, these models will fail in describing the bulge of the galaxy.
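The coordinate rescaling used between (59) and (60), and the resulting deficit factor \(b(\hat{z})=e^{-f(\hat{z})/2}\), can be checked with a few lines of symbolic algebra; this is only a sanity check of ours, not part of the original derivation.

```python
import sympy as sp

# On a surface of constant z (so f is a constant there), substituting
# rt = exp(f/2)*r into r**2*dphi**2 + exp(f)*dr**2 gives
# exp(-f)*rt**2*dphi**2 + drt**2, a cone with deficit factor b = exp(-f/2).
f = sp.symbols("f", real=True)
rt, dphi, drt = sp.symbols("rt dphi drt", positive=True)

r, dr = sp.exp(-f / 2) * rt, sp.exp(-f / 2) * drt   # inverse of rt = exp(f/2)*r
ds2 = sp.expand(r**2 * dphi**2 + sp.exp(f) * dr**2)

print(ds2)                                            # dphi**2*rt**2*exp(-f) + drt**2
print(sp.simplify(sp.sqrt(ds2.coeff(dphi**2)) / rt))  # exp(-f/2) = b
```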
However, we must stress that the limitation of applicability of vSB solutions in no way rules them out as effective galaxy models over their region of applicability. Any such model which, under the correct choice of reference frame, were to correctly account for current astronomical observations should be regarded as a functioning effective full GR model of galactic dynamics. Indeed, any physical model de facto possesses a domain of applicability, beyond which it breaks down. Nonetheless, this does not preclude its employment in explaining physical observations in its domain of validity. The vSB class must be discarded as a viable choice for global, full GR galaxy models, but it should still be considered as possibly producing domain-limited effective full GR galaxy models. Finally, given the presence of conical singularities, the proper definition of physical coordinates must be put into question. Indeed, in GR coordinates are a priori devoid of a physical meaning. They acquire one only when a measurement procedure is defined. The impact of conical singularities on the definition of the angle coordinate for physical observers inside a galaxy must be correctly addressed when using full GR models. If we specialise to vSB metrics, we notice that the conical singularity on the equatorial plane can always be negated by a proper choice of \(\mu_{c}\). We would argue that it is precisely this choice that a physical observer would take. However, such a choice, which validates the angular coordinate's common physical meaning, fixes a degree of freedom of the model. Therefore, any physical quantity calculated will be directly impacted by this choice - i.e. density and rotation curve calculations should be carried out only once the coordinate choice has been defined in a physically sound way. ## 5 Conclusions and Perspectives In 1973 John Archibald Wheeler famously summarised General Relativity as: _Space tells matter how to move, matter tells space how to curve_[102]. This historical quote directly points to the geometrical nature of GR. Indeed, GR describes space-time as a four-dimensional pseudo-Riemannian manifold whose local geometry is everywhere defined by its matter-energy content through Einstein's equations, but it does not prescribe its global topological structure. Nonetheless, topological questions play a crucial role in our understanding of the Universe [103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113], since topology can _limit_ possible matter content. Here we have used spacetime topology in this limiting fashion. In particular, even at the level of modelling galaxies in full GR, topological considerations must be thoroughly studied. They can restrict the scale of viability of a model or rule it out entirely. In this paper, we showed that the van Stockum-Bonner class does not contain physically viable full GR global galaxy models. Well-behaved solutions, including the larger class of asymptotically flat geometries, were shown in sec. 4 to be studded with quasi-regular singularities along the rotation axis. These conical singularities generate a highly nontrivial topological structure (see Fig. 1) which starkly contrasts with their interpretation as global galaxy models. The vSB class was shown to contain only effective full GR galaxy models, whose domain of physical validity must necessarily be restricted to the region outside the galactic bulge. Therefore, any serious search for a global, full GR galaxy model should focus on a larger class of models - possibly the \((\eta,H)\) class.
The \((\eta,H)\) class investigated by Cacciatori _et al_[63, 64] which generalises the vSB class may prove to be a fruitful line of enquiry. By dropping the rigidity of the dust and allowing for differential rotation, the class of models is physically realistic. However, the inclusion of an effective pressure might still be necessary to fully model the galactic bulge and avoid singularities. As such, the introduction of pressure in full GR galaxy modelling still remains a high priority of the field. Nonetheless, we have shown the potential for topological considerations in model selection by limiting the class of viable global full GR galaxy models. We plan to make full use of these considerations in future work. ## Acknowledgments We thank David Wiltshire, Sergio Cacciatori and Chris Harvey-Hawes for useful discussion and Morag Hills for providing the plot in section 4.
2301.05808
Current and Future Space and Airborne Observatories for ISM Studies
A tremendous amount of radiation is emitted by the Interstellar Medium in the mid- and far-infrared (3-500 {\mu}m) that represents the majority of the light emitted by a galaxy. In this article we motivate ISM studies in the infrared and the construction of large specialized observatories like the Stratospheric Observatory For Infrared Astronomy (SOFIA), which just concluded its mission on a scientific high note, and the newly launched James Webb Space Telescope (JWST) that just begun its exciting scientific mission. We introduce their capabilities, present a few examples of their scientific discoveries and discuss how they complemented each other. We then consider the impact of the conclusion of SOFIA for the field in a historic context and look at new opportunities specifically for far-infrared observatories in space and in the stratosphere.
Bernhard Schulz, Margaret Meixner
2023-01-14T02:33:24Z
http://arxiv.org/abs/2301.05808v1
# Current and future space and airborne observatories for ISM studies ###### Abstract A tremendous amount of radiation is emitted by the Interstellar Medium in the mid- and far-infrared (3-500 \(\mu\)m) that represents the majority of the light emitted by a galaxy. In this article we motivate ISM studies in the infrared and the construction of large specialized observatories like the Stratospheric Observatory For Infrared Astronomy (SOFIA), which just concluded its mission on a scientific high note, and the newly launched James Webb Space Telescope (JWST) that just begun its exciting scientific mission. We introduce their capabilities, present a few examples of their scientific discoveries and discuss how they complemented each other. We then consider the impact of the conclusion of SOFIA for the field in a historic context and look at new opportunities specifically for far-infrared observatories in space and in the stratosphere. + Footnote †: Universität zu Kön 2022 ## 1 The Interstellar Medium The Interstellar Medium (ISM) constitutes the reservoir of matter, that was and still is turned into stars and planets and gave also rise to the existence of our own solar system and our world. If its study wasn't already interesting for just that reason, there are many complex processes that impact it chemically as well as energetically. The ISM is being enriched with heavier elements by the more massive stars in their late evolutionary phases, but also diluted by the influx of extragalactic matter. Feedback from the different phases of stellar life, but also cosmic rays and AGN inject energy, which is released by emission in many atomic and molecular lines ([C ii], [O i], C\({}_{2}\)H\({}_{2}\), H\({}_{2}\)O, PAHs, etc.) as well as thermal emission by different kinds of dust. Even though the general scenario of star formation is reasonably well understood, the details of the complex interplay of stellar radiation, gravitation, turbulence and magnetic fields, that determine the timescales and the interstellar mass function, are not. A large number of these lines as well as the peak of the thermal emission are located in the mid- to far-infrared (MIR 3-30\(\mu\)m, FIR 30-300\(\mu\)m) wavelength range as illustrated in Fig. 1 (top), making this portion of the electromagnetic spectrum key to studying the ISM and a multitude of related scientifically interesting phenomena. However, this is also a rather difficult spectral range to observe as shown in the lower part of Fig. 1, which illustrates the atmospheric transmission at the levels of the Atacama Large Millimeter/submillimeter Array (ALMA) and the Stratospheric Observatory for Infrared Astronomy (SOFIA). Telluric water vapor and ozone leave only certain windows in the MIR and sub-millimeter ranges, while the FIR is effectively unobservable from the ground and requires observatories in the stratosphere or in space. ## 2 Jwst Launched at the end of 2021, the James Webb Space Telescope (JWST) provides access to the full MIR spectrum in space since its first science data were released in July 2022. Its high spatial resolution, similar to that of Hubble in the visible spectrum, and its access to PAH emission as well as the ro-vibrational lines of molecular hydrogen and water, make JWST an excellent probe of star formation regions. JWST images of 30 Doradus, aka the Tarantula Nebula, show the stars, molecular hydrogen and PAHs with NIRCam (0.6-5 \(\mu\)m) and warm dust and PAHs with MIRI (4.9 to 28.8 \(\mu\)m). 
These kinds of maps provide an unprecedented amount of detail at those wavelengths and will play an important role in further investigating the hot and warm ISM. The spectroscopic capabilities of JWST are considerable as well, yet limited in terms of spectral resolution with R \(\approx\) 2700 (Boker et al., 2022) for NIRSpec and R \(\approx\) 1300 to 3700 for MIRI (Wells et al., 2015). This is where SOFIA provided complementary high spectral resolution spectroscopy (R \(\approx\) 10\({}^{5}\)) with EXES (Richter et al., 2018), even though at lower spatial resolution and sensitivity. ## 3 Sofia ### Importance and Successes With its five exchangeable scientific instruments (SIs), SOFIA nicely filled the large spectral gap in the FIR between JWST and ALMA and provided further complementary capabilities like high spectral resolution at JWST wavelengths with EXES and the ability to observe very bright sources with FORCAST, which filled in the overexposed areas in MIR maps of the Galactic Center region made by Spitzer (Hankins et al., 2020). The recent discovery of water on the sunlit surface of the Moon by Honniball et al. (2021) falls into that category as well. The heterodyne instrument GREAT covers such important atomic fine structure lines as [C II] and [O I] at the highest spectral resolutions of up to R \(\approx\) 10\({}^{6}\) and fills in the spectral gaps that are inaccessible for ALMA due to atmospheric extinction. This enabled not only the discovery of new molecules in the ISM like Helium Hydride (Gusten et al., 2019), but also very detailed kinematic studies, e.g. of feedback processes in Orion by Pabst et al. (2019), which triggered a very successful SOFIA legacy program by Schneider et al. (2020). When sensitivity became an issue and could be gained by sacrificing spectral resolution, in particular for extragalactic work, the FIFI-LS spectrometer provided a good alternative for observations of fine structure lines as shown by Fadda et al. (2021), Spinoglio et al. (2022) or Pineda et al. (2018). Last but not least, where very high sensitivity was required to reveal the peak of the cold dust emission of high redshift objects, HAWC+ provided the FIR imaging capability. HAWC+, however, also provided an entirely new dimension for ISM studies, one that had before only briefly been available with ISO in the FIR. Polarization mapping revealed the vectors of magnetic fields in the ISM thanks to the FIR emission of aligned elongated dust particles. Many publications sparked a lot of new observational as well as theoretical interest in this previously rather dormant field (Pillai et al., 2020; Lopez-Rodriguez et al., 2021; Zielinski et al., 2021). Figure 1: The Spectral Energy Distribution (SED) of the interstellar medium from mid- to far-infrared wavelengths (top) and the corresponding transmission spectra of the Earth atmosphere at the operating altitudes of ALMA and SOFIA (bottom). ### Mission Success and Conclusion In the face of the tremendous scientific successes of this true ISM-Machine, the decision by NASA and DLR to end the SOFIA mission after only 9 observing cycles is certainly very hard to understand. Following the recommendations from the Flagship Mission Review from 2019, the project has transformed since then with a tremendous growth in science productivity as demonstrated in the SOFIA Status and Future Prospects Report (Rangwala et al., 2022)1.
Annual publication rates for SOFIA have doubled over the past three years on topics ranging from the Earth to high-z galaxies (Schmelz et al., 2021). Footnote 1: This report was already prepared for NASA’s Senior Review Process. The Decadal Survey Astro 2020 recommended to NASA to terminate the SOFIA mission, which unfortunately was based on outdated (\(>\) 2 years) and incorrect information2. NASA holds Astro 2020 recommendations as superior to Senior Review process results and hence removed SOFIA from the Senior Review.3 Arguments that SOFIA's science productivity was insufficient can be easily refuted by comparing the observing time that is spent on average per refereed publication to that of Herschel. Eight years after launch Herschel had provided about 23,500 hours of observing time and produced 2,145 publications, resulting in \(\approx\)11 h/paper. SOFIA with 3458 hours and 330 publications after 8 years since achieving full science operational capability in 2014 results in very similar 10.5 h/paper. Footnote 2: SOFIA science addresses 50% of Astro 2020 key science questions, not 10%. Footnote 3: This avoided potentially ending up with two contradicting recommendations. Fortunately the last year was particularly productive in terms of observations, so there is a considerable amount of science data in the IRSA archive. As there is only a minimal post-operational phase of one year planned by NASA at this point, we hope DLR will provide the means to conduct data reprocessing also for the time before Cycle 5, advanced water vapor and pointing analysis and more comprehensive corrections, which are currently not included in the plans. In the next section we'll lay out that the time to the next FIR mission might be rather long. Already collected FIR photons might thus be even more valuable for astronomy and funds for maximizing their scientific usability will be well spent. ## 4 Future Far-Infrared Observatories ### History and Guidance Fig. 0.2 illustrates the history of FIR astronomy by showing the operational phases of all major observatories as green boxes, starting in the sixties until today and the current outlook towards 2045. Up to today, there was an almost continuous capability to supply astronomers with current FIR observations except for the few years between ISO and Spitzer. With the sudden cancellation of SOFIA, which was originally scheduled to continue until 2034, and the cancellation of SPICA by ESA in 2021, the opportunities for FIR data collection have become sparse. In Rangwala et al. (2022) Page 4, a traceability matrix can be found, that links Astro 2020 science questions to key measurements in the MIR and FIR, that could have been performed with SOFIA. This list should still be useful as a collection of science requirements for the design of future stratospheric- and space-observatories. ### New Opportunities in Space Even though Astro 2020 recommended the cancellation of SOFIA, it acknowledged the importance of the FIR spectral region for astrophysics and recommended the launch of a Probe space mission for 2030 that will specialize either in FIR- or X-ray- astronomy. NASA followed this up by issuing an announcement of opportunity and a proposal deadline of October 2023, a downselection end of 2025, a cost cap of 1B$ excluding the launcher and a launch date not later than 2032 (NASA-SMD, 2022). If history is a guide, such a schedule is highly optimistic. 
In reality a launch might rather be expected in the mid 2030s, not to mention that continuing the SOFIA mission until its planned end would have cost substantially less, especially when taking into account the launcher as well. Given that the X-ray community is also competing for another opportunity, it is everything but a done deal that NASA's probe mission will be dedicated to the FIR. If that doesn't happen, then also the dream of a more ambitious true observatory for the FIR such as ORIGINS in the 2040s (Meixner et al., 2019) may become unrealistic with observational FIR astronomy having lost a lot of its expertise by then. Therefore at this point it is quite important for the FIR community to look towards the future which at least in space will be the Probe mission. There are four mission proposals for the FIR named PRIMA (PI, Jason Glenn)4, SPICE (PI Lee Mundy)5, FIRST (PI Asantha Cooray) and SALTUS (PI Chris Walker), which were presented at the IR Astrophysics Workshop 2022 in Colorado. The concepts comprise more traditional space observatories with cold telescopes like PRIMA and FIRSST, and more unusual ones like the interferometer SPICE or the large inflatable telescope concept SALTUS. Details as presented at the workshop are available at the workshop website (IRSTIG, 2022). ### Stratospheric Opportunities In the meantime the FIR community should also investigate other opportunities to reclaim a permanent capability in that part of the spectrum. This will in particular enable more time dependent FIR astronomy, that we consider being still in its infancy. The fairly short life spans of FIR missions so far have been a hindrance while time-domain astronomy has really taken off in other parts of the electromagnetic spectrum. The astrophysical community should investigate the available potential in the FIR. SOFIA was likely the last airplane observatory, and future stratospheric platforms will probably be of the lighter-than-air category. Current balloon experiments are, however, rather short lived, extremely weather dependent with very few launch opportunities, can't stay in a particular region for long and have only a 50 % survival rate upon landing. Such missions are still seen rather as serving technology maturation and the training of instrumentalists than being able to support serious general observatory type projects for the astronomical community. This school of thought needs to change as better technologies become available that could address many of the shortcomings mentioned above. Longer lived robotic stratospheric platforms with propulsion may also be interesting to a wider community including UV- and FIR-astronomy but also climate research and general Earth observation (Miller et al., 2014). ## 5 Conclusion Even though the end of SOFIA is a blow to FIR astronomy, the mission and its team have performed excellently and are concluding at peak performance with much data in the archive that await analysis and publication. JWST is the observatory now to study the ISM in warm/hot conditions, while there will be new opportunities for observatories that can study the cold ISM from space or from the stratosphere.
2310.05191
LLM-as-a-tutor in EFL Writing Education: Focusing on Evaluation of Student-LLM Interaction
In the context of English as a Foreign Language (EFL) writing education, LLM-as-a-tutor can assist students by providing real-time feedback on their essays. However, challenges arise in assessing LLM-as-a-tutor due to differing standards between educational and general use cases. To bridge this gap, we integrate pedagogical principles to assess student-LLM interaction. First, we explore how LLMs can function as English tutors, providing effective essay feedback tailored to students. Second, we propose three metrics to evaluate LLM-as-a-tutor specifically designed for EFL writing education, emphasizing pedagogical aspects. In this process, EFL experts evaluate the feedback from LLM-as-a-tutor regarding quality and characteristics. On the other hand, EFL learners assess their learning outcomes from interaction with LLM-as-a-tutor. This approach lays the groundwork for developing LLMs-as-a-tutor tailored to the needs of EFL learners, advancing the effectiveness of writing education in this context.
Jieun Han, Haneul Yoo, Junho Myung, Minsun Kim, Hyunseung Lim, Yoonsu Kim, Tak Yeon Lee, Hwajung Hong, Juho Kim, So-Yeon Ahn, Alice Oh
2023-10-08T15:00:04Z
http://arxiv.org/abs/2310.05191v2
# FABRIC: Automated Scoring and Feedback Generation for Essays ###### Abstract Automated essay scoring (AES) provides a useful tool for students and instructors in writing classes by generating essay scores in real-time. However, previous AES models provide neither more specific rubric-based scores nor feedback on how to improve the essays, which can be even more important than the overall scores for learning. We present FABRIC, a pipeline to help students and instructors in English writing classes by automatically generating 1) the overall scores, 2) specific rubric-based scores, and 3) detailed feedback on how to improve the essays. Under the guidance of English education experts, we chose the rubrics for the specific scores as _content_, _organization_, and _language_. The first component of the FABRIC pipeline is DREsS, a real-world **D**ataset for **R**ubric-based **E**ssay **S**coring. The second component is CASE, a **C**orruption-based **A**ugmentation **S**trategy for **E**ssays, with which we can improve the accuracy of the baseline model by 45.44%. The third component is EssayCoT, the Essay Chain-of-Thought prompting strategy which uses scores predicted from the AES model to generate better feedback. We evaluate the effectiveness of the new dataset DREsS and the augmentation strategy CASE quantitatively and show significant improvements over the models trained with existing datasets. We evaluate the feedback generated by EssayCoT with English education experts to show significant improvements in the helpfulness of the feedback across all rubrics. Lastly, we evaluate the FABRIC pipeline with students in a college English writing class who rated the generated scores and feedback with an average of 6 on the Likert scale from 1 to 7. ## 1 Introduction In writing education, automated essay scoring (AES) offers benefits to both students and instructors by providing scores of students' essays in real-time. Many students fear exposing their errors to instructors; therefore, immediate assessment of their essays with AES can reduce their anxiety and help them improve their writing [20]. For instructors, this AES model can ease the burdensome process of evaluation and offer a means to validate their own evaluation, ensuring accuracy and consistency in assessment. Existing AES models provide valuable overall scores, but they are insufficient for both learners and instructors desiring more details. Several studies have underscored English learners' preference for specific and direct feedback [21, 19, 22]. As students rarely seek clarifications on unclear feedback and may even disregard it, scoring and feedback must be clear and specific for easy comprehension [21]. However, existing AES models cannot be trained to provide detailed rubric-based scores because the datasets either do not have any rubric-specific scores, or when they do, the rubrics and criteria for scoring vary significantly among different datasets. We introduce **FABRIC**, **F**eedback generation guided with **A**ES **B**y **I**ncorporating **C**hatGPT, a combination of an AES model and an LLM. Figure 1: Overview of the pipeline. Rubric-based AES data (DREsS) is used to train the AES model, which is enhanced by CASE to more accurately predict rubric-based scores. EssayCoT leverages these scores for essay feedback generation. FABRIC's final outputs, scores and feedback, are used for EFL writing education.
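The pipeline in the figure can be made concrete with a minimal sketch. Everything below is illustrative only: the scoring function is a stand-in for the trained AES model, and the prompt wording and score scale are our own assumptions, not the paper's actual EssayCoT implementation.

```python
# Sketch of the FABRIC idea: rubric scores predicted by an AES model are fed
# into the feedback-generation prompt (EssayCoT) instead of hand-written
# few-shot exemplars.
RUBRICS = ("content", "organization", "language")

def predict_rubric_scores(essay: str) -> dict:
    # Placeholder for the trained AES model; a real system would return the
    # model's predicted score per rubric here.
    return {rubric: 3.0 for rubric in RUBRICS}

def build_essaycot_prompt(essay_prompt: str, essay: str, scores: dict) -> str:
    score_block = "\n".join(f"- {r}: {scores[r]}" for r in RUBRICS)
    return (
        "You are an EFL writing tutor.\n"
        f"Essay prompt: {essay_prompt}\n"
        f"Student essay: {essay}\n"
        "Rubric scores predicted by the scoring model:\n"
        f"{score_block}\n"
        "Using these scores as intermediate reasoning, give specific feedback for each rubric."
    )

essay = "Some people think that ..."
print(build_essaycot_prompt("Do you agree or disagree?", essay,
                            predict_rubric_scores(essay)))
```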
FABRIC comprises three major contributions: **DREsS**, a real-world **D**ataset for **R**ubric-based **E**ssay **S**coring, **CASE**, a **C**orruption-based **A**ugmentation **S**trategy for **E**ssays, and **EssayCoT**, the Essay Chain-of-Thought prompting strategy for feedback generation. DREsS includes 1,782 essays collected from EFL learners, each scored by instructors according to three rubrics: content, organization, and language. Furthermore, we rescale existing rubric-based datasets to align with our three primary rubrics. We propose this combination of a newly collected real-classroom dataset and existing datasets rescaled to the same set of rubrics and standards as a standard rubric-based AES dataset. CASE is a novel data augmentation method to enhance the performance of the AES model. CASE employs three rubric-specific strategies to augment the essay dataset with corruption, and training with CASE results in a model that outperforms the quadratic weighted kappa score of the baseline model by 26.37%. EssayCoT is a prompting strategy to guide essay feedback generation, which is a new task on top of AES. EssayCoT leverages essay scores automatically predicted by the AES model when generating feedback, instead of manually composing few-shot exemplars. Feedback with EssayCoT prompting is significantly more preferred and more helpful compared to standard prompting, according to the assessment by 13 English education experts. Lastly, we deploy FABRIC in an essay editing platform for 33 English as a Foreign Language (EFL) students. In summary, the main contributions of this work are as follows:

* We propose a standard rubric-based dataset with the combination of our newly collected real-classroom DREsS dataset (1.7K) and unified samples of the existing datasets (2.9K).
* We introduce corruption-based augmentation strategies for essays (CASE). We build 3.9K content, 15.7K organization, and 0.9K language synthetic samples for AES model training.
* We introduce EssayCoT prompting for essay feedback generation, which significantly improves the helpfulness of feedback.
* We propose FABRIC, a pipeline that generates both scores and feedback leveraging DREsS, CASE, and EssayCoT. We deploy FABRIC with the aim of exploring its practical application in English writing education.

## 2 Related Work

### Automated Essay Scoring

Automated essay scoring (AES) systems are used in evaluating and scoring student essays based on a given prompt. However, only a limited number of rubric-based datasets are available for AES, and their utility is limited because the rubrics are not consistent. Furthermore, an AES dataset has to be annotated by experts in English education, considering that the scoring task requires not only proficiency in English but also pedagogical knowledge of English writing. To the best of our knowledge, a real-world AES dataset has not yet been established, as existing AES datasets make use of scores annotated by non-experts in English education.

#### 2.1.1 AES Datasets

**ASAP.** The ASAP dataset 1 is widely used in AES tasks, including eight different prompts. Six out of eight prompt sets (P1-6) have a single overall score, and only two prompts (P7-8) are rubric-based datasets. These two rubric-based prompts consist of 1,569 and 723 essays for each respective prompt. The two prompt sets even have distinct rubrics and score ranges, which poses a challenge in leveraging both datasets for training rubric-based models.
The essays are graded by non-expert annotators, though the essays were written by Grade 7-10 students in the US.

Footnote 1: [https://www.kaggle.com/c/asap-aes](https://www.kaggle.com/c/asap-aes)

**ASAP++.** Mathias and Bhattacharyya (2018) manually annotated different attributes of essays in ASAP Prompts 1 to 6, which only have a single overall score. ASAP++ P1-2 are argumentative essays, while P3-6 are source-dependent essays. However, most samples in ASAP++ were annotated by a single annotator; the annotators were non-experts, including non-native speakers of English. Moreover, each prompt set of ASAP++ has attributes that differ from the others, which would need to be unified to fully leverage the dataset for an AES model.

**ICNALE Edited Essays.** ICNALE Edited Essays (EE) v3.0 (Ishikawa, 2018) presents rubric-based essay evaluation scores and fully edited versions of essays written by EFL learners from 10 countries in Asia. The essays were evaluated according to 5 rubrics: content, organization, vocabulary, language use, and mechanics, following the ESL Composition Profile (Jacobs et al., 1981). Even though the essays are written by EFL learners, the essays are rated and edited by only five native English speakers who are not experts in the domain of English writing education. In addition, it is not openly accessible and only consists of 639 samples.

**TOEFL11.** The TOEFL11 corpus (Blanchard et al., 2013) from ETS introduced 12K TOEFL iBT essays, which are no longer publicly accessible. TOEFL11 only provides a general score for essays in 3 levels (low/mid/high), which is insufficient for building a well-performing AES system.

#### 2.1.2 AES Models

Recent AES models can be categorized into two distinct types: holistic scoring models and rubric-based scoring models.

**Holistic AES.** The majority of previous studies used the ASAP dataset for training and evaluation, aiming to predict only the overall score of the essay (Tay et al., 2018; Cozma et al., 2018; Wang et al., 2018; Yang et al., 2020). Enhanced AI Scoring Engine (EASE) 2 is a commonly used, open-source AES system based on feature extraction and statistical methods. In addition, Taghipour and Ng (2016) and Xie et al. (2022) released models based on recurrent neural networks and a neural pairwise contrastive regression (NPCR) model, respectively. However, only a limited number have publicly released their models and code, highlighting the need for additional publicly available data and further validation of existing models.

Footnote 2: [https://github.com/edx/ease](https://github.com/edx/ease)

**Rubric-based AES.** The scarcity of publicly available rubric-based AES datasets poses significant obstacles to the advancement of AES research. There are industry-driven services such as Intelligent Metric® (Rudner et al., 2006) and E-rater® (Attali and Burstein, 2006) and datasets from ETS (Blanchard et al., 2013), but none of them are accessible to the public. In order to facilitate AES research in the academic community, it is crucial to release a publicly available rubric-based AES dataset and baseline model.

### Essay Feedback Generation

**Feedback Generation.** Though recent studies assume that LLMs can be used to facilitate education innovation by providing real-time and individualized feedback (Yan et al., 2023; Kasneci et al., 2023), no study has addressed detailed approaches for feedback generation in education using LLMs, to the best of our knowledge. Peng et al.
(2023) demonstrate that LLM performances on task-oriented dialogue and open-domain question answering dramatically improve with access to golden knowledge, suggesting the benefit of incorporating more specific and targeted knowledge into LLMs. This suggests that appropriate golden knowledge, such as rubric explanations and accurate scores on essays, can nudge LLMs to generate better feedback on essay writing.

**Feedback Quality Evaluation.** Zheng et al. (2023) evaluate the quality of responses of LLM-based assistants to open-ended questions using a holistic approach, considering four criteria: helpfulness, relevance, accuracy, and level of detail. Wang et al. (2023) evaluate responses generated by current LLMs in terms of helpfulness and acceptance, indicating which response is better by labeling win, tie, or lose. Jia et al. (2021) classify features in peer-review comments into three types: suggestion, problem, and positive tone.

## 3 FABRIC Pipeline

We have examined the specific needs of the stakeholders in EFL education for both scores and feedback on essays through a group interview with six students and a written interview with three instructors. The interview details are in Appendix A.1. Along with AES for essay scores, we propose an essay feedback generation task to meet the needs of EFL learners for immediate and specific feedback on their essays. Specifically, the feedback generation task involves understanding a student's essay and generating feedback under three rubrics: content, organization, and language (Cumming, 1990; Ozfidan and Mitchell, 2022). The objective is to provide feedback that is helpful, relevant, accurate, and specific (Zheng et al., 2023) for both students and instructors. In this section, we present FABRIC, a serial combination of rubric-based AES models (§3.1) and rubric-based feedback generation using EssayCoT (§3.2).

### Rubric-based AES Models

We fine-tune BERT for each rubric using 1.7K essays from DREsS (§3.1.1), 2.9K essays from standardized data (§3.1.2), and 1.3K essays augmented by CASE (§3.1.3). BERT-based model architectures are the state-of-the-art approach in AES (Devlin et al., 2019), and there are no significant improvements in AES from using other pre-trained language models (PLMs) (Xie et al., 2022). Experimental results of rubric-based AES with different PLMs are provided in Appendix B.2. A minimal sketch of this per-rubric fine-tuning setup is given below.

#### 3.1.1 Dataset Collection

**Dataset Details.** DREsS includes 1,782 essays on 22 prompts, with 313.36 words and 21.19 sentences on average. Each sample in DREsS includes the student's written essay, the essay prompt, rubric-based scores (content, organization, language), the total score, the class division (intermediate, advanced), and the test type (pre-test, post-test). The essays are scored on a range of 1 to 5, with increments of 0.5, based on the three rubrics: content, organization, and language. We chose these three rubrics as standard criteria for scoring EFL essays, following previous studies in language education (Cumming, 1990; Ozfidan and Mitchell, 2022). Detailed explanations of the rubrics are shown in Table 1. The essays are written by undergraduate students enrolled in EFL writing courses at a college in South Korea from 2020 to 2023. Most students are Korean, and their ages span from 18 to 22, with an average of 19.7. In this college, there are two divisions of the EFL writing class: intermediate and advanced. The division is based on students' TOEFL writing scores (15-18 for intermediate and 19-21 for advanced).
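The following is a minimal sketch of the per-rubric fine-tuning setup described at the start of §3.1. It assumes the Hugging Face `transformers` and `datasets` libraries; the checkpoint, CSV column names, and hyperparameters are illustrative choices, not the exact configuration behind the reported results.

```python
# Minimal per-rubric fine-tuning sketch (one regression model per rubric).
# Assumed: Hugging Face transformers/datasets; illustrative column names and hyperparameters.
from datasets import load_dataset
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          DataCollatorWithPadding, Trainer, TrainingArguments)

RUBRICS = ["content", "organization", "language"]  # DREsS scores in [1.0, 5.0]

def train_rubric_model(rubric, train_csv="dress_train.csv", val_csv="dress_val.csv"):
    tok = AutoTokenizer.from_pretrained("bert-base-uncased")
    ds = load_dataset("csv", data_files={"train": train_csv, "validation": val_csv})

    def preprocess(batch):
        enc = tok(batch["essay"], truncation=True, max_length=512)
        enc["labels"] = [float(s) for s in batch[rubric]]  # regression target
        return enc

    ds = ds.map(preprocess, batched=True, remove_columns=ds["train"].column_names)

    # num_labels=1 with float labels -> mean-squared-error regression head
    model = AutoModelForSequenceClassification.from_pretrained(
        "bert-base-uncased", num_labels=1, problem_type="regression")

    args = TrainingArguments(output_dir=f"aes-{rubric}", learning_rate=2e-5,
                             per_device_train_batch_size=16, num_train_epochs=5)
    Trainer(model=model, args=args, train_dataset=ds["train"],
            eval_dataset=ds["validation"],
            data_collator=DataCollatorWithPadding(tok)).train()

for rubric in RUBRICS:
    train_rubric_model(rubric)
```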
During the course, students are asked to write an in-class timed essay for 40 minutes both at the start (pre-test) and the end of the semester (post-test) to measure their improvement.

**Annotator Details.** We collect scoring data from 11 instructors, who served as the teachers of the students who wrote the essays. All annotators are experts in English education or Linguistics and are qualified to teach EFL writing courses at a college in South Korea. To ensure consistent and reliable scoring across all instructors, they all participated in training sessions with a scoring guide and in norming sessions where they developed a consensus on scores using two sample essays. Additionally, there was no significant difference among the instructors' score distributions over the whole dataset, as tested by one-way ANOVA and Tukey HSD at a p-value of 0.05.

\begin{table} \begin{tabular}{p{85.4pt}|p{284.5pt}} \hline \hline Content & Paragraphs are well-developed and relevant to the argument, supported with strong reasons and examples. \\ \hline Organization & The argument is very effectively structured and developed, making it easy for the reader to follow the ideas and understand how the writer is building the argument. Paragraphs use coherence devices effectively while focusing on a single main idea. \\ \hline Language & The writing displays sophisticated control of a wide range of vocabulary and collocations. The essay follows grammar and usage rules throughout the paper. Spelling and punctuation are correct throughout the paper. \\ \hline \hline \end{tabular} \end{table} Table 1: Explanation of rubrics

#### 3.1.2 Standardizing the Existing Data

We standardize three existing rubric-based datasets to align with the three rubrics in DREsS: content, organization, and language. We unify ASAP sets 7 and 8, which are the only rubric-based sets in ASAP. ASAP prompt set 7 includes four rubrics - ideas, organization, style, and convention - while prompt set 8 contains six rubrics - ideas and content, organization, voice, word choice, sentence fluency, and convention. Both sets provide scores ranging from 0 to 3. For the language rubric, we first create synthetic labels based on a weighted average. This involves assigning a weight of 0.66 to style and 0.33 to convention in set 7, and assigning equal weights to voice, word choice, sentence fluency, and convention in set 8. For the content and organization rubrics, we directly use the existing rubrics in the dataset (ideas for content; organization unchanged). We then rescale the scores of all rubrics into a range of 1 to 5. We repeat the same process with ASAP++ sets 1 and 2, which have the same attributes as ASAP sets 7 and 8. Similarly, for the ICNALE EE dataset, we unify vocabulary, language use, and mechanics into the language rubric with weights of 0.4, 0.5, and 0.1, respectively. In the process of consolidating the writing assessment criteria, we sought professional consultation from EFL education experts and strategically grouped together those components that evaluate similar aspects.
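As a concrete illustration of this unification, the following is a minimal sketch for ASAP set 7. The field names are illustrative, the weights and score ranges follow the description above, and the final snap to 0.5-point increments (to mirror the DREsS scale) is our assumption rather than a stated detail.

```python
# Minimal sketch of the rubric unification for ASAP set 7 (illustrative field names).
def rescale(score, lo, hi, new_lo=1.0, new_hi=5.0):
    """Linearly map a score from [lo, hi] onto the DREsS range [new_lo, new_hi]."""
    return new_lo + (score - lo) * (new_hi - new_lo) / (hi - lo)

def unify_asap7(row):
    """row: ASAP-7 rubric scores ('ideas', 'organization', 'style', 'conventions'), each on 0-3."""
    unified = {
        "content": row["ideas"],
        "organization": row["organization"],
        "language": 0.66 * row["style"] + 0.33 * row["conventions"],  # synthetic label
    }
    # Rescale to 1-5 and snap to 0.5 increments (assumption, to mirror the DREsS scale).
    return {r: round(rescale(s, 0, 3) * 2) / 2 for r, s in unified.items()}

print(unify_asap7({"ideas": 2, "organization": 3, "style": 2, "conventions": 1}))
# {'content': 3.5, 'organization': 5.0, 'language': 3.0}
```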
#### 3.1.3 Synthetic Data Construction

To overcome the scarcity of data, we construct synthetic data for rubric-based AES. We introduce a corruption-based augmentation strategy for essays (CASE), which starts with a _well-written_ essay and incorporates a certain portion of sentence-level errors into the synthetic essay. In subsequent experiments, we define _well-written_ essays as essays that scored 4.5 or 5.0 out of 5.0 on each criterion.

\[\mathtt{n}(S_{c})=\lfloor\mathtt{n}(S_{E})\times(5.0-x_{i})\div 5.0\rceil \tag{1}\]

\(\mathtt{n}(S_{c})\) is the number of corrupted sentences in the synthetic essay, and \(\mathtt{n}(S_{E})\) is the number of sentences in the _well-written_ essay that serves as the basis for the synthetic essay. \(x_{i}\) denotes the score of the synthetic essay.

**Content.** We substitute randomly-sampled sentences from _well-written_ essays with out-of-domain sentences from different prompts. This is based on the assumption that sentences in _well-written_ essays support the given prompt's content, meaning that sentences from essays on different prompts convey different content. Therefore, a larger number of substitutions implies a higher level of corruption in the content of the essay.

**Organization.** We swap two randomly-sampled sentences in a _well-written_ essay and repeat this process based on the synthetic score, supposing that sentences in _well-written_ essays are systematically structured in order. A larger number of swaps implies a higher level of corruption in the organization of the essay.

**Language.** We substitute randomly-sampled sentences with ungrammatical sentences and repeat this process based on the synthetic score. We extract 605 ungrammatical sentences from the BEA-2019 data for the shared task of grammatical error correction (GEC) (Bryant et al., 2019). We define ungrammatical sentences as those with more than 10 edits, which corresponds to the 98th percentile. The more substitutions, the more corruption is introduced into the grammar of the essay. We set a high threshold for ungrammatical sentences because current GEC datasets may include inherent noise, such as erroneous or incomplete corrections (Rothe et al., 2021). A minimal code sketch of these three corruption rules is given at the end of this section.

#### 3.1.4 Data Statistics

Table 2 shows the number of samples per rubric. We use the data for training and validating our AES model. It consists of our newly released DREsS dataset, unified samples of existing datasets (ASAP Prompts 7-8, ASAP++ Prompts 1-2, and ICNALE EE), and synthetic data augmented using CASE. In particular, we generate synthetic data with CASE under an ablation study to explore the optimal number of samples.

\begin{table} \begin{tabular}{l|r r r} \hline \hline & Content & Organization & Language \\ \hline DREsS & 1,782 & 1,782 & 1,782 \\ \hline ASAP P7 & 1,569 & 1,569 & 1,569 \\ ASAP P8 & 723 & 723 & 723 \\ ASAP++ P1 & 1,785 & 1,785 & 1,785 \\ ASAP++ P2 & 1,800 & 1,800 & 1,800 \\ ICNALE EE & 639 & 639 & 639 \\ \hline CASE & 3,924 & 15,696 & 981 \\ \hline Total & 12,222 & 23,994 & 14,845 \\ \hline \hline \end{tabular} \end{table} Table 2: Data size

### EssayCoT

We introduce EssayCoT (Figure 2), a simple but efficient prompting method, to enhance the performance of essay feedback generation. Chain-of-Thought (CoT) (Wei et al., 2022) is a few-shot prompting technique that enhances problem-solving by incorporating intermediate reasoning steps, guiding LLMs toward the final answer. However, it requires significant time and effort to provide human-written few-shot examples. In particular, CoT may not be an efficient approach for essay feedback generation, considering the substantial length of the essay and feedback. Instead, EssayCoT can perform CoT in a zero-shot setting without any additional human effort, since it leverages essay scores that are automatically predicted by the AES model. It utilizes the three rubric-based scores on content, organization, and language as a rationale for essay feedback generation.

Figure 2: Prompt for EssayCoT
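The corruption rules of §3.1.3 can be summarized in a short sketch. It is a minimal illustration only: the helper corpora (out-of-domain sentences from other prompts, ungrammatical BEA-2019 sentences) are passed in by the caller, sentence splitting is omitted, and the rounding in Eq. (1) is implemented with Python's built-in `round`.

```python
# Minimal sketch of CASE (Eq. (1) plus the three rubric-specific corruption rules).
import random

def n_corrupted(n_sentences, target_score):
    """Eq. (1): number of sentences to corrupt for a synthetic score x_i in [1, 5]."""
    return round(n_sentences * (5.0 - target_score) / 5.0)

def corrupt(sentences, target_score, rubric,
            ood_sentences=None, ungrammatical_sentences=None, seed=0):
    """Turn one well-written essay (list of sentences) into a synthetic (essay, score) pair."""
    rng = random.Random(seed)
    sents = list(sentences)
    for _ in range(n_corrupted(len(sents), target_score)):
        if rubric == "content":        # substitute out-of-domain sentences
            sents[rng.randrange(len(sents))] = rng.choice(ood_sentences)
        elif rubric == "organization": # swap randomly-sampled sentence pairs
            i, j = rng.sample(range(len(sents)), 2)
            sents[i], sents[j] = sents[j], sents[i]
        elif rubric == "language":     # substitute ungrammatical sentences
            sents[rng.randrange(len(sents))] = rng.choice(ungrammatical_sentences)
    return " ".join(sents), target_score
```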
## 4 Experimental Results

In this section, we present the performance of the AES model with CASE (§4.1) and essay feedback generation with EssayCoT (§4.2).

### Automated Essay Scoring

The performance of AES models is mainly evaluated by the consistency between the predicted scores and the gold standard scores, conventionally calculated using the quadratic weighted kappa (QWK) score. Table 3 shows the experimental results with augmentation using CASE on the combination of the DREsS dataset and the unified datasets (ASAP, ASAP++, and ICNALE EE). Detailed experimental settings are described in Appendix B.1. Fine-tuned BERT exhibits scalable results with the expansion of training data. The model trained with a combination of our approaches outperforms other baseline models by 45.44%, demonstrating its effectiveness. The results of existing holistic AES models underscore the need to examine existing AES models using new datasets. The QWK scores of EASE and NPCR drop from 0.699 to 0.360 and from 0.817 to 0.507, respectively, compared to the QWK scores of the models trained on ASAP. This implies that (1) our dataset may be more complex, considering that ASAP has 4-6 score classes while our DREsS contains 9 classes on each rubric, with scores ranging from 1 to 5 in increments of 0.5, and 25 classes with a score range of 3 to 15 on the total score, and (2) the existing models might be overfitted to ASAP. Another limitation of these models is their inability to compute rubric-based scores. Asking gpt-3.5-turbo to score an essay achieved the worst performance among all, showing high variance among essays with the same ground-truth score. The detailed results for ChatGPT in different prompt settings are provided in Table 7 in Appendix B.3.

We perform an ablation study to explore the effects of CASE and find the optimal number of CASE operations for each rubric. In Figure 3, we investigate how \(n_{aug}\), the number of synthetic samples generated per score class for each original sample, affects the performance over all rubrics for \(n_{aug}=\{0.125,0.25,0.5,1,2,4,8\}\). CASE on the content, organization, and language rubrics shows its best performance at \(n_{aug}\) of 0.5, 2, and 0.125, respectively, corresponding to 4.5, 18, and 1.125 synthetic essay-score pairs generated per original essay. We suppose that the detailed augmentation strategy for each rubric and the small size of the original data affect the optimal number of CASE operations. Organization, where corruption was made within the essay and is irrelevant to the size of the original data, showed the highest \(n_{aug}\). Content, where the corrupted sentences were sampled from 874 _well-written_ essays with 21.2 sentences on average, reported a higher \(n_{aug}\) than language, where the corrupted sentences were sampled from 605 ungrammatical sentences.

Figure 3: Ablation experimental results for CASE. \(n_{aug}\) is the number of synthetic data by each class per original data among all classes.

### Essay Feedback Generation

We adapt the criteria for quality evaluation of LLM responses (Zheng et al., 2023) and re-define those criteria to fit our domain of feedback generation. To overcome the limitation of previous research with holistic evaluation, we assess the feedback quality by each criterion.

* Level of detail: the feedback is specific, supported with details.
* Accuracy: the feedback content provides accurate information according to the essay.
* Relevance: the feedback is provided according to the understanding of the essay criteria.
* Helpfulness: the feedback is helpful for students to improve the quality of writing.
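As an illustration of how an EssayCoT prompt (§3.2) can be assembled from the predicted rubric scores, the following is a minimal sketch. The exact wording used in our experiments is given in Figure 2, so the phrasing, the condensed rubric reminders, and the legacy `openai` chat-completions call below are illustrative assumptions rather than the deployed prompt.

```python
# Illustrative EssayCoT prompt assembly; the deployed wording follows Figure 2.
import openai  # legacy (<1.0) SDK interface shown for brevity

RUBRIC_HINTS = {  # condensed reminders paraphrasing Table 1
    "content": "well-developed paragraphs with strong reasons and examples",
    "organization": "effective structure and coherent, single-idea paragraphs",
    "language": "control of vocabulary, grammar, spelling, and punctuation",
}

def essaycot_prompt(essay, scores):
    """scores: AES predictions, e.g. {'content': 3.0, 'organization': 2.5, 'language': 2.5}."""
    score_lines = "\n".join(f"- {r}: {scores[r]} / 5.0 ({RUBRIC_HINTS[r]})" for r in RUBRIC_HINTS)
    return ("You are an EFL writing instructor. The essay below received these rubric scores:\n"
            f"{score_lines}\n\n"
            "Given these scores, write specific, constructive feedback for each rubric, "
            "pointing to concrete parts of the essay.\n\n"
            f"Essay:\n{essay}")

def generate_feedback(essay, scores, model="gpt-3.5-turbo"):
    resp = openai.ChatCompletion.create(
        model=model,
        messages=[{"role": "user", "content": essaycot_prompt(essay, scores)}])
    return resp["choices"][0]["message"]["content"]
```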
Table 4 shows two different language feedback examples for the same essay with a score of 2.5 out of 5.0. These examples are generated using different prompts: a standard prompt without the score and an EssayCoT prompt with the score. The green text indicates detailed support and examples provided by the essay (level of detail), and the blue text describes the overall evaluation of the essay regarding the language criterion. Comparing the blue text, EssayCoT suggests improvements (helpfulness) such as _'errors and awkward phrasing'_ and _'punctuation and capitalization'_, while standard prompting only praises the language use, such as _'vocabulary and collocations'_. Considering that the language score of the essay is 2.5 out of 5.0, the feedback generated by EssayCoT appears to be more accurate. The orange text in the feedback generated by the standard prompt is irrelevant to the language criterion (relevance) and closely resembles the organization explanation in Table 1.

\begin{table} \begin{tabular}{p{199.2pt}|p{199.2pt}} \hline \hline Standard Prompting & EssayCoT Prompting \\ \hline The language used in the essay is sophisticated and displays a wide range of vocabulary and collocations. The writer follows grammar and usage rules throughout the paper, and spelling and punctuation are correct. The essay is well-written and easy to read. Overall, the language used in the essay is strong and effectively communicates the writer's argument. & The essay displays some control of vocabulary and grammar, but there are also several errors and awkward phrasings throughout. For example, the writer uses the phrase 'nowdays' instead of 'nowadays', and there are several instances of awkward phrasing and word choice. Additionally, there are several errors in punctuation and capitalization. The essay would benefit from more careful proofreading and editing, as well as more attention to precise and effective language use. \\ \hline \hline \end{tabular} \end{table} Table 4: Language feedback generated with standard prompting and EssayCoT prompting for the same essay (language score: 2.5 out of 5.0).

To evaluate the quality of the feedback generated with these two prompting techniques (standard vs. EssayCoT), we recruited 13 English education experts with a Secondary School Teacher's Certificate (Grade II) for English Language, licensed by the Ministry of Education, Korea. These annotators were asked to evaluate both types of rubric-based feedback for the same essay on a 7-point Likert scale for each rubric. Then, they were asked to vote on their general preference between the two feedback types with three options: A is better, B is better, and no difference. We randomly sampled 20 essays, balancing the total scores of the essays, and allocated 7 annotators to each essay. Results show that 52.86% of the annotators prefer feedback from EssayCoT prompting, compared to only 28.57% who prefer feedback from standard prompting. The remaining 18.57% reported no difference between the two types of feedback. This preference was shown to be statistically significant at a \(p\)-value of < 0.05 using the Chi-squared test (a short numerical check is given at the end of this subsection).

Figure 4 presents further evaluation results on the two types of feedback. EssayCoT prompting performs better in terms of accuracy, relevance, and especially helpfulness, which achieves statistical significance across all rubrics. Feedback from standard prompting without essay scores tends to generate general compliments rather than criticisms and suggestions. EFL learners prefer constructive corrective feedback rather than positive feedback, according to the qualitative interview described in Appendix A.1. The only area where standard prompting performed better was the level of detail of the content feedback. This suggests that prompting without scores allows a higher degree of freedom, which enables the model to generate more detailed feedback. Nevertheless, as it scored worse on all other criteria, we suppose that this freedom was not particularly helpful for essay feedback generation. The comparison of content feedback in Appendix B.4 shows that standard prompting only provided a specific summary of the essay instead of suggestions or criticisms. Furthermore, it even provided inaccurate information in the language feedback. As shown in Table 4, the feedback generated with standard prompting incorrectly indicated that the spelling is correct.
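The preference significance check can be reproduced with a few lines, under the assumption that the reported percentages correspond to 140 individual votes (20 essays x 7 annotators, i.e., 74 / 40 / 26 votes) and that the test is a goodness-of-fit test against an even three-way split:

```python
# Chi-squared goodness-of-fit on the preference votes (assumed counts: 74/40/26 of 140).
from scipy.stats import chisquare

votes = [74, 40, 26]        # EssayCoT better / standard better / no difference
chi2, p = chisquare(votes)  # default expectation: a uniform split across the three options
print(f"chi2 = {chi2:.1f}, p = {p:.1e}")  # p is far below 0.05
```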
## 5 Prototype Deployment and Evaluation

We deployed our pipeline in the college's EFL writing courses using the RECIPE [10] platform to investigate both its usage and how it is perceived by students and instructors. The participants of our study were 33 students from EFL writing courses (intermediate: 11, advanced: 22). The student cohort comprises 32 Korean students and 1 Italian student, of whom 12 are female and 21 are male. Students were asked to self-assess their essays, given a short description of each rubric as a guide. Subsequently, they received the scores and feedback generated by our system. Then they evaluated the helpfulness of the scores and feedback and engaged further by having a conversation with ChatGPT to better understand the feedback and to improve their essays. A detailed set of the questions posed to students is described in Appendix C.1.1.

Figure 5 presents the responses of the EFL writing course students regarding the perceived performance and the learning experiences with the outputs of our pipeline. On average, students evaluated the performance of the AES model as well as the style and the quality of the generated feedback as 6 out of 7 (Figure 5a). They reported that confidence in their essay quality and understanding of each writing rubric significantly improved due to the engagement on our platform embedded with FABRIC (Figure 5b).

Figure 5: Students' responses about scores and feedback and their perceptions in 7-point Likert scale questions (1: strongly disagree, 7: strongly agree). Asterisk denotes statistical significance tested by the Wilcoxon test at a \(p\) value of < 0.05.

## 6 Discussion

In this work, we propose a fully-automated pipeline capable of scoring and generating feedback on students' essays. As we investigated in §A.1 and simulated in §5, this pipeline could assist both EFL learners and instructors by generating rubric-based scores and feedback promptly. In this section, we discuss plausible usage scenarios to advance FABRIC and integrate the pipeline into general education contexts.

**Human-in-the-Loop Pipeline.** Though our main finding shows the possibility of fully automating essay scoring and feedback generation, we also suggest a direct extension of our work that can be further developed by integrating a human-in-the-loop component into the pipeline to enhance the teaching and learning experience. As instructors can modify the style or contents of the feedback, FABRIC can be enhanced by implementing a personalized feedback generation model that aligns seamlessly with instructors' pedagogical objectives and teaching styles. Therefore, students can receive feedback that is more trustworthy and reliable, which empowers them to engage actively in the learning process. In addition, feedback generation can be developed to provide personalized feedback for students, aligned with their different needs and learning styles.

**Check for Students' Comprehension.** Instructors can incorporate our pipeline into their class materials to identify recurring issues in students' essays, potentially saving significant time compared to manual reviews. Our pipeline can be effectively used to detect similar feedback provided to a diverse set of students, which often indicates common areas of difficulty. By identifying these common issues, instructors can create targeted, customized, individualized educational content that addresses the specific needs of their students, thereby enhancing the overall learning experience.
## 7 Conclusion

This paper contributes to English writing education by releasing new data, introducing novel augmentation strategies for automated essay scoring, and proposing EssayCoT prompting for essay feedback generation. Recognizing the limitations of previous holistic AES studies, we present DREsS, a dataset specifically designed for rubric-based essay scoring. Additionally, we suggest CASE, corruption-based augmentation strategies for essays, which utilize DREsS to generate pairs of synthetic essays and corresponding scores by injecting feasible sentence-level errors. Through in-depth focus group interviews with EFL learners, we identify a strong demand for both scores and feedback in EFL writing education, leading us to define a novel task, essay feedback generation. To address this task, we propose FABRIC, a comprehensive pipeline for score and feedback generation on student essays, employing essay scores for feedback generation as Essay Chain-of-Thought (EssayCoT). Our results show that data augmented with CASE significantly improve the performance of AES, achieving QWK scores of about 0.6, and that feedback generated by EssayCoT prompting with essay scores is significantly preferred over standard prompting by English education experts. We finally deployed our FABRIC pipeline in real-world EFL writing education, exploring the students' practical use of AI-generated scores and feedback. We envision several scenarios for the implementation of our proposed pipeline in real-world classrooms, taking human-computer interaction into consideration. This work aims to inspire researchers and practitioners to delve deeper into NLP-driven innovation in English writing education, with the ultimate goal of advancing the field.

## Limitations

Our augmentation strategy primarily starts from _well-written_ essays and generates erroneous essays and their corresponding scores; therefore, it is challenging to synthesize _well-written_ essays with our method. We believe that _well-written_ essays can be reliably produced by LLMs, which have demonstrated strong writing capabilities, especially in English. We utilize ChatGPT, a black-box language model, for feedback generation. As a result, our pipeline lacks transparency and does not provide explicit justifications or rationales for the feedback generated. We acknowledge the need for further research to develop models that produce more explainable feedback, leaving room for future exploration.

## Ethics Statement

We expect that this paper can considerably contribute to the development of NLP for good within the field of NLP-driven assistance in EFL writing education. All studies in this research project were performed under our institutional review board (IRB) approval. We have thoroughly addressed ethical considerations throughout our study, focusing on (1) collecting essays from students, (2) validating our pipeline in EFL courses, and (3) releasing the data. After the EFL courses ended, we asked the students who had taken them to share their essays written through the course, to prevent any potential effects on their scores or grades. There was no discrimination when recruiting and selecting EFL students and instructors regarding any demographics, including gender and age. We set the wage per session to be above the minimum wage in the Republic of Korea in 2023 (KRW 9,260 \(\approx\) USD 7.25) 3.
They were free to participate in or drop out of the experiment, and their decision did not affect the scores or the grade they received. Footnote 3: [https://www.minimummage.go.kr/](https://www.minimummage.go.kr/) We deeply considered the potential risk associated with releasing a dataset containing human-written essays in terms of privacy and personal information. We will filter out all sensitive information related to their privacy and personal information by (1) rule-based code and (2) human inspection. To address this concern, we will run a checklist, and only the researchers or practitioners who submit the checklist can access our data.
2306.17273
Quantum sensing via magnetic-noise-protected states in an electronic spin dyad
Extending the coherence lifetime of a qubit is central to the implementation and deployment of quantum technologies, particularly in the solid-state where various noise sources intrinsic to the material host play a limiting role. Here, we theoretically investigate the coherent spin dynamics of a hetero-spin system formed by a spin S=1 featuring a non-zero crystal field and in proximity to a paramagnetic center S'=1/2. We capitalize on the singular energy level structure of the dyad to identify pairs of levels associated to magnetic-field-insensitive transition frequencies, and theoretically show that the zero-quantum coherences we create between them can be remarkably long-lived. Further, we find these coherences are selectively sensitive to 'local' - as opposed to 'global' - field fluctuations, suggesting these spin dyads could be exploited as nanoscale gradiometers for precision magnetometry or as probes for magnetic-noise-free electrometry and thermal sensing.
Carlos A. Meriles, Pablo R. Zangara, Daniela Pagliero
2023-06-29T19:27:17Z
http://arxiv.org/abs/2306.17273v1
# Quantum sensing via magnetic-noise-protected states in an electronic spin dyad

###### Abstract

Extending the coherence lifetime of a qubit is central to the implementation and deployment of quantum technologies, particularly in the solid-state where various noise sources intrinsic to the material host play a limiting role. Here, we theoretically investigate the coherent spin dynamics of a hetero-spin system formed by a spin \(S=1\) featuring a non-zero crystal field and in proximity to a paramagnetic center \(S^{\prime}=1/2\). We capitalize on the singular energy level structure of the dyad to identify pairs of levels associated with magnetic-field-insensitive transition frequencies, and theoretically show that the zero-quantum coherences we create between them can be remarkably long-lived. Further, we find these coherences are selectively sensitive to 'local' -- as opposed to 'global' -- field fluctuations, suggesting these spin dyads could be exploited as nanoscale gradiometers for precision magnetometry or as probes for magnetic-noise-free electrometry and thermal sensing.

## 1 Introduction

Paramagnetic color centers in wide bandgap semiconductors are attracting broad interest as a platform for quantum information processing in the solid state, most notably, due to their favorable spin properties[1]. Indeed, the relative robustness of spin angular momentum as compared to other degrees of freedom often translates into long coherence lifetimes[2, 3], even at room temperature[4]. Magnetic fluctuations from the environment -- e.g., created by the surrounding bath of electronic and nuclear spins -- often set a limit on the time duration of these coherences, the reason why much effort has been devoted to reducing their impact. Adding to "static" strategies -- relying on higher sample purity and selective depletion of nuclear-spin-active isotopes from the host crystal -- "dynamical" schemes have been developed that effectively decouple the spin qubit from its environment, hence leading to extended coherence times[5]. These methods are proving useful not only in the context of quantum information processing but also for metrology, where they are being exploited to selectively highlight interactions otherwise obscured through quick relaxation of the probe. One complementary strategy to extending the qubit coherent evolution is to tune the Hamiltonian to render the dynamics selectively insensitive to deleterious sources of noise, most importantly, magnetic fluctuations[6]. This is usually attained by bringing the system to conditions where the energies \(E_{1}\), \(E_{2}\) of two eigenstates become unresponsive to changes in the magnetic field (i.e., where \(\partial E_{1,2}/\partial B\!\sim\!0\)). While fluctuation-insensitive spin dynamics have been observed at zero or low magnetic fields[7, 8, 9, 10, 11], level anti-crossings at higher fields often provide an alternative route to mitigating the effects of magnetic noise; the degree of protection depends on the curvature of the energy levels at the anti-crossing and hence disappears for sufficiently strong detuning. This "parametric" approach has been extensively applied to atomic clocks where hyperfine transitions are used as standards[12]. More recently, similar ideas have been adapted to solid state systems, including Bi and P donors in silicon[13, 14], N impurities in diamond[7], and rare-earth dopants in garnets[15, 16]; tuned Hamiltonians have also been developed to protect superconducting qubits from charge, flux, or current noise[17, 18].
While the methods above typically build on the properties of individual atom-like systems, optimal control techniques[19] and, most notably, entanglement[20] between individually addressable qubits provide yet another, arguably less explored path to enhanced sensing. For example, recent work with a spin-active color center hyperfine-coupled to neighboring nuclear spins capitalized on entangled states to demonstrate magnetic field detection with precision beyond the standard quantum limit [21]. Here we theoretically study a pair of interacting electronic spins featuring different spin numbers; for concreteness, we focus on a system comprising a nitrogen-vacancy (NV) center in diamond and a proximal spin-1/2 paramagnetic impurity (such as a substitutional neutral nitrogen impurity, the so-called P1 center [22]) but we later show the ideas can be broadly generalized. We first investigate the dynamics of the two-spin system near an energy anti-crossing to show that although level bending offers first-order protection against decoherence, the mechanism is impractically vulnerable to detuning of the operating magnetic field. We capitalize on these findings, however, to subsequently create two-spin, zero-quantum coherences, and show these states are robust against global magnetic field fluctuations to all orders, regardless the operating external magnetic field. Finally, we build on the underlying physical differences between the constituent spins in the dyad to show our approach can be exploited to implement magnetic-noise-insensitive electrometry or thermometry protocols. ## 2 Protecting Quantum Coherences ### Physical system Fig. 1a lays out the system under consideration: Spin \(S=1\) features a state triplet with a crystal field splitting \(\Delta\) while spin \(S^{\prime}=1/2\) represents a dipolarly coupled paramagnetic center in its proximity; we also assume a magnetic field \(B\) aligned along the quantization axis defined by the symmetry axis of spin \(S\). For a field \(B_{\rm m}\approx\Delta/2\), the energy difference between the \(m_{\rm S}=0\) and \(m_{\rm S}=-1\) states matches the Zeeman splitting between the \(m_{\rm S^{\prime}}=\pm\,1/2\) states of spin \(S^{\prime}\), a condition already exploited, e.g., to spin polarize the paramagnetic center and adjacent nuclei in the crystal host [23; 24; 25; 26]. Fig. 1b shows the energy \(E_{\rm S+S^{\prime}}\) eigenvalues for the combined spin system as a function of the applied magnetic field: At \(B_{\rm m}\), the dipolar coupling between the NV and the paramagnetic center produces a level anti-crossing between the \(|0,+1/2\rangle\) and \(|-1,-1/2\rangle\) branches in the diagram, whose gap is proportional to the inter-spin coupling \(J\). At the level anti-crossing, the two states hybridize into \(|\pm\rangle=1/\sqrt{2}\,\{|0,+1/2\rangle\pm|-1,-1/2\rangle\}\) and the transition frequency between the corresponding energies reaches a minimum, hence making coherences between these latter two states robust (to first order) against magnetic field fluctuations. More interestingly, though, the energy separation between \(|0,-1/2\rangle\) and \(|-1,+1/2\rangle\) is independent of the applied magnetic field meaning that single-quantum coherences between either level and \(|+\rangle\) or \(|-\rangle\) should also be protected (transition 1 in Fig. 1b). This is better seen in Fig. 
1c, where we plot the modified eigen-energies \(\tilde{E}_{S+S^{\prime}}=E_{S+S^{\prime}}+\delta E\) for \(\delta E=+|\gamma_{\rm e}|B\) (\(\gamma_{\rm e}\) denotes the electronic gyromagnetic ratio). Near \(B_{\rm m}\), none of the four lower branches depends on the magnetic field, with the consequence that state superpositions near the level anti-crossing must be longer-lived. The same also applies away from \(B_{\rm m}\) for coherences between \(|0,-1/2\rangle\) and \(|-1,+1/2\rangle\), though creating superpositions between these levels is more involved as they cannot be attained via microwave (MW) excitation alone. Below we investigate the dynamics of spin coherences both near and far away from the level anti-crossing and show that only the latter case provides practical levels of protection against magnetic noise. In what follows, we ignore the hyperfine coupling of either point defect with its nuclear host, a simplification justified herein given the comparatively slow nuclear spin dynamics (see Supplementary Information, Section I).

Figure 1: **Energetics of the two-spin system.** (a) We consider a spin \(S=1\) featuring a crystal field \(\Delta\) and coupled to a paramagnetic impurity \(S^{\prime}=1/2\) via a dipolar interaction of amplitude \(\mathcal{I}\); for all values, we assume the magnetic field direction coincides with that defined by the crystal field at spin \(S\). At the level-crossing field \(B_{\rm m}=\Delta/2\), the energy separation between the \(m_{\rm S}=0\) and \(m_{\rm S}=-1\) states of spin \(S\) coincides with the Zeeman splitting between the \(m_{S^{\prime}}=\pm\,1/2\) states of \(S^{\prime}\) (upper and lower energy diagrams, respectively). (b) Energy diagram of the combined system. Circled numbers denote individual transitions of distinct frequencies. (c) Same as in (b) but after adding a term \(\delta E=+|\gamma_{\rm e}|B\) to all energy levels.

### Coherent spin dynamics of the dyad near \(B_{\rm m}\)

To better assess the system response to magnetic fluctuations, we start by considering a semi-classical model where magnetic noise of amplitude \(\beta(t)\) stems from outside sources changing randomly over time with some characteristic rate (see Supplementary Information, Section I). Away from the level anti-crossing, the linear relation between the applied field and the transition frequencies of spins \(S\), \(S^{\prime}\) establishes a proportionality between the decoherence rate and the fluctuator-induced root-mean-square (rms) magnetic field \(\beta_{\rm rms}\). We can therefore gauge the effectiveness of the anti-crossing as a shield against noise by comparing the system response as we approach \(B_{\rm m}\). For future comparison, we first tune the MW frequency to the \(|0\rangle\leftrightarrow|-1\rangle\) transition and calculate the system evolution under a Hahn-echo protocol away from the level anti-crossing (Fig. 2a); consistent with experimental practice, we assume optical initialization and readout of spin \(S\) (here implicitly associated to an NV [27; 28]). We then make the magnetic field equal to \(B_{\rm m}\) and derive the system response in the typical limit where the excitation bandwidth of all MW pulses is greater than the anti-crossing gap (Fig. 2b).
Besides introducing a fast signal beating -- mainly arising from a double-quantum coupling term \(H_{\text{DQ}}=2\pi J_{\perp}(S_{+}S_{+}^{\prime}+S_{-}S_{-}^{\prime})\) exclusively active near \(B_{\text{m}}\), see Supplementary Information, Section I -- proximity to the level anti-crossing leads to significantly longer-lived coherences (here captured through the characteristic time \(T_{2,\text{m}}\) in a stretched exponential fit). Specifically, for the conditions assumed in the figure we find that the ratio \(\eta\equiv T_{2,\text{m}}/T_{2,\text{SQ}}\) between the single-quantum (SQ) coherence time of spin \(S\) at and away from \(B_{\text{m}}\) can be large (depending on the dipolar coupling and noise amplitude). This ratio remains unchanged if, rather than the Hahn-echo, we take the double electron-electron resonance (DEER) protocol as the reference because both lead to the same coherence lifetimes under the simplified conditions assumed herein (Fig. 2a).

Figure 2: **Coherence protection near the level anti-crossing.** (a) Hahn-echo and DEER coherent response of spin \(S\) away from \(B_{\text{m}}\). (b) When \(B=B_{\text{m}}\), both protocols coincide; the plot displays the calculated response of spin \(S\). In (a) and (b), \(\beta_{\text{rms}}=1\) \(\upmu\)T and \(J_{\perp}=0.15\) MHz. (c) Hahn-echo spin transverse relaxation time \(T_{2,\text{SQ}}\) of spin \(S\) as a function of the field detuning \(\delta B\equiv B-B_{\text{m}}\) for different \(S-S^{\prime}\) coupling constants. The side panels display the calculated coherence lifetime enhancement \(\eta\) (relative to \(T_{2,\text{SQ}}\) at a field sufficiently far away from \(B_{\text{m}}\)) as a function of the inter-spin coupling and spin-noise amplitude (upper and lower plots, respectively). For reference, we express the upper horizontal axis in the upper right-hand insert in terms of the distance \(r_{\text{max}}\) representing the maximum separation for color centers featuring a coupling of magnitude \(\mathcal{I}\). In all cases, we assume \(\mathcal{I}=J_{||}=J_{\perp}\).

Importantly, we reach the same conclusion even when we consider the differing rotation angles experienced by either spin in the dyad under a common MW field at \(B_{\rm m}\) (a direct consequence of the hetero-spin nature of the dyad, see Supplementary Information, Section I). Numerical modeling as a function of the external field, however, indicates that noise protection is limited to a narrow band. In particular, Fig. 2c shows that even in the regime of strongly coupled dyads (\(J_{\perp}=0.75\) MHz, corresponding to an inter-defect separation of \(\sim\)4 nm), robustness against decoherence is limited to a window \(\delta B_{\rm m}\sim 4\) \(\upmu\)T around \(B_{\rm m}\), hence making the system susceptible to slow fluctuations (e.g., stemming from temperature changes [29]). ### Protecting spin coherences far away from \(B_{\rm m}\) An intriguing characteristic in the energy diagram of Fig. 1c -- only exploited indirectly in Fig. 2 -- is that the energy separation between the \(|0,-1/2\rangle\) and \(|-1,+1/2\rangle\) levels remains constant (and equal to \(\Delta\)) at all fields, which suggests that coherences between these levels would be intrinsically protected against global magnetic fluctuations.
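This field-independence is straightforward to verify numerically. The short sketch below is given for illustration only and is not part of the original analysis: it builds the two-spin Hamiltonian in the product basis (crystal field, Zeeman terms, and the secular and double-quantum dipolar couplings) and checks that the \(|0,-1/2\rangle\leftrightarrow|-1,+1/2\rangle\) splitting does not move with the applied field, whereas the anti-crossing gap scales with the coupling. The crystal-field and gyromagnetic values are standard NV numbers; the coupling strengths are placeholders.

```python
import numpy as np

# Spin-1 (basis |+1>, |0>, |-1>) and spin-1/2 (basis |+1/2>, |-1/2>) operators.
Sz = np.diag([1.0, 0.0, -1.0])
Sp = np.sqrt(2.0) * np.diag([1.0, 1.0], k=1)     # spin-1 raising operator
sz = np.diag([0.5, -0.5])
sp = np.diag([1.0], k=1)                          # spin-1/2 raising operator
I2, I3 = np.eye(2), np.eye(3)

Delta, gamma_e = 2.870e9, 28.025e9                # NV crystal field (Hz), |gamma_e| (Hz/T)
J_par = J_perp = 0.15e6                           # placeholder dipolar couplings (Hz)

def H(B):
    """Two-spin Hamiltonian (in Hz) for a magnetic field B (in T) along the NV axis."""
    return (Delta * np.kron(Sz @ Sz, I2)
            + gamma_e * B * (np.kron(Sz, I2) + np.kron(I3, sz))
            + J_par * np.kron(Sz, sz)
            + J_perp * (np.kron(Sp, sp) + np.kron(Sp, sp).T))   # S+S'+ + S-S'-

B_m = Delta / (2.0 * gamma_e)                     # level anti-crossing field, ~51 mT
for B in (0.5 * B_m, B_m, 1.5 * B_m):
    h = H(B)
    # Product-basis order: |+1,+>, |+1,->, |0,+>, |0,->, |-1,+>, |-1,->
    gap = h[4, 4] - h[3, 3]                       # E(|-1,+1/2>) - E(|0,-1/2>), secular energies
    print(f"B = {1e3 * B:5.1f} mT : E(|-1,+1/2>) - E(|0,-1/2>) = {gap / 1e9:.6f} GHz")

E = np.sort(np.linalg.eigvalsh(H(B_m)))
print(f"anti-crossing gap at B_m : {(E[2] - E[1]) / 1e6:.2f} MHz (scales with the coupling)")
```

The printed splitting stays at \(\Delta\) (minus a small, field-independent shift from the secular coupling) at all three fields, while the gap of the hybridizing pair at \(B_{\rm m}\) grows with the assumed coupling strength.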
Since the operating magnetic field is largely inconsequential, we move \(B\) away from \(B_{\rm m}\), and write the Hamiltonian as \[H=\Delta S_{\rm z}^{2}+|\gamma_{\rm e}|B(S_{\rm z}+S_{\rm z}^{\prime})+2\pi J_{||}S_{\rm z}S_{\rm z}^{\prime}, \tag{1}\] where we assume the magnetic field and crystal field axis are aligned, and \(J_{||}\) is the secular dipolar coupling amplitude (Supplementary Information, Section I). Note that the double-quantum term \(H_{\rm DQ}\) -- responsible for the energy gap at the level crossing -- becomes non-secular away from \(B_{\rm m}\), and can hence be ignored. Limiting our description to the manifold spanned by states \(|0,+1/2\rangle\), \(|-1,+1/2\rangle\), \(|0,-1/2\rangle\), and \(|-1,-1/2\rangle\), spin \(S\) can be described through \(\tilde{S}\), a fictitious spin-1/2 operator, and the Hamiltonian takes the reduced, more convenient form \[\tilde{H}=(|\gamma_{\rm e}|B-\Delta)\tilde{S}_{\rm z}+(|\gamma_{\rm e}|B-\pi J_{||})S_{\rm z}^{\prime}+2\pi J_{||}\tilde{S}_{\rm z}S_{\rm z}^{\prime}. \tag{2}\] Following the energy diagram in Fig. 1c (see transition 4), coherences between states \(|-1,+1/2\rangle\) and \(|0,-1/2\rangle\) must be resilient to magnetic field fluctuations. Since direct microwave excitation cannot produce this type of coherence, we use a two-step strategy where we first initialize the spin system via a multi-pulse protocol whose timing ensures full transfer of the NV polarization to spin \(S^{\prime}\) (see Ref. [30] and Supplementary Information, Sections II and III).

Figure 3: **Extending coherence lifetimes by inter-spin entanglement.** (a) Spin control protocol; we choose \(\tau_{\rm ZQ}=\left(4J_{||}\right)^{-1}\). (b) Calculated system response as extracted from monitoring spin \(S\) upon a zero-quantum free evolution of variable time \(\tilde{\tau}\) and assuming \(\beta_{\rm rms}=1\) \(\upmu\)T and \(J_{||}=50\) kHz (\(r_{\rm max}\cong 10\) nm). (c) Calculated zero-quantum (ZQ) coherence time \(T_{\rm 2,ZQ}\) as a function of the fractional magnetic noise difference \(\xi\); the dashed line represents the single-quantum coherence lifetime of spin \(S\) far from the level anti-crossing, here serving as a reference.

After NV spin re-pumping, we use a DEER-like sequence (with inter-pulse separation \(\tau_{\rm ZQ}=\left(4J_{||}\right)^{-1}\), Fig. 3a) to transform the initial state (here expressed as an effective density matrix operator \(\rho_{\rm eff}(0)=\frac{1}{2}\left(\tilde{S}_{\rm z}-S_{\rm z}^{\prime}\right)\)) into a zero-quantum coherence, namely \[\rho_{\text{eff}}(2\tau_{\text{ZQ}})=\frac{1}{2i}\big(\tilde{S}_{-}S_{+}^{\prime}-\tilde{S}_{+}S_{-}^{\prime}\big), \tag{3}\] where \(\tilde{S}_{\pm}=\tilde{S}_{x}\pm i\tilde{S}_{y}\) and analogously for \(S_{\pm}^{\prime}\). Importantly, this state is insensitive to global magnetic noise of arbitrary amplitude because the phase picked up by \(\tilde{S}_{\pm}\) is cancelled by \(S_{\mp}^{\prime}\); we contrast this response with that observed in Fig. 2c, where the level of protection degrades as the rms noise amplitude increases. Our strategy is related to (but different from) the singlet-triplet long-lived coherences already introduced in nuclear magnetic resonance for inhomogeneity-free spectroscopy[31]. Interestingly, the magnetic-noise-resilient state in Eq.
(3) is formally equivalent to the one produced via the manipulation of two spin-1/2 nuclei[32]; we show below, however, how the underlying physical differences between the two electron spins in the dyad can be exploited to enact alternative, otherwise unattainable sensing modalities. ## 3 Application to quantum metrology Complete robustness to magnetic noise must be seen, of course, as a limit case because in practice the physical separation between the electronic spins of the dyad introduces some finite difference between the magnetic noise amplitudes \(\beta_{S}(t)\) and \(\beta_{S^{\prime}}(t)\) experienced by either spin at a given time \(t\). The immediate consequence is an imbalance between the instantaneous frequency shifts in each spin with the concomitant decay of the two-spin entanglement. In other words, the system behaves as a gradiometer, selectively sensitive to magnetic field fluctuations occurring on the scale of the separation between the dyad spins. Interestingly, this class of dephasing can be mitigated through the intercalation of an inversion pulse at the midpoint of the zero-quantum evolution interval, which yields an echo not unlike that characteristic in single-quantum coherences (Supplementary Information, Section IV); we include a \(\pi\)-pulse here to facilitate comparison with the unprotected case (Fig. 3a). We benchmark the system response in Fig. 3b where we plot the time trace of the fractional NV population in \(|m_{S}=0\rangle\) -- detected upon a zero- to single-quantum conversion, Fig. 3a -- as a function of the zero-quantum evolution time \(2\tilde{\tau}\). To better gauge the impact of the local environment, we keep the rms noise amplitude at both spin sites equal (and constant), but gradually alter the fractional contribution from local, paramagnetic-center-selective sources; we quantify these changes through the parameter \(\xi\equiv\langle(\beta_{S}-\beta_{S^{\prime}})^{2}\rangle/\left(\langle\beta_{S}^{2}\rangle+\langle\beta_{S^{\prime}}^{2}\rangle\right)\), where we use brackets to indicate time average.

Figure 4: **Alternative sensing modalities.** (a) Schematics of the pulse sequence; the first composite pulse amounts to a phase rotation of spin \(S\) by a variable amount \(\theta\). (b) Electric-field-noise-selective relaxometry; the electric field has amplitudes \(\langle\varepsilon_{x}^{2}\rangle=\langle\varepsilon_{y}^{2}\rangle=\varepsilon_{\rm rms}^{2}\) and we assume a constant temperature; note that since \(\theta=0\), the sequence is identical to that in Fig. 3 except that there is no \(\pi\)-pulse during the zero-quantum evolution. (c) Thermal sensing modality (assuming \(\varepsilon_{\rm rms}=0\)); the temperature change \(\delta T\) can be extracted from the signal slope at early times (and the known thermal sensitivity of the NV at room temperature). In (b) and (c), \(\beta_{\rm rms}=1\) \(\upmu\)T and \(\xi=0\). The absence of noise in (c) reflects the suppression of the magnetic fluctuations, for simplicity the only noise source in these calculations. All other conditions as in Fig. 3.

Fig. 3c shows the extracted zero-quantum coherence lifetime \(T_{\rm 2,ZQ}\) as a function of \(\xi\). Noting that \(\xi\rightarrow 1\) as fluctuations at either spin site become independent, the quick coherence decay we observe indicates the system behaves as a sensitive gradiometer; on the other hand, comparison with \(T_{\rm 2,SQ}\) (far from \(B_{\rm m}\)) shows that longer system lifetimes can still be attained for considerably different noise environments.
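The gradiometer behavior described above can be illustrated with a minimal numerical sketch, shown below for illustration only (it is not the authors' simulation code): the zero-quantum state of Eq. (3) is freely propagated in the reduced four-level space under a common versus a differential field offset. The coupling and offset values are placeholders.

```python
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 0.5], [0.5, 0]])
sy = np.array([[0, -0.5j], [0.5j, 0]])
sz = np.diag([0.5, -0.5])
I2 = np.eye(2)
Sx, Sy, Sz = (np.kron(o, I2) for o in (sx, sy, sz))   # fictitious spin-1/2 (NV)
Px, Py, Pz = (np.kron(I2, o) for o in (sx, sy, sz))   # paramagnetic spin S'

gamma_e = 28.025e9                                     # Hz/T
J_par = 50e3                                           # placeholder secular coupling, Hz
rho_zq = Sx @ Py - Sy @ Px                             # zero-quantum coherence of Eq. (3)

def overlap_after(beta_S, beta_Sp, t):
    """Overlap of the freely evolved ZQ state with the initial one (1 = no dephasing)."""
    Ham = 2 * np.pi * (gamma_e * beta_S * Sz + gamma_e * beta_Sp * Pz + J_par * Sz @ Pz)
    U = expm(-1j * Ham * t)
    rho_t = U @ rho_zq @ U.conj().T
    return float(np.real(np.trace(rho_t @ rho_zq)) / np.real(np.trace(rho_zq @ rho_zq)))

t = 10e-6                                              # 10 us of free evolution
print("common 1 uT offset     :", round(overlap_after(1.0e-6, 1.0e-6, t), 3))  # stays ~1
print("extra 0.5 uT on spin S :", round(overlap_after(1.5e-6, 1.0e-6, t), 3))  # oscillates
```

A common offset of arbitrary size leaves the state untouched, whereas an offset difference between the two sites makes it precess at the difference frequency, which is the gradiometer response discussed in the text.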
An immediate corollary to the results above is that the dyad remains selectively sensitive to environmental changes that only affect spin \(S\), hence allowing one to envision alternative sensing modalities protected from global magnetic noise. Fig. 4 shows two complementary instances of quantum sensing that leverage the distinctive physical roots underlying the spin-1 nature of spin \(S\). In the first example, we illustrate an application to electrometry[33, 34] -- here designed to expose electric noise through changes in \(T_{\rm 2,ZQ}\), Figs. 4a and 4b -- which we derive after including in the Hamiltonian the NV coupling terms to electric fields (ignored in Eq. (1) for simplicity, see Supplementary Information, Section IV). Since magnetic fluctuations often dominate the NV decoherence rate, the ability to remove this contribution can prove relevant to the realization of novel forms of electric-field-sensitive microscopy[35]. Fig. 4c shows an alternative sensing modality that builds on the same protocol, this time adapted to determining thermal shifts rather than electric noise. To this end, we selectively change the phase of spin \(S\) by \(\pi/2\) prior to zero-quantum evolution so as to create a time-dependent state \[\rho_{\rm eff}(\tilde{\tau})=\frac{1}{2}\left(\tilde{S}_{-}S_{+}^{\prime}+\tilde{S}_{+}S_{-}^{\prime}\right)\cos(\delta\omega\tilde{\tau})\\ -\frac{1}{2i}\left(\tilde{S}_{-}S_{+}^{\prime}-\tilde{S}_{+}S_{-}^{\prime}\right)\sin(\delta\omega\tilde{\tau}). \tag{4}\] Note that both terms in Eq. (4) are robust to global magnetic noise, thus making the temporal evolution resilient against this class of fluctuations. After zero- to single-quantum conversion, a temperature change can be calculated as \(\delta\mathcal{T}=\delta\omega(d\Delta/d\mathcal{T})^{-1}\), where \(\delta\omega\) is the observed frequency shift (in turn, extracted from the signal slope at early times), and \(d\Delta/d\mathcal{T}\) is the thermal change in the NV crystal field at room temperature[29]. ## 4 Discussion In summary, we showed that hetero-spin complexes can harbor long-lived coherences arising from transitions at level anti-crossings, or zero-quantum coherences emerging from energy levels associated with magnetic-field-insensitive frequencies. While the former tend to be fragile against field detuning, the latter are long-lived in the presence of strong magnetic noise provided the fields at each spin site remain equal. Under these conditions, zero-quantum coherences have a lifetime insensitive to the magnetic noise amplitude or the dyad's dipolar coupling strength, even though decoherence during longer preparation and readout intervals does lead to a fractional loss of signal contrast. To mitigate this problem, one could resort to focused implantation of molecular nitrogen or related techniques, already explored as a strategy for producing multi-NV clusters[36, 37]. Although we assumed herein an isolated electron spin dyad, a color center such as the NV would typically interact with not one but several paramagnetic impurities, themselves interconnected to form an electronic spin bath. Flip-flops between spins in the bath would lead to diffusion and ultimately to a loss of coherence in the entangled spin dyad. These dynamics, however, rest on energy matching between proximal spins, implying the process can be countered by introducing sufficiently large spectral shifts, attained, e.g., through magnetic field gradients[38; 39; 40].
Alternatively, one can resort to selective radio-frequency excitation and polarization transfer[41] to initialize the nuclear hosts of surrounding spin-1/2 centers into different hyperfine states so that only one is resonant at the applied magnetic field. Other than the spin dyad formed by an NV and a neighboring spin-1/2 center, similar dynamics are to be expected for hetero-spin pairs where the response of an individual spin to a global field change counters the other. Examples include pairs comprising a spin-1/2 center -- which tend to be ubiquitous -- and other optically addressable spin qubits in diamond, silicon carbide, or silicon, to mention only a few material hosts attracting present interest. ## Acknowledgements D.P. and C.A.M. acknowledge support from the National Science Foundation through grants NSF-1903839 and NSF-2203904. P.R.Z. acknowledges support from SeCyT-UNC through grant 33620180100154CB and CONICET through grant PIP 11220200102451CO. All authors also acknowledge access to the facilities and research infrastructure of the NSF CREST IDEALS, grant number NSF-2112550. ## Conflict of Interest The authors declare no conflict of interest. ## Data Availability Statement The data that support the findings of this study are available from the corresponding author upon reasonable request. * [1] D.D. Awschalom, R. Hanson, J. Wrachtrup, B.B. Zhou, "Quantum technologies with optically interfaced solid-state spins", _Nat. Phot._**12**, 516 (2018). * [2] M. Atature, D. Englund, N. Vamivakas, S.-Y. Lee, J. Wrachtrup, "Material platforms for spin-based photonic quantum technologies", _Nat. Rev. Mater._**3**, 38 (2018). * [3] A.M. Tyryshkin, S. Tojo, J.J.L. Morton, H. Riemann, N.V. Abrosimov, P. Becker, H-J. Pohl, T. Schenkel, M.L.W. Thewalt, K.M. Itoh, S.A. Lyon, "Electron spin coherence exceeding seconds in high-purity silicon", _Nat. Mater._**11**, 143 (2012). * [4] G. Balasubramanian, P. Neumann, D. Twitchen, M. Markham, R. Kolesov, N. Mizuochi, J. Isoya, J. Achard, J. Beck, J. Tissler, V. Jacques, P. R. Hemmer, F. Jelezko, J. Wrachtrup, "Ultra-long spin coherence time in isotopically engineered diamond", _Nat. Mater._**8**, 383 (2009). * [5] D. Suter, G.A. Alvarez, "Colloquium: Protecting quantum information against environmental noise", _Rev. Mod. Phys._**88**, 041001 (2016). * [6] K.C. Miao, J.P. Blanton, C.P. Anderson, A. Bourassa, A.L. Crook, G. Wolfowicz, H. Abe, T. Ohshima, D.D. Awschalom, "Universal coherence protection in a solid-state spin qubit", _Science_**369**, 1493 (2020). * [7] L.E. Erickson, "Electron-paramagnetic-resonance absorption by trivalent neodymium ions in single crystals of lanthanum trichloride and lanthanum ethyl sulphate in zero magnetic field", _Phys. Rev._**143**, 295 (1966). * [8] S.J. Strach, R. Bramley, "EPR of the vanadyl ion in Tutton salts at zero magnetic field", _Chem. Phys. Lett._**109**, 363 (1984). * [9] F. Kong, P. Zhao, P. Yu, Z. Qin, Z. Huang, Z. Wang, M. Wang, F. Shi, J. Du, "Kilohertz electron paramagnetic resonance spectroscopy of single nitrogen centers at zero magnetic field", _Sci. Adv._**6**, eaaz8244 (2020). * [10] M. Emondts, M.P. Ledbetter, S. Pustelny, T. Theis, B. Patton, J.W. Blanchard, M.C. Butler, D. Budker, A. Pines, "Long-lived heteronuclear spin-singlet states in liquids at a zero magnetic field", _Phys. Rev. Lett._**112**, 077601 (2014). * [11] G. Pileio, M. Carravetta, M.H. Levitt, "Extremely low-frequency spectroscopy in low-field nuclear magnetic resonance", _Phys. Rev. Lett._**103**, 083002 (2009). * [12] C. 
Langer, R. Ozeri, J. D. Jost, J. Chiaverini, B. DeMarco, A. Ben-Kish, R.B. Blakestad, J. Britton, D. B. Hume, W.M. Itano, D. Leibfried, R. Reichle, T. Rosenband, T. Schaetz, P.O. Schmidt, D. J. Wineland, "Long-lived qubit memory using atomic ions", _Phys. Rev. Lett._**95**, 060502 (2005). * [13] G. Wolfowicz, A.M. Tyryshkin, R.E. George, H. Riemann, N.V. Abrosimov, P. Becker, H-J. Pohl, M.L.W. Thewalt, S.A. Lyon, J.J.L. Morton, "Atomic clock transitions in silicon-based spin qubits", _Nat. Nanotech._**8**, 561 (2013). * [14] K.J. Morse, P. Dluhy, J. Huber, J.Z. Salvail, K. Saeedi, H. Riemann, N.V. Abrosimov, S.P. Becker, H-J. Pohl, S. Simmons, M.L.W. Thewalt, "Zero-field optical magnetic resonance study of phosphorus donors in 28-silicon", _Phys. Rev. B_**97**, 115205 (2018). * [15] D. McAuslan, J. Bartholomew, M. Sellars, J. Longdell, "Reducing decoherence in optical and spin transitions in rare-earth-metal-ion-doped materials", _Phys. Rev. A_**85**, 032339 (2012). * [16] M. Zhong, M.P. Hedges, R.L. Ahlefeldt, J.G. Bartholomew, S.E. Beavan, S.M. Wittig, J.J. Longdell, M.J. Sellars, "Optically addressable nuclear spins in a solid with a six-hour coherence time", _Nature_**517**, 177 (2015). * [17] D. Vion, A. Assume, A. Cottet, P. Joyez, H. Pothier, C. Urbina, D. Esteve, M.H. Devoret, "Manipulating the quantum state of an electrical circuit", _Science_**296**, 886 (2002). * [18] J. Koch, T.M. Yu, J. Gambetta, A.A. Houck, D.I. Schuster, J. Majer, A. Blais, M.H. Devoret, S.M. Girvin, R.J. Schoelkopf, "Charge-insensitive qubit design derived from the Cooper pair box", _Phys. Rev. A_**76**, 042319 (2007). * [19] F. Dolde, V. Bergholm, Y. Wang, I. Jakobi, B. Naydenov, S. Pezzagna, J. Meijer, F. Jelezko, P. Neumann, T. Schulte-Herbruggen, J. Biamonte, J. Wrachtrup, "High-fidelity spin entanglement using optimal control", _Nat. Commun._**5**, 3371 (2014). * [20] G-Q. Liu, Y-R. Zhang, Y-C. Chang, J-D. Yue, H. Fan, X-Y. Pan, "Demonstration of entanglement-enhanced phase estimation in solid", _Nat. Commun._**6**, 6726 (2015). * [21] T. Xie, Z. Zhao, X. Kong, W. Ma, M. Wang, X. Ye, P. Yu, Z. Yang, S. Xu, P. Wang, Y. Wang, F. Shi, J. Du, "Beating the standard quantum limit under ambient conditions with solid-state spins", _Sci. Adv._**7**, eabg9204 (2021). * [22] M.N.R. Ashfold, J.P. Goss, B.L. Green, P.W. May, M.E. Newton, C.V. Peaker, "Nitrogen in diamond", _Chem. Rev._**120**, 5745 (2020). * [23] S. Armstrong, L.J. Rogers, R.L. McMurtrie, N.B. Manson, "NV-NV electron-electron spin and NV-Ns electron-electron and electron-nuclear spin interaction in diamond", _Phys. Proc._**3**, 1569 (2010). * [24] R. Wunderlich, J. Kohlrautz, B. Abel, J. Haase, J. Meijer, "Optically induced cross relaxation via nitrogen-related defects for bulk diamond \({}^{13}\)C hyperpolarization", _Phys. Rev. B_**96**, 220407(R) (2017). * [25] D. Pagliero, K.R. Koteswara Rao, P.R. Zangara, S. Dhomkar, H.H. Wong, A. Abril, N. Aslam, A. Parker, J. King, C.E. Avalos, A. Ajoy, J. Wrachtrup, A. Pines, C.A. Meriles, "Multispin-assisted optical pumping of bulk \({}^{13}\)C nuclear spin polarization in diamond", _Phys. Rev. B_**97**, 024422 (2018). * [26] D. Pagliero, P. Zangara, J. Henshaw, A. Ajoy, R.H. Acosta, J.A. Reimer, A. Pines, C.A. Meriles, "Optically pumped spin polarization as a probe of many-body thermalization", _Science Adv._**6**, eaaz6986 (2020). * [27] J. Harrison, M.J. Sellars, N.B. Manson, "Optical spin polarisation of the N-V centre in diamond", _J. Lumin._**107**, 245 (2004). * [28] F. Jelezko, J. Wrachtrup, _J. 
Phys.: Condens. Matter_**16**, 1089 (2004). * [29] T. Plakhotnik, M.W. Doherty, J.H. Cole, R. Chapman, N.B. Manson, "All-optical thermometry and thermal properties of the optically detected spin resonances of the NV-center in nanodiamond", _Nano Lett._**14**, 4989 (2014). * [30] R.R. Ernst, G. Bodenhausen, A. Wokaun, _Principles of Nuclear Magnetic Resonance in One and Two Dimensions_, Clarendon Press, Oxford 1987, pages. * [31] R. Sarkar, P. Ahuja, P.R. Vasos, G. Bodenhausen, "Long-lived coherences for homogeneous line narrowing in spectroscopy", _Phys. Rev. Lett._**104**, 053001 (2010). * [32] H.P. Bartling, M.H. Abobeih, B. Pingault, M.J. Degen, S.J.H. Loenen, C.E. Bradley, J. Randall, M. Markham, D.J. Twitchen, T.H. Taminiau, "Entanglement of spin-pair qubits with intrinsic dephasing times exceeding a minute", _Phys. Rev._ X **12**, 011048 (2022). * [33] R. Li, F. Kong, P. Zhao, Z. Cheng, Z. Qin, M. Wang, Q. Zhang, P. Wang, Y. Wang, F. Shi, J. Du, "Nanoscale electrometry based on a magnetic-field-resistant spin sensor", _Phys. Rev. Lett._**124**, 247701 (2020). * [34] Z. Qiu, A. Hamo, U. Vool, T.X. Zhou, A. Yacoby, "Nanoscale electric field imaging with an ambient scanning quantum sensor microscope", _npj Quantum Inf_**8**, 107 (2022). * [35] D.J. McCloskey, N. Dontschuk, A. Stacey, C. Pattinson, A. Nadarajah, L.T. Hall, L.C.L. Hollenberg, S. Prawer, D.A. Simpson, _Nat. Phot._**16**, 730 (2022). * [36] I. Jakobi, S.A. Momenzadeh, F. Favaro de Oliveira, J. Michl, F. Ziem, M. Schreck, P. Neumann, A. Denisenko, J. Wrachtrup, "Efficient creation of dipolar coupled nitrogen-vacancy spin qubits in diamond", _J. Phys.: Conf. Series_**752**, 012001 (2016). * [37] M. Haruyama, S. Onoda, T. Higuchi, W. Kada, A. Chiba, Y. Hirano, T. Teraji, R. Igarashi, S. Kawai, H. Kawarada, Y. Ishii, R. Fukuda, T. Tanii, J. Isoya, T. Ohshima, O. Hanaizumi, "Triple nitrogen-vacancy centre fabrication by C5N4Hn ion implantation", _Nat. Commun._**10**, 2664 (2019). * [38] S. Bodenstedt, I. Jakobi, J. Michl, I. Gerhardt, P. Neumann, J. Wrachtrup, "Nanoscale spin manipulation with pulsed magnetic gradient fields from a hard disc drive writer", _Nano Lett._**18**, 5389 (2018). * [39] H. Zhang, K. Arai, C. Belthangady, J-C. Jaskula, R.L. Walsworth, "Selective addressing of solid-state spins at the nanoscale via magnetic resonance frequency encoding", _npj Quant. Inf._**3**, 31 (2017). * [40] M.S. Grinolds, M. Warner, K. De Greve, Y. Dovzhenko, L. Thiel, R. L. Walsworth, S. Hong, P. Maletinsky, A. Yacoby, "Subnanometre resolution in three-dimensional magnetic resonance imaging of individual dark spins", _Nat. Nanotech._**9**, 279 (2014). * [41] A. Laraoui, C.A. Meriles, "Approach to dark spin cooling in a diamond nanocrystal", _ACS Nano_**7**, 3403 (2013). **Supplementary Material for** **"Quantum sensing via magnetic-noise-protected states in an electronic spin dyad"** Carlos A. Meriles\({}^{1,2,\,\dagger}\), Pablo R. Zangara\({}^{3,4}\), and Daniela Pagliero\({}^{1}\) \({}^{1}\)_Department. of Physics, CUNY-City College of New York, New York, NY 10031, USA._ \({}^{2}\)_CUNY-Graduate Center, New York, NY 10016, USA._ \({}^{3}\)_Universidad Nacional de Cordoba, Facultad de Matematica, Astronomia, Fisica y Computacion, Cordoba, Argentina._ \({}^{4}\) _CONICET, Instituto de Fisica Enrique Gaviola (IFEG), Cordoba, Argentina._ \({}^{\dagger}\)_Corresponding author. 
E-mail: [email protected]._ **I-Spin Hamiltonian** We consider an electronic spin dyad comprising a spin \(S=1\) with a crystal field \(\Delta\) and a neighboring paramagnetic impurity \(S^{\prime}=1/2\). Assuming the magnetic field \(B\) is parallel to the crystal field axis, we write the system Hamiltonian as \[H=\Delta S_{\rm z}^{2}+|\gamma_{\rm e}|BS_{\rm z}+|\gamma_{\rm e}|BS_{\rm z}^{\prime}+H_{\rm d},\] (S1) where \(H_{\rm d}\) represents the dipolar interaction given by \[H_{\rm d}=2\pi\jmath\left\{(1-3\cos^{2}\theta)\left(S_{\rm z}S_{\rm z}^{\prime}-\frac{1}{4}\left(S_{+}S_{-}^{\prime}+S_{-}S_{+}^{\prime}\right)\right)-\frac{3}{4}\sin 2\theta\left(\left(S_{+}+S_{-}\right)S_{\rm z}^{\prime}+S_{\rm z}(S_{+}^{\prime}+S_{-}^{\prime})\right)-\frac{3}{4}\sin^{2}\theta\left(S_{+}S_{+}^{\prime}+S_{-}S_{-}^{\prime}\right)\right\}.\] (S2) In Eq. (S2), the coupling amplitude is given by \(2\pi\jmath=\frac{\mu_{0}\gamma_{\rm e}^{2}\hbar}{4\pi r^{3}}\), \(\mu_{0}\) is the vacuum permeability, \(\hbar\) is the reduced Planck constant, \(\gamma_{\rm e}\) is the electronic gyromagnetic ratio, \(\Delta\) is the crystal field (expressed in \({\rm rad\;s}^{-1}\)), \(r\) denotes the inter-spin separation, and \(\theta\) is the angle formed by the inter-spin vector and the magnetic field; we also use the standard notation for the ladder operators \(S_{\pm}=S_{\rm x}\pm iS_{\rm y}\) and similarly for spin \(S^{\prime}\). Provided the microwave (MW) excitation of spin \(S\) is limited to address the transition between the \(|m_{\rm S}=0\rangle\) and \(|m_{\rm S}=-1\rangle\) states, we restrict our description to the manifold formed by states \(|1\rangle=|0,+1/2\rangle\), \(|2\rangle=|-1,+1/2\rangle\), \(|3\rangle=|0,-1/2\rangle\), and \(|4\rangle=|-1,-1/2\rangle\), and describe spin \(S\) via a fictitious spin-1/2 operator \(\tilde{S}\). In this representation, we describe the NV via the virtual spin \(\tilde{S}=1/2\) and recast \(S_{\rm z}\) as \[S_{\rm z}\rightarrow\tilde{S}_{\rm z}-1/2.\] (S3) The Hamiltonian then takes the simpler form \[H=(|\gamma_{\rm e}|B-\Delta)\tilde{S}_{\rm z}+(|\gamma_{\rm e}|B-\pi\jmath_{\parallel})S_{\rm z}^{\prime}+2\pi\jmath_{\parallel}\tilde{S}_{\rm z}S_{\rm z}^{\prime}+2\pi\jmath_{\perp}\sqrt{2}\big(\tilde{S}_{+}S_{+}^{\prime}+\tilde{S}_{-}S_{-}^{\prime}\big),\] (S4) where we ignore contributions proportional to the identity operator \(\mathbb{I}\). The last two terms capture the secular contributions of the dipolar interaction with the notation \(\jmath_{\parallel}=\jmath(1-3\cos^{2}\theta)\) and \(\jmath_{\perp}=-\frac{3}{4}\jmath\sin^{2}\theta\). Equation (S4) yields a level anti-crossing1 at \(|\gamma_{\rm e}|B_{\rm m}=(\Delta+\pi\jmath_{\parallel})/2\) where states \(|1\rangle\) and \(|4\rangle\) hybridize to yield the eigenstates \(|\pm\rangle=1/\sqrt{2}\,(|1\rangle\pm|4\rangle)\). Far enough from \(B_{\rm m}\), the last term becomes non-secular and can be ignored, thus leading back to the expression in Eq. (2) of the main text2. Lastly, we note that in the presence of MW, Eq. (S4) must be supplemented with a term of the form \(H_{\text{MW}}=\sqrt{2}|\gamma_{\text{e}}|B_{1}\tilde{S}_{x}\cos\omega t+|\gamma_{\text{e}}|B_{1}^{\prime}S_{x}^{\prime}\cos\omega^{\prime}t\), where \(B_{1}\) and \(B_{1}^{\prime}\) denote the MW field amplitudes at frequencies \(\omega\) and \(\omega^{\prime}\), respectively resonant with spins \(S\) and \(S^{\prime}\).
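As a quick sanity check of the coupling scales quoted in the main text, the point-dipole amplitude \(\jmath\) can be evaluated as a function of the inter-spin separation. The snippet below is our own estimate (not part of the original material) and ignores the angular factors entering \(\jmath_{\parallel}\) and \(\jmath_{\perp}\).

```python
import numpy as np

mu0_over_4pi = 1e-7          # T m A^-1
gamma_e = 1.760859e11        # rad s^-1 T^-1
hbar = 1.054572e-34          # J s

def j_of_r(r):
    """Point-dipole coupling amplitude j (in Hz) for an inter-spin separation r (in m)."""
    return mu0_over_4pi * gamma_e**2 * hbar / r**3 / (2.0 * np.pi)

for r_nm in (4, 10):
    print(f"r = {r_nm:2d} nm  ->  j = {j_of_r(r_nm * 1e-9) / 1e3:7.1f} kHz")
# r = 4 nm gives roughly 0.8 MHz and r = 10 nm roughly 50 kHz, consistent with the
# couplings (and r_max values) quoted in the captions of Figs. 2 and 3 of the main text.
```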
At the level anti-crossing, \(\omega\approx\omega^{\prime}\) with the practical consequence that the MW manipulation of spins \(S\) and \(S^{\prime}\) now relies on a common field of amplitude \(B_{1}=B_{1}^{\prime}\). Correspondingly, rotations of either spin species cannot be controlled independently, and the effectiveness of pulse sequences is negatively impacted (for example, in the Hahn-echo protocol of Fig. 2 in the main text, a rotation by an angle \(\theta\) of spin \(S\) amounts to a rotation \(\theta/\sqrt{2}\) for spin \(S^{\prime}\)). The latter, however, represents only a minor complication in the sense that although the overall signal contrast necessarily shrinks, imperfect spin rotations have no impact on the duration of the spin coherences, hence making the field-dependent results in Fig. 2 of the main text still valid. Further, the differing MW amplitudes at \(B_{\text{m}}\) can simply be seen as the result of "MW field heterogeneity", and thus its effect can be efficiently mitigated, e.g., by resorting to composite pulses. For clarity, we ignore in Eq. (S1) any hyperfine couplings of either paramagnetic center with its nuclear spin host. This simplification is valid so long as the nuclear spin lifetime is longer than the protocol duration, a condition met in most solid-state spin qubits. The latter also applies to systems exhibiting a dynamic Jahn-Teller distortion provided the process is sufficiently slow; specifically, this is the case of the P1 center, a system with C\({}_{3\text{v}}\) symmetry whose room-temperature reorientation -- and concomitant change of the hyperfine coupling -- takes place on a scale of several seconds[2, 3]. #### II-Polarization transfer Assuming optical spin initialization of spin \(S\) into the \(|m_{\text{S}}=0\rangle\) state, we write the system density matrix as \[\rho(0)=|0\rangle\langle 0|=\frac{\mathbb{I}}{4}+\frac{\tilde{S}_{\rm z}}{2}.\] (S5) Following the polarization transfer protocol in Fig. 3a, the state at a time \(2\tau_{\text{ZQ}}^{(-)}\) (i.e., immediately before the \((\pi/2)_{y}\) pulse) takes the form \[\rho\left(2\tau_{\text{ZQ}}^{(-)}\right)=\frac{\mathbb{I}}{4}+\tilde{S}_{x}S_{\rm z}^{\prime},\] (S6) where we used the relation \(\exp\bigl(-i2\pi\jmath_{\parallel}\tilde{S}_{\rm z}S_{\rm z}^{\prime}t\bigr)\tilde{S}_{y}\exp\bigl(i2\pi\jmath_{\parallel}\tilde{S}_{\rm z}S_{\rm z}^{\prime}t\bigr)=\tilde{S}_{y}\cos(\pi\jmath_{\parallel}t)-2\tilde{S}_{x}S_{\rm z}^{\prime}\sin(\pi\jmath_{\parallel}t)\) and the condition \(\tau_{\text{ZQ}}=(4\jmath_{\parallel})^{-1}\). Upon application of a \((\pi/2)_{y}\) pulse, the density matrix becomes \[\rho\left(2\tau_{\text{ZQ}}^{(+)}\right)=\frac{\mathbb{I}}{4}-\tilde{S}_{\rm z}S_{x}^{\prime},\] (S7) and following evolution during the second half of the protocol, we obtain \[\rho\left(4\tau_{\text{ZQ}}^{(+)}\right)=\frac{\mathbb{I}}{4}-\frac{S_{\rm z}^{\prime}}{2}.\] (S8) Re-pumping spin \(S\) into \(|m_{\text{S}}=0\rangle\), the density matrix takes the final form \[\rho_{\text{Init}}=\left(\frac{\mathbb{I}}{2}+\tilde{S}_{\rm z}\right)\left(\frac{\mathbb{I}}{2}-S_{\rm z}^{\prime}\right)=\frac{1}{4}\left(\mathbb{I}+2\bigl(\tilde{S}_{\rm z}-S_{\rm z}^{\prime}\bigr)-4\tilde{S}_{\rm z}S_{\rm z}^{\prime}\right)=|0,-1/2\rangle\langle 0,-1/2|.\] (S9) #### III-Coherence order conversion For a pulse sequence of the form \((\pi/2)_{x}\rightarrow(\pi)_{x}\rightarrow(\pi/2)_{x}\) (lower panel in Fig.
3a), the evolution operator can be expressed as \[U_{\text{COC}} =\exp\left(-i\,\frac{\pi}{2}\bigl(\tilde{S}_{x}+S_{x}^{\prime}\bigr)\right)\exp\bigl(-iH\tau_{ZQ}\bigr)\exp\left(-i\pi\bigl(\tilde{S}_{x}+S_{x}^{\prime}\bigr)\right)\exp\bigl(-iH\tau_{ZQ}\bigr)\exp\left(-i\,\frac{\pi}{2}\bigl(\tilde{S}_{x}+S_{x}^{\prime}\bigr)\right)\] \[=\exp\left(+i\,\frac{\pi}{2}\bigl(\tilde{S}_{x}+S_{x}^{\prime}\bigr)\right)\exp\bigl(-i2\pi\jmath_{\parallel}\tilde{S}_{\rm z}S_{\rm z}^{\prime}2\tau_{ZQ}\bigr)\exp\left(-i\,\frac{\pi}{2}\bigl(\tilde{S}_{x}+S_{x}^{\prime}\bigr)\right)\] \[=\exp\bigl(-i2\pi\jmath_{\parallel}\tilde{S}_{y}S^{\prime}_{y}2\tau_{ZQ}\bigr),\] (S10) where we have assumed \(B\neq B_{\rm m}\). Therefore, if Eq. (S9) describes the system state after initialization, the density matrix after coherence order conversion (COC) can be cast as \[\rho_{\rm ZQ} = \frac{1}{4}\bigl(\mathbb{I}+2U_{\rm COC}\bigl(\tilde{S}_{\rm z}-S^{\prime}_{\rm z}\bigr)U^{\dagger}_{\rm COC}-4U_{\rm COC}\tilde{S}_{\rm z}S^{\prime}_{\rm z}U^{\dagger}_{\rm COC}\bigr)\] (S11) \[= \frac{1}{4}\bigl(\mathbb{I}+4\bigl(\tilde{S}_{x}S^{\prime}_{y}-\tilde{S}_{y}S^{\prime}_{x}\bigr)-4\tilde{S}_{\rm z}S^{\prime}_{\rm z}\bigr)\] \[= \frac{1}{4}\bigl(\mathbb{I}-2i\bigl(\tilde{S}_{-}S^{\prime}_{+}-\tilde{S}_{+}S^{\prime}_{-}\bigr)-4\tilde{S}_{\rm z}S^{\prime}_{\rm z}\bigr).\] We note that the term \(\tilde{S}_{\rm z}S^{\prime}_{\rm z}\) in the expression for \(\rho_{\rm Init}\) is insensitive to \(U_{\rm COC}\) hence allowing us to define an effective density matrix \(\rho_{\rm eff}\) that only takes into account the contribution deriving from the \(\bigl(\tilde{S}_{\rm z}-S^{\prime}_{\rm z}\bigr)\) term; this is the approach we follow in the main text. ### IV-Evolution of zero-quantum coherences In the simplest scenario, the system evolves freely without any external excitation (we also assume \(B\neq B_{\rm m}\)). The first and last terms in Eq. (S11) commute with \(H\) and hence undergo no evolution; we therefore write \[\rho_{\rm ZQ,eff}(\tau) = \rho_{\rm ZQ}-\frac{1}{4}\bigl(\mathbb{I}-4\tilde{S}_{\rm z}S^{\prime}_{\rm z}\bigr)=\frac{1}{2i}\exp(-iH\tau)\,\bigl(\tilde{S}_{-}S^{\prime}_{+}-\tilde{S}_{+}S^{\prime}_{-}\bigr)\exp(iH\tau)\] (S12) \[=\frac{1}{2i}\exp\bigl(-i2\pi\jmath_{\parallel}\tilde{S}_{\rm z}S^{\prime}_{\rm z}\tau\bigr)\,\bigl(\tilde{S}_{-}S^{\prime}_{+}-\tilde{S}_{+}S^{\prime}_{-}\bigr)\exp\bigl(i2\pi\jmath_{\parallel}\tilde{S}_{\rm z}S^{\prime}_{\rm z}\tau\bigr),\] where we can ignore the terms in \(H\) linear in \(\tilde{S}_{\rm z}\), \(S^{\prime}_{\rm z}\) after a double rotating frame transformation resonant with spins \(\tilde{S}\) and \(S^{\prime}\). After a bit of algebra, one can prove that \[\bigl[\bigl(\tilde{S}_{-}S^{\prime}_{+}-\tilde{S}_{+}S^{\prime}_{-}\bigr),\tilde{S}_{\rm z}S^{\prime}_{\rm z}\bigr]=2i\bigl[\bigl(\tilde{S}_{x}S^{\prime}_{y}-\tilde{S}_{y}S^{\prime}_{x}\bigr),\tilde{S}_{\rm z}S^{\prime}_{\rm z}\bigr]=0\] (S13) implying that \[\rho_{\rm ZQ,eff}(\tau)=\frac{1}{2i}\bigl(\tilde{S}_{-}S^{\prime}_{+}-\tilde{S}_{+}S^{\prime}_{-}\bigr)=\bigl(\tilde{S}_{x}S^{\prime}_{y}-\tilde{S}_{y}S^{\prime}_{x}\bigr),\] (S14) i.e., independent of time.
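The operator identity in Eq. (S10) and the stationarity expressed by Eqs. (S13)-(S14) can be checked numerically. The sketch below is an illustration only (not part of the original material); the frequency offsets and the coupling are arbitrary placeholder values.

```python
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 0.5], [0.5, 0]])
sy = np.array([[0, -0.5j], [0.5j, 0]])
sz = np.diag([0.5, -0.5])
I2 = np.eye(2)
Sx, Sy, Sz = (np.kron(o, I2) for o in (sx, sy, sz))
Px, Py, Pz = (np.kron(I2, o) for o in (sx, sy, sz))

# Reduced Hamiltonian away from B_m (Eq. S4 without the non-secular term), in rad/s.
a, b, J = 2 * np.pi * 3e6, 2 * np.pi * 2e6, 80e3      # arbitrary offsets, placeholder coupling
H = a * Sz + b * Pz + 2 * np.pi * J * (Sz @ Pz)

tau = 1.0 / (4.0 * J)
Rx = lambda theta: expm(-1j * theta * (Sx + Px))       # global rotation about x

U_coc = Rx(np.pi / 2) @ expm(-1j * H * tau) @ Rx(np.pi) @ expm(-1j * H * tau) @ Rx(np.pi / 2)
U_ref = expm(-1j * 2 * np.pi * J * (Sy @ Py) * 2 * tau)   # right-hand side of Eq. (S10)
print("max |U_coc - U_ref|    :", np.max(np.abs(U_coc - U_ref)))   # ~ machine precision

rho_zq = Sx @ Py - Sy @ Px
comm = lambda A, B: A @ B - B @ A
print("||[rho_zq, Sz Sz']||   :", np.linalg.norm(comm(rho_zq, Sz @ Pz)))  # 0, cf. Eq. (S13)
print("||[rho_zq, Sz + Sz']|| :", np.linalg.norm(comm(rho_zq, Sz + Pz)))  # 0: common-mode shifts
```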
To assess the impact of noise, we consider a contribution to the Hamiltonian of the form \(H_{\rm n}=|\gamma_{e}|\bigl(\beta\tilde{S}_{\rm z}+\beta^{\prime}S^{\prime}_{\rm z}\bigr)\) with \(\beta\) and \(\beta^{\prime}\) constant, and make use of standard spin transformation rules to re-calculate \(\rho_{\rm ZQ,eff}\) after an evolution interval \(\tau\); we find \[\bigl(\tilde{S}_{x}S^{\prime}_{y}-\tilde{S}_{y}S^{\prime}_{x}\bigr)\to\bigl(\tilde{S}_{x}S^{\prime}_{y}-\tilde{S}_{y}S^{\prime}_{x}\bigr)\cos(|\gamma_{e}|(\beta-\beta^{\prime})\tau)+\bigl(\tilde{S}_{x}S^{\prime}_{x}+\tilde{S}_{y}S^{\prime}_{y}\bigr)\sin(|\gamma_{e}|(\beta-\beta^{\prime})\tau)\] (S15) From the expression above, it is clear that the system dephases in the presence of imbalance between the noise amplitudes at either spin site (the case for \(\beta_{\rm S}(t)\) and \(\beta_{\rm S^{\prime}}(t)\) in the main text). Partial compensation can be attained, e.g., by intercalating a \(\pi\)-pulse at the midpoint of the zero-quantum evolution interval (or, more generally, by a train of inversion pulses). This can be seen from the fact that a (global) \((\pi)_{x}\)-pulse inverts the sign of \(\bigl(\tilde{S}_{x}S^{\prime}_{y}-\tilde{S}_{y}S^{\prime}_{x}\bigr)\) but leaves \(\bigl(\tilde{S}_{x}S^{\prime}_{x}+\tilde{S}_{y}S^{\prime}_{y}\bigr)\) unchanged. Further, after a similar derivation we find \[\bigl(\tilde{S}_{x}S^{\prime}_{x}+\tilde{S}_{y}S^{\prime}_{y}\bigr)\to\bigl(\tilde{S}_{x}S^{\prime}_{x}+\tilde{S}_{y}S^{\prime}_{y}\bigr)\cos(|\gamma_{e}|(\beta-\beta^{\prime})\tau)-\bigl(\tilde{S}_{x}S^{\prime}_{y}-\tilde{S}_{y}S^{\prime}_{x}\bigr)\,\sin(|\gamma_{e}|(\beta-\beta^{\prime})\tau)\] (S16) Therefore, evolution under a protocol of the form \(\tau-(\pi)_{x}-\tau^{\prime}\) yields \[\rho_{\rm ZQ,eff}(\tau,\tau^{\prime})=-\bigl(\tilde{S}_{x}S^{\prime}_{y}-\tilde{S}_{y}S^{\prime}_{x}\bigr)\cos\bigl(|\gamma_{e}|(\beta-\beta^{\prime})(\tau-\tau^{\prime})\bigr)+\bigl(\tilde{S}_{x}S^{\prime}_{x}+\tilde{S}_{y}S^{\prime}_{y}\bigr)\sin\bigl(|\gamma_{e}|(\beta-\beta^{\prime})(\tau-\tau^{\prime})\bigr)\] (S17) hence leading to an echo at \(\tau=\tau^{\prime}\). In this sense, the present approach is fully compatible with dynamical decoupling protocols, which can be integrated into the protocol for additional noise protection. For completeness, we mention that electric fields \(\vec{\varepsilon}\) selectively affect spin \(S\) through contributions to the Hamiltonian of the form \[H_{\varepsilon}=d_{\parallel}\varepsilon_{\rm z}\left(S_{\rm z}^{2}-\frac{2}{3}\right)-d_{\perp}\left(\varepsilon_{x}\big(S_{x}S_{y}+S_{y}S_{x}\big)+\varepsilon_{y}\big(S_{x}^{2}-S_{y}^{2}\big)\right)\rightarrow\\ -d_{\parallel}\varepsilon_{\rm z}\tilde{S}_{\rm z}-2d_{\perp}\left(\varepsilon_{x}\big(\tilde{S}_{x}\tilde{S}_{y}+\tilde{S}_{y}\tilde{S}_{x}\big)+\varepsilon_{y}\big(\tilde{S}_{x}^{2}-\tilde{S}_{y}^{2}\big)\right),\] (S18) where the last expression holds in the reduced representation used herein with the correspondence \(S_{x,y}\rightarrow\sqrt{2}\;\tilde{S}_{x,y}\) (see Section I). Since spin \(S^{\prime}\) is insensitive to electric fields, no phase compensation takes place and the dyad behaves as an electrometer, as stated in the main text. Finally, thermal sensing in Fig.
4 starts with a \(\pi/2\)-phase shift on spin \(\tilde{S}\), hence leading to the transformation \[\big(\tilde{S}_{x}S_{y}^{\prime}-\tilde{S}_{y}S_{x}^{\prime}\big)\rightarrow\big(\tilde{S}_{y}S_{y}^{\prime}+\tilde{S}_{x}S_{x}^{\prime}\big)\rightarrow\big(\tilde{S}_{y}S_{y}^{\prime}+\tilde{S}_{x}S_{x}^{\prime}\big)\cos(\delta\omega\tilde{\tau})-\big(\tilde{S}_{x}S_{y}^{\prime}-\tilde{S}_{y}S_{x}^{\prime}\big)\sin(\delta\omega\tilde{\tau})\] (S19) where we obtain the last expression after evolution for a time \(\tilde{\tau}\) under the Hamiltonian \(H(\mathcal{T})=\delta\omega\tilde{S}_{\rm z}+2\pi\jmath_{\parallel}\tilde{S}_{\rm z}S_{\rm z}^{\prime}\), with \(\delta\omega=\left(\frac{d\Delta}{d\mathcal{T}}\right)\delta\mathcal{T}\). Note that the first term in the sum, \(\big(\tilde{S}_{y}S_{y}^{\prime}+\tilde{S}_{x}S_{x}^{\prime}\big)=\frac{1}{2}\big(\tilde{S}_{+}S_{-}^{\prime}+\tilde{S}_{-}S_{+}^{\prime}\big)\), remains unchanged (and thus undetectable) upon application of a zero- to single-quantum conversion whereas the second term transforms as \(\big(\tilde{S}_{x}S_{y}^{\prime}-\tilde{S}_{y}S_{x}^{\prime}\big)\to\frac{1}{2}\big(\tilde{S}_{\rm z}-S_{\rm z}^{\prime}\big)\). Therefore, observation of spin \(\tilde{S}\) yields a net signal \[\Sigma=-\frac{1}{2}\text{Tr}\big\{\tilde{S}_{\rm z}^{2}\big\}\sin(\delta\omega\tilde{\tau})\approx-\frac{1}{4}\,\delta\omega\tilde{\tau},\] (S20) where the last expression is valid at sufficiently short evolution times. **V-Spin dynamics simulations** Unitary spin dynamics is evaluated by exact diagonalization of the Hamiltonian in Eq. (S4). In order to include the effects of the magnetic 'fluctuators', we add a time-dependent noise amplitude to the external field, \(B\to B+\beta(t)\). Here, the random variable \(\beta(t)\) changes over time at a fixed rate \(r\). Every time it changes, it takes a value from a uniform distribution with zero mean and width \(\beta_{\text{rms}}\). In practice, the quantum dynamics of the spin system is computed in small time steps \(dt\), so the noise-induced field shifts occur with a probability \(p=r\times dt\). Averaging over many trajectories, i.e. the full evolution of a complete pulse sequence, yields the final density matrix. To gauge the impact of local differences between the environments at either spin site, we model the noise fields as the sum of a global and a local contribution independent from each other, namely, \(\beta(t)=\beta_{g}(t)+\beta_{l}(t)\) and \(\beta^{\prime}(t)=\beta_{g}(t)+\beta_{l}^{\prime}(t)\). In our simulations, we choose \(\beta_{l}(t)\) and \(\beta_{l}^{\prime}(t)\) so that \(\beta_{\text{rms}}=\beta_{\text{rms}}^{\prime}=\text{constant}\). Under these conditions \(\langle\beta_{l}^{2}\rangle=\langle{\beta_{l}^{\prime}}^{2}\rangle\), and \(\xi\to 0\) as \(\langle\beta_{l}^{2}\rangle\to 0\) (correspondingly, \(\xi\to 1\) as \(\langle\beta_{g}^{2}\rangle\to 0\)). In our simulations, all pulses are considered instantaneous and perfect transformations, so there is no magnetic noise acting during them.
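For illustration only, a minimal Python sketch of the fluctuator model just described is given below (this is not the authors' code); it applies the model to the simplest observable, the Hahn-echo decay of a single spin far from \(B_{\rm m}\). The switching rate, step size, and trajectory count are placeholder choices.

```python
import numpy as np

rng = np.random.default_rng(0)
gamma_e  = 2 * np.pi * 28.025e9     # rad s^-1 T^-1
beta_rms = 1e-6                     # rms noise amplitude, T (1 uT)
rate     = 1e4                      # fluctuator switching rate, s^-1 (placeholder)
dt       = 2e-7                     # integration step, s

def noise_trajectory(n_steps):
    """Piecewise-constant beta(t): redrawn with probability rate*dt at every step."""
    beta = np.empty(n_steps)
    b = rng.uniform(-1.0, 1.0) * np.sqrt(3.0) * beta_rms   # uniform draw with rms beta_rms
    for i in range(n_steps):
        if rng.random() < rate * dt:
            b = rng.uniform(-1.0, 1.0) * np.sqrt(3.0) * beta_rms
        beta[i] = b
    return beta

def hahn_echo(total_time, n_traj=1000):
    """Trajectory-averaged Hahn-echo amplitude of a single spin far from B_m."""
    n = int(total_time / dt)
    sig = 0.0
    for _ in range(n_traj):
        beta = noise_trajectory(n)
        # phase acquired before minus after the central pi pulse
        phase = gamma_e * dt * (beta[: n // 2].sum() - beta[n // 2:].sum())
        sig += np.cos(phase)
    return sig / n_traj

for t in (10e-6, 50e-6, 100e-6):
    print(f"2*tau = {1e6 * t:5.0f} us : echo amplitude = {hahn_echo(t):.3f}")
# The amplitude decays with increasing delay; the same trajectory machinery, applied to the
# full dyad propagator, underlies the calculated curves shown in Figs. 2 and 3.
```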
2307.00426
Sparsity-aware generalization theory for deep neural networks
Deep artificial neural networks achieve surprising generalization abilities that remain poorly understood. In this paper, we present a new approach to analyzing generalization for deep feed-forward ReLU networks that takes advantage of the degree of sparsity that is achieved in the hidden layer activations. By developing a framework that accounts for this reduced effective model size for each input sample, we are able to show fundamental trade-offs between sparsity and generalization. Importantly, our results make no strong assumptions about the degree of sparsity achieved by the model, and it improves over recent norm-based approaches. We illustrate our results numerically, demonstrating non-vacuous bounds when coupled with data-dependent priors in specific settings, even in over-parametrized models.
Ramchandran Muthukumar, Jeremias Sulam
2023-07-01T20:59:05Z
http://arxiv.org/abs/2307.00426v2
# Sparsity-aware generalization theory for deep neural networks ###### Abstract Deep artificial neural networks achieve surprising generalization abilities that remain poorly understood. In this paper, we present a new approach to analyzing generalization for deep feed-forward ReLU networks that takes advantage of the degree of sparsity that is achieved in the hidden layer activations. By developing a framework that accounts for this reduced effective model size for each input sample, we are able to show fundamental trade-offs between sparsity and generalization. Importantly, our results make no strong assumptions about the degree of sparsity achieved by the model, and it improves over recent norm-based approaches. We illustrate our results numerically, demonstrating non-vacuous bounds when coupled with data-dependent priors in specific settings, even in over-parametrized models. ## 1 Introduction Statistical learning theory seeks to characterize the generalization ability of machine learning models, obtained from finite training data, to unseen test data. The field is by now relatively mature, and several tools exist to provide upper bounds on the generalization error, \(R(h)\). Often the upper bounds depend on the empirical risk, \(\hat{R}(h)\), and different characterizations of complexity of the hypothesis class as well as potentially specific data-dependent properties. The renewed interest in deep artificial neural network models has demonstrated important limitations of existing tools. For example, VC dimension often simply relates to the number of model parameters and is hence insufficient to explain generalization of overparameterized models (Bartlett et al., 2019). Traditional measures based on Rademacher complexity are also often vacuous, as these networks can indeed be trained to fit random noise (Zhang et al., 2017). Margin bounds have been adapted to deep non-linear networks (Bartlett et al., 2017; Golowich et al., 2018; Neyshabur et al., 2015, 2018), albeit still unable to provide practically informative results. An increasing number of studies advocate for non-uniform data-dependent measures to explain generalization in deep learning (Nagarajan and Kolter, 2019; Perez and Louis, 2020; Wei and Ma, 2019). Of particular interest are those that employ the sensitivity of a data-dependent predictor to parameter perturbations - sometimes also referred to as _flatness_(Shawe-Taylor and Williamson, 1997; Neyshabur et al., 2017; Dziugaite and Roy, 2017; Arora et al., 2018; Li et al., 2018; Nagarajan and Kolter, 2019; Wei and Ma, 2019; Sulam et al., 2020; Banerjee et al., 2020). This observation has received some empirical validation as well (Zhang et al., 2017; Keskar et al., 2017; Izmailov et al., 2018; Neyshabur et al., 2019; Jiang* et al., 2020; Foret et al., 2021). Among the theoretical results of this line of work, Arora et al. (2018) study the generalization properties of a _compressed_ network, and Dziugaite and Roy (2017); Neyshabur et al. (2017) study a stochastic perturbed version of the original network. The work in (Wei and Ma, 2019) provides improved bounds on the generalization error of neural networks as measured by a low Jacobian norm with respect to training data, while Wei and Ma (2020) capture the sensitivity of a neural network to perturbations in intermediate layers. 
PAC-Bayesian analysis provides an alternate way of studying generalization by incorporating prior knowledge on a distribution of well-performing predictors in a Bayesian setting (McAllester, 1998; Guedj, 2019; Alquier, 2021). Recent results (Dziugaite and Roy, 2017, 2018; Zhou et al., 2019) have further strengthened the standard PAC-Bayesian analysis by optimizing over the posterior distribution to generate non-vacuous bounds on the expected generalization error of stochastic neural networks. Derandomized versions of PAC-Bayes bounds have also been recently developed (Nagarajan and Kolter, 2019; Banerjee et al., 2020) relying on the sensitivity or _noise resilience_ of an obtained predictor. All of these works are insightful, alas important gaps remain in understanding generalization in non-linear, over-parameterized networks (Perez and Louis, 2020). **Our contributions.** In this work we employ tools of sensitivity analysis and PAC-Bayes bounds to provide generalization guarantees on deep ReLU feed-forward networks. Our key contribution is to make explicit use of the sparsity achieved by these networks across their different layers, reflecting the fact that only sub-networks, of reduced sizes and complexities, are active at every sample. Similar in spirit to the observations in Muthukumar and Sulam (2022), we provide conditions under which the set of active neurons (smaller than the number of total neurons) is stable over suitable distributions of networks, with high-probability. In turn, these results allow us to instantiate recent de-randomized PAC-Bayes bounds (Nagarajan and Kolter, 2019) and obtain new guarantees that do not depend on the global Lipschitz constant, nor are they exponential in depth. Importantly, our results provide data-dependent non-uniform guarantees that are able to leverage the structure (sparsity) obtained on a specific predictor. As we show experimentally, this degree of sparsity - the reduced number of active neurons - need not scale linearly with the width of the model or the number of parameters, thus obtaining bounds that are significantly tighter than known results. We also illustrate our generalization results on MNIST for models of different width and depth, providing non-vacuous bounds in certain settings. **Manuscript organization.** After introducing basic notation, definitions and problem settings, we provide a detailed characterization of stable inactive sets in single-layer feed-forward maps in Section 2. Section 3 presents our main results by generalizing our analysis to multiple layers, introducing appropriate distributions over the hypothesis class and tools from de-randomized PAC-Bayes theory. We demonstrate our bounds numerically in Section 4, and conclude in Section 5. ### Notation And Definitions Sets and spaces are denoted by capital (and often calligraphic) letters, with the exception of the set \([K]=\{1,\ldots,K\}\). For a Banach space \(\mathcal{W}\) embedded with norm \(\left\|\cdot\right\|_{\mathcal{W}}\), we denote by \(\mathcal{B}_{r}^{\mathcal{W}}(\mathbf{W})\), a bounded ball centered around \(\mathbf{W}\) with radius \(r\). Throughout this work, scalar quantities are denoted by lower or upper case (not bold) letters, and vectors with bold lower case letters. Matrices are denoted by bold upper case letters: \(\mathbf{W}\) is a matrix with _rows_\(\mathbf{w}[i]\). We denote by \(\mathcal{P}_{I}\), the index selection operator that restricts input to the coordinates specified in the set \(I\). 
For a vector \(\mathbf{x}\in\mathbb{R}^{d}\) and \(I\subset[d]\), \(\mathcal{P}_{I}:\mathbb{R}^{d}\rightarrow\mathbb{R}^{|I|}\) is defined as \(\mathcal{P}_{I}(\mathbf{x}):=\mathbf{x}[I]\). For a matrix \(\mathbf{W}\in\mathbb{R}^{p\times d}\) and \(I\subset[p]\), \(\mathcal{P}_{I}(\mathbf{W})\in\mathbb{R}^{|I|\times d}\) restricts \(\mathbf{W}\) to the _rows_ specified by \(I\). For row and column index sets \(I\subset[p]\) and \(J\subset[d]\), \(\mathcal{P}_{I,J}(\mathbf{W})\in\mathbb{R}^{|I|\times|J|}\) restricts \(\mathbf{W}\) to the corresponding sub-matrix. Throughout this work, we refer to _sparsity_ as the _number of zeros_ of a vector, so that for \(\mathbf{x}\in\mathbb{R}^{d}\) with degree of sparsity \(s\), \(\left\|\mathbf{x}\right\|_{0}=d-s\). We denote the induced operator norm by \(\left\|\cdot\right\|_{2}\), and the Frobenius norm by \(\left\|\cdot\right\|_{F}\). In addition, we will often use operator norms of reduced matrices induced by sparsity patterns. To this end, the following definition will be used extensively. **Definition 1**: _(Sparse Induced Norms) Let \(\mathbf{W}\in\mathbb{R}^{d_{2}\times d_{1}}\) and \((s_{2},s_{1})\) be sparsity levels such that \(0\leq s_{1}\leq d_{1}-1\) and \(0\leq s_{2}\leq d_{2}-1\). We define the \((s_{2},s_{1})\) sparse induced norm \(\left\|\cdot\right\|_{(s_{2},s_{1})}\) as_ \[\left\|\mathbf{W}\right\|_{(s_{2},s_{1})}:=\max_{|J_{2}|=d_{2}-s_{2}}\ \ \max_{|J_{1}|=d_{1}-s_{1}}\ \ \left\|\mathcal{P}_{J_{2},J_{1}}(\mathbf{W})\right\|_{2}.\] The sparse induced norm \(\left\|\cdot\right\|_{(s_{2},s_{1})}\) measures the induced operator norm of a worst-case sub-matrix. For any two sparsity vectors \((s_{2},s_{1})\preceq(\hat{s}_{2},\hat{s}_{1})\), one can show that \(\left\|\mathbf{W}\right\|_{(\hat{s}_{2},\hat{s}_{1})}\leq\left\|\mathbf{W} \right\|_{(s_{2},s_{1})}\) for any matrix \(\mathbf{W}\) (see Lemma 4). In particular, \[\max_{i,J}\left|\mathbf{W}[i,j]\right|=\left\|\mathbf{W}\right\|_{(d_{2}-1,d _{1}-1)}\leq\left\|\mathbf{W}\right\|_{(s_{2},s_{1})}\leq\left\|\mathbf{W} \right\|_{(0,0)}=\left\|\mathbf{W}\right\|_{2}.\] Thus, the sparse norm interpolates between the maximum absolute entry norm and the operator norm. Frequently in our exposition we rely on the case when \(s_{2}=d_{2}-1\), thus obtaining \(\left\|\mathbf{W}\right\|_{(d_{2}-1,s_{1})}=\max_{i\in[d_{2}]}\max_{|J_{1}|=d_{1} -s_{1}}\left\|\mathcal{P}_{J_{1}}(\mathbf{w}[i])\right\|_{2}\), the maximum norm of any reduced row of matrix \(\mathbf{W}\). Outside of the special cases listed above, computing the sparse norm for a general \((s_{2},s_{1})\) has combinatorial complexity. Instead, a modified version of the babel function (see Tropp et al. (2003)) provides computationally efficient upper bounds1. Footnote 1: The particular definition used in this paper is weaker but more computationally efficient than that introduced in Muthukumar and Sulam (2022). **Definition 2**: _(Reduced Babel Function (Muthukumar and Sulam, 2022)) Let \(\mathbf{W}\in\mathbb{R}^{d_{2}\times d_{1}}\), the reduced babel function at row sparsity level \(s_{2}\in\{0,\ldots,d_{2}-1\}\) and column sparsity level \(s_{1}\in\{0,\ldots,d_{1}-1\}\) is defined as2,_ Footnote 2: When \(s_{2}=d_{2}-1,|J_{2}|=1\), we simply define \(\mu_{(s_{2},s_{1})}(\mathbf{W}):=0\). 
\[\mu_{s_{2},s_{1}}(\mathbf{W}):=\frac{1}{\left\|\mathbf{W}\right\|_{(d_{2}-1,s_{1})}^{2}}\max_{\begin{subarray}{c}J_{2}\subset[d_{2}],\\ |J_{2}|=d_{2}-s_{2}\end{subarray}}\max_{j\in J_{2}}\left[\sum_{\begin{subarray}{c}i\in J_{2},\\ i\neq j\end{subarray}}\max_{\begin{subarray}{c}J_{1}\subseteq[d_{1}],\\ |J_{1}|=d_{1}-s_{1}\end{subarray}}\left|\mathcal{P}_{J_{1}}(\mathbf{w}[i])\mathcal{P}_{J_{1}}(\mathbf{w}[j])^{T}\right|\right].\] For the special case when \(s_{2}=0\), the reduced babel function is equivalent to the babel function from Tropp et al. (2003) on the transposed matrix \(\mathbf{W}^{T}\). We show in Lemma 5 that the sparse norm can be bounded using the reduced babel function and the maximum reduced row norm \(\left\|\cdot\right\|_{(d_{2}-1,s_{1})}\), \[\left\|\mathbf{W}\right\|_{(s_{2},s_{1})}\leq\left\|\mathbf{W}\right\|_{(d_{2}-1,s_{1})}\sqrt{1+\mu_{s_{2},s_{1}}(\mathbf{W})}. \tag{1}\] See Appendix D for a computationally efficient implementation of the reduced babel function. ### Learning Theoretic Framework We consider the task of multi-class classification with a bounded input space \(\mathcal{X}=\{\mathbf{x}\in\mathbb{R}^{d_{0}}\mid\left\|\mathbf{x}\right\|_{2}\leq\mathbf{M}_{\mathcal{X}}\}\) and labels \(\mathcal{Y}=\{1,\ldots,C\}\) from an unknown distribution \(\mathcal{D}_{\mathcal{Z}}\) over \(\mathcal{Z}:=(\mathcal{X}\times\mathcal{Y})\). We search for a hypothesis in \(\mathcal{H}\subset\{h:\mathcal{X}\rightarrow\mathcal{Y}^{{}^{\prime}}\}\) that is an accurate predictor of label \(y\) given input \(\mathbf{x}\). Note that \(\mathcal{Y}\) and \(\mathcal{Y}^{{}^{\prime}}\) need not be the same. In this work, we consider \(\mathcal{Y}^{{}^{\prime}}=\mathbb{R}^{C}\), and consider the predicted label of the hypothesis \(h\) as \(\hat{y}(\mathbf{x}):=\operatorname*{argmax}_{j}[h(\mathbf{x})]_{j}\)3. The quality of prediction of \(h\) at \(\mathbf{z}=(\mathbf{x},y)\) is informed by the margin defined as \(\rho(h,\mathbf{z}):=\big([h(\mathbf{x})]_{y}-\max_{j\neq y}[h(\mathbf{x})]_{j}\big)\). If the margin is positive, then the predicted label is correct. For a threshold hyper-parameter \(\gamma\geq 0\), we define a \(\gamma\)-threshold 0/1 loss \(\ell_{\gamma}\) based on the margin as \(\ell_{\gamma}(h,\mathbf{z}):=1\left\{\rho(h,\mathbf{z})<\gamma\right\}\). Note that \(\ell_{\gamma}\) is a stricter version of the traditional zero-one loss \(\ell_{0}\), since \(\ell_{0}(h,\mathbf{z})\leq\ell_{\gamma}(h,\mathbf{z})\) for all \(\gamma\geq 0\). With these elements, the _population risk_ (also referred to as _generalization error_) \(R_{\gamma}\) of a hypothesis is the expected loss it incurs on a randomly sampled data point, \(R_{\gamma}(h):=\mathbb{E}_{\mathbf{z}\sim\mathcal{D}_{\mathcal{Z}}}\left[\ell_{\gamma}\big(h,\mathbf{z}\big)\right]\). The goal of supervised learning is to obtain a hypothesis with low population risk \(R_{0}(h)\), the probability of misclassification. While the true distribution \(\mathcal{D}_{\mathcal{Z}}\) is unknown, we assume access to an i.i.d. training set \(\mathbf{S}_{T}=\{\mathbf{z}^{(1)},\ldots,\mathbf{z}^{(m)}\}\sim(\mathcal{D}_{\mathcal{Z}})^{m}\) and we seek to minimize the _empirical risk_ \(\hat{R}_{\gamma}\), the average loss incurred on the training sample \(\mathbf{S}_{T}\), i.e. \(\hat{R}_{\gamma}(h):=\frac{1}{m}\sum_{i=1}^{m}\ell_{\gamma}\left(h,\mathbf{z}^{(i)}\right)\).
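To make Definition 2 and the bound of Eq. (1) concrete, the following sketch (an illustration of ours, not the Appendix D implementation; all function names are ours) evaluates the reduced babel function via top-\(k\) sums and checks Eq. (1) against a brute-force computation of \(\left\|\mathbf{W}\right\|_{(s_{2},s_{1})}\) on a small random matrix.

```python
import numpy as np
from itertools import combinations

def max_reduced_row_norm(W, s1):
    """||W||_{(d2-1, s1)}: largest l2-norm of any row restricted to its best d1-s1 coordinates."""
    keep = W.shape[1] - s1
    sq = np.sort(W**2, axis=1)[:, ::-1][:, :keep]
    return np.sqrt(sq.sum(axis=1).max())

def reduced_babel(W, s2, s1):
    """Reduced babel function of Definition 2 (top-k sums replace the explicit subset search)."""
    d2, d1 = W.shape
    keep_rows, keep_cols = d2 - s2, d1 - s1
    if keep_rows <= 1:
        return 0.0
    norm = max_reduced_row_norm(W, s1)
    # m[i, j] = max over |J1| = d1 - s1 of | <w_i, w_j> restricted to J1 |
    m = np.zeros((d2, d2))
    for i in range(d2):
        for j in range(d2):
            if i != j:
                p = np.sort(W[i] * W[j])
                m[i, j] = max(p[-keep_cols:].sum(), -p[:keep_cols].sum())
    best = 0.0
    for j in range(d2):
        others = np.sort(np.delete(m[:, j], j))[::-1]
        best = max(best, others[:keep_rows - 1].sum())
    return best / norm**2

def sparse_norm_bruteforce(W, s2, s1):
    """Exact ||W||_{(s2, s1)} by enumerating row/column subsets (small matrices only)."""
    d2, d1 = W.shape
    out = 0.0
    for rows in combinations(range(d2), d2 - s2):
        for cols in combinations(range(d1), d1 - s1):
            out = max(out, np.linalg.norm(W[np.ix_(rows, cols)], 2))
    return out

rng = np.random.default_rng(0)
W = rng.standard_normal((6, 8))
s2, s1 = 2, 3
lhs = sparse_norm_bruteforce(W, s2, s1)
rhs = max_reduced_row_norm(W, s1) * np.sqrt(1 + reduced_babel(W, s2, s1))
print(f"||W||_(s2,s1) = {lhs:.3f}  <=  bound from Eq. (1) = {rhs:.3f}")
```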
We shall later see that for any predictor, \(R_{0}(h)\) can be upper bounded using the stricter empirical risk \(\hat{R}_{\gamma}(h)\) for an appropriately chosen \(\gamma>0\). Footnote 3: The argmax here is assumed to break ties deterministically. In this work, we study the hypothesis class \(\mathcal{H}\) containing feed-forward neural networks with \(K\) hidden layers. Each hypothesis \(h\in\mathcal{H}\) is identified with its weights \(\{\mathbf{W}_{k}\}_{k=1}^{K+1}\), and is a sequence of \(K\) linear maps \(\mathbf{W}_{k}\in\mathbb{R}^{d_{k}\times d_{k-1}}\) composed with a nonlinear activation function \(\sigma(\cdot)\) and a final linear map \(\mathbf{W}_{K+1}\in\mathbb{R}^{C\times d_{K}}\), \[h(\mathbf{x}_{0}):=\mathbf{W}_{K+1}\sigma\left(\mathbf{W}_{K}\sigma\left(\mathbf{W}_{K-1}\cdots\sigma\left(\mathbf{W}_{1}\mathbf{x}_{0}\right)\cdots\right)\right).\] We exclude bias from our definitions of feed-forward layers for simplicity4. We denote by \(\mathbf{x}_{k}\) the \(k^{th}\) hidden layer representation of network \(h\) at input \(\mathbf{x}_{0}\), so that \(\mathbf{x}_{k}:=\sigma\left(\mathbf{W}_{k}\mathbf{x}_{k-1}\right)\ \forall 1\leq k\leq K\), and \(h(\mathbf{x}):=\mathbf{W}_{K+1}\mathbf{x}_{K}\). Throughout this work, the activation function is assumed to be the Rectified Linear Unit, or ReLU, defined by \(\sigma(x)=\max\{x,0\}\), acting entrywise on an input vector. ## 2 Warm Up: Sparsity In Feed-Forward Maps As a precursor to our sensitivity analysis for multi-layer feed-forward networks, we first consider a generic feed-forward map \(\Phi(\mathbf{x}):=\sigma(\mathbf{W}\mathbf{x})\). A naive bound on the norm of the function output is \(\left\|\Phi(\mathbf{x})\right\|_{2}\leq\left\|\mathbf{W}\right\|_{2}\left\|\mathbf{x}\right\|_{2}\), but this ignores the sparsity of the output of the feed-forward map (due to the ReLU). Suppose there exists a set \(I\) of inactive indices such that \(\mathcal{P}_{I}(\Phi(\mathbf{x}))=\mathbf{0}\), i.e. for all \(i\in I\), \(\mathbf{w}[i]\cdot\mathbf{x}\leq 0\). In the presence of such an index set, clearly \(\left\|\Phi(\mathbf{x})\right\|_{2}\leq\left\|\mathcal{P}_{I^{c}}(\mathbf{W})\right\|_{2}\left\|\mathbf{x}\right\|_{2}\)5. Thus, estimates of the effective size of the feed-forward output, and other notions such as sensitivity to parameter perturbations, can be refined by accounting for the sparsity of activation patterns. Note that the inactive index set \(I\) varies with each input, \(\mathbf{x}\), and with the parameters of the predictor, \(\mathbf{W}\). Footnote 5: \(I^{c}\) is the complement of the index set \(I\), also referred to as \(J\) when clear from context. For some \(\zeta_{0},\xi_{1},\eta_{1}>0\) and sparsity levels \(s_{1},s_{0}\), let \(\mathcal{X}_{0}=\left\{\mathbf{x}\in\mathbb{R}^{d_{0}}\mid\|\mathbf{x}\|_{2}\leq\zeta_{0},\;\|\mathbf{x}\|_{0}\leq d_{0}-s_{0}\right\}\) denote a bounded sparse input domain and let \(\mathcal{W}_{1}:=\left\{\mathbf{W}\in\mathbb{R}^{d_{1}\times d_{0}}\mid\|\mathbf{W}\|_{(d_{1}-1,s_{0})}\leq\xi_{1},\;\mu_{s_{1},s_{0}}(\mathbf{W})\leq\eta_{1}\right\}\) denote a parameter space. We now define a radius function that measures the amount of relative perturbation within which a certain inactive index set is stable.
**Definition 3**: _(Sparse local radius6) For any weight \(\mathbf{W}\in\mathbb{R}^{d_{1}\times d_{0}}\), input \(\mathbf{x}\in\mathbb{R}^{d_{0}}\) and sparsity level \(1\leq s_{1}\leq d_{1}\), we define a sparse local radius and a sparse local index set as_ Footnote 6: The definition here is inspired by Muthukumar and Sulam (2022) but stronger. \[r_{\text{sparse}}(\mathbf{W},\mathbf{x},s_{1}):=\sigma\left(\text{\sc sort} \left(-\frac{\mathbf{W}\cdot\mathbf{x}}{\xi_{1}\zeta_{0}},\;s_{1}\right) \right),\quad I(\mathbf{W},\mathbf{x},s_{1}):=\text{\sc Top-k}\left(-\frac{ \mathbf{W}\cdot\mathbf{x}}{\xi_{1}\zeta_{0}},s_{1}\right). \tag{2}\] _Here, \(\text{\sc Top-k}(\mathbf{u},j)\) is the index set of the top \(j\) entries in \(\mathbf{u}\), and \(\text{\sc sort}(\mathbf{u},j)\) is its \(j^{th}\) largest entry._ We note that when evaluated on a weight \(\mathbf{W}\in\mathcal{W}_{1}\) and input \(\mathbf{x}\in\mathcal{X}_{0}\), for all sparsity levels the sparse local radius \(r_{\text{sparse}}(\mathbf{W},\mathbf{x},s_{1})\in[0,1]\). We denote the sparse local index set as \(I\) when clear from the context. We now analyze the stability of the sparse local index set and the resulting reduced sensitivity of model output. For brevity, we must defer all proofs to the appendix. **Lemma 1**: _Let \(\epsilon_{0}\in[0,1]\) be a relative input corruption level and let \(\epsilon_{1}\in[0,1]\) be the relative weight corruption. For the feed-forward map \(\Phi\) with weight \(\mathbf{W}\in\mathcal{W}_{1}\) and input \(\mathbf{x}\in\mathcal{X}_{0}\), the following statements hold for any output sparsity level \(1\leq s_{1}\leq d_{1}\),_ 1. _Existence of an inactive index set and bounded outputs:_ _If_ \(r_{\text{sparse}}(\mathbf{W},\mathbf{x},s_{1})>0\)_, then the index set_ \(I(\mathbf{W},\mathbf{x},s_{1})\) _is inactive for_ \(\Phi(\mathbf{x})\)_. Moreover,_ \(\left\|\Phi(\mathbf{x})\right\|_{2}\leq\xi_{1}\sqrt{1+\eta_{1}}\cdot\zeta_{0}\)_._ 2. _Stability of an inactive index set to input and parameter perturbations:_ _Suppose_ \(\hat{\mathbf{x}}\) _and_ \(\hat{\mathbf{W}}\) _are perturbed inputs and weights respectively such that,_ \(\left\|\hat{\mathbf{x}}-\mathbf{x}\right\|_{0}\leq d_{0}-s_{0}\) _and,_ \[\frac{\left\|\hat{\mathbf{x}}-\mathbf{x}\right\|_{2}}{\zeta_{0}}\leq\epsilon_{ 0}\;\text{ and }\;\max\left\{\frac{\left\|\hat{\mathbf{W}}-\mathbf{W}\right\|_{(d_{1}-1,s_{0} )}}{\xi_{1}},\frac{\left\|\hat{\mathbf{W}}-\mathbf{W}\right\|_{(s_{1},s_{0})}}{ \xi_{1}\sqrt{1+\eta_{1}}}\right\}\leq\epsilon_{1},\] _and denote_ \(\hat{\Phi}(\mathbf{x})=\sigma(\hat{\mathbf{W}}\mathbf{x})\)_. If_ \(r_{\text{sparse}}(\mathbf{W},\mathbf{x},s_{1})\geq-1+(1+\epsilon_{0})(1+ \epsilon_{1})\)_, then the index set_ \(I(\mathbf{W},\mathbf{x},s_{1})\) _is inactive and stable to perturbations, i.e._7__\(\mathcal{P}_{I}(\Phi(\mathbf{x}))=\mathcal{P}_{I}(\hat{\Phi}(\hat{\mathbf{x}}))= \mathcal{P}_{I}(\hat{\Phi}(\hat{\mathbf{x}}))=\mathbf{0}\)_. Moreover,_ \(\left\|\hat{\Phi}(\hat{\mathbf{x}})-\Phi(\mathbf{x})\right\|_{2}\leq(-1+(1+ \epsilon_{0})(1+\epsilon_{1}))\cdot\xi_{1}\sqrt{1+\eta_{1}}\cdot\zeta_{0}\)_._ Footnote 7: For notational ease we suppress arguments and let \(I=I(\mathbf{W},\mathbf{x},s_{1})\). 3. 
_Stability of sparse local radius_: _For a perturbed input_ \(\hat{\mathbf{x}}\) _such that_ \(\left\|\hat{\mathbf{x}}-\mathbf{x}\right\|_{0}\leq d_{0}-s_{0}\)_, and perturbed weight_ \(\hat{\mathbf{W}}\)_, the difference between sparse local radius is bounded_ \[\left|r_{\text{sparse}}(\hat{\mathbf{W}},\hat{\mathbf{x}},s_{1})-r_{\text{ sparse}}(\mathbf{W},\mathbf{x},s_{1})\right|\leq-1+\left(1+\frac{\left\|\hat{ \mathbf{x}}-\mathbf{x}\right\|_{2}}{\zeta_{0}}\right)\left(1+\frac{\left\| \hat{\mathbf{W}}-\mathbf{W}\right\|_{(d_{1}-1,s_{0})}}{\xi_{1}}\right).\] A key takeaway of this Lemma (see Appendix A.1.1 for its proof) is that one can obtain tighter bounds, on both the size of the network output as well as its sensitivity to corruptions, if the corresponding sparse local radius is sufficiently large. The results above quantify these notions for a given sample. In the next section, we will leverage this characterization within the framework of PAC-Bayes analysis to provide a generalization bound for feed-forward networks. ## 3 A Sparsity-Aware Generalization Theory We shall construct non-uniform data-dependent generalization bounds for feed-forward networks based on a local sensitivity analysis of deep ReLU networks, employing the intuition from the previous section. To do so, we will first study the size of the layer outputs using Definition 2, then measure the sensitivity in layer outputs to parameter perturbations using Lemma 1 across multiple layers, and finally leverage a derandomized PAC-Bayes result from Nagarajan and Kolter (2019b) (see Appendix C.2). Before embarking on the analysis, we note the following convenient property of the margin for any two predictors \(h,\hat{h}\) from (Bartlett et al., 2017, Lemma A.3), \[\left|\left(h(\mathbf{x})_{y}-\max_{j\neq y}h(\mathbf{x})_{j}\right)-\left( \hat{h}(\mathbf{x})_{y}-\max_{j\neq y}\hat{h}(\mathbf{x})_{j}\right)\right| \leq 2\left\|\hat{h}(\mathbf{x})-h(\mathbf{x})\right\|_{\infty}.\] Hence, quantifying the sensitivity of the predictor outputs will inform the sensitivity of the loss. Similar to other works (Nagarajan and Kolter, 2019b; Banerjee et al., 2020), our generalization bound will be derived by studying the sensitivity of neural networks upon perturbations to the layer weights. For the entirety of this section, we fix a set of _base hyper-parameters_ that determine a specific class of neural networks, the variance of a posterior distribution over networks, and the resolution (via a sparsity vector) at which the generalization is measured - see Table 1 for reference. We denote by \(\mathbf{s}=\{s_{1},\ldots,s_{K}\}\) a vector of layer-wise sparsity levels, which reflects the inductive bias of the learner on the potential degree of sparsity of a trained network on the training data. Next we define two hyper-parameters, \(\boldsymbol{\xi}:=\{\xi_{1},\ldots,\xi_{K+1}\}\) where \(\xi_{k}>0\) bounds the sparse norm \(\left\|\cdot\right\|_{(d_{k}-1,s_{k-1})}\) of the layer weights and \(\boldsymbol{\eta}:=\{\eta_{1},\ldots,\eta_{K}\}\) where \(\eta_{k}>0\) bounds the reduced babel function \(\mu_{s_{k},s_{k-1}}(\cdot)\) of the layer weights. Finally, we let \(\boldsymbol{\epsilon}:=\{\epsilon_{1},\ldots,\epsilon_{K+1}\}\) with \(\epsilon_{k}>0\) bound the amount of relative perturbation in the weights. This section treats the quartet \((\mathbf{s},\boldsymbol{\xi},\boldsymbol{\eta},\boldsymbol{\epsilon})\) as constants8, while in the next section we shall discuss appropriate values for these hyper-parameters. 
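Before these hyper-parameters enter the multi-layer analysis, the single-layer quantities of Section 2 can be made concrete. The sketch below is ours: \(\xi_{1}\), \(\zeta_{0}\) and the sparsity level \(s_{1}\) are treated as given constants, and the value substituted for \(\xi_{1}\) in the usage lines is only a placeholder. It computes the sparse local radius and the sparse local index set of Definition 3 and evaluates the perturbation threshold from Lemma 1.

```python
import numpy as np

def sparse_local_radius(W, x, s1, xi1, zeta0):
    """Sparse local radius and index set of Definition 3 for Phi(x) = relu(W @ x)."""
    scores = -(W @ x) / (xi1 * zeta0)
    order = np.argsort(scores)[::-1]                  # entries of `scores`, decreasing
    index_set = order[:s1]                            # Top-k: indices of the s1 largest
    radius = max(float(scores[order[s1 - 1]]), 0.0)   # relu of the s1-th largest entry
    return radius, index_set

def index_set_is_stable(radius, eps0, eps1):
    """Condition of Lemma 1(2) for the inactive set to survive relative corruptions
    of size eps0 (input) and eps1 (weights)."""
    return radius >= -1.0 + (1.0 + eps0) * (1.0 + eps1)

# Usage with placeholder constants (illustrative only).
rng = np.random.default_rng(1)
W, x = rng.normal(size=(64, 32)), rng.normal(size=32)
xi1 = np.linalg.norm(W, axis=1).max()   # max row norm as a stand-in for ||W||_{(d_1-1, s_0)} with s_0 = 0
zeta0 = np.linalg.norm(x)
r, I = sparse_local_radius(W, x, s1=16, xi1=xi1, zeta0=zeta0)
print(r, index_set_is_stable(r, eps0=0.0, eps1=0.01))
```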
Footnote 8: Unless otherwise specified we let \(s_{0}=s_{K+1}=0\) and \(\epsilon_{0}=0\). **Definition 4**: _(Norm bounded feed-forward networks) We define below the parameter domain \(\mathcal{W}_{k}\) and a class of feed-forward networks \(\mathcal{H}_{K+1}\) with \(K\)-hidden layers,_ \[\mathcal{W}_{k}:=\left\{\mathbf{W}\in\mathbb{R}^{d_{k}\times d_{k- 1}}\ |\ \left\|\mathbf{W}\right\|_{(d_{k}-1,s_{k-1})}\leq\xi_{k},\quad\mu_{s_{k},s_{k- 1}}(\mathbf{W})\leq\eta_{k},\right\},\;\forall\;k\in[K],\] \[\mathcal{H}:=\left\{h(\cdot):=\mathbf{W}_{K+1}\sigma\left(\mathbf{W} _{K}\cdots\sigma\left(\mathbf{W}_{1}\cdot\right)\right)\ |\ \left\|\mathbf{W}_{K+1}\right\|_{(C-1,s_{K})}\leq\xi_{K+1},\ \mathbf{W}_{k}\in\mathcal{W}_{k},\;\forall\;k\in[K]\right\}.\] \begin{table} \begin{tabular}{|c|c|} \hline \(\mathbf{s}=\{s_{1},\ldots,s_{k}\}\), \(\ 0\leq s_{k}\leq d_{k}-1\) & Layer wise sparsity vector \\ \hline \(\boldsymbol{\xi}=\{\xi_{1},\ldots,\xi_{K+1}\}\), \(\ 0\leq\xi_{k}\) & Layer wise bound on \(\left\|\cdot\right\|_{(d_{k}-1,s_{k-1})}\) \\ \hline \(\boldsymbol{\eta}=\{\eta_{1},\ldots,\eta_{K}\}\), \(\ 0\leq\eta_{k}\) & Layer wise bound on \(\mu_{s_{k},s_{k-1}}(\cdot)\) \\ \hline \(\boldsymbol{\epsilon}=\{\epsilon_{1},\ldots,\epsilon_{K+1}\}\), \(\ 0\leq\epsilon_{k}\) & Layer wise bound on relative perturbation \\ \hline \end{tabular} \end{table} Table 1: Independent base hyper-parameters To measure the local sensitivity of the network outputs, it will be useful to formalize a notion of local neighborhood for networks. **Definition 5**: _(Local Neighbourhood) Given \(h\in\mathcal{H}\), define \(\mathcal{B}(h,\mathbf{\epsilon})\) to be the local neighbourhood around \(h\) containing perturbed networks \(\hat{h}\) with weights \(\{\hat{\mathbf{W}}_{j}\}_{k=1}^{K+1}\) such that at each layer \(k\)9,_ Footnote 9: For the last layer we only require \(\left\|\hat{\mathbf{W}}_{K+1}-\mathbf{W}_{K+1}\right\|_{C-1,s_{K}}\leq\epsilon_ {K+1}\cdot\xi_{K+1}\). \[\max\left\{\frac{\left\|\hat{\mathbf{W}}_{k}-\mathbf{W}_{k}\right\|_{(s_{k},s_ {k-1})}}{\xi_{k}\sqrt{1+\eta_{k}}},\frac{\left\|\hat{\mathbf{W}}_{k}-\mathbf{W }_{k}\right\|_{(d_{k}-1,s_{k-1})}}{\xi_{k}}\right\}\leq\epsilon_{k}.\] It will be useful to understand the probability that \(\hat{h}\in\mathcal{B}(h,\mathbf{\epsilon})\) when the perturbations to each layer weight are random, in particular from Gaussian distributions over feed-forward networks: **Definition 6**: _(Entrywise Gaussian) Let \(h\in\mathcal{H}\) be any network with \(K+1\) layers, and let \(\mathbf{\sigma}^{2}:=\{\sigma_{1}^{2},\ldots,\sigma_{K+1}^{2}\}\) be a layer-wise variance. We denote by \(\mathcal{N}(h,\mathbf{\sigma}^{2})\) a distribution with mean network \(h\) such that for any \(\hat{h}\sim\mathcal{N}(h,\mathbf{\sigma}^{2})\) with layer weights \(\hat{\mathbf{W}}_{k}\), each entry \(\hat{\mathbf{W}}_{k}[i,j]\sim\mathcal{N}(\mathbf{W}_{k}[i,j],\sigma_{k}^{2})\)._ ### Sensitivity Of Network Output Given a predictor \(h\in\mathcal{H}\), note that the size of a network output for any given input is bounded by \(\left\|h(\mathbf{x}_{0})\right\|_{2}\leq\prod_{k=1}^{K+1}\left\|\mathbf{W}_{k} \right\|_{2}\mathsf{M}_{\mathcal{X}}\), which ignores the sparsity of the intermediate layers. We will now generalize the result in Lemma 1 by making use of the inactive index sets at every layer \(I_{k}\), such that \(\mathcal{P}_{I_{k}}(\mathbf{x}_{k})=\mathbf{0}\), obtaining a tighter (input dependent) characterization of sensitivity to perturbations of the network. 
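As a rough numerical illustration of this point (our sketch, not the construction that follows), one can run a ReLU forward pass, record the inactive coordinates at each hidden layer, and compare the naive product-of-spectral-norms bound with the norms actually attained.

```python
import numpy as np

def forward_with_inactive_sets(weights, x0):
    """ReLU forward pass recording, per hidden layer, the coordinates that are inactive."""
    x, inactive_sets, hidden_norms = x0, [], []
    for W in weights[:-1]:                        # hidden layers
        x = np.maximum(W @ x, 0.0)
        inactive_sets.append(np.flatnonzero(x == 0.0))
        hidden_norms.append(np.linalg.norm(x))
    return weights[-1] @ x, inactive_sets, hidden_norms   # final linear map

rng = np.random.default_rng(2)
dims = [20, 50, 50, 10]                           # d_0, d_1, d_2, C (illustrative sizes)
weights = [rng.normal(size=(dims[k + 1], dims[k])) / np.sqrt(dims[k]) for k in range(3)]
x0 = rng.normal(size=dims[0])

out, inactive, hidden_norms = forward_with_inactive_sets(weights, x0)
naive_bound = np.prod([np.linalg.norm(W, 2) for W in weights]) * np.linalg.norm(x0)
print([len(I) for I in inactive])                 # roughly half the units are inactive
print(np.linalg.norm(out), naive_bound)           # attained output norm vs naive bound
```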
For notational convenience, we define two additional dependent notations: we let \(\zeta_{0}:=\mathsf{M}_{\mathcal{X}}\) and \(\zeta_{k}:=\xi_{k}\sqrt{1+\eta_{k}}\cdot\zeta_{k-1}=\mathsf{M}_{\mathcal{X}} \prod_{n=1}^{k}\xi_{n}\sqrt{1+\eta_{n}}\) denote a bound on the layer-wise size of the outputs. At the final layer, we let \(\zeta_{K+1}:=\xi_{K+1}\zeta_{K}\) as a bound on the network output. Additionally, we define \(\gamma_{k}:=-1+\prod_{n=1}^{k}(1+\epsilon_{n})\) as a threshold on the sparse local radius evaluated at each layer - see Table 2 for a summary. In the last layer, we let this value \(\gamma_{K+1}\) represent the desired margin. For networks \(\hat{h}\) with perturbed weights \(\hat{\mathbf{W}}\), we denote by \(\hat{\mathbf{x}}_{k}:=\sigma\left(\hat{\mathbf{W}}_{k}\hat{\mathbf{x}}_{k-1}\right)\) the perturbed layer representation corresponding to input \(\mathbf{x}_{0}\). **Definition 7**: _(Layer-wise sparse local radius) Let \(h\) be any feed-forward network with weighs \(\mathbf{W}_{k}\in\mathbb{R}^{d_{k}\times d_{k-1}}\), and let \(\mathbf{x}_{0}\in\mathbb{R}^{d_{0}}\). We define a layer-wise sparse local radius and a layer-wise inactive index set as below,_ \[I_{k}(h,\mathbf{x}_{0}):=\text{Top-k}\left(-\frac{\mathbf{W}_{k}\cdot \mathbf{x}_{k-1}}{\xi_{k}\zeta_{k-1}},s_{k}\right),\quad r_{k}(h,\mathbf{x}_{0 }):=\sigma\left(\text{sort}\left(-\frac{\mathbf{W}_{k}\cdot\mathbf{x}_{k-1}}{ \xi_{k}\zeta_{k-1}},\ s_{k}\right)\right).\] Definition 7 now allows us, by employing Lemma 1, to generalize our previous observations to entire network models, as we now show. **Theorem 1**: _Let \(h\in\mathcal{H}\), if at each layer \(k\) the layer-wise sparse local radius is nontrivial, i.e. \(\forall\ k\in[K],\ \ r_{k}(h,\mathbf{x}_{0})>0\). Then the index sets \(I_{k}(h,\mathbf{x}_{0})\) are inactive at layer \(k\) and the size of the hidden layer representations and the network output are bounded as follows,_ \[\forall\ k\in[K],\quad\left\|\mathbf{x}_{k}\right\|_{2}\leq\zeta_{k},\quad \text{and}\quad\left\|h(\mathbf{x}_{0})\right\|_{\infty}\leq\zeta_{K+1}. \tag{3}\] \begin{table} \begin{tabular}{|c|c|} \hline \(\zeta_{k}:=\xi_{k}\sqrt{1+\eta_{k}}\cdot\zeta_{k-1}\), & \(\forall\ k\in[K]\) \\ \hline \(\zeta_{K+1}:=\xi_{K+1}\zeta_{K}\) & Bound on norm of network output \\ \hline \(\gamma_{k}:=-1+\prod_{n=1}^{k}(1+\epsilon_{n}),\ \ \forall\ k\in[K+1]\) & Layer wise threshold for local radius \\ \hline \(r_{k}(h,\mathbf{z}):=\sigma\left(\text{sort}\left(-\left[\frac{\mathbf{w}_{k}[i] \cdot\mathbf{x}_{k-1}}{\xi_{k}\zeta_{k-1}}\right]_{i=1}^{d_{k}},\ d_{k}-s_{k}\right)\right)\) & Layer-wise sparse local radius \\ \hline \end{tabular} \end{table} Table 2: Layer-wise bounds and thresholds. In a similar vein, we can characterize the sensitivity of the network to parameter perturbations. **Theorem 2**: _Let \(h\in\mathcal{H}\) and let \(\hat{h}\in\mathcal{B}(h,\boldsymbol{\epsilon})\) be a nearby perturbed predictor with weights \(\{\hat{\mathbf{W}}_{k}\}\). If each layer-wise sparse local radius is sufficiently large, i.e. 
\(\forall\ k\in[K],\ r_{k}(h,\mathbf{x}_{0})\geq\gamma_{k}\), then the index sets \(I_{k}(h,\mathbf{x}_{0})\) are inactive for the perturbed layer representations \(\hat{\mathbf{x}}_{k}\), and the distances between the layer representations and between the network outputs are bounded as follows,_ \[\forall\ k\in[K],\quad\left\|\hat{\mathbf{x}}_{k}-\mathbf{x}_{k}\right\|_{2}\leq\zeta_{k}\cdot\gamma_{k},\quad\text{and}\quad\left\|\hat{h}(\mathbf{x}_{0})-h(\mathbf{x}_{0})\right\|_{\infty}\leq\zeta_{K+1}\cdot\gamma_{K+1}. \tag{4}\] Proofs of the above propositions can be found in A.1.2 and A.1.3 respectively. ### Sparsity-Aware Generalization We are now ready to state our main theorem on generalization of feed-forward networks that leverages improved sensitivity of network outputs due to stable inactive index sets. **Theorem 3**: _Let \(\mathcal{P}\) be any prior distribution over depth-\((K+1)\) feed-forward networks chosen independently of the training sample. Let \(h\in\mathcal{H}\) be any feed-forward network (possibly trained on sample data), with \(\mathcal{H}\) determined by fixed base hyper-parameters \((\mathbf{s},\boldsymbol{\xi},\boldsymbol{\eta},\boldsymbol{\epsilon})\), and denote the sparse loss by \(\ell_{\mathrm{sparse}}(h,\mathbf{x})=\mathbb{I}\{\exists\,k,\ r_{k}(h,\mathbf{x})<3\gamma_{k}\}\). With probability at least \((1-\delta)\) over the choice of i.i.d training sample \(\textbf{S}_{T}\) of size \(m\), the generalization error of \(h\) is bounded as follows,_ \[R_{0}(h)\leq\hat{R}_{4\zeta_{K+1}\gamma_{K+1}}(h)+\frac{2K}{m}\sum_{\mathbf{x}^{(i)}\in\mathcal{S}_{T}}\ell_{\mathrm{sparse}}(h,\mathbf{x}^{(i)})+\tilde{\mathcal{O}}\left(\sqrt{\frac{\mathrm{KL}\left(\mathcal{N}\left(h,\boldsymbol{\sigma}_{\mathrm{sparse}}^{2}\right)\ ||\ \mathcal{P}\right)}{m}}\right)\] _where \(\boldsymbol{\sigma}_{\mathrm{sparse}}=\{\sigma_{1},\ldots,\sigma_{K}\}\) is defined by \(\sigma_{k}:=\epsilon_{k}\cdot\frac{\xi_{k}}{4\sqrt{2d_{\mathrm{eff}}+\log\left(2(K+1)\sqrt{m}\right)}}\), and where \(d_{\mathrm{eff}}:=\max_{k\in[K]}\frac{(d_{k}-s_{k})\log(d_{k})+(d_{k-1}-s_{k-1})\log(d_{k-1})}{2}\) is an effective layer width10._ Footnote 10: We note the effective width is at worst \(\max_{k}d_{k}\log(d_{k})\) and could be larger than actual width depending on the sparsity vector \(\mathbf{s}\). In contrast, for large \(\mathbf{s}\), \(d_{\mathrm{eff}}\ll\max_{k}d_{k}\). The notation \(\tilde{\mathcal{O}}\) above hides logarithmic factors (see Appendix A.3 for a complete version of the bound). This result bounds the generalization error of a trained predictor as a function of three terms. Besides the empirical risk with margin threshold \(4\zeta_{K+1}\gamma_{K+1}\), the risk is upper bounded by an empirical sparse loss that measures the proportion of samples (in the training data) that do not achieve a sufficiently large sparse radius at any layer. Lastly, as is characteristic in PAC-Bayes bounds, we see a term that depends on the distance between the prior and posterior distributions, the latter centered at the obtained (data-dependent) predictor. The posterior variance \(\boldsymbol{\sigma}_{\mathrm{sparse}}^{2}\) is determined entirely by the base hyper-parameters. Finally, note that the result above holds for any prior distribution \(\mathcal{P}\). Before moving on, we comment on the specific factors influencing this bound. Sparsity. The result above depends on sparsity through the choice of the parameter \(\mathbf{s}\).
One can always instantiate the above result for \(\mathbf{s}=\mathbf{0}\), corresponding to a global sensitivity analysis. At this trivial choice, the sparsity loss vanishes (because the sparse radius is infinite) and the bound is equivalent to an improved (derandomized) version of the results by Neyshabur et al. (2018). The formulation in Theorem 3 enables a continuum of choices (via hyper-parameters) suited to the trained predictor and sample data. A larger degree of sparsity at every layer results in a tighter bound since the upper bounds to the sensitivity of the predictor is reduced (as only reduced matrices are involved in its computation). In turn, this reduced sensitivity leads to a lower empirical margin risk by way of a lower threshold \(4\zeta_{K+1}\gamma_{K+1}\). Furthermore, the effective width - determining the scale of posterior - is at worst \(\max_{k}d_{k}\log(d_{k})\) (for \(\mathbf{s}=0\)), but for large \(\mathbf{s}\), \(d_{\mathrm{eff}}\ll\max_{k}d_{k}\). Sensitivity.Standard sensitivity-based generalization bounds generally depend directly on the global Lipschitz constant that scales as \(\mathcal{O}(\prod_{k=1}^{K}\|\mathbf{W}_{k}\|_{2})\). For even moderate-size models, such dependence can render the bounds vacuous. Further recent studies suggest that the layer norms can even increase with the size of the training sets showing that, even for under-parameterized models, generalization bounds may be vacuous (Nagarajan and Kolter, 2019). Our generalization bound does _not_ scale with the reduced Lipschitz constant \(\zeta_{K+1}\): while larger (reduced) Lipschitz constants can render the empirical sparse loss closer to its maximum value of \(1\), the bound remains controlled due to our choice of modelling _relative_ perturbations of model parameters. Dependence On Depth.Unlike recent results (Bartlett et al., 2017; Neyshabur et al., 2015, 2018, 2019), our bound is not exponential with depth. However, the sensitivity bounds \(\zeta_{k}\) and radius thresholds \(\gamma_{k}\) are themselves exponential in depth. While the empirical risk and sparse loss terms in the generalization bounds depend on \(\zeta_{k},\gamma_{k}\), they are bounded in \([0,1]\). In turn, by choosing the prior to be a Gaussian \(P=\mathcal{N}(h_{\mathrm{prior}},\mathbf{\sigma}_{\mathrm{sparse}}^{2})\), the KL-divergence term can be decomposed into layer-wise contributions, \(\mathrm{KL}\left(\mathcal{N}\left(h,\mathbf{\sigma}_{\mathrm{sparse}}^{2} \right)\;||\;\mathcal{N}(h_{\mathrm{prior}},\mathbf{\sigma}_{\mathrm{sparse}}^ {2})\right)=\sum_{k=1}^{K+1}\frac{\|\mathbf{W}_{k}-\mathbf{W}_{\mathrm{prior},k}\|_{2}^{2}}{2\sigma_{k}^{2}}\). Hence, the KL divergence term does not scale with the product of the relative perturbations (like \(\gamma_{k}\)) or the product of layer norms (like \(\zeta_{k}\)). Comparison To Related Work.Besides the relation to some of the works that have been mentioned previously, our contribution is most closely related to those approaches that employ different notions of reduced effective models in developing generalization bounds. Arora et al. (2018) do this via a _compression_ argument, alas the resulting bound holds for the compressed network and not the original one. Neyshabur et al. (2017) develops PAC-Bayes bounds that clearly reflect the importance of _flatness_, which in our terms refers to the loss effective sensitivity of the obtained predictor. 
Similar in spirit to our results, Nagarajan and Kolter (2019) capture a notion of reduced active size of the model and present their derandomized PAC-Bayes bound (which we centrally employ here). While avoiding exponential dependence on depth, their result depends inversely on the minimum absolute pre-activation level at each layer, which can be arbitrarily small (and thus, the bound becomes arbitrarily large). Our analysis, as represented by Lemma 1, circumvents this limitation. Our constructions on normalized sparse radius have close connections with the _normalized margins_ from Wei and Ma (2020), and our use of augmented loss functions (such as our _sparse loss_) resembles the ones proposed in Wei and Ma (2019). Most recently, Galanti et al. (2023) analyze the complexity of compositionally sparse networks; however, the sparsity stems from the convolutional nature of the filters rather than as a data-dependent (and sample dependent) property. ### Hyper-Parameter Search For any fixed predictor \(h\), there can be multiple choices of \(\mathbf{s},\mathbf{\xi},\mathbf{\eta}\) such that \(h\) is in the corresponding hypothesis class. In the following, we discuss strategies to search for suitable hyper-parameters that can provide tighter generalization bounds. To do so, one can instantiate a grid of candidate values for each hyper-parameter that is independent of data. Let the grid sizes be \((T_{\mathbf{s}},T_{\mathbf{\xi}},T_{\mathbf{\eta}},T_{\mathbf{\epsilon}})\), respectively. We then instantiate the generalization bound in Theorem 3 for each choice of hyper-parameters in the Cartesian product of grids with a reduced failure probability \(\delta_{\mathrm{red}}=\frac{\delta}{T_{\mathbf{s}}T_{\mathbf{\xi}}T_{\mathbf{\eta}}T_{\mathbf{\epsilon}}}\). By a simple union-bound argument, all these bounds hold simultaneously with probability \((1-\delta)\). In this way, for a fixed \(\delta\), the statistical cost above is \(\sqrt{\log(T_{\mathbf{s}}T_{\mathbf{\xi}}T_{\mathbf{\eta}}T_{\mathbf{\epsilon}})}\) as the failure probability dependence in Theorem 3 is \(\sqrt{\log\left(\frac{1}{\delta_{\mathrm{red}}}\right)}\). The computational cost of a naive search is \(\mathcal{O}(T_{\mathbf{s}}T_{\mathbf{\xi}}T_{\mathbf{\eta}}T_{\mathbf{\epsilon}})\). In particular, for multilayer networks, to exhaustively search for a sparsity vector requires a grid of size \(T_{\mathbf{s}}:=\prod_{k=1}^{K}d_{k}\), rendering the search infeasible. Nonetheless, we shall soon show that by employing a greedy algorithm one can still obtain tighter generalization bounds with significantly less computational cost. Moreover, these hyper-parameters are not independent, and so we briefly describe here how this optimization can be performed with manageable complexity. Norm Hyper-Parameters (\(\mathbf{\xi},\mathbf{\eta}\)): One can choose \((\mathbf{\xi},\mathbf{\eta})\) from a grid (fixed in advance) of candidate values, to closely match the true properties of the predictor. For networks with zero bias, w.l.o.g. one can normalize each layer weight \(\mathbf{W}_{k}\rightarrow\tilde{\mathbf{W}}_{k}:=\frac{1}{\|\mathbf{W}_{k}\|_{(d_{k}-1,s_{k-1})}}\mathbf{W}_{k}\) to ensure that \(\left\|\tilde{\mathbf{W}}_{k}\right\|_{(d_{k}-1,s_{k-1})}=1\) without changing the prediction11. The predicted labels, label function, sparse local radius, margin and the generalization bound in Theorem 3 are all invariant to such a scaling. For the normalized network we can simply let \(\xi_{k}:=1\) for all \(k\).
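Concretely, this normalization step can be sketched as follows. The sketch is ours, and it assumes that \(\left\|\mathbf{W}\right\|_{(d_{k}-1,s_{k-1})}\) is the largest \(\ell_{2}\) norm of any row of \(\mathbf{W}\) restricted to its \(d_{k-1}-s_{k-1}\) largest-magnitude entries, which is how we read the maximum reduced row norm; if the precise definition differs, only `reduced_row_norm` needs to change.

```python
import numpy as np

def reduced_row_norm(W, s):
    """Assumed form of ||W||_{(d-1, s)}: the largest l2 norm of any row restricted to
    its (num_columns - s) largest-magnitude entries."""
    keep = W.shape[1] - s
    top = np.sort(np.abs(W), axis=1)[:, -keep:]   # per row, the `keep` largest |entries|
    return float(np.sqrt((top ** 2).sum(axis=1)).max())

def normalize_layers(weights, sparsity):
    """Rescale W_k -> W_k / ||W_k||_{(d_k - 1, s_{k-1})} so that xi_k = 1 for every layer.
    `sparsity[k]` holds s_{k-1} for the k-th weight matrix (s_0 = 0 by convention).
    Positive per-layer rescaling leaves the predicted label of a bias-free ReLU
    network unchanged."""
    return [W / reduced_row_norm(W, sparsity[k]) for k, W in enumerate(weights)]
```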
Fixing \(\mathbf{\xi}\) this way results in no statistical or computational cost (beyond normalization). For discretizing \(\mathbf{\eta}\), we can leverage the fact that for all \((s_{k},s_{k-1})\), the reduced babel function is always less than \(d_{k}-s_{k}-1\) - since the inner products are scaled by the square of the sparse norms. Thus, we can construct a grid in \([0,1]\) with \(T_{\eta}\) elements, which can be searched efficiently (see Appendix B for further details). Footnote 11: This is not true for networks with non-zero bias. In networks with bias, one can still employ a grid search like in Bartlett et al. (2017). Sparsity Parameter s: The sparsity vector \(\mathbf{s}\) determines the degree of structure at which we evaluate the generalization of a fixed predictor. For a fixed predictor and relative sensitivity vector \(\mathbf{\epsilon}\), a good choice of \(\mathbf{s}\) is one that has sufficiently large sparse local radii on the training sample, resulting in a small average sparse loss, \(\frac{1}{m}\sum_{\mathbf{x}^{(i)}\in\mathbf{S}_{T}}\ell_{\text{sparse}}(h,\mathbf{x}^{(i)})\). At the trivial choice of sparsity \(\mathbf{s}=\mathbf{0}\), for any choice of \(\mathbf{\epsilon}\), the above loss is exactly zero. In general, at a fixed \(\mathbf{\epsilon}\), this loss increases with larger (entrywise) \(\mathbf{s}\). At the same time, the empirical margin loss term \(\hat{R}_{4\zeta_{K+1}\gamma_{K+1}}(h)\) decreases with increasing \(\mathbf{s}\) (since \(\zeta_{K+1}\) grows). This reflects an inherent tradeoff in the choice of \((\mathbf{s},\mathbf{\epsilon})\) to balance the margin loss and the sparse loss (in addition to the KL-divergence). For any \(\mathbf{\epsilon}\) and a data point \(\mathbf{z}=(\mathbf{x},y)\), we employ a greedy algorithm (sketched below) to find a sparsity vector \(s^{*}(\mathbf{x},\mathbf{\epsilon})\) in a layer-wise fashion such that the loss incurred is zero, i.e. so that \(r_{k}(h,\mathbf{x})\geq 3\gamma_{k}\) for all \(k\). At each layer, we simply take the maximum sparsity level with sufficiently large radius. The computational cost of such an approach is \(\log_{2}\left(\prod_{k=1}^{K}d_{k}\right)\). One can thus collect the sparsity vectors \(s^{*}(\mathbf{x},\mathbf{\epsilon})\) across the training set and take their entrywise minimum over the samples, so that the average sparse loss vanishes. Of course, one does not necessarily need the sparse loss to vanish; one can instead choose \(\mathbf{s}\) simply to _control_ the sparse loss to a level of \(\frac{\alpha}{\sqrt{m}}\). We expand in Appendix B how this can be done. Sensitivity Vector \(\mathbf{\epsilon}\): Lastly, the relative sensitivity vector \(\mathbf{\epsilon}\) represents the size of the posterior and the desired level of sensitivity in layer outputs upon parameter perturbations. Since \(\epsilon_{k}\) denotes _relative perturbation_, we can simply let it be the same across all layers, i.e. \(\mathbf{\epsilon}=\epsilon\cdot[1,\ldots,1]\). In summary, as we expand in Appendix B, we can compute a best in-grid generalization bound in \(\mathcal{O}\left(T_{\mathbf{\epsilon}}\cdot\log_{2}\left(\prod_{k=1}^{K}d_{k}\right)\cdot\log_{2}(T_{\mathbf{\eta}})\cdot(\sum_{k=1}^{K}d_{k}d_{k-1})\right).\) ## 4 Numerical Experiments In this last section we intend to demonstrate the derived bounds on a series of feed-forward networks, of varying width and depth, on MNIST.
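Before turning to the experiments, the greedy layer-wise search for \(s^{*}(\mathbf{x},\boldsymbol{\epsilon})\) described in Section 3.3 can be sketched as follows. This is a simplified reading of the procedure (ours, not the paper's implementation): the normalized pre-activation scores \(-\mathbf{W}_{k}\mathbf{x}_{k-1}/(\xi_{k}\zeta_{k-1})\) and the thresholds \(3\gamma_{k}\) are assumed to be precomputed, and a linear scan replaces the bisection that yields the \(\log_{2}(\prod_{k}d_{k})\) cost quoted above.

```python
import numpy as np

def greedy_sparsity(score_vectors, thresholds):
    """Greedy layer-wise choice of s*(x, eps).

    score_vectors[k]: the vector -(W_k x_{k-1}) / (xi_k * zeta_{k-1}) for layer k,
                      whose sorted entries give the sparse local radii r_k.
    thresholds[k]:    the required level 3 * gamma_k for layer k.
    Returns, per layer, the largest s_k (capped at d_k - 1) with r_k >= 3 * gamma_k.
    """
    s_star = []
    for scores, thr in zip(score_vectors, thresholds):
        ordered = np.sort(scores)[::-1]          # decreasing order
        radii = np.maximum(ordered, 0.0)         # r_k at level s is radii[s - 1]
        ok = np.flatnonzero(radii >= thr)        # radii is non-increasing, so ok is a prefix
        s_star.append(min(int(ok[-1]) + 1, scores.size - 1) if ok.size else 0)
    return s_star

# Example with synthetic per-layer scores (illustrative values only).
rng = np.random.default_rng(3)
scores = [rng.normal(size=d) for d in (50, 50)]
print(greedy_sparsity(scores, thresholds=[3 * 1e-3, 3 * 2e-3]))
```

The per-sample vectors obtained this way can then be reduced by an entrywise minimum over the training set, as described in Section 3.3.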
As we now show, the resulting bounds are controlled and sometimes non-vacuous upon the optimization over a discrete grid for hyper-parameters, as explained above. Experimental Setup: We train feed-forward networks \(h\) with weights \(\{\mathbf{W}_{k}\}_{k=1}^{K+1}\) where \(\mathbf{W}_{k}\in\mathbb{R}^{d_{k}\times d_{k-1}}\) using the cross-entropy loss with stochastic gradient descent (SGD) for 5,000 steps with a batch size of 100 and learning rate of 0.01. The MNIST training set is randomly split into train and validation data (55,000 : 5,000). The models are optimized on the training data and the resulting measures are computed on validation data. To evaluate scaling with the number of samples, \(m\), we train networks on randomly sampled subsets of the training data of increasing sizes from 20% to 100% of the training set. Because of the chosen architectures, all of these models are over-parametrized (i.e. having more parameters than training samples). Recall that the bound on generalization error in Theorem 3 depends on the KL divergence between a posterior centered at the trained predictor \(h\), \(\mathcal{N}(h,\mathbf{\sigma}_{\text{sparse}}^{2})\), and the prior \(P=\mathcal{N}(h_{\text{prior}},\mathbf{\sigma}_{\text{sparse}}^{2})\). Thus, each model is encouraged to be close to its initialization via a regularization term. In this way, we minimize the following regularized empirical risk based on the cross-entropy loss as well as a regularization term with penalty \(\lambda\) (set as \(\lambda=1.0\) for all experiments for simplicity), \[\min_{\left\{\mathbf{W}_{k}\right\}_{k=1}^{K+1}} \frac{1}{m}\sum_{i=1}^{m}\ell_{\text{cross}-\text{ent}}\Big{(}h,\left(\mathbf{x}_{i},y_{i}\right)\Big{)}+\frac{\lambda}{K+1}\sum_{k=1}^{K+1}\left\|\mathbf{W}_{k}-\mathbf{W}_{\text{prior},k}\right\|_{F}^{2}.\] Choice Of Prior: As with any PAC-Bayes bound, choosing a prior distribution with an appropriate inductive bias is important. For example, optimizing the choice of prior by instantiating multiple priors simultaneously was shown to be an effective procedure to obtain good generalization bounds (Langford and Caruana, 2001; Dziugaite and Roy, 2017). In this work, we evaluate our bounds for two choices of the prior: _a)_ a data-independent prior, \(P_{0}:=\mathcal{N}(h_{\mathbf{0}},\mathbf{\sigma}_{\text{sparse}}^{2})\) centered at a model with zero weights, \(h_{\mathbf{0}}\); and _b)_ a data-dependent prior \(P_{\text{data}}:=\mathcal{N}(h_{\text{init}},\mathbf{\sigma}_{\text{sparse}}^{2})\) centered at a model \(h_{\text{init}}\) obtained by training on a small fraction of the training data (\(5\%\) of all training data). Note that this choice is valid, as the base hyper-parameters \((\mathbf{s},\mathbf{\xi},\mathbf{\eta},\mathbf{\epsilon})\) are chosen independently of the data, and the empirical risk terms in the bound are not evaluated on the small subset of data \(h_{\text{init}}\) is trained on. Generalization Bounds Across Width: We first train a 2-layer (1 hidden layer) fully connected neural network with increasing widths, from 100 to 1,000 neurons. Note that in all cases these models are over-parametrized. In Figures 1(a) to 1(c) we plot the true risk (orange curve) and the generalization bounds (blue curve) from Theorem 3 across different sizes of training data and for the two choices of priors mentioned above. We observe that our analysis, when coupled with data-dependent prior \(P_{\text{data}}\), generates non-vacuous bounds for a network with width of 100.
Even for the naive choice of the prior \(P_{0}\), the bound is controlled and close to 1. Furthermore, note that our bounds remain controlled for larger widths. In Appendix E, we include complementary results depicting our generalization bounds for 3-layer networks. Figure 1: Generalization error of a 2-layer model of different widths trained on MNIST. Effective Activity Ratio: Lastly, we intend to illustrate the degree of sparsity achieved in the obtained models that allow for the bounds presented in Figure 1. For each data point \(\mathbf{x}\) and relative perturbation level \(\epsilon\), we define the Effective Activity ratio \(\kappa(\mathbf{x},\epsilon):=\frac{\sum_{k}(d_{k}-s_{k})(d_{k-1}-s_{k-1})}{\sum_{k}d_{k}d_{k-1}}\) where \(\mathbf{s}=s^{*}(\mathbf{x},\epsilon)\), the greedy sparsity vector chosen such that the sparse loss in Theorem 3 is zero. In this way, \(\kappa(\mathbf{x},\epsilon)\) measures the reduced local dimensionality of the model at input \(\mathbf{x}\) under perturbations of relative size \(\epsilon\). When \(\kappa(\mathbf{x},\epsilon)=1\), there are no sparse activation patterns that are stable under perturbations, and the full model is considered at that point. On the other hand, when \(0<\kappa(\mathbf{x},\epsilon)\ll 1\), the size of stable sparse activation patterns \(s^{*}(\mathbf{x},\epsilon)_{k}\) at each layer is close to the layer dimension \(d_{k}\). Theorem 3 enables a theory of generalization that accounts for this local reduced dimensionality. We present the effective activity ratios for a trained 3-layer model in Figure 2, and include the corresponding results for the 2-layer model in Appendix E for completeness. The central observation from these results is that trained networks with larger width have _smaller_ effective activity ratios across the training data. In Figure 2(a) (as well as in the corresponding figure for the 2-layer model in Appendix E), the distribution of the effective activity ratio across the training data at \(\epsilon=10^{-4}\) shows that smaller-width networks have less stable sparsity. In turn, Figure 2(b) (and its counterpart for the 2-layer model in Appendix E) demonstrates that this effect is stronger for smaller relative perturbation levels. This observation is likely the central reason why our generalization bounds do not increase drastically with model size. ## 5 Conclusion This work makes explicit use of the degree of sparsity that is achieved by ReLU feed-forward networks, reflecting the level of structure present in data-driven models, but without making any strong distributional assumptions on the data. Sparse activations imply that only a subset of the network is active at a given point. By studying the stability of these local sub-networks, and employing tools of derandomized PAC-Bayes analysis, we are able to provide bounds that exploit this effective reduced dimensionality of the predictors, as well as avoiding exponential dependence on the sensitivity of the function and of depth. Our empirical validation on MNIST illustrates our results, which are always controlled and sometimes result in non-vacuous bounds on the test error. Note that our strategy to instantiate our bound for practical models relied on a discretization of the space of hyper-parameters and a greedy selection of these values. This is likely suboptimal, and the grid of hyper-parameters could be further tuned for each model. Moreover, in light of the works in (Dziugaite and Roy, 2017, 2018; Zhou et al., 2019), we envision optimizing our bounds directly, leading to even tighter solutions.
Figure 2: Effective activity ratio \(\kappa(\mathbf{x},\epsilon)\) based on greedy sparsity vector \(s^{*}(\mathbf{x},\epsilon)\) for 3-layer networks (smaller implies sparser stable activations). ## Acknowledgments We kindly thank Vaishnavh Nagarajan for helpful conversations that motivated the use of de-randomized PAC-Bayesian analysis. This work was supported by NSF grant CCF 2007649.
2305.02448
Asynchronous Distributed Consensus with Minimum Communication
In this paper, the communication effort required in a multi-agent system (MAS) is minimized via an explicit optimization formulation. The paper considers a MAS of single-integrator agents with bounded inputs and a time-invariant communication graph. A new model of discrete asynchronous communication and a distributed consensus protocol based on it are proposed. The goal of the proposed protocol is to minimize the aggregate number of communication instants of all agents, required to steer the state trajectories inside a pre-specified bounded neighbourhood within a pre-specified time. Due to the information structure imposed by the underlying communication graph, an individual agent does not know the global parameters in the MAS, which are required for the above-mentioned minimization. To counter this uncertainty, the worst-case realizations of the global parameters are considered, which lead to min-max type optimizations. The control rules in the proposed protocol are obtained as the closed form solutions of these optimization problems. Hence, the proposed protocol does not increase the burden of run-time computation, making it suitable for time-critical applications.
Vishal Sawant, Debraj Chakraborty, Debasattam Pal
2023-05-03T22:20:42Z
http://arxiv.org/abs/2305.02448v1
# Asynchronous Distributed Consensus with Minimum Communication ###### Abstract In this paper, the communication effort required in a multi-agent system (MAS) is minimized via an explicit optimization formulation. The paper considers a MAS of single-integrator agents with bounded inputs and a time-invariant communication graph. A new model of discrete asynchronous communication and a distributed consensus protocol based on it, are proposed. The goal of the proposed protocol is to minimize the aggregate number of communication instants of all agents, required to steer the state trajectories inside a pre-specified bounded neighbourhood within a pre-specified time. Due to information structure imposed by the underlying communication graph, an individual agent does not know the global parameters in the MAS, which are required for the above-mentioned minimization. To counter this uncertainty, the worst-case realizations of the global parameters are considered, which lead to min-max type optimizations. The control rules in the proposed protocol are obtained as the closed form solutions of these optimization problems. Hence, the proposed protocol does not increase the burden of run-time computation making it suitable for time-critical applications. Consensus, Distributed Optimization, Communication Cost, Asynchronous Communication ## I Introduction In the recent years, multi-agent systems (MASs) have received tremendous attention due to their extensive applications in unmanned aerial vehicles (UAVs) [3], sensor networks [14], power grids [18], industrial robotics [26] etc. A typical MAS consists of a group of agents which collaborate to achieve desired objectives. Often, the objective is to achieve _consensus_, i.e., to drive the states of all agents into an agreement. To achieve this objective, agents in MAS need to exchange information with each other over some communication network. It is well known that communication network is an expensive resource [2, 22]. Hence, with the intent of reducing communication effort, a few approaches have been proposed in [9, 27, 28] etc. However, except in our preliminary work [25], the minimization of communication effort via explicit optimization, has never been addressed. The problem of achieving consensus of MAS with single-integrator agents was first extensively analyzed in [15]. After that, various consensus protocols for higher order agents were developed in [11, 23, 24] etc. These protocols are either based on discrete _synchronous_ communication or on continuous communication. A global synchronization clock is necessary for the implementation of synchronous protocols, which is often a major practical constraint [16]. On the other hand, the energy consumption of agent's transponder is proportional to the number and duration of transmissions [7]. Thus, continuous transmission limits the life of agent's battery and hence, its flight time [4, 19]. Second, continuous transmissions over a shared bandwidth-limited channel by multiple agents may lead to congestion of communication channel [2, 22]. And finally, MASs such as a group of UAVs are often used for stealthy military applications [17, 34]. In such scenarios, it is of strategic advantage to keep radio transmissions to a minimum in order to avoid detection by the enemy. For all these applications, it is necessary to develop an asynchronous/intermittent communication based consensus protocol which minimizes communication effort, i.e., the number and/or duration of transmissions. 
In order to reduce the required communication effort in a MAS, a few indirect approaches have been proposed in the literature. In self-triggered control [13] based consensus protocols [8, 32], the next communication instant is pre-computed based on the current state. Event-triggered control [13] based consensus protocols [9, 35] initiate communication only when a certain error state reaches a predetermined threshold. Intermittent communication based consensus protocols are investigated in [27, 28] and [29]. Consensus protocols based on asynchronous information exchange for a MAS with single-integrator agents are developed in [5, 6, 10] and [31]. Other work on asynchronous consensus include [1, 12, 30] and [33]. All of the above protocols result in the reduced communication effort as compared to the conventional continuous communication based protocols. However, there is no explicit minimization of communication effort and hence, the above protocols can result in sub-optimal communication performance. To overcome this issue, in this paper, we develop a distributed protocol which minimizes the communication effort required to maintain the consensus of single-integrator agents, via explicit optimization. This protocol is based on discrete asynchronous communication. Our notion of consensus is less stringent than the conventional one [15], in that we only require the difference between neighbouring agents' states (i.e., the _local disagreement_) to reduce below a pre-specified bound. In the proposed protocol, communication occurs at discrete time instants, namely _update/communication instants_, at which agents access the states of neighbouring agents and then based on that information, update their control. Thus, in the proposed protocol, the number of update instants is a good measure of communication effort. Hence, our basic objective is to minimize the aggregate number of update instants of all agents in the MAS. Now, intuitively, if the inter-update durations are increased, then the number of update instants decreases. However, increase in the inter-update durations increases the time required to achieve the consensus bound. Hence, the problem of minimization of the number of update instants is well-posed only when the consensus time is included in the formulation. Evidently, the time required for a MAS to achieve consensus depends on the initial configuration of the agents. Hence, to make the minimization of the number of update instants well-posed, we require that the local disagreements of all agents be steered below a pre-specified consensus bound within a pre-specified time, which is expressed as a function of the initial condition of the MAS. Due to communication structure imposed by the network, the above-mentioned minimization is a _decentralized_ optimal control [20] problem. Because of the said imposition, an individual agent does not have global information such as the number of agents in the MAS, the underlying communication graph, the complete initial condition of the MAS etc. To counter this lack of global information, we require that the consensus constraint be satisfied for any number of agents, any communication graph and any initial condition. Further, due to the imposed communication structure, an individual agent can not predict the control inputs of the neighbouring agents. In order to guard against the resulting uncertainty, the control inputs in the proposed protocol are obtained as a solution of certain max-min optimizations (Sections V-A and V-B). 
We obtain the closed form solutions of these optimizations (see (7)-(10)) and hence, extensive numerical computations are not required for their implementation. This makes the proposed protocol suitable for time-critical applications. Our contributions in this paper are summarized as follows: 1. We develop a discrete asynchronous communication based distributed consensus protocol for a MAS with single-integrator agents (Section III). 2. The proposed protocol minimizes the aggregate number of update instants under the constraint of steering the local disagreements of all agents below a pre-specified limit within a pre-specified time (Theorems 15 and 16). 3. The control rules in the proposed protocol are solution of certain max-min optimizations (Sections V-A and V-B). We obtain the closed form expressions of the corresponding optimal controls (Lemmas 11 and 12). The current paper is an extension of our work in [25] in three major ways. First, it was assumed in [25] that the initial local disagreements between agents are confined below a pre-specified bound. In the current paper, no such assumption on initial conditions has been made. Second, the protocol in [25] solves the minimization problem for a specific consensus time. On the other hand, the protocol developed in the current paper solves the minimization problem for a general pre-specified consensus time. Finally, in the current paper, the effect of pre-specified consensus time on optimal communication cost is analyzed. Such analysis was not presented in [25]. The remaining part of this paper is organized as follows. In Section II, the problem of minimizing the number of communication instants is formulated. In Section III, a distributed consensus protocol is proposed, which will be shown to be the solution for the special case of the formulated problem, in Sections IV and V. The protocol proposed in Section III is extended for the general case of the formulated problem, in Section VI. In Section VII, the simulation results are presented. The paper is concluded in Section VIII with future directions. ## II Preliminaries and Problem formulation ### _Graphs_ A graph \(G=(V,E)\) is a finite set of nodes \(V\) connected by a set of edges \(E\subseteq(V\times V)\). An edge between nodes \(i\) and \(j\) is represented by an ordered pair \((i,j)\in E\). A graph \(G\) is said to be _simple_ if \((i,i)\not\in E,\ \forall i\in V\). A graph \(G\) is said to be _undirected_ if \((i,j)\in E\) implies \((j,i)\in E\). In an undirected graph \(G\), if \((i,j)\in E\) (and equivalently \((j,i)\in E\)), then the nodes \(i\) and \(j\) are said to be _neighbours_ of each other. A _path_ between nodes \(i\) and \(j\) in an undirected graph \(G\) is a sequence of edges \((i,k_{1}),(k_{1},k_{2}),\ldots,(k_{r-1},k_{r}),(k_{r},j)\in E\). An undirected graph \(G\) is said to be _connected_ if there exists a path between any two nodes in \(G\). Let \(n_{i}\) denote the number of neighbours of node \(i\) and \(|V|\) denote the cardinality of set \(V\). 
Then, the _Laplacian_ matrix \(L\in\mathbb{R}^{|V|\times|V|}\) of a simple undirected graph \(G=(V,E)\) is defined as \[L_{i,j}:=\begin{cases}n_{i},&\text{if}\ \ \ \ \ i=j\\ -1,&\text{if}\ \ \ i\neq j\ \ \ \text{and}\ \ \ (i,j)\in E\\ 0,&\text{if}\ \ \ i\neq j\ \ \ \text{and}\ \ \ (i,j)\not\in E\end{cases}\] ### _System description_ Consider a multi-agent system (MAS) of \(n\) single-integrator agents, labeled as \(a_{1}\), \(a_{2},\ldots,a_{n}\), with dynamics \[\dot{x}_{i}(t)=u_{i}(t),\qquad i=1,\ldots,n \tag{1}\] where \(x_{i}(t)\in\mathbb{R}\) and \(u_{i}(t)\in\mathbb{R}\) are the state and the control input of agent \(a_{i}\), respectively. Let \(X:=[x_{1},x_{2},\ldots,x_{n}]\) and \(\mathbf{u}:=[u_{1},u_{2},\ldots,u_{n}]\) be the augmented state and control vectors of MAS (1), respectively. Define the set \(P:=\{1,2,\ldots,n\}\). Let \(G\) be a _time-invariant_ simple undirected graph, whose nodes represent agents in MAS (1) whereas the edges represent the communication links between agents, over which they exchange information with their neighbours. Let \(S_{i}\) be the set of indices of neighbours of agent \(a_{i}\). Note that \(i\not\in S_{i}\) as \(G\) is a simple graph. The cardinality of the set \(S_{i}\) is denoted by \(n_{i}\). Let \(L\) denote the _Laplacian_ matrix of \(G\). We make the following assumptions about MAS (1): 1. The control input \(u_{i}\) of each agent belongs to the set \[\mathcal{U}:=\{u\in\mathcal{M}\ |\ |u(t)|\leq\beta,\ \forall t\geq 0\}\] where \(\mathcal{M}\) denotes the set of measurable functions from \([0,\infty)\) to \(\mathbb{R}\). 2. The communication graph \(G\) is connected. 3. The communication delay is zero. ### _Consensus_ In this paper, we will be using two notions of consensus, which we define next. **Definition 1**.: _MAS (1) is said to have achieved conventional consensus at instant \(\widehat{t}\) if_ \[\widetilde{t}:=\inf\big{\{}\widehat{t}\ \big{|}\ x_{i}(t)=x_{j}(t),\ \ \forall t\geq\widehat{t},\ \ \forall i,j\in P\big{\}}<\infty\] Define \(Z(t)=[z_{1}(t),\ldots,z_{n}(t)]:=\)\(LX(t)\). Then, it follows from the definition of the Laplacian matrix that \[z_{i}(t)=\sum_{j\in S_{i}}\big{(}x_{i}(t)-x_{j}(t)\big{)},\qquad\forall i\in P \tag{2}\] As \(z_{i}\) is the sum of differences of agent \(a_{i}\)'s state with its neighbours, we call it the _local disagreement_ of agent \(a_{i}\). It is well known [15] that for a MAS with connected, time-invariant communication graph, conventional consensus at instant \(\widehat{t}\) is equivalent to \(z_{i}(t)=0,\ \forall t\geq\widetilde{t},\ \forall i\in P\). However, in many practical applications, it is not necessary that each \(z_{i}\) becomes exactly zero. It is sufficient if each \(|z_{i}|\) remains below a prespecified consensus bound. This motivates our next notion of consensus, namely \(\alpha\)_-consensus_. **Definition 2**.: _Let \(\alpha\in\mathbb{R}^{+}\) be the prespecified consensus bound. MAS (1) is said to have achieved \(\alpha\)-consensus at instant \(\widehat{t}\) if_ \[\widehat{t}:=\inf\big{\{}\widehat{t}\ \big{|}\ |z_{i}(t)|\leq\alpha,\ \ \forall t\geq \widehat{t},\ \ \forall i\in P\big{\}}<\infty\] ### _Communication model_ In this paper, we consider a discrete communication model. Let \(t_{i}^{k}\) denote the \(k\)th update instant (also referred to as the communication instant) of agent \(a_{i}\), at which it accesses the state information of its neighbours and then, based on that information, updates its control. 
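As a small illustration of the quantities just introduced (our sketch, with arbitrary example values), the snippet below builds the Laplacian of a simple undirected graph, evaluates the local disagreements via \(Z(t)=LX(t)\) as in (2), and checks the \(\alpha\)-consensus condition of Definition 2 at a single time instant.

```python
import numpy as np

def laplacian(n, edges):
    """Laplacian of a simple undirected graph on nodes 0..n-1 given as a list of edges."""
    L = np.zeros((n, n))
    for i, j in edges:
        L[i, j] -= 1.0
        L[j, i] -= 1.0
        L[i, i] += 1.0
        L[j, j] += 1.0
    return L

def local_disagreements(L, x):
    """Z(t) = L X(t); entry i equals sum_{j in S_i} (x_i - x_j)."""
    return L @ x

def alpha_consensus_reached(L, x, alpha):
    """|z_i| <= alpha for every agent at this instant (Definition 2, pointwise in time)."""
    return bool(np.all(np.abs(local_disagreements(L, x)) <= alpha))

# A path graph on 4 agents with some example states (illustrative values only).
L = laplacian(4, [(0, 1), (1, 2), (2, 3)])
x = np.array([0.0, 0.4, 0.9, 1.1])
print(local_disagreements(L, x), alpha_consensus_reached(L, x, alpha=1.0))
```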
Our communication model is asynchronous, i.e., the update instants \(t_{i}^{k}\)'s of two different agents need not coincide. As the communication model is discrete, the number of update instants is a good measure of communication effort. Hence, we define the _communication cost_ of agent \(a_{i}\), denoted by \(C_{i}(t)\), as the number of update instants of agent \(a_{i}\) in the time interval \([0,t]\). Then, we define the _aggregate_ communication cost of MAS (1), denoted by \(C_{MAS}(t)\), as \[C_{MAS}(t):=\sum_{i\in P}C_{i}(t) \tag{3}\] ### _Problem formulation_ Let \(X(0)\) be an initial condition of MAS (1), \(\alpha\) be the pre-specified consensus bound and \(T\) be the pre-specified consensus time. Our objective is to develop a protocol which minimizes the communication cost \(C_{MAS}(T)\), under the constraint of achieving \(\alpha\)-consensus of MAS (1) within time \(T\). As discussed in Section I, due to the information structure imposed by graph \(G\), an individual agent in MAS (1) has access only to its own information and that of its neighbours. Therefore, the proposed protocol needs to be _distributed_, i.e., based only on the local information. In addition, an individual agent does not have global information such as the number of agents \(n\) in MAS (1), the structure of the communication graph \(G\), the complete initial condition \(X(0)\) etc. To address these uncertainties, the proposed protocol must be able to achieve \(\alpha\)-consensus of MAS (1) within the pre-specified time \(T\), for any \(n\), any connected \(G\) and any \(X(0)\in\mathbb{R}^{n}\). Since the input set \(\mathcal{U}\) is magnitude bounded, for a fixed \(T\), it will not be possible to achieve \(\alpha\)-consensus within time \(T\), for every \(X(0)\in\mathbb{R}^{n}\). Thus, the consensus time must be specified as a function of \(X(0)\). To highlight this dependence on \(X(0)\), we modify the notation of the pre-specified consensus time from \(T\) to \(T\big{(}X(0)\big{)}\). Similarly, the communication costs \(C_{i}\) and \(C_{MAS}\) depend on \(X(0)\). Thus, we modify their notations from \(C_{i}(t)\) and \(C_{MAS}(t)\) to \(C_{i}\big{(}t,X(0)\big{)}\) and \(C_{MAS}\big{(}t,X(0)\big{)}\), respectively. Now, we formalize our objective as follows: **Problem 3**.: _Consider MAS (1) with initial condition \(X(0)\in\mathbb{R}^{n}\) and connected communication graph \(G\). Let \(\Psi(n)\) denote the set of connected graphs with \(n\) nodes. Let \(T\big{(}X(0)\big{)}\) be the pre-specified consensus time which is expressed as a function of \(X(0)\). Develop, if possible, a discrete asynchronous communication based protocol, i.e., admissible control \(\mathbf{u}^{*}=[u_{1}^{*},\ldots,u_{n}^{*}]\), adhering to graph \(G\), which is a solution of the following optimization:_ \[\mathbf{u}^{*}=\begin{array}{rl}\arg\min\limits_{\begin{subarray}{c}u_{i}\in\mathcal{U},\\ \forall i\in P\end{subarray}}&C_{MAS}\Big{(}T\big{(}X(0)\big{)},X(0)\Big{)}\\ \text{s.t.}&|z_{i}(t)|\leq\alpha,\ \ \forall t\geq T\big{(}X(0)\big{)},\ \ \forall i\in P,\\ &\forall n,\ \ \forall G\in\Psi(n),\ \ \forall X(0)\in\mathbb{R}^{n}\end{array} \tag{4}\] ### _Choosing \(T\big{(}X(0)\big{)}\)_ In Problem 3, the time \(T\big{(}X(0)\big{)}\) can be specified as any function of \(X(0)\). However, in practical applications, it is desirable to set \(T\big{(}X(0)\big{)}\) to the minimum feasible value. In [21], the time-optimal control rule is proposed which achieves conventional consensus of MAS (1) in minimum time.
This control rule and the corresponding consensus time, denoted by \(u_{i}^{*}\) and \(T^{*}\big{(}X(0)\big{)}\), respectively, are presented below. Let \(X(0)=[x_{1}(0),\ldots,x_{n}(0)]\) be an initial condition of MAS (1). Define \(x^{min}(0):=\min\big{\{}x_{1}(0),\ldots,x_{n}(0)\big{\}}\) and \(x^{max}(0):=\max\big{\{}x_{1}(0),\ldots,x_{n}(0)\big{\}}\). Recall that \(z_{i}\) denote the local disagreement of agent \(a_{i}\). Let \(sign\) denote the standard signum function. Then, the time-optimal consensus rule from [21] is \[u_{i}^{*}(t)=-\beta\ sign\big{(}z_{i}(t)\big{)},\ \ \ \ \forall t\geq 0,\ \ \ \ \forall i\in P \tag{5}\] with the corresponding consensus time \[T^{*}\big{(}X(0)\big{)}=\frac{x^{max}(0)-x^{min}(0)}{2\beta} \tag{6}\] Notice that the control rule (5) requires instantaneous access to the local disagreement \(z_{i}\), and as a result, demands continuous communication. Then, it follows from the time optimality of \(T^{*}\big{(}X(0)\big{)}\) that a discrete communication based protocol cannot achieve \(\alpha\)-consensus of MAS (1) within time \(T^{*}\big{(}X(0)\big{)}\). This implies that Problem 3 is infeasible for \(T\big{(}X(0)\big{)}\leq T^{*}\big{(}X(0)\big{)}\) and we should assume \(T\big{(}X(0)\big{)}>T^{*}\big{(}X(0)\big{)}\). We further assume that \(T\big{(}X(0)\big{)}\geq 2T^{*}\big{(}X(0)\big{)}\). This assumption makes Problem 3 tractable and results in a particularly simple closed form solution for the control inputs. We solve Problem 3 for \(T\big{(}X(0)\big{)}=2T^{*}\big{(}X(0)\big{)}\) in Sections III-V and later extend it for \(T\big{(}X(0)\big{)}>2T^{*}\big{(}X(0)\big{)}\) in Section VI. ## III Protocol \(I\): For \(T\big{(}X(0)\big{)}=2T^{*}\big{(}X(0)\big{)}\) In this section, we present the protocol proposed for \(T\big{(}X(0)\big{)}=2T^{*}\big{(}X(0)\big{)}\). We refer to this protocol as Protocol \(I\). Later, this protocol will be shown to be a solution of Problem 3 for \(T\big{(}X(0)\big{)}=2T^{*}\big{(}X(0)\big{)}\). Protocol \(I\) has the following two elements: 1. _Computation of next update instant and control input_ \(a)\) At each update instant \(t_{i}^{k},\ k\geq 1\), agent \(a_{i},\ i\in P\), accesses the current states \(x_{j}(t_{i}^{k})\)'s of its neighbours \(a_{j},\ j\in S_{i}\), and computes \(z_{i}(t_{i}^{k})=\sum_{j\in S_{i}}\big{(}x_{i}(t_{i}^{k})-x_{j}(t_{i}^{k})\big{)}\). \(b)\) After that, agent \(a_{i}\) computes its next update instant \(t_{i}^{k+1}\) and control input \(u_{i}^{*}\) to be applied in the interval \([t_{i}^{k},t_{i}^{k+1})\) as follows: \(i)\) If \(|z_{i}(t_{i}^{k})|\leq\alpha\), then \[t_{i}^{k+1}=t_{i}^{k}+\frac{\alpha}{\beta n_{i}} \tag{7}\] \[u_{i}^{*}(t)=-\frac{z_{i}(t_{i}^{k})}{\alpha}\beta,\ \ \ \ \ \ \ \forall t\in\big{[}t_{i}^{k},t_{i}^{k+1} \big{)} \tag{8}\] \(ii)\) If \(|z_{i}(t_{i}^{k})|>\alpha\), then \[t_{i}^{k+1}=t_{i}^{k}+\frac{|z_{i}(t_{i}^{k})|+\alpha}{2\beta n_{i}} \tag{9}\] \[u_{i}^{*}(t)=-\beta sign\big{(}z_{i}(t_{i}^{k})\big{)},\ \ \ \ \ \forall t\in\big{[}t_{i}^{k},t_{i}^{k+1} \big{)} \tag{10}\] \(c)\) Then, agent \(a_{i}\) broadcasts \(x_{i}(t_{i}^{k})\) and \(u_{i}^{*}(t_{i}^{k})\). This broadcast information is received by agent \(a_{i}\)'s neighbours \(a_{j},\ j\in S_{i}\) at the same instant \(t_{i}^{k}\). The neighbours \(a_{j},\ j\in S_{i}\), store this information with the reception time-stamp. _2) Accessing the states of neighbours at update instants \(a)\)_ The time instant \(t_{i}^{1}=0\) is the first update instant of all agents \(a_{i},\ i\in P\). 
At this instant, all agents broadcast their current states. Thus, each agent \(a_{i},\ i\in P\), has direct access to the current states \(x_{j}(t_{i}^{1})\)'s of its neighbours \(a_{j},\ j\in S_{i}\). For example, consider the communication graph \(G_{c}\) shown in Fig. 1 and the corresponding communication timeline shown in Fig. 2. At instant \(t=0\), agents \(a_{1}\), \(a_{2}\) and \(a_{3}\) broadcast information to their neighbours. \(b)\) Let \(t_{i}^{k},\ k>1\), be any update instant of agent \(a_{i}\). Let \(t_{j}^{l}<t_{i}^{k}\) be the latest update instant of agent \(a_{j},\ j\in S_{i}\), at which it had broadcast \(x_{j}(t_{j}^{l})\) and \(u_{j}^{*}(t_{j}^{l})\). At update instant \(t_{i}^{k}\), agent \(a_{i}\) accesses the stored information and retrieves \(x_{j}(t_{j}^{l})\) and \(u_{j}^{*}(t_{j}^{l})\) for all \(j\in S_{i}\). For example, in the communication timeline shown in Fig. 2, at instants \(t_{1}^{k}\) and \(t_{3}^{l}\), agent \(a_{2}\) receives information from its neighbours. Later, agent \(a_{2}\) uses this information at its update instant \(t_{2}^{p}\). \(c)\) It is known from (8) and (10) that every agent \(a_{j},\ j\in S_{i}\), had applied control \(u_{j}^{*}(t)=u_{j}^{*}(t_{j}^{l})\) in the interval \(t\in[t_{j}^{l},t_{i}^{k})\). Using this fact, agent \(a_{i}\) computes the current state \(x_{j}(t_{i}^{k})\) of \(a_{j},\ \forall j\in S_{i}\), as \[x_{j}\big{(}t_{i}^{k}\big{)}=x_{j}(t_{j}^{l})+\int_{t_{j}^{l}}^{t_{i}^{k}}u_{j}^{*}\big{(}t_{j}^{l}\big{)}\ dt\] **Remark 4**.: _As per Assumption 3, the communication delay is zero. In addition, the time required for the computation of \(t_{i}^{k+1}\) and \(u_{i}^{*}(t_{i}^{k})\) is negligible. This justifies the assumption that computation, transmission and reception of \(u_{i}^{*}(t_{i}^{k})\) and \(x_{i}(t_{i}^{k})\) happen at the same time instant \(t_{i}^{k}\)._ **Remark 5**.: _It may appear from (8) and (10) that we have selected \(u_{i}^{*}\)'s which are constant over intervals \([t_{i}^{k},t_{i}^{k+1})\), in order to simplify analysis. However, \(u_{i}^{*}\)'s in (8) and (10) are obtained as the solution of certain max-min optimizations (Sections V-A and V-B) and, coincidentally, they have this nice form._ ## IV \(\alpha\)-consensus under Protocol \(I\) In this section, we show that Protocol \(I\) achieves \(\alpha\)-consensus of MAS (1) within time \(2T^{*}\big{(}X(0)\big{)}\). However, before that, we present a necessary condition on feasible solutions of Problem 3, which will be utilized while proving the attainment of \(\alpha\)-consensus within time \(2T^{*}\big{(}X(0)\big{)}\). ### _Necessary condition on feasible solutions of Problem 3_ A discrete communication based distributed protocol is said to be a _feasible_ solution of Problem 3 if it satisfies constraint (4), i.e., achieves \(\alpha\)-consensus of MAS (1) within time \(T\big{(}X(0)\big{)}\), for any number of agents \(n\), any connected graph \(G\) and any initial condition \(X(0)\in\mathbb{R}^{n}\). Recall that \(\Psi(n)\) denotes the set of connected graphs with \(n\) nodes. Then, the following lemma gives a necessary condition on the feasible solutions of Problem 3. **Lemma 6**.: _Consider MAS (1). Let \(T\big{(}X(0)\big{)}\) be the pre-specified consensus time. Let \(Q\) be any discrete communication based distributed protocol. Under Protocol \(Q\), let \(\widetilde{t_{i}}\) be the first time instant at which the local disagreement \(z_{i}\) of agent \(a_{i}\) satisfies \(\big{|}z_{i}\big{(}\widetilde{t_{i}}\big{)}\big{|}\leq\alpha\).
Then, Protocol \(Q\) is a feasible solution of Problem 3 only if it ensures_ \[\big{|}z_{i}(t)\big{|}\leq\alpha,\ \ \ \forall t\geq\widetilde{t_{i}},\ \ \forall n,\ \ \forall G\in\Psi(n),\ \ \forall X(0)\in\mathbb{R}^{n} \tag{11}\] Proof.: Recall that due to communication structure imposed by graph \(G\), an individual agent \(a_{i}\) in MAS (1) does not know the complete initial condition \(X(0)\). Therefore, it does not know the exact value of \(T\big{(}X(0)\big{)}\) and how far \(\widetilde{t_{i}}\) is from \(T\big{(}X(0)\big{)}\). In fact, as in Example 9 presented below, there may exist a MAS of the form (1) in which \(\widetilde{t}_{i}=T\big{(}X(0)\big{)}\). In such a case, violation of (11) results in the violation of constraint (4) in Problem 3. This contradicts the fact that Protocol \(Q\) is a feasible solution of Problem 3. Hence, Protocol \(Q\) must satisfy (11). This completes the proof. The following lemma shows that Protocol \(I\) satisfies the necessary condition (11). The proof of this lemma relies on the derivation of control rules (7)-(10) in Protocol \(I\), which is deferred to Section V for better structure of the paper. Hence, we defer the proof of the said lemma to Section V-C. **Lemma 7**.: _Protocol \(I\) satisfies the necessary condition (11)._ ### \(\alpha\)_-consensus within time \(2T^{*}\big{(}X(0)\big{)}\)_ The following theorem shows that Protocol \(I\) achieves \(\alpha\)-consensus of MAS (1) within time \(2T^{*}\big{(}X(0)\big{)}\). **Theorem 8**.: _Consider MAS (1). Let \(\alpha\in\mathbb{R}^{+}\) be the pre-specified consensus bound. Let \(T^{*}\big{(}X(0)\big{)}\) be as defined in (6). Then, for every \(n\), every connected communication graph \(G\) and every \(X(0)\in\mathbb{R}^{n}\), Protocol \(I\) achieves \(\alpha\)-consensus of MAS (1) in time less than or equal to \(2T^{*}\big{(}X(0)\big{)}\)._ Proof.: See the Appendix for the proof. According to Theorem 8, under Protocol \(I\), the duration \(2T^{*}\big{(}X(0)\big{)}\) is an upper bound on the time required to achieve \(\alpha\)-consensus of MAS (1). Next, we show with the following example that there exists a MAS of the form (1) for which the \(\alpha\)-consensus time under Protocol \(I\) is arbitrarily close to \(2T^{*}\big{(}X(0)\big{)}\). **Example 9**.: _Consider the connected graph \(G_{e}\) shown in Fig. 3, which has \(n\) nodes. The subgraph \(G_{s}\) of \(G_{e}\) (inside the dashed square in Fig. 3) is a complete graph on \(n-1\) nodes. The set Fig. 1: Communication graph \(G_{c}\) Fig. 2: Timeline of communication over graph \(G_{c}\) in Fig. 1. Dark arrows indicate information transmissions. of indices of neighbours of node \(1\) is \(S_{1}=\{2,3,\ldots,r+1\}\). Hence, the cardinality of \(S_{1}\) is \(r\)._ _Now, consider MAS (1) with communication graph \(G_{e}\). Let the consensus bound and the control bound be \(\alpha=3\) and \(\beta=1\), respectively. Let the initial conditions of agents be \(x_{1}(0)=0\) and \(x_{i}(0)=5,\ \forall i=2,\ldots,n\). Thus, \(x^{min}(0)=\min_{i}x_{i}(0)=0\) and \(x^{max}(0)=\max_{i}x_{i}(0)=5\). Then, it follows from the definition of \(T^{*}\big{(}X(0)\big{)}\) in (6) that \(T^{*}\big{(}X(0)\big{)}=2.5\) sec._ The following theorem shows that under Protocol \(I\), the time required to achieve \(\alpha\)-consensus of the MAS in Example 9 is arbitrarily close to \(2T^{*}\big{(}X(0)\big{)}\). **Theorem 10**.: _Consider the MAS in Example 9. 
Let \(T\big{(}\alpha,X(0)\big{)}\) denote the time required by Protocol \(I\) to achieve \(\alpha\)-consensus of a MAS with initial condition \(X(0)\). Let \(\epsilon>0\) be any real number. Then, there exist integers \(r=r_{\epsilon}\) and \(n=n_{\epsilon}\) such that the following holds._ \[\Big{(}2T^{*}\big{(}X(0)\big{)}-\epsilon\Big{)}\leq T\big{(}\alpha,X(0)\big{)} \leq 2T^{*}\big{(}X(0)\big{)} \tag{12}\] Proof.: See the Appendix for the proof. ## V Optimality of Protocol \(I\) The objective in Problem 3 is to minimize the number of update instants under the constraint of achieving \(\alpha\)-consensus of MAS (1) within the pre-specified time. Intuitively, if the duration between the successive update instants is increased, then the number of update instants decreases. Motivated from this intuition, we obtain the solution of Problem 3 by maximizing the inter-update durations. We divide the solution process into two steps. First, we solve two maximization problems, one corresponding to \(|z_{i}(t_{i}^{k})|\leq\alpha\) and the other corresponding to \(|z_{i}(t_{i}^{k})|>\alpha\), in which the objective function is the inter-update duration. The control rules (7)-(8) and (9)-(10) in Protocol \(I\) are solutions of these maximization problems, respectively. Later, we show how these two control rules together form the solution of Problem 3. _Maximization of inter-update durations: For \(\big{|}z_{i}\big{(}t_{i}^{k}\big{)}\big{|}\leq\alpha\)_ Let \(z_{i}\big{(}t_{i}^{k}\big{)}\) be the local disagreement of agent \(a_{i}\) at its update instant \(t_{i}^{k}\) such that \(\big{|}z_{i}\big{(}t_{i}^{k}\big{)}\big{|}\leq\alpha\). The goal of agent \(a_{i}\) is to maximize the inter-update duration \(\big{(}t_{i}^{k+1}-t_{i}^{k}\big{)}\) by delaying its next update instant \(t_{i}^{k+1}\). However, while doing so, agent \(a_{i}\) must satisfy the necessary condition (11). For that purpose, agent \(a_{i}\) needs to ensure that \(|z_{i}(t)|\leq\alpha\) for all \(t\in[t_{i}^{k},t_{i}^{k+1}]\). Recall the definition of \(z_{i}\) in (2). Then, the evolution of \(z_{i}\) in the interval \([t_{i}^{k},t_{i}^{k+1}]\) is given as \[z_{i}\big{(}t\big{)}=z_{i}\big{(}t_{i}^{k}\big{)}+n_{i}\int_{t_{i}^{k}}^{t}u_{i }(\tau)d\tau-\sum_{j\in S_{i}}\bigg{(}\int_{t_{i}^{k}}^{t}u_{j}(\tau)d\tau\bigg{)} \tag{13}\] This evolution depends on control inputs \(u_{i}\) and \(u_{j},\ \forall j\in S_{i}\), in the interval \([t_{i}^{k},t_{i}^{k+1}]\). Note that any instant \(t\in[t_{i}^{k},t_{i}^{k+1}]\) can be an update instant of a neighbouring agent \(a_{j},\ j\in S_{i}\), at which \(a_{j}\) updates its control. This updated value is not known to agent \(a_{i}\) in advance, at instant \(t_{i}^{k}\). Thus, while maximizing the inter-update duration \(\big{(}t_{i}^{k+1}-t_{i}^{k}\big{)}\), agent \(a_{i}\) needs to consider the worst-case realizations of the neighbouring inputs \(u_{j},\ \forall j\in S_{i}\), which result in the minimum value of the inter-update duration. This leads to the following max-min optimization: \[u_{i}^{i}= \max_{\begin{subarray}{c}u_{i}\in\mathcal{U},\\ \tilde{t}\in\mathbb{R}\end{subarray}}\quad\min_{\begin{subarray}{c}u_{j}\in \mathcal{U},\\ \forall j\in S_{i}\end{subarray}}\quad\tilde{t}-t_{i}^{k} \tag{14}\] \[t_{i}^{k+1}=\arg \max_{\begin{subarray}{c}u_{i}\in\mathcal{U},\\ \tilde{t}\in\mathbb{R}\end{subarray}}\quad\min_{\begin{subarray}{c}u_{j}\in \mathcal{U},\\ \forall j\in S_{i}\end{subarray}}\quad\tilde{t}-t_{i}^{k}\] (15) s.t. 
\[|z_{i}(t)|\leq\alpha,\ \ \forall t\in[t_{i}^{k},\widetilde{t}\,] \tag{16}\] The following lemma shows that control law (7)-(8) is the solution of optimization (14)-(16). **Lemma 11**.: _Consider any agent \(a_{i}\) in MAS (1). Let \(z_{i}\big{(}t_{i}^{k}\big{)}\) be the local disagreement of \(a_{i}\) at its update instant \(t_{i}^{k}\) such that \(\big{|}z_{i}\big{(}t_{i}^{k}\big{|}\big{)}\leq\alpha\). Then, the control law (7)-(8) is the solution of optimization (14)-(16)._ Proof.: See the Appendix for the proof. ### _Maximization of inter-update durations: For \(\big{|}z_{i}\big{(}t_{i}^{k}\big{)}\big{|}>\alpha\)_ Let \(z_{i}\big{(}t_{i}^{k}\big{)}\) be the local disagreement of agent \(a_{i}\) at its update instant \(t_{i}^{k}\) such that \(\big{|}z_{i}\big{(}t_{i}^{k}\big{)}\big{|}>\alpha\). Then, in order to achieve \(\alpha\)-consensus, it is necessary to steer \(\big{|}z_{i}\big{(}t_{i}^{k}\big{)}\big{|}\) below the consensus bound \(\alpha\). If that could be done in time-optimal manner, it is an added advantage. Recall that in Section II-F, we discussed the time-optimal consensus rule (5) which achieves conventional consensus of MAS (1) in minimum time. Motivated from this rule, we choose our control rule as \[u_{i}^{*}(t)=-\beta\ sign\Big{(}z_{i}\big{(}t_{i}^{k}\big{)}\Big{)},\ \ \ \ \ \ \ \ \ \ \ \forall t\in[t_{i}^{k},t_{i}^{k+1}\big{)} \tag{17}\] As mentioned in Section V-A, the goal of agent \(a_{i}\) is to maximize the inter-update duration \(\big{(}t_{i}^{k+1}-t_{i}^{k}\big{)}\) by delaying its next update instant \(t_{i}^{k+1}\). However, while doing so, agent \(a_{i}\) must satisfy the necessary condition (11). Recall that \(\big{|}z_{i}\big{(}t_{i}^{k}\big{|}\big{)}>\alpha\). Without loss of generality, assume that \(z_{i}\big{(}t_{i}^{k}\big{)}<-\alpha\). Then, it follows from (17) that \(u_{i}^{*}(t)=\beta,\ \forall t\in[t_{i}^{k},t_{i}^{k+1}]\). Let \(\tilde{t}\in[t_{i}^{k},t_{i}^{k+1}]\) be the first time instant at which \(z_{i}(\tilde{t})=-\alpha\). Then, in order to satisfy (11), agent \(a_{i}\) needs to ensure that \[z_{i}(t)\in[-\alpha,\alpha],\ \ \ \ \ \ \ \ \ \forall t\in[\tilde{t},t_{i}^{k+1}]\] Fig. 3: Communication graph \(G_{e}\) of the MAS in Example 9 which is equivalent to \[z_{i}(t)\leq\alpha,\qquad\qquad\forall t\in[\hat{t},t_{i}^{k+1}]\] As discussed in Section V-A, at instant \(t_{i}^{k}\), agent \(a_{i}\) does not know the future values of the neighbouring inputs in the interval \([t_{i}^{k},t_{i}^{k+1})\). Thus, while maximizing the inter-update duration \(\big{(}t_{i}^{k+1}-t_{i}^{k}\big{)}\), agent \(a_{i}\) needs to consider the worst-case realizations of the neighbouring inputs \(u_{j},\ \forall j\in S_{i}\), which result in the minimum value of the inter-update duration. This leads to the following max-min optimization: \[t_{i}^{k+1}=\arg\ \max_{\begin{subarray}{c}\hat{t}\in\mathbb{R} \\ \text{ }u_{i}=\beta,\\ \text{ }u_{j}\in S_{i}\end{subarray}}\quad\widetilde{t}-t_{i}^{k} \tag{18}\] \[\text{s.t.}\quad z_{i}(t)\leq\alpha,\quad\forall t\in\big{[}t_{i} ^{k},\widetilde{t}\,\big{]} \tag{19}\] The following lemma shows that control law (9)-(10) is the solution of optimization (18)-(19). **Lemma 12**.: _Consider any agent \(a_{i}\) in MAS (1). Let \(z_{i}\big{(}t_{i}^{k}\big{)}\) be the local disagreement of \(a_{i}\) at its update instant \(t_{i}^{k}\) such that \(z_{i}\big{(}t_{i}^{k}\big{)}<-\alpha\). Then, the control law (9)-(10) is the solution of optimization (18)-(19)._ Proof.: See the Appendix for the proof. 
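The closed-form rules obtained from these optimizations are simple enough to state directly in code. The following is a minimal Python sketch of the per-agent computation in (2) and (7)-(10); the function and variable names are ours and are not part of the protocol specification.

```python
import numpy as np

def local_disagreement(x_i, x_neighbours):
    """z_i = sum over j in S_i of (x_i - x_j), cf. (2)."""
    return sum(x_i - x_j for x_j in x_neighbours)

def next_update_and_input(t_k, z_k, n_i, alpha, beta):
    """One Protocol I step for agent a_i: given z_i(t_i^k), return the next
    update instant t_i^{k+1} and the constant input held on [t_i^k, t_i^{k+1})."""
    if abs(z_k) <= alpha:                                        # rules (7)-(8)
        return t_k + alpha / (beta * n_i), -(z_k / alpha) * beta
    return t_k + (abs(z_k) + alpha) / (2.0 * beta * n_i), -beta * np.sign(z_k)
```

Because the input is held constant between update instants, a neighbour that stores the last broadcast pair \((x_{j}(t_{j}^{l}),u_{j}^{*}(t_{j}^{l}))\) can reconstruct \(x_{j}(t_{i}^{k})\) exactly, as described in Section III.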
**Remark 13**.: _The optimization (18)-(19) and Lemma 12 correspond to the case \(z_{i}\big{(}t_{i}^{k}\big{)}<-\alpha\). If \(z_{i}\big{(}t_{i}^{k}\big{)}>\alpha\), the constraint (19) becomes \(z_{i}(t)\geq-\alpha,\ \forall t\in\big{[}t_{i}^{k},\widetilde{t}\,\big{]}\). Then, by following the arguments in the proof of Lemma 12, we can show that even for \(z_{i}\big{(}t_{i}^{k}\big{)}>\alpha\), the control rule (9)-(10) is the solution of optimization (18)-(19)._ ### _Feasibility of Protocol \(I\)_ In this section, we present the proof of Lemma 7 which claims that Protocol \(I\) satisfies the necessary condition (11). Proof.: _of Lemma 7_ : Consider MAS (1) with any \(n\), any connected communication graph \(G\) and any initial condition \(X(0)\in\mathbb{R}^{n}\). Let \(a_{i}\) be any agent in MAS (1) and \(\widetilde{t}_{i}\in[t_{i}^{k},t_{i}^{k+1})\) be the first time instant under Protocol \(I\) at which \(|z_{i}(t_{i})|\leq\alpha\). If \(|z_{i}\big{(}t_{i}^{k}\big{)}|\leq\alpha\), then it follows from (16) and Lemma 11 that \(|z_{i}(t)|\leq\alpha,\ \forall t\in\big{[}\widetilde{t}_{i},t_{i}^{k+1} \big{]}\) under Protocol \(I\). On the other hand, if \(|z_{i}\big{(}t_{i}^{k}\big{)}|>\alpha\), then it follows from (19), Lemma 12 and Remark 13 that \(|z_{i}(t)|\leq\alpha,\ \forall t\in\big{[}\,\widetilde{t}_{i},t_{i}^{k+1} \big{]}\) under Protocol \(I\). Consequently, in both cases, Lemma 11 leads to \(|z_{i}(t)|\leq\alpha,\ \forall t>t_{i}^{k+1}\). This proves that Protocol \(I\) satisfies the necessary condition (11). By using Lemma 7, it is already shown in Theorem 8 that Protocol \(I\) satisfies constraint (4) in Problem 3 for time \(2T^{*}\big{(}X(0)\big{)}\), i.e., achieves \(\alpha\) consensus of MAS (1) within time \(2T^{*}\big{(}X(0)\big{)}\), for every \(n\), every connected \(G\) and every \(X(0)\in\mathbb{R}^{n}\). In the next section, we prove the optimality of Protocol \(I\). ### _Proof of optimality of Protocol \(I\)_ Consider MAS (1) with initial condition \(X(0)\in\mathbb{R}^{n}\). Let \(Q\) be any protocol which is a feasible solution of Problem 3 for \(T\big{(}X(0)\big{)}=2T^{*}\big{(}X(0)\big{)}\), i.e., a discrete communication based distributed protocol which achieves \(\alpha\)-consensus of MAS (1) within time \(2T^{*}\big{(}X(0)\big{)}\), for every \(n\), every connected \(G\) and every \(X(0)\in\mathbb{R}^{n}\). Let \(C_{MAS}^{Q}\big{(}2T^{*}\big{(}X(0)\big{)},X(0)\big{)}\) and \(C_{MAS}^{*}\big{(}2T^{*}\big{(}X(0)\big{)},X(0)\big{)}\) denote the value of the aggregate communication cost \(C_{MAS}\) defined in (3), under Protocol \(Q\) and Protocol \(I\), respectively. **Theorem 14**.: _Consider MAS (1) with initial condition \(X(0)\in\mathbb{R}^{n}\). Then, under Protocol \(I\), the following holds._ \[C_{MAS}^{*}\Big{(}2T^{*}\big{(}X(0)\big{)},X(0)\Big{)}\leq C_{MAS}^{Q}\Big{(} 2T^{*}\big{(}X(0)\big{)},X(0)\Big{)} \tag{20}\] Proof.: For the sake of contradiction, assume that \[C_{MAS}^{Q}\Big{(}2T^{*}\big{(}X(0)\big{)},X(0)\Big{)}<C_{MAS}^{*}\Big{(}2T^{*} \big{(}X(0)\big{)},X(0)\Big{)} \tag{21}\] Recall that \(C_{i}\big{(}t,X(0)\big{)}\) denotes the number of update instants of agent \(a_{i}\) in the interval \([0,t)\), corresponding to initial condition \(X(0)\). Let \(C_{i}^{Q}\Big{(}2T^{*}\big{(}X(0)\big{)},X(0)\Big{)}\) and \(C_{i}^{*}\Big{(}2T^{*}\big{(}X(0)\big{)},X(0)\Big{)}\) denote the value of \(C_{i}\Big{(}2T^{*}\big{(}X(0),X(0)\big{)}\Big{)}\), under Protocol \(Q\) and Protocol \(I\), respectively. 
Then, (3) and (21) imply that there exists at least one agent, say \(a_{i}\), such that \[C_{i}^{Q}\Big{(}2T^{*}\big{(}X(0)\big{)},X(0)\Big{)}<C_{i}^{*}\Big{(}2T^{*} \big{(}X(0)\big{)},X(0)\Big{)}\] This implies that under Protocol \(Q\), at least one inter-update duration of agent \(a_{i}\) is longer than that prescribed by control laws (7) and (9) in Protocol \(I\). Recall from (15) and Lemma 11 that control law (7) gives the maximum inter-update duration under constraint (16). Similarly, it follows from (18) and Lemma 12 that control law (9) gives the maximum inter-update duration under constraint (19). Then, as one inter-update duration under Protocol \(Q\) is longer than that prescribed by control laws (7) and (9), Protocol \(Q\) must be violating either constraint (16) or constraint (19). Recall that violation of (16) or (19) by Protocol \(Q\) results in the violation of necessary condition (11) on feasible protocols. This contradicts the fact that Protocol \(Q\) is a feasible solution of Problem 3 and proves claim (20). ## VI Protocol \(Ii\): For \(T\big{(}X(0)\big{)}>2T^{*}\big{(}X(0)\big{)}\) In Section V-D, we proved that Protocol \(I\) is the solution of Problem 3 for \(T\big{(}X(0)\big{)}=2T^{*}\big{(}X(0)\big{)}\). In this section, we extend Protocol \(I\) for \(T\big{(}X(0)\big{)}>2T^{*}\big{(}X(0)\big{)}\). We refer to the extended protocol as Protocol \(II\). ### _Protocol \(Ii\)_ Let \(\gamma>1\) be a real number and \(T\big{(}X(0)\big{)}=2\gamma T^{*}\big{(}X(0)\big{)}\) be the pre-specified \(\alpha\)-consensus time. Define \(\widetilde{\beta}:=\frac{\beta}{\gamma}\). Then, Protocol \(II\) is same as Protocol \(I\), except the control bound \(\widetilde{\beta}\) in place of \(\beta\). ### _Optimality of Protocol \(Ii\)_ In this section, we first show that Protocol \(II\) achieves \(\alpha\)-consensus of MAS (1) within the pre-specified time \(T\big{(}X(0)\big{)}=2\gamma T^{*}\big{(}X(0)\big{)}\). **Theorem 15**.: _Consider MAS (1). Let \(\alpha\in\mathbb{R}^{+}\) be the pre-specified consensus bound. Then, for every \(n\), every connected communication graph \(G\) and every \(X(0)\in\mathbb{R}^{n}\), Protocol \(II\) achieves \(\alpha\)-consensus of MAS (1) in time less than or equal to \(2\gamma T^{*}\big{(}X(0)\big{)}\). Moreover, there exist \(\widetilde{n}\), connected graph \(\widetilde{G}\) and initial condition \(\widetilde{X}(0)\in\mathbb{R}^{\widetilde{n}}\) for which \(\alpha\)-consensus time under Protocol \(II\) is arbitrarily close to \(2\gamma T^{*}\big{(}X(0)\big{)}\)._ Proof.: Recall that the control bounds in Protocols \(I\) and \(II\) are \(\beta\) and \(\widetilde{\beta}=\dfrac{\beta}{\gamma}\), respectively. Thus, the dynamics of MAS (1) under Protocol \(II\) is \(\gamma\) times slower than that of under Protocol \(I\). Then, the claim follows from the arguments in the proofs of Theorems 8 and 10. Now, we prove the optimality of Protocol \(II\) for \(T\big{(}X(0)\big{)}=2\gamma T^{*}\big{(}X(0)\big{)}\). Consider MAS (1) with an initial condition \(X(0)\in\mathbb{R}^{n}\). Let \(Q\) be any protocol which is a feasible solution of Problem 3 for \(T\big{(}X(0)\big{)}=2\gamma T^{*}\big{(}X(0)\big{)}\). Let \(C_{MAS}^{Q}\Big{(}2\gamma T^{*}\big{(}X(0)\big{)},X(0)\Big{)}\) and \(C_{MAS}^{*}\Big{(}2\gamma T^{*}\big{(}X(0)\big{)},X(0)\Big{)}\) denote the value of \(C_{MAS}\) defined in (3), under Protocol \(Q\) and Protocol \(II\), respectively. **Theorem 16**.: _Consider MAS (1) with initial condition \(X(0)\in\mathbb{R}^{n}\). 
Then, under Protocol \(II\), the following holds._ \[C_{MAS}^{*}\Big{(}2\gamma T^{*}\big{(}X(0)\big{)},X(0)\Big{)}\leq C_{MAS}^{Q} \Big{(}2\gamma T^{*}\big{(}X(0)\big{)},X(0)\Big{)}\] Proof.: The claim follows from the arguments in the proof of Theorem 14. Next, we analyze the effect of \(T\big{(}X(0)\big{)}\) on \(C_{MAS}^{*}\). Intuitively, with the increase in \(T\big{(}X(0)\big{)}\), agents in MAS (1) can afford to delay their next update instants, which would result in lower \(C_{MAS}^{*}\). However, the following theorem shows that the above intuition is not true after \(T\big{(}X(0)\big{)}=2T^{*}\big{(}X(0)\big{)}\). **Theorem 17**.: _Consider MAS (1) with initial condition \(X(0)\in\mathbb{R}^{n}\). Then, under Protocol \(II\), for every real \(\gamma>1\), the following holds._ \[C_{MAS}^{*}\Big{(}2\gamma T^{*}\big{(}X(0)\big{)},X(0)\Big{)}=C_{ MAS}^{*}\Big{(}2T^{*}\big{(}X(0)\big{)},X(0)\Big{)} \tag{22}\] Proof.: Let \(t_{i}^{k,\;\beta}\) and \(t_{i}^{k,\;\beta}\) denote the \(k\)th update instants of agent \(a_{i}\) under Protocols \(I\) and \(II\), respectively. Recall that the dynamics of MAS (1) under Protocol \(II\) is \(\gamma\) times slower than that of under Protocol \(I\). Then, it is easy to show that \[t_{i}^{k,\;\beta}=\gamma\,t_{i}^{k,\;\beta},\hskip 28.452756pt\forall i\in P,\hskip 28.452756pt\forall k\geq 1\] Hence, the timeline of agent \(a_{i}\) under Protocol \(II\) is just the \(\gamma\)-scaled version of its timeline under Protocol \(I\). As a result, the number of update instants of agent \(a_{i}\) in the interval \(\big{[}0,\,2\gamma T^{*}\big{(}X(0)\big{)}\big{]}\) under Protocol \(II\) is equal to that of in the interval \(\big{[}0,\,2T^{*}\big{(}X(0)\big{)}\big{]}\) under Protocol \(I\). Then, the claim (22) follows from the definition of \(T^{*}\big{(}X(0)\big{)}\) in (6) that \(T^{*}\big{(}X(0)\big{)}=3\) sec. ## VII Simulation results In this section, we present the simulation results obtained under Protocol \(II\) for various \(T\big{(}X(0)\big{)}\)'s. Consider MAS (1) with \(n=6\) agents and the communication graph shown in Fig. 4. Let the consensus bound and the control bound be \(\alpha=0.6\) and \(\beta=1\), respectively. Let the initial condition of the MAS be \(X(0)=[7,2,4,3,1,5]\). Then, it follows from the definition of \(T^{*}\big{(}X(0)\big{)}\) in (6) that \(T^{*}\big{(}X(0)\big{)}=3\) sec. We simulated the above MAS under Protocol \(II\) for \(T\big{(}X(0)\big{)}=2T^{*}\big{(}X(0)\big{)}\), \(T\big{(}X(0)\big{)}=10T^{*}\big{(}X(0)\big{)}\) and \(T\big{(}X(0)\big{)}=20T^{*}\big{(}X(0)\big{)}\). The corresponding disagreement trajectories \(z_{i}\)'s are plotted in Fig. 4(a), 4(b) and 4(c), respectively. The corresponding \(\alpha\)-consensus times and optimal communication costs are given in Table \(I\). Notice that in each of the above three cases, Protocol \(II\) achieves \(\alpha\)-consensus within the prespecified time \(T\big{(}X(0)\big{)}\), as proved in Theorem 15. Further, the \(\alpha\)-consensus times corresponding to \(T\big{(}X(0)\big{)}=10T^{*}\big{(}X(0)\big{)}\) and \(T\big{(}X(0)\big{)}=20T^{*}\big{(}X(0)\big{)}\) are \(5\) and \(10\) times of that corresponding to \(T\big{(}X(0)\big{)}=2T^{*}\big{(}X(0)\big{)}\), respectively. This observation is in accordance with the arguments in the proof of Theorem 15. In addition, observe that the trajectories in Figures 4(b) and 4(c) are just the time-stretched versions of the corresponding trajectories in Fig. 4(a). 
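To make the reported behaviour easy to reproduce, the following is a minimal event-driven simulation of Protocol \(II\) (Protocol \(I\) corresponds to \(\gamma=1\)). Since the communication graph of Fig. 4 is not reproduced in the text, a simple connected path graph over the six agents is assumed here purely for illustration; the function name and the numerical tolerance are ours. Because the reconstruction of neighbour states in the protocol is exact, the true states are read directly instead of being rebuilt from broadcasts.

```python
import numpy as np

def simulate_protocol_II(adj, x0, alpha, beta, gamma=1.0):
    """Event-driven simulation of Protocol II (control bound beta_tilde = beta / gamma).
    adj[i] lists the neighbours of agent i; the horizon is the pre-specified
    consensus time 2 * gamma * T_star(X(0))."""
    beta_t = beta / gamma
    x = np.array(x0, dtype=float)
    n = len(x)
    T_star = (x.max() - x.min()) / (2.0 * beta)
    t_end = 2.0 * gamma * T_star
    t, u = 0.0, np.zeros(n)
    t_next = np.zeros(n)                      # first update instant of every agent is 0
    n_updates = np.zeros(n, dtype=int)
    while t < t_end - 1e-12:
        for i in range(n):
            if t_next[i] <= t + 1e-12:        # agent i is due for an update
                z = sum(x[i] - x[j] for j in adj[i])
                n_i = len(adj[i])
                if abs(z) <= alpha:           # rules (7)-(8)
                    t_next[i] = t + alpha / (beta_t * n_i)
                    u[i] = -(z / alpha) * beta_t
                else:                         # rules (9)-(10)
                    t_next[i] = t + (abs(z) + alpha) / (2.0 * beta_t * n_i)
                    u[i] = -beta_t * np.sign(z)
                n_updates[i] += 1
        dt = min(t_next.min(), t_end) - t     # advance to the next event
        x, t = x + u * dt, t + dt
    z_end = np.array([sum(x[i] - x[j] for j in adj[i]) for i in range(n)])
    return x, z_end, n_updates

# Hypothetical path graph over the six agents of Section VII (Fig. 4 is not reproduced here).
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 5], 5: [4]}
x_end, z_end, n_updates = simulate_protocol_II(adj, [7, 2, 4, 3, 1, 5],
                                               alpha=0.6, beta=1.0, gamma=1.0)
print(np.round(np.abs(z_end).max(), 3), int(n_updates.sum()))   # max |z_i| at the horizon, C_MAS
```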
The optimal communication costs \(C_{i}^{*}\)'s and \(C_{MAS}^{*}\) corresponding to \(T\big{(}X(0)\big{)}=2T^{*}\big{(}X(0)\big{)}\), \(T\big{(}X(0)\big{)}=10T^{*}\big{(}X(0)\big{)}\) and \(T\big{(}X(0)\big{)}=20T^{*}\big{(}X(0)\big{)}\) are same, as proved in Theorem 17. ## VIII Conclusion and future work In this paper, a distributed consensus protocol (Protocol \(II\)) is proposed for a MAS of single-integrator agents with bounded inputs and time-invariant communication graph. We showed (Theorems 15 and 16) that the proposed protocol minimizes the aggregate number of communication instants required to achieve \(\alpha\)-consensus within the pre-specified time \(T\big{(}X(0)\big{)}\geq 2T^{*}\big{(}X(0)\big{)}\). The control rules in the proposed protocol are obtained by maximizing the inter-update durations. We computed (Lemmas 11 and 12) the closed form solutions of these optimizations. Finally, we presented the simulation results which verify the theoretical claims. The work of extending the proposed protocol to MASs with complex dynamics is in progress. ### _Intermediate results for the proof of Theorem 8_ In this section, we develop a few intermediate results which will be used later in the proof of Theorem 8. Let \(x_{i}\big{(}t_{i}^{k}\big{)}\) and \(z_{i}\big{(}t_{i}^{k}\big{)}\) be the state and the local disagreement of agent \(a_{i}\) respectively, at its update instant \(t_{i}^{k}\). Recall that \(S_{i}\) is the set of indices of neighbours of agent \(a_{i}\), with cardinality \(n_{i}\). **Lemma 18**.: _Consider agent \(a_{i}\) in MAS (1) with \(|z_{i}\big{(}t_{i}^{k}\big{)}|\leq\alpha\). Then, under Protocol \(I\), the following holds._ 1. \(x_{i}\big{(}t_{i}^{k+1}\big{)}=\dfrac{\sum_{j\in S_{i}}x_{j}\big{(}t_{i}^{k} \big{)}}{n_{i}}\)__ 2. \(|z_{i}(t)|\leq\alpha,\quad\forall t\geq t_{i}^{k}\)__ Proof.: At instant \(t>t_{i}^{k}\), the state \(x_{i}\) of agent \(a_{i}\) is \(x_{i}(t)=x_{i}(t_{i}^{k})+\int_{t_{i}^{k}}^{t}u_{i}(\tau)d\tau\). It is given that \(|z_{i}(t_{i}^{k})|\leq\alpha\). Then, control rule (7)-(8) in Protocol \(I\) lead to \[x_{i}\big{(}t_{i}^{k+1}\big{)}=x_{i}\big{(}t_{i}^{k}\big{)}+\int_{t_{i}^{k}}^{t_ {i}^{k}+\dfrac{\alpha}{\beta n_{i}}\ \dfrac{-z_{i}(t_{i}^{k})}{\alpha}\beta d\tau=x_{i}\big{(}t_{i}^{k}\big{)}- \dfrac{z_{i}\big{(}t_{i}^{k}\big{)}}{n_{i}}\] Fig. 4: Communication graph for simulation Then, it follows from the definition of \(z_{i}\) in (2) that \[x_{i}\big{(}t_{i}^{k+1}\big{)}=x_{i}\big{(}t_{i}^{k}\big{)}-\frac{\sum_{j\in S_{i }}\Big{(}x_{i}\big{(}t_{i}^{k}\big{)}-x_{j}\big{(}t_{i}^{k}\big{)}\Big{)}}{n_{i}} \tag{23}\] Recall that the cardinality of \(S_{i}\) is \(n_{i}\). Then, by re-arranging the terms in (23), we get \(x_{i}\big{(}t_{i}^{k+1}\big{)}=\frac{\sum_{j\in S_{i}}x_{j}\big{(}t_{i}^{k} \big{)}}{n_{i}}\). This proves the claim in Lemma 18.1. Next, we prove that under Protocol \(I\), \(z_{i}\) satisfies \(|z_{i}(t)|\leq\alpha,\ \forall t\in[t_{i}^{k},t_{i}^{k+1}]\). Then, the claim in Lemma 18.2 follows by repeating arguments in the subsequent inter-update intervals. Let the control input \(u_{i}^{*}\) be as defined in (8). We know from the dynamics of \(x_{i}\) in (1) and the definition of \(z_{i}\) in (2) that \[z_{i}\big{(}t_{i}^{k+1}\big{)}=z_{i}\big{(}t_{i}^{k}\big{)}+n_{i}\int_{t_{i}^{ k}}^{t_{i+1}^{k+1}}\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! 
\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\! ### _Proof of Theorem 8_ For the sake of contradiction, assume that the claim is not correct, i.e., for some \(n\), \(G\) and \(X(0)\), Protocol \(I\) does not achieve \(\alpha\)-consensus of MAS (1) within time \(2T^{*}\big{(}X(0)\big{)}\). This implies that there exists at least one agent, say \(a_{i}\), and a time instant \(\hat{t}_{i}>2T^{*}\big{(}X(0)\big{)}\), such that \(|z_{i}(\hat{t}_{i})|>\alpha\). Recall from Lemma 7 that Protocol \(I\) satisfies the necessary condition (11). Then, it follows from (11) that under Protocol \(I\), the following holds. \[z_{i}(t)<-\alpha,\ \forall t\in[0,\hat{t}_{i}]\qquad or\qquad z_{i}(t)>\alpha,\ \forall t\in[0,\hat{t}_{i}]\] Without loss of generality, assume that \(z_{i}(t)<-\alpha,\ \forall t\in[0,\hat{t}_{i}]\). Then, it follows from control rule (10) that \(u_{i}(t)=\beta,\ \forall t\in[0,\hat{t}_{i}]\) and hence, \(x_{i}\big{(}\hat{t}_{i}\big{)}=x_{i}(0)+\beta\hat{t}_{i}\). As \(\hat{t}_{i}>2T^{*}\big{(}X(0)\big{)}\), we get \[x_{i}\big{(}\hat{t}_{i}\big{)}>\Big{(}x_{i}(0)+2\beta T^{*}\big{(}X(0)\big{)} \Big{)} \tag{35}\] Let \(x^{min}(0)\) and \(x^{max}(0)\) be as in the definition of \(T^{*}\big{(}X(0)\big{)}\) in (6). Then, (6) and (35) lead to \[x_{i}\big{(}\hat{t}_{i}\big{)}>\Big{(}x_{i}(0)+x^{max}(0)-x^{min}(0)\Big{)} \tag{36}\] It is clear from the definition of \(x^{min}(0)\) that \(x_{i}(0)\geq x^{min}(0)\). Then, (36) leads to \(x_{i}(\hat{t}_{i})>x^{max}(0)\). This contradicts Lemma 20 and proves the claim in Theorem 8. ### _Intermediate result for the proof of Theorem 10_ **Lemma 21**.: _Consider the MAS in Example 9 with the communication graph \(G_{e}\) shown in Fig. 3. Let \(r\) be the number of neighbours of agent \(a_{1}\). Then, for every \(\mu\in\mathbb{R}^{+}\), there exists an integer \(n=n_{\mu,r}\) such that under Protocol \(I\), the following holds._ \[5\!-\!\mu\leq x_{i}(t)\leq 5,\qquad\forall t\geq\mu,\qquad\forall i=2,3, \ldots,n_{\mu,r} \tag{37}\] Proof.: We first define the integer \(n_{\mu,r}\) for which the claim (37) holds. Let \(\mu\in\mathbb{R}^{+}\) be the given number. Define \(\widetilde{\mu}:=\frac{\mu}{2}\). Let _ceil_ be the standard ceiling function. Define the integers \(n_{\mu_{1}}:=1+ceil\bigg{(}\frac{4}{\widetilde{\mu}}\bigg{)}\), \(n_{\mu_{2}}:=ceil\bigg{(}\frac{5}{\mu-\widetilde{\mu}}\bigg{)}\), \(n_{r_{1}}:=1+\bigg{(}\frac{16r}{5r+3}\bigg{)}\) and \(n_{r_{2}}:=3r-7\). Then, define \[n_{\mu,r}:=\max\big{\{}8,n_{\mu_{1}},n_{\mu_{2}},n_{r_{1}},n_{r_{2}}\big{\}}+1 \tag{38}\] Now, consider the MAS in Example 9 with \(n=n_{\mu,r}\). We will show that the claim (37) holds for this MAS. For the sake of clarity, we divide the remaining proof into five parts as follows. 
\(1)\)_Computation of \(t_{i}^{2}\) and \(x_{i}\big{(}t_{i}^{2}\big{)}\) for \(i\geq 1\)_ According to Protocol \(I\), the first update instant of every agent is \(t_{i}^{1}=0\). At this instant, the states of agents are \(x_{1}(0)=0\) and \(x_{i}(0)=5,\ \forall i=2,\ldots,n_{\mu,r}\). Then, the local disagreement of agent \(a_{1}\) at instant \(t=0\) is \(z_{1}(0)=-5r\). Note that \(r\geq 1\). Thus, \(z_{1}(0)<-3=-\alpha\). Then, it follows from control rule (9)-(10) that \[t_{1}^{2} =\frac{|z_{1}(0)|+\alpha}{2\beta n_{1}}=\frac{5r+3}{2r} \tag{39}\] \[u_{1}^{*}(t) =\beta=1,\qquad\forall t\in\big{[}0,t_{1}^{2}\big{)} \tag{40}\] As a result, \(x_{1}\big{(}t_{1}^{2}\big{)}=x_{1}(0)+t_{1}^{2}=\frac{5r+3}{2r}\). Fig. 5: Comparison of local disagreement trajectories under Protocol \(II\) for various \(T\big{(}X(0)\big{)}\)’s Now, consider any agent \(a_{l},\ l=2,\ldots,r+1\). Note that \(n_{l}=n_{\mu,r}-1\). Thus, the local disagreement of agent \(a_{l}\) at instant \(t=0\) is \(z_{l}(0)=(5-0)+(n_{\mu,r}-2)(5-5)=5>\alpha\). Then, it follows from control rule (9)-(10) that \[t_{l}^{2}= \frac{|z_{l}(0)|+\alpha}{2\beta n_{l}}=\frac{4}{n_{\mu,r}-1} \tag{41}\] \[u_{l}^{*}(t)= -\beta=-1,\ \ \ \ \ \forall t\in\big{[}0,t_{l}^{2}\big{)} \tag{42}\] As a result, \(x_{l}\big{(}t_{l}^{2}\big{)}=x_{l}(0)-t_{l}^{2}=5-\frac{4}{n_{\mu,r}-1}\). It follows from the definitions of \(n_{\mu_{1}}\) and \(n_{\mu,r}\) in (38) that \(\frac{4}{n_{\mu,r}-1}<\widetilde{\mu}\). Thus, \(x_{l}\big{(}t_{l}^{2}\big{)}\) satisfies \((5-\widetilde{\mu})<x_{l}\big{(}t_{l}^{2}\big{)}<5\). Recall from (42) that \(\dot{x}_{l}(t)<0,\ \forall t\in[0,t_{l}^{2})\). Recall also that \(x_{l}(0)=5\). Hence, \[(5-\widetilde{\mu})<x_{l}(t)<5,\ \ \forall t\in\big{[}0,t_{l}^{2}\big{)},\ \ \forall l=2,\ldots,r+1 \tag{43}\] Next, consider any agent \(a_{j},\ j=r+2,\ldots,n_{\mu,r}\). Note that \(n_{j}=n_{\mu,r}-2\). Thus, the local disagreement of agent \(a_{j}\) at instant \(t=0\) is \(z_{j}(0)=(5-5)(n_{\mu,r}-2)=0\). Then, it follows from control rule (7)-(8) that \[t_{j}^{2} =\frac{\alpha}{\beta n_{j}}=\frac{3}{n_{\mu,r}-2} \tag{44}\] \[u_{j}^{*}(t) =0,\ \ \ \ \ \ \forall t\in\big{[}0,t_{j}^{2}\big{)} \tag{45}\] As a result, the state \(x_{j}\) satisfies \[x_{j}(t)=x_{j}(0)=5,\ \ \ \ \ \forall t\in\big{[}0,t_{j}^{2}\big{]},\ \ \ \ \ \forall j=r+2,\ldots,n_{\mu,r} \tag{46}\] _2) Ordering of update instants \(t_{i}^{2}\)'s_ Let \(t_{i}^{2}\)'s be as given in (41) and (44). Recall from (38) that \(n_{\mu,r}>8\). Then, it is easy to show that \(\frac{3}{n_{\mu,r}-2}<\frac{4}{n_{\mu,r}-1}\). Then, it follows from (41) and (44) that \[t_{r+2}^{2}=t_{r+3}^{2}=\cdots=t_{n_{\mu,r}}^{2}<t_{2}^{2}=t_{3}^{2}=\cdots=t _{r+1}^{2} \tag{47}\] Let \(t_{1}^{2}\) be as given in (39). It follows from the definitions of \(n_{r_{1}}\) and \(n_{\mu,r}\) in (38) that \(\frac{4}{n_{\mu,r}-1}<\frac{5r+3}{2r}\). Then, (39) and (41) lead to \[t_{2}^{2}=t_{3}^{2}=\cdots=t_{r+1}^{2}<t_{1}^{2} \tag{48}\] \(3\)) _Computation of \(t_{i}^{3}\) for \(i\geq 2\)_ Consider any agent \(a_{l},\ l=2,\ldots,r+1\). Recall from (47) and (48) that \(t_{r+2}^{2}<t_{l}^{2}<t_{1}^{2}\). Now, we compute \(z_{l}\big{(}t_{r+2}^{2}\big{)}\). According to (40) and (44), the state of agent \(a_{1}\) at instant \(t_{r+2}^{2}\) is \(x_{1}\big{(}t_{r+2}^{2}\big{)}=\frac{3}{n_{\mu,r}-2}\). Similarly, according to (42) and (44), the state of agent \(a_{l},\ l=2,\ldots,r+1\), at instant \(t_{r+2}^{2}\) is \(x_{l}\big{(}t_{r+2}^{2}\big{)}=5-\frac{3}{n_{\mu,r}-2}\). 
Recall from (46) that \(x_{j}\big{(}t_{r+2}^{2}\big{)}=5,\ \forall j=r+2,\ldots,n_{\mu,r}\). Thus, for \(l=2,\ldots,r+1\), we get \[z_{l}\big{(}t_{r+2}^{2}\big{)}=\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\! ### _Proof of Theorem 10_ Let \(\epsilon>0\) be the given real number. Define \(\widehat{\mu}:=\frac{\epsilon}{2}\) and \(\widehat{r}:=\max\left\{ceil\!\left(\frac{2\alpha}{\epsilon}\right)\!,ceil\! \left(\frac{\alpha}{5-\epsilon}\right)\right\}\). Let \(n_{\widehat{\mu},\widehat{r}}\) be as defined in (38) corresponding to \(\widehat{\mu}\) and \(\widehat{r}\). Recall from (50) that \(|z_{l}(t)|\leq\alpha,\ \forall t\geq t_{r+2}^{r},\ \forall l=2,\ldots,r+1\), where the expression of \(t_{r+2}^{2}\) is given in (44). Then, it follows from (44), the definitions of \(n_{\mu_{1}}\) and \(n_{\mu,r}\) in (38) and the definition of \(n_{\widehat{\mu},\widehat{r}}\) that \(\widehat{\mu}\geq t_{r+2}^{2}\). Thus, \[|z_{i}(t)|\leq\alpha,\ \ \forall t\geq\widehat{\mu},\ \ \forall i=2,\ldots,r+1 \tag{60}\] Further, recall from (53) that \[|z_{i}(t)|\leq\alpha,\ \ \forall t\geq 0,\ \ \forall i=r+2,\ldots,n_{\widehat{ \mu},\widehat{r}} \tag{61}\] Then, (60) and (61) imply that the \(\alpha\)-consensus time is equal to the maximum of \(\widehat{\mu}\) and the time required to steer \(z_{1}\) inside \([-\alpha,\alpha]\). Let \(\widehat{t}_{1}\) be the first time instant at which \(|z_{1}(\widehat{t}_{1})|=\alpha\). Recall that the initial conditions of the agents in Example 9 are \(x_{1}(0)=0\) and \(x_{i}(0)=5,\ \forall i=2,\ldots,n_{\widehat{\mu},\widehat{r}}\). Thus, \(z_{1}(0)=-5r<-\alpha=-3\). Then, clearly, \(z_{1}\big{(}\widehat{t}_{1}\big{)}=-\alpha\). We know from Lemma 21 that \(x_{i}(t)\geq(5-\widehat{u}),\ \forall t\geq\widehat{\mu},\ \forall i=2,\ldots,n_{ \widehat{\mu},\widehat{r}}\). Then, \(z_{1}\big{(}\widehat{t}_{1}\big{)}=-\alpha\) implies that \(x_{1}\big{(}\widehat{t}_{1}\big{)}\geq\left(5-\widehat{\mu}-\frac{\alpha}{ \widehat{r}}\right)\). Further, the control rule (10) and \(z_{1}\big{(}\widehat{t}_{1}\big{)}=-\alpha\) together imply that \(\dot{x}_{1}(t)=\beta=1,\ \forall t\in\big{[}0,\widehat{t}_{1}\big{)}\). Recall that \(x_{1}(0)=0\). Hence, \(\widehat{t}_{1}\geq\left(5-\widehat{\mu}-\frac{\alpha}{\widehat{r}}\right)\). Then, it follows from the definitions of \(\widehat{\mu}\) and \(\widehat{r}\) that \(\widehat{t}_{1}\geq(5-\epsilon)\) and \(\widehat{t}_{1}\geq\widehat{\mu}\). Subsequently, (60) and (61) imply that the \(\alpha\)-consensus time \(T\big{(}\alpha,X(0)\big{)}\) satisfies \(T\big{(}\alpha,X(0)\big{)}\geq(5-\epsilon)=\left(2T^{*}\big{(}X(0)\big{)}- \epsilon\right)\). This proves the first inequality in (12). The second inequality in (12) follows from Theorem 8. 
This completes the proof. ### _Proof of Lemma 11_ It is given that \(|z_{i}(t_{i}^{k})|\leq\alpha\). Then, it follows from Lemma 18.2 that control rule (7)-(8) satisfies constraint (16). Let \(t_{i}^{k+1}\) and \(u_{i}^{*}\) be as defined in (7) and (8), respectively. Then, the inter-update duration \(\big{(}t_{i}^{k+1}-t_{i}^{k}\big{)}\) is \(\frac{\alpha}{\beta n_{i}}\), for all \(k\geq 1\). Next, we show that \(\frac{\alpha}{\beta n_{i}}\) is the max-min value of optimization (14)-(16). This will prove the optimality of control rule (7)-(8) and complete the proof. For the sake of contradiction, assume that the max-min value of optimization (14)-(16) is greater than \(\frac{\alpha}{\beta n_{i}}\). Let that value be \(T_{i}:=\big{(}t_{i}^{k+1}-t_{i}^{k}\big{)}>\frac{\alpha}{\beta n_{i}}\). Let \(\widetilde{u}_{i}\in\mathcal{U}\) be the corresponding optimal control. Then, it follows from the evolution of \(z_{i}\) given in (13) that \[z_{i}\big{(}t_{i}^{k+1}\big{)}=z_{i}\big{(}t_{i}^{k}\big{)}+n_{i}\int_{t_{i}^{ k}}^{t_{i}^{k}+T_{i}}\widetilde{u}_{i}(t)\ dt-\sum_{j\in S_{i}}\Bigg{(}\int_{t_{i}^{k}}^{t_{i}^{k }+T_{i}^{k}}u_{j}(t)\ dt\Bigg{)} \tag{62}\] Define \(\widetilde{z}_{i}\big{(}t_{i}^{k+1}\big{)}:=z_{i}\big{(}t_{i}^{k}\big{)}+n_{i} \int_{t_{i}^{k}+T_{i}}^{t_{i}^{k}+T_{i}}\widetilde{u}_{i}(t)\ dt\). Then, depending on the value of \(\widetilde{z}_{i}\big{(}t_{i}^{k+1}\big{)}\), there are following three cases. Case 1 : \(\big{|}\widetilde{z}_{i}\big{(}t_{i}^{k+1}\big{)}\big{|}>\alpha\) Take \(u_{j}(t)=0,\ \forall t\in[t_{i}^{k},t_{i}^{k}+T_{i}],\ \forall j\in S_{i}\). Then, it follows from (62) that \(\big{|}z_{i}\big{(}t_{i}^{k+1}\big{)}\big{|}>\alpha\). Case 2 : \(0\leq\widetilde{z}_{i}\big{(}t_{i}^{k+1}\big{)}\leq\alpha\) Take \(u_{j}(t)=-\beta,\ \forall t\in[t_{i}^{k},t_{i}^{k}+T_{i}],\ \forall j\in S_{i}\). Then, it follows from (62) that \[z_{i}\big{(}t_{i}^{k+1}\big{)}=\widetilde{z}_{i}\big{(}t_{i}^{k+1}\big{)}+\sum_{ j\in S_{i}}\beta\big{(}t_{i}^{k}+T_{i}-t_{i}^{k}\big{)}=\widetilde{z}_{i}\big{(}t_{i}^{k+1} \big{)}+\beta n_{i}T_{i}\] Recall that \(T_{i}>\frac{\alpha}{\beta n_{i}}\), which is equivalent to \(\beta n_{i}T_{i}>\alpha\). Then, the fact \(\widetilde{z}_{i}\big{(}t_{i}^{k+1}\big{)}\geq 0\) implies that \(\big{|}z_{i}\big{(}t_{i}^{k+1}\big{)}\big{|}>\alpha\). Case 3 : \(-\alpha\leq\widetilde{z}_{i}\big{(}t_{i}^{k+1}\big{)}<0\) Take \(u_{j}(t)=\beta,\ \forall t\in[t_{i}^{k},t_{i}^{k+1}],\ \forall j\in S_{i}\). Then, by following the arguments in Case 2, we get \(\big{|}z_{i}\big{(}t_{i}^{k+1}\big{)}\big{|}>\alpha\). In each of the above cases, for control \(u_{i}\), there exist \(u_{j}\in\mathcal{U},\forall j\in S_{i}\), such that \(\big{|}z_{i}\big{(}t_{i}^{k+1}\big{)}\big{|}>\alpha\). This violates constraint (16) and in result, contradicts the assumption that \(\widetilde{u}_{i}\) is a solution of optimization (14)-(16). This contradiction proves our claim that \(\frac{\alpha}{\beta n_{i}}\) is the max-min value of optimization (14)-(16) and completes the proof. ### _Proof of Lemma 12_ We first show that control rule (9)-(10) satisfies constraint (19). Let \(t_{i}^{k+1}\) and \(u_{i}^{*}\) be as defined in (9) and (10), respectively. Define \(T_{i}:=t_{i}^{k+1}-t_{i}^{k}=\frac{|z_{i}\big{(}t_{i}^{k}\big{)}|+\alpha}{2 \beta n_{i}}\). It is given that \(z_{i}\big{(}t_{i}^{k}\big{)}<-\alpha\). Then, (10) leads to \(u_{i}^{*}(t)=\beta,\ \forall t\in[t_{i}^{k},t_{i}^{k}+T_{i}]\). Recall the evolution of \(z_{i}\) given in (13). 
By putting the expressions of \(t_{i}^{k+1}\) and \(u_{i}^{*}\) in (13), we get \[z_{i}\big{(}t_{i}^{k+1}\big{)}=z_{i}\big{(}t_{i}^{k}\big{)}+\frac{\big{|}z_{i}\big{(}t_{i}^{k}\big{)}\big{|}+\alpha}{2}-\sum_{j\in S_{i}}\bigg{(}\int_{t_{i}^{k}}^{t_{i}^{k+1}}u_{j}(t)\ dt\bigg{)}\] Since \(|u_{j}(t)|\leq\beta\) for all \(j\in S_{i}\), the worst-case realization of the neighbouring inputs gives \(z_{i}\big{(}t_{i}^{k+1}\big{)}\leq z_{i}\big{(}t_{i}^{k}\big{)}+\big{|}z_{i}\big{(}t_{i}^{k}\big{)}\big{|}+\alpha=\alpha\), and the same bound holds for all \(t\in[t_{i}^{k},t_{i}^{k+1}]\). Hence, control rule (9)-(10) satisfies constraint (19). Next, we show that \(T_{i}\) is the max-min value of optimization (18)-(19). For the sake of contradiction, assume that the max-min value is \(T_{i}+\epsilon\) for some \(\epsilon>0\). Let the inputs of the neighbouring agents be \(u_{j}(t)=-\beta,\ \forall t\in[t_{i}^{k},t_{i}^{k+1}),\ \forall j\in S_{i}\). Then, it follows from the evolution of \(z_{i}\) given in (13) that \[z_{i}\big{(}t_{i}^{k+1}\big{)}=z_{i}\big{(}t_{i}^{k}\big{)}+n_{i}\int_{t_{i}^{k}}^{t_{i}^{k}+T_{i}+\epsilon}\beta\ dt+\sum_{j\in S_{i}}\bigg{(}\int_{t_{i}^{k}}^{t_{i}^{k}+T_{i}+\epsilon}\beta\ dt\bigg{)}\] Now, by putting the expression of \(T_{i}\) and rearranging the terms, we get \(z_{i}\big{(}t_{i}^{k+1}\big{)}=\left(z_{i}\big{(}t_{i}^{k}\big{)}+|z_{i}\big{(}t_{i}^{k}\big{)}|+\alpha+2\beta n_{i}\epsilon\right)\). As \(z_{i}\big{(}t_{i}^{k}\big{)}<0\), we have \(z_{i}\big{(}t_{i}^{k}\big{)}+|z_{i}\big{(}t_{i}^{k}\big{)}|=0\). Thus, \(z_{i}\big{(}t_{i}^{k+1}\big{)}=\left(\alpha+2\beta n_{i}\epsilon\right)>\alpha\). This violates constraint (19) and contradicts the fact that \((T_{i}+\epsilon)\) is the max-min value of optimization (18)-(19). Hence, the inter-update duration \(T_{i}\) under control rule (9)-(10) is the max-min value of optimization (18)-(19). This completes the proof.
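As a complement to the proofs of Lemmas 11 and 12, the short check below evaluates the piecewise-constant form of (13) at the next update instant prescribed by Protocol \(I\) under worst-case constant neighbour inputs: the local disagreement lands exactly on the bound \(\alpha\) and never beyond it. The function name and the test values are ours and purely illustrative.

```python
import numpy as np

def worst_case_z_at_next_update(z_k, n_i, alpha, beta):
    """Evaluate (13) with piecewise-constant inputs at the next update instant
    of Protocol I, under constant worst-case neighbour inputs."""
    if abs(z_k) <= alpha:                          # rules (7)-(8)
        T   = alpha / (beta * n_i)
        u_i = -(z_k / alpha) * beta
        u_j = -beta                                # one of the two extreme realizations
    else:                                          # rules (9)-(10)
        T   = (abs(z_k) + alpha) / (2.0 * beta * n_i)
        u_i = -beta * np.sign(z_k)
        u_j = beta * np.sign(z_k)                  # neighbours drive z_i toward the opposite bound
    return z_k + n_i * u_i * T - n_i * u_j * T

for z_k in [-7.3, -0.4, 0.0, 0.25, 5.0]:
    z_T = worst_case_z_at_next_update(z_k, n_i=3, alpha=0.5, beta=1.0)
    assert np.isclose(abs(z_T), 0.5)               # the bound is attained, not exceeded
```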
2307.14453
Predictive Maintenance of Armoured Vehicles using Machine Learning Approaches
Armoured vehicles are specialized and complex pieces of machinery designed to operate in high-stress environments, often in combat or tactical situations. This study proposes a predictive maintenance-based ensemble system that aids in predicting potential maintenance needs based on sensor data collected from these vehicles. The proposed model's architecture involves various models such as Light Gradient Boosting, Random Forest, Decision Tree, Extra Tree Classifier and Gradient Boosting to predict the maintenance requirements of the vehicles accurately. In addition, K-fold cross validation, along with TOPSIS analysis, is employed to evaluate the proposed ensemble model's stability. The results indicate that the proposed system achieves an accuracy of 98.93%, precision of 99.80% and recall of 99.03%. The algorithm can effectively predict maintenance needs, thereby reducing vehicle downtime and improving operational efficiency. Through comparisons between various algorithms and the suggested ensemble, this study highlights the potential of machine learning-based predictive maintenance solutions.
Prajit Sengupta, Anant Mehta, Prashant Singh Rana
2023-07-26T18:50:32Z
http://arxiv.org/abs/2307.14453v1
# Predictive Maintenance of Armoured Vehicles using Machine Learning Approaches ###### Abstract Armoured vehicles are specialized and complex pieces of machinery designed to operate in high-stress environments, often in combat or tactical situations. This study proposes a predictive maintenance-based ensemble system that aids in predicting potential maintenance needs based on sensor data collected from these vehicles. The proposed model's architecture involves various models such as Light Gradient Boosting, Random Forest, Decision Tree, Extra Tree Classifier and Gradient Boosting to predict the maintenance requirements of the vehicles accurately. In addition, K-fold cross validation, along with TOPSIS analysis, is employed to evaluate the proposed ensemble model's stability. The results indicate that the proposed system achieves an accuracy of 98.93%, precision of 99.80% and recall of 99.03%. The algorithm can effectively predict maintenance needs, thereby reducing vehicle downtime and improving operational efficiency. Through comparisons between various algorithms and the suggested ensemble, this study highlights the potential of machine learning-based predictive maintenance solutions. Ensemble Models, Machine Learning Models, Classification, Bootstrapping, Topsis Analysis, Cross-Validation ## I Introduction Armoured vehicles are critical assets in defence and security operations, and their proper functioning is essential for mission success. These vehicles, which are designed to withstand hostile environments and provide protection for personnel and equipment, are complex systems that rely on a wide range of components, including engines, transmissions, and hydraulics. These components can experience wear and tear over time, leading to potential failures and downtime that can compromise the effectiveness of the vehicle [1]. Traditional maintenance approaches often rely on scheduled maintenance and inspections, which can be inefficient and costly. Predictive maintenance, on the other hand, can leverage machine learning algorithms to identify potential problems and optimize maintenance schedules, reducing costs and downtime while improving the overall reliability and effectiveness of the vehicle. Machine learning approaches, such as supervised and unsupervised learning, by analyzing massive volumes of data and finding patterns that might reveal future faults, can be useful in predictive maintenance. This approach can reduce maintenance costs, improve vehicle readiness, and enhance overall operational efficiency. This study explores the potential of ensemble model based approach for predictive maintenance of armoured vehicles [2]. A meta algorithm is proposed that is comprised of five base classifiers, to help in technical surveillance as shown in figure 1. A thorough examination of the pertinent literature is conducted, along with a discussion of the relative merits and demerits of the various existing machine learning algorithms used for armoured vehicle predictive maintenance. This research also discusses the challenges and future prospects of this emerging field, highlighting areas where further research can be done. By doing so, this paper provides a valuable resource for researchers, practitioners, and policymakers interested in improving armoured vehicles' maintenance and operational readiness. Fig. 1: Communication between broken-down vehicle and control tower ## II Related Work Xiao et al. (2021) explained the basics of machine learning and fault diagnosis in their paper. 
Several other popular machine learning techniques were discussed, and the development status in recent years was summarised and analyzed [3]. In another study, Theissler et al. (2021) discussed the PdM(Predictive Maintenance), when integrated with Machine Learning approaches provides valuable result. In this study they collected, classified, and analyzed papers from an application and Machine Learning standpoint [4]. Arena et al. (2021) provided a comprehensive research study of AI and statistical inference techniques, as well as stochastic methods, for use in automotive preventative maintenance. The authors presented an LSTM network for RUL prediction, which was given features extracted from the sensor data by a CNN (Convolutional Neural Network). In addition to predicting the RUL, machine learning approaches were also used to optimize the maintenance schedules of armoured vehicles [5]. In order to predict a heavy machine's condition for maintenance, this paper presented by Putra et al. (2021) concentrated on developing ML models with actual data from large vehicles like trains and tanks. Classification Algorithms were used to predict the failure scope of the machine in order to maintain it [6]. A study by Jain et al. (2022) used a Deep Learning method to estimate how long a tank engine would last in service [7]. Similarly, a study by Tessaro et al. (2020) recommended a machine learning based method to predict the remaining useful life of military vehicles. This research demonstrated the potential of machine learning approaches in predicting equipment failures and optimizing maintenance schedules for armoured vehicles [8]. Several studies have explored the potential of machine learning approaches for predictive maintenance in various industrial sectors. For example, a study by Paolanti et al. (2018) used a Random Forest-based Machine Learning architecture for Predictive Maintenance. The training data was collected form various sensors, machine PLCs and other components [9]. Similarly, a research by Divya et al. (2022) outlined a ML-based system for wind turbine predictive maintenance that achieved remarkably high accuracy. It highlighted machine learning's promise for predictive maintenance and highlighted its capacity to identify and stop equipment faults. [10]. Souza et al. (2022) conducted a review of machine learning techniques for predictive maintenance of industrial equipment, including armoured vehicles. They concluded that machine learning approaches, such as decision trees, random forests, and support vector machines, are effective in predicting equipment failures and can be used to improve the maintenance practices of armoured vehicles [11]. Zhang et al. (2019) conducted a review of predictive maintenance techniques in industrial applications, including the use of machine learning approaches. They focused on data-driven methods for PdM. They presented six machine learning and deep learning (DL) algorithms are used to categorise specific industrial applications, and five performance metrics were compared for each classification [12]. Raja et al. (2022) discussed the predictive maintenance(PdM) of various electrical machines, such as BLDC motors. To have a cost effective diagnostic system they presented a data acquisition system used to transmit the data in real-time onto the cloud, where it is further processed to ascertain whether there is a possibility that a motor fault could occur. They used IoT and Cloud techniques to maintain the different electrical components [13]. 
## III Methodology The methodology for this work consists of three main steps. Firstly, a dataset is extracted from the armoured vehicles using sensors and other monitoring equipment. We have taken the AI4I 2020 Predictive Maintenance Dataset for our study. Secondly, the data set is pre-processed to remove any noisy or irrelevant data, and class imbalance is addressed. Finally, the proposed ensemble model, consisting of multiple machine learning algorithms, is trained on the pre-processed dataset to predict potential equipment failures. The efficiency of the ensemble model in foreseeing maintenance difficulties is assessed using key measures. ### _Dataset_ The AI4I 2020 Predictive Maintenance Dataset is a publicly available dataset provided by the UCI Machine Learning Repository [14]. The dataset contains sensor data collected from an industrial production line of a simulated manufacturing plant. The purpose of the dataset is to facilitate research and development in predictive maintenance and machine learning. The dataset comprises 10,000 rows of data, with six features stored in columns. * The first feature is the _Product ID/Type_. * The second feature is _Air Temperature_, which is generated using a random walk process. * The third feature is _Process Temperature_. * The fourth feature is _Rotational Speed_. This is derived from a power of 2900 Watts. * The fifth feature is _Torque_, with values generally distributed about 40 Nm. * The final feature is _Tool Wear_, which is influenced by the product quality variant. This synthetic dataset can be used to train and test machine learning models for predictive maintenance analysis, despite being a simulated representation of real-world maintenance data. ### _Machine Failure Modes and Description_ There are five distinct failure modes in the AI4I 2020 Predictive Maintenance Dataset [14]. Each of the modes are listed below: * Tool Wear Failure (TWF) * Heat Dissipation Failure (HDF) * Power Failure (PWF) * Overstrain Failure (OSF) * Random Failures (RNF) The machine failure label is assigned to 1 if any of the aforementioned procedures fail, which is the case for 339 data points. If none of the aforementioned failure types are present, the Machine Failure Label is set to 0. [14]. However, in this research only six features have been considered and a binary classification model is trained on them. A multi-label classification problem can be solved by including above five independent failure modes. But, it is not a part of the study. ### _Feature Extraction_ As stated earlier, a total of six features were extracted from the dataset and have been displayed in Table I with the respective values of all 10000 machines and their respective units. These features have been abbreviated to the following names: _Type_ as F1, _Air Temperature_ as F2, _Process Temperature_ as F3, _Rotational Speed_ as F4, _Torque_ as F5 and _Tool Wear_ as F6. ### _Data Preprocessing_ In Statistical Machine Learning, the data must be pre-processed before training the machine learning model on it. The following steps were performed to preprocess the data: * The data along with the features and the output category was first represented in the form of a Pandas dataframe. * The dataset was then divided into feature vector and output vector. Min-Max Normalization [15] was used to scale the features F2, F3, F4, F5 and F6 (as shown in Table I). The formula for this Normalization is shown in Equation 1, with \(F_{\alpha}\) referring to the \(\alpha\)th feature. 
\[\begin{split} ScaledData=(X[:,F_{\alpha}]-min(X[:,F_{\alpha}])) \\ /(max(X[:,F_{\alpha}])-min(X[:,F_{\alpha}]))\end{split}\] (1) * Once the Normalization was done, F1 was label encoded into three categories as shown in Equation 2 \[X[^{\prime}Type^{\prime}]=[L,M,H]\to X[^{\prime}Type^{\prime}]=[1,2,0]\] (2) * The dataframe was then converted into 2-D NumPy array and was divided into a 70:30 train-test ratio. * After balancing the data (described below), various models depicted in table II were trained using Scikit-learn library and their classification scores were recorded. ### _Data Balancing_ After splitting the dataset (70:30), there were only 240 instances of Label '1' as compared to the 6760 samples of Label '0'. Therefore, Synthetic Minority Over-sampling Technique (SMOTE) was used to equalize the number of both the samples. By generating synthetic samples for the minority class, it helps in creating a more balanced dataset, leading to more robust and unbiased models. Moreover, SMOTE does not result in any loss of information since the synthetic instances are created from existing data points. \[x_{new}=x_{min}+(x_{i}-x_{min})\times\delta \tag{3}\] In equation 3, \(x_{min}\) represents the number of original minority class datapoints. \(ith\) nearest neighbor is depicted by \(x_{i}\) and \(x_{new}\) shows the synthetic datapoint generated. As shown in figure 2, the count of label '1' is increased from 240 to 6760. Consequently, there is an increase of total samples in training set from 7000 to 13520 as shown by figure 3. ## IV Proposed Model The pipeline's workflow and the working of the proposed ensemble model can be divided into four phases, as illustrated in Fig.4. **Phase 1**: In the first phase, we extracted the data from the AI4I 2020 Predictive Maintenance Dataset [14]. Six features (F1, F2, F3, F4, F5 and F6) as depicted in table I were extracted from the dataset. The data was than pre-processed as explained in Section III-D. The training and testing data was split in 70:30 ratio and datapoints were balanced. **Phase 2**: Further, five bootstrap samples were created with replacement. The size of each bootstrap sample was kept as the size of the training data. These samples were to be used as input to five classifiers. These classification algorithms were decided in the next phase. **Phase 3**: According to the accuracy obtained after hyper parameter tuning of the various Machine Learning Models shown in table II, five best machine learning models were selected namely, Light Gradient Boosting, Decision Tree Classifier, Gradient Boosting Classifier, Random Forest Classifier and Extra Tree Classifier. Accordingly, five Bootstrap samples from the pre-processed were taken in this phase and fed onto the respective models. The prediction was done according to majority voting. **Phase 4**: Topsis Analysis was used to create the ranking of different models along with the ensemble model. The proposed ensemble achieved the first rank in the statistical analysis. The model's reliability was tested with additional K-Fold Cross Validation. (discussed in Section V-C). ## V Model Evaluation To assess the effectiveness of the suggested ensemble model, several parameters, including precision, recall, accuracy, AUC and F1, were calculated. The obtained metrics are organised in a table and are displayed in Table III. ### _Model Evaluation Parameters_ 1. _Precision_: Precision measures how often the model's positive predictions are correct. 
It is calculated by dividing the entire number of the true positives (TP) by the addition of the true positives and false positives (FP). Precision is computed as: \[Precision=\gamma/(\gamma+\lambda)\] (4) where \(\gamma\) is the number of accurate results and \(\lambda\) is the number of erroneous ones. 2. _F1 Score_: The F1 score is a measurement that combines recall and precision into a single figure. It is determined Fig. 3: Variation in Training set (Total) Fig. 2: Variation in the count of minority class by averaging them harmonically and is calculated using the following equation: \[F1=2*(a*b)/(a+b) \tag{5}\] wherein \(a\) corresponds to Precision and \(b\) corresponds to the Recall obtained by training the classification model. 3. _Area under curve (AUC)_: AUC is a measure of the model's ability to distinguish between positive and negative classes. AUC measures the model's ability to distinguish between positive and negative cases. The area under curve constructed by plotting the true positive rate (TPR) against the false positive rate (FPR) at varying cutoff values is the measure of accuracy. 4. _Recall_: Recall measures how well the model is able to identify all the positive cases. It is determined by dividing the total number of positive results (\(\gamma\)) by the total number of possible negative results (\(\gamma\)+\(\chi\)). The following equation is used to determine it: \[Recall=\gamma/(\gamma+\chi)\] (6) 5. _Accuracy_: The accuracy of a model is evaluated by how well its predictions actually turn out. Correct predictions are divided by overall predictions to get this ratio. It is calculated using the following formula: \[Accuracy=(TP+TN)/(TP+TN+FP+FN)\] (7) Correct predictions are the addition of TP and TN, overall resulting prediction is the collective summation of TP, TN, FP and FN. Fig. 4: Workflow of the Proposed Ensemble Model ### _Topsis_ TOPSIS is a multi-criteria decision analysis method used for solving complex decision-making problems. It is a commonly used tool in operations research and management sciences. The basic idea behind TOPSIS is to identify the best alternative out of a set of alternatives based on a set of criteria. The method evaluates each alternative based on how well it satisfies the criteria and then ranks the alternatives in order of preference. Each TOPSIS criteria contains both a positive and a negative ideal solution that are necessary for the approach to work. When evaluating a set of criteria, the best possible answer is the ideal solution, while the worst possible solution is the negative ideal solution. The ideal and negative ideal solutions are determined by the decision maker based on their preferences and goals [16]. ### _K-Fold Cross Validation_ A common method for assessing a machine learning model's performance is K-fold cross-validation. It involves dividing the data into k equal-sized subsets, or folds. Out of the k folds, each fold is once used for testing and the remaining k-1 folds used for training. Repeated K-fold Cross Validation is used to test whether the ensemble model that is being suggested is consistent with low bias and variance [17]. Five instances of the 5-fold Cross Evaluation are performed in the current study. The resulting graph of the above cross validation is shown in figure 6. As the lines are coinciding, it is indicating that the suggested ensemble model is reliable. The overall average accuracy after five trials is 95.36%. Fig. 5: Confusion Matrix (Testing Set) Fig. 
## VI Result Analysis and Discussion The machine learning models shown in Table II were trained using the AI4I 2020 Predictive Maintenance Dataset, with the hyperparameters adjusted for each model accordingly. The training dataset was used to train the models on bootstrap samples (sampling with replacement), and the testing dataset was used to validate them. Five models were combined in the proposed ensemble model. The models were evaluated according to the evaluation parameters described in Section V-A. The proposed ensemble model outperformed the other machine learning models, both in the TOPSIS analysis and on the individual evaluation metrics. The proposed algorithm achieves a testing accuracy of 98.93%, precision of 99.80% and recall of 99.03%. Overfitting is a potential issue that can arise during training; to guard against it, we cross-validated the model using repeated 5-fold cross-validation (five runs). ## VII Conclusion and Future Scope In conclusion, predictive maintenance using machine learning approaches has the potential to significantly improve the reliability and availability of armoured vehicles. By analyzing large amounts of data, such as sensor readings, historical maintenance records, and operational conditions, machine learning models can identify patterns and anomalies that indicate potential failures or maintenance needs before they occur. Moreover, the application of machine learning based approaches can help reduce costs associated with unexpected breakdowns, extend the lifespan of armoured vehicles, and improve overall mission readiness. The proposed ensemble model created using Decision Tree, Random Forest, Extra Tree Classifier and Gradient Boosting techniques achieves an accuracy of 98.93%. Future research in this field may concentrate on enhancing the predictive maintenance models' accuracy by including further data sources, such as weather information, information about the topography, and operator behavior. Ongoing research and development efforts are aimed at improving the accuracy and efficiency of machine learning algorithms; the use of real-time data from sensors and the incorporation of advanced analytics, such as deep learning, are expected to enhance the ability of predictive maintenance systems to detect and anticipate problems. Additionally, research could be conducted to develop new machine learning algorithms that can operate with limited or incomplete data, as well as explore ways to integrate these models into existing maintenance management systems. Also, finer tuning of the hyperparameters displayed in Table II may be done to raise the proposed ensemble model's accuracy.
2305.05327
Bayes Linear Analysis for Statistical Modelling with Uncertain Inputs
Statistical models typically capture uncertainties in our knowledge of the corresponding real-world processes, however, it is less common for this uncertainty specification to capture uncertainty surrounding the values of the inputs to the model, which are often assumed known. We develop general modelling methodology with uncertain inputs in the context of the Bayes linear paradigm, which involves adjustment of second-order belief specifications over all quantities of interest only, without the requirement for probabilistic specifications. In particular, we propose an extension of commonly-employed second-order modelling assumptions to the case of uncertain inputs, with explicit implementation in the context of regression analysis, stochastic process modelling, and statistical emulation. We apply the methodology to a regression model for extracting aluminium by electrolysis, and emulation of the motivating epidemiological simulator chain to model the impact of an airborne infectious disease.
Samuel E. Jackson, David C. Woods
2023-05-09T10:26:57Z
http://arxiv.org/abs/2305.05327v1
# Bayes Linear Analysis for Statistical Modelling with Uncertain Inputs ###### Abstract Statistical models typically capture uncertainties in our knowledge of the corresponding real-world processes, however, it is less common for this uncertainty specification to capture uncertainty surrounding the values of the inputs to the model, which are often assumed known. We develop general modelling methodology with uncertain inputs in the context of the Bayes linear paradigm, which involves adjustment of second-order belief specifications over all quantities of interest only, without the requirement for probabilistic specifications. In particular, we propose an extension of commonly-employed second-order modelling assumptions to the case of uncertain inputs, with explicit implementation in the context of regression analysis, stochastic process modelling, and statistical emulation. We apply the methodology to a regression model for extracting aluminium by electrolysis, and emulation of the motivating epidemiological simulator chain to model the impact of an airborne infectious disease. Durham University, Durham, UK [email protected] ## 1 Introduction Most often, it is assumed that the input, or independent, variables of a statistical model are known, and that uncertainties largely result from measurement error in the response and an imperfect description of the link between the independent variables and the responses. However, in reality, the inputs may have uncertainty surrounding their values, that is, they may themselves be random variables. This may arise as a result of imperfect measurements of the corresponding quantities in the physical experiment, or when those input variables may have been themselves modelled as an independent process. Such modelling is sometimes referred to as Error in Variables models (Durbin 1954, Figueroa-Zuniga et al. 2022), although such models are primarily concerned with estimation at specified independent variable values when the independent variables for the training data have been measured with error (often a parameterised homoscedastic error structure to be estimated). In this paper, we develop general Bayes linear modelling methods for experiments where there is specified uncertainty in the values of the inputs for both the training data used to estimate the model and use-case data for which predictions are required. Following de Finetti (de Finetti 1974, 1975, Whittle 1992), Bayes linear methods (Goldstein & Wooff 2007) involve prior second-order belief specification over all quantities of interest; the beliefs about unobserved quantities then being adjusted in light of those which have been observed. A general advantage of the Bayes linear approach over the fully Bayesian approach is the lack of requirement to specify full probabilistic distributions over all quantities of interest, these often being difficult to specify meaningfully and thus often chosen for computational convenience. Bayes linear methods have seen diverse application, including in the petrochemical (Craig et al. 1996), medical (Gosling et al. 2013) and climate (Astfalck et al. 2021) sciences. We propose an extension of second-order modelling assumptions to the case of uncertain inputs. Whilst the methodology is generally applicable, two important scenarios will be studied in detail. Firstly, uncertain input modelling for regression analyses, applied to a specific example taken from Goldstein & Wooff (1998). 
In this example, input uncertainty is assumed to arise from inaccurate physical measurements of the training input values. Secondly, we address emulation of computer models, or simulators, using a Bayes linear framework, often termed Bayes linear emulation (Cumming & Goldstein 2009, Goldstein & Huntley 2016). We assume uncertain inputs arising, for example, as the outputs of a stochastic process (perhaps another emulated simulator, as is discussed in Section 5.3) about which we are uncertain. Such application first requires consideration of stochastic process modelling. The article is structured as follows. In Section 2, we formally introduce concepts and notation concerning Bayes linear methods, before discussing our developed general framework for statistical modelling with uncertain inputs within a Bayes linear paradigm. In Section 3, we demonstrate the Bayes linear modelling approach in the context of (exchangeable) regression. In Section 4, we develop the methodology to model stochastic processes. In Section 5, we combine the scenarios presented in Sections 3 and 4 to develop Uncertain Input Bayes Linear Emulation and demonstrate its application to emulating a chain of linked simulators. The methods in this section are implemented in R, with the developed R packages available at [https://github.com/Jackson-SE/UIBLE](https://github.com/Jackson-SE/UIBLE). Section 6 contains a brief discussion and some directions for future research. ## 2 Bayes Linear Analysis and Uncertain Inputs In this article, we focus on the Bayes Linear approach (Hartigan 1969, O'Hagan 1987, Goldstein & Wooff 2007) to statistical inference, which deals with second-order belief specifications (that is, expectations, variances and covariances) of observable quantities. Probabilities can be represented as the expectation of the corresponding indicator functions when required. More precisely, suppose that there are two collections of random quantities, \(\mathcal{B}=(B_{1},...,B_{r})\) and \(\mathcal{D}=(D_{1},...,D_{s})\). Bayes linear analysis involves updating subjective beliefs about \(\mathcal{B}\) given observation of \(\mathcal{D}\). In order to do so, prior mean vectors and covariance matrices for \(\mathcal{B}\) and \(\mathcal{D}\) (that is, \(\mathrm{E}[\mathcal{B}]\), \(\mathrm{E}[\mathcal{D}]\), \(\mathrm{Var}[\mathcal{B}]\) and \(\mathrm{Var}[\mathcal{D}]\)), along with a covariance matrix between \(\mathcal{B}\) and \(\mathcal{D}\) (that is, \(\mathrm{Cov}[\mathcal{B},\mathcal{D}]\)), must be specified. Second-order beliefs about \(\mathcal{B}\) can be adjusted in the light of \(\mathcal{D}\) using the Bayes linear update formulae: \[\mathrm{E}_{\mathcal{D}}[\mathcal{B}] = \mathrm{E}[\mathcal{B}]+\mathrm{Cov}[\mathcal{B},\mathcal{D}]\mathrm{Var}[\mathcal{D}]^{-1}(\mathcal{D}-\mathrm{E}[\mathcal{D}]), \tag{1}\] \[\mathrm{Var}_{\mathcal{D}}[\mathcal{B}] = \mathrm{Var}[\mathcal{B}]-\mathrm{Cov}[\mathcal{B},\mathcal{D}]\mathrm{Var}[\mathcal{D}]^{-1}\mathrm{Cov}[\mathcal{D},\mathcal{B}], \tag{2}\] \[\mathrm{Cov}_{\mathcal{D}}[\mathcal{B}_{1},\mathcal{B}_{2}] = \mathrm{Cov}[\mathcal{B}_{1},\mathcal{B}_{2}]-\mathrm{Cov}[\mathcal{B}_{1},\mathcal{D}]\mathrm{Var}[\mathcal{D}]^{-1}\mathrm{Cov}[\mathcal{D},\mathcal{B}_{2}]. \tag{3}\] Equations (1)-(3) are the backbone of a Bayes Linear Analysis, and permit construction of Bayes linear statistical models, such as presented in the later sections of this article.
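As an illustration of how Equations (1)-(3) operate in practice (this sketch is ours and not part of the original exposition), the adjustment can be computed directly from a prior second-order specification with a few lines of linear algebra:

```python
import numpy as np

def bayes_linear_adjust(EB, ED, VarB, VarD, CovBD, d_obs):
    """Bayes linear adjusted expectation and variance of B given observed D
    (Equations (1) and (2)); all arguments are prior second-order specifications.
    The adjusted covariance of Equation (3) follows by the same construction."""
    # Form Cov[B,D] Var[D]^{-1} via a linear solve rather than an explicit inverse.
    K = np.linalg.solve(VarD, CovBD.T).T
    E_adj = EB + K @ (d_obs - ED)        # Equation (1)
    Var_adj = VarB - K @ CovBD.T         # Equation (2)
    return E_adj, Var_adj
```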
\(\mathrm{E}_{\mathcal{D}}[\mathcal{B}]\) and \(\mathrm{Var}_{\mathcal{D}}[\mathcal{B}]\) are termed the adjusted expectation and variance of \(\mathcal{B}\) given \(\mathcal{D}\). \(\mathrm{Cov}_{\mathcal{D}}[\mathcal{B}_{1},\mathcal{B}_{2}]\) is termed the adjusted covariance of \(\mathcal{B}_{1}\) and \(\mathcal{B}_{2}\) given \(\mathcal{D}\), where \(\mathcal{B}_{1}\) and \(\mathcal{B}_{2}\) are subcollections of \(\mathcal{B}\). The Bayes linear approach differs both philosophically and practically to the fully Bayesian approach to statistical inference (Goldstein, 1999). For example, in practical problems, there are often many relevant sources of uncertainty. A coherent fully Bayesian analysis requires specification of a full joint prior probability distribution and likelihood to reflect beliefs about the high-dimensional structure of these uncertainties (Garthwaite et al., 2005). Such specification can be very difficult, hence approximations are frequently made for mathematical convenience which causes the specification to reflect some, but not all, aspects of a person's beliefs. The theoretical coherence of the full Bayesian analysis can then get lost due to the required practical simplifications and assumptions. Furthermore, the resulting Bayesian analysis is often too computationally intensive to carry out in reasonable time. In contrast, by only requiring belief specification up to the second order (Hartigan, 1969), uncertainty in model assumptions, along with any other uncertainties, can be more easily incorporated into a Bayes linear analysis. Since linear fitting is generally computationally simpler than full conditioning, it can make for a more straightforward approach to the analysis of complex problems. For a more detailed overview and thorough treatment of Bayes linear methods, see Goldstein (1999) and Goldstein & Wooff (2007). For a comparison of Bayes linear methods with the full Bayesian approach, see, for example, Vernon et al. (2010). We now present a general statistical model for \(\mathbf{y}(\mathbf{x})=(y_{1}(\mathbf{x}),\ldots,y_{q}(\mathbf{x}))\in\mathbb{ R}^{q}\) being a vector of quantities of interest assuming known input setting \(\mathbf{x}\in\mathbb{X}\subseteq\mathbb{R}^{p}\). The model is given by \[y_{i}(\mathbf{x})=f_{i}(\mathbf{x})+\epsilon_{i}(\mathbf{x}),\qquad\qquad i=1, \ldots,q. \tag{4}\] Here, \(\mathbf{f}(\cdot)=(f_{1}(\cdot),\ldots,f_{q}(\cdot))\) is a statistical model of input \(\mathbf{x}\), and \(\boldsymbol{\epsilon}(\cdot)=(\epsilon_{1}(\mathbf{x}),\ldots,\epsilon_{q}( \mathbf{x}))\) is a residual process attempting to capture the discrepancy between \(\mathbf{f}\) and \(\mathbf{y}\) under \(\mathbf{x}\). In order to fit the model, we assume a collection of observed training data given by the \(nq\)-vector \(\mathcal{Y}=\mathcal{Y}(\mathcal{X})=(\mathbf{y}_{1}(\mathcal{X})^{T},\ldots, \mathbf{y}_{q}(\mathcal{X})^{T})^{T}\) corresponding to each row of the \(n\times p\) matrix \(\mathcal{X}=(\mathbf{x}^{(1)},\ldots,\mathbf{x}^{(n)})^{T}\), with \(\mathbf{y}_{k}(\mathcal{X}),k=1,\ldots,q\), being \(n\)-vectors of observations corresponding to the \(k\)th quantity \(y_{k}\). 
We aim to adjust our second-order prior belief specification about \(\mathbf{y}(\mathbf{x})\) across \(\mathcal{X}\) by \(\mathcal{Y}\) using the Bayes linear update equations to obtain posterior quantities: \[\mathrm{E}_{\mathcal{Y}}[\mathbf{y}(\mathbf{x})]=\mathrm{E}[\mathbf{y}( \mathbf{x})]+\mathrm{Cov}[\mathbf{y}(\mathbf{x}),\mathcal{Y}]\mathrm{Var}[ \mathcal{Y}]^{-1}(\mathcal{Y}-\mathrm{E}[\mathcal{Y}]), \tag{5}\] \[\mathrm{Cov}_{\mathcal{Y}}\left[\mathbf{y}(\mathbf{x}),\mathbf{y}(\mathbf{x}^ {\prime})\right]=\mathrm{Cov}[\mathbf{y}(\mathbf{x}),\mathbf{y}(\mathbf{x}^{ \prime})]-\mathrm{Cov}[\mathbf{y}(\mathbf{x}),\mathcal{Y}]\mathrm{Var}[ \mathcal{Y}]^{-1}\mathrm{Cov}[\mathcal{Y},\mathbf{y}(\mathbf{x}^{\prime})]\,. \tag{6}\] The magnitude of the adjustment of our beliefs for \(\mathbf{y}(\mathbf{x})\) in light of \(\mathcal{Y}\) is largely governed by the prior belief specification \(\mathrm{E}[\mathbf{y}(\mathbf{x})]\) and \(\mathrm{Cov}[\mathbf{y}(\mathbf{x}),\mathbf{y}(\mathbf{x}^{\prime})]\) across \(\mathbb{X}\). For most statistical models, such prior belief specifications can be expressed as smooth functions of \(\mathbf{x}\) and \(\mathbf{x}^{\prime}\), this function being determined by the model structure for \(\mathbf{f}\) and \(\boldsymbol{\epsilon}\) and the induced belief specification over them. For example, we often consider a separable covariance structure of the form \[\mathrm{Cov}[\mathbf{y}(\mathbf{x}),\mathbf{y}(\mathbf{x}^{\prime})]=c(\mathbf{ x},\mathbf{x}^{\prime})\,\boldsymbol{\varSigma}, \tag{7}\] where \(c(\mathbf{x},\mathbf{x}^{\prime})\) is a stationary correlation function of \(\mathbf{x}\) and \(\mathbf{x}^{\prime}\), and \(\boldsymbol{\varSigma}\) is an output covariance matrix. It is the structure of \(\mathrm{Cov}[\mathbf{y}(\mathbf{x}),\mathbf{y}(\mathbf{x}^{\prime})]\), and in particular correlation function \(c(\mathbf{x},\mathbf{x}^{\prime})\), that we generalise to extend Bayes linear modelling to include uncertain inputs, firstly in general terms here, and then in specific (but broad) contexts and examples in the subsequent sections. Consider the generalisation of the statistical model given by Equation (4) to the following form: \[\mathbf{y}(\mathbf{X})=\mathbf{f}(\mathbf{X})+\boldsymbol{\epsilon}(\mathbf{X}), \tag{8}\] where now \(\mathbf{X}\) is a random variable with second order belief specification \(\{\mathrm{E}[\mathbf{X}],\mathrm{Cov}[\mathbf{X},\mathbf{X}^{\prime}]\}\) available for any \(\mathbf{X},\mathbf{X}^{\prime}\). Note that \(y(\mathbf{X})\) is therefore the random quantity in which interest lies. In a Bayes linear setting, we wish to obtain \[\mathrm{E}_{\mathcal{Y}}[\mathbf{y}(\mathbf{X})]=\mathrm{E}[\mathbf{y}( \mathbf{X})]+\mathrm{Cov}[\mathbf{y}(\mathbf{X}),\mathcal{Y}]\mathrm{Var}[ \mathcal{Y}]^{-1}(\mathcal{Y}-\mathrm{E}[\mathcal{Y}]), \tag{9}\] \[\mathrm{Cov}_{\mathcal{Y}}\left[\mathbf{y}(\mathbf{X}),\mathbf{y}(\mathbf{X}^{ \prime})\right]=\mathrm{Cov}[\mathbf{y}(\mathbf{X}),\mathbf{y}(\mathbf{X}^{ \prime})]-\mathrm{Cov}[\mathbf{y}(\mathbf{X}),\mathcal{Y}]\mathrm{Var}[ \mathcal{Y}]^{-1}\mathrm{Cov}[\mathcal{Y},\mathbf{y}(\mathbf{X}^{\prime})]\,, \tag{10}\] where training data \(\mathcal{Y}=\mathcal{Y}(\mathcal{X})=(\mathbf{y}_{1}(\mathcal{X})^{T},\ldots, \mathbf{y}_{q}(\mathcal{X})^{T})^{T}\) are now observations corresponding to each row of \(\mathcal{X}=(\mathbf{X}^{(1)},\ldots,\mathbf{X}^{(n)})^{T}\), each element of which is specified up to second order. 
In other words, we have \(\mathrm{E}[\mathbf{X}^{(i)}]\) and \(\mathrm{Cov}[\mathbf{X}^{(i)},\mathbf{X}^{(j)}]\) for all \(i,j=1,\ldots,n\). Prior belief specification for \(\mathcal{Y}\) is given by \[\mathrm{E}[\mathcal{Y}] = (\mathrm{E}[y(\mathbf{X}^{(1)})],\ldots,\mathrm{E}[y(\mathbf{X}^ {(n)})]), \tag{11}\] \[\mathrm{Var}[\mathcal{Y}] = \{\mathrm{Cov}[y(\mathbf{X}^{(i)}),y(\mathbf{X}^{(j)})]\}_{i,j=1, \ldots,n}. \tag{12}\] Appropriate methodology to obtain \(\mathrm{E}_{\mathcal{Y}}[\mathbf{y}(\mathbf{X})]\) and \(\mathrm{Cov}_{\mathcal{Y}}\left[\mathbf{y}(\mathbf{X}),\mathbf{y}(\mathbf{X}^ {\prime})\right]\) by Equations (9) and (10) is therefore achieved by specification of appropriate prior belief structures \(\mathrm{E}[\mathbf{y}(\mathbf{X})]\) and \(\mathrm{Cov}[\mathbf{y}(\mathbf{X}),\mathbf{y}(\mathbf{X}^{\prime})]\) over any possible specification of the set \(\{\mathrm{E}[\mathbf{X}],\mathrm{E}[\mathbf{X}^{\prime}],\mathrm{Cov}[\mathbf{ X},\mathbf{X}^{\prime}]\}\). Such belief structures often exist as generalisations of those for known \(\mathbf{x},\mathbf{x}^{\prime}\) alluded to above. For example, the separable covariance structure given by Equation (7) can be generalised to \[\mathrm{Cov}[\mathbf{y}(\mathbf{X}),\mathbf{y}(\mathbf{X}^{\prime})]=c( \mathbf{X},\mathbf{X}^{\prime})\,\boldsymbol{\Sigma}, \tag{13}\] with \[c(\mathbf{X},\mathbf{X}^{\prime})=c(\mathrm{E}[\mathbf{X}],\mathrm{E}[ \mathbf{X}^{\prime}],\mathrm{Cov}[\mathbf{X},\mathbf{X}^{\prime}]), \tag{14}\] implying that the correlation between \(\mathbf{y}(\mathbf{X})\) and \(\mathbf{y}(\mathbf{X}^{\prime})\) for two uncertain inputs \(\mathbf{X}\) and \(\mathbf{X}^{\prime}\) is a function of the second order belief specification about and between the two input variables. In the following sections, we proceed to discuss Bayes linear statistical modelling with uncertain inputs in two specific, but broad, contexts; firstly, that of regression analyses, and secondly, that of stochastic processes and emulation. ## 3 Regression Analysis Consider a regression model, presented here with scalar output \(y\) for simplicity of notation and exposition: \[y(\mathbf{X})=\mathbf{X}^{T}\boldsymbol{\beta}+\epsilon(\mathbf{X}), \tag{15}\] where \(\boldsymbol{\beta}\in\mathbb{R}^{p}\) is a vector of exchangeable regression parameters, and \(\mathbf{X}\in\mathbb{R}^{p}\) is a vector random variable. We consider a generic prior specification over \(\boldsymbol{\beta}\) of the form \(\mathrm{E}[\boldsymbol{\beta}]=\boldsymbol{\Gamma}\) and \(\mathrm{Var}[\boldsymbol{\beta}]=\boldsymbol{\Delta}\). Under the usual scenario of known \(\mathbf{x}\), we usually specify \(\mathrm{E}[\epsilon(\mathbf{x})]=0\) for all \(\mathbf{x}\), thus we view it a reasonable extension to specify \(\mathrm{E}[\epsilon(\mathbf{X})]=0\) in the random variable scenario. It is common for \(\epsilon\) to be viewed as uncorrelated with \(\boldsymbol{\beta}\)_a priori_, that is to specify \(\mathrm{Cov}[\boldsymbol{\beta},\epsilon(\mathbf{x})]=0\), so again we view it as reasonable to extend this in the random variable context to \(\mathrm{Cov}[\boldsymbol{\beta},\epsilon(\mathbf{X})]=0\). Following Equations (9) and (10), the required prior specifications are \(\mathrm{E}[y(\mathbf{X})]\) and \(\mathrm{Cov}[y(\mathbf{X}),y(\mathbf{X}^{\prime})]\) given any possible specification of the set \(\{\mathrm{E}[\mathbf{X}],\mathrm{E}[\mathbf{X}^{\prime}],\mathrm{Cov}[\mathbf{ X},\mathbf{X}^{\prime}]\}\) for random variables \(\mathbf{X},\mathbf{X}^{\prime}\). 
In the remainder of this section, we proceed to state results related to appropriate expression of prior beliefs of this form, both at a general level and for two specific examples of possible residual structure. Extended derivation of these results is presented in the supplementary material. We have that: \[\mathrm{E}[y(\mathbf{X})] = \mathrm{E}[\mathbf{X}^{T}]\,\mathbf{\Gamma}, \tag{16}\] \[\mathrm{Cov}[y(\mathbf{X}),y(\mathbf{X}^{\prime})] = \mathrm{Cov}[\mathbf{X}^{T}\boldsymbol{\beta},\mathbf{X}^{\prime T}\boldsymbol{\beta}]+\mathrm{Cov}[\epsilon(\mathbf{X}),\epsilon(\mathbf{X}^{\prime})], \tag{17}\] noting that \(\mathrm{Cov}[\mathbf{X}^{T}\boldsymbol{\beta},\epsilon(\mathbf{X}^{\prime})]=0\). Further, using the law of total covariance, \[\mathrm{Cov}[\mathbf{X}^{T}\boldsymbol{\beta},\mathbf{X}^{\prime T}\boldsymbol{\beta}] = \mathrm{E}[\mathbf{X}^{T}\boldsymbol{\Delta}\mathbf{X}^{\prime}]+\mathbf{\Gamma}^{T}\mathrm{Cov}[\mathbf{X},\mathbf{X}^{\prime}]\mathbf{\Gamma}. \tag{18}\] The second term of Equation (17) is a covariance specification over the residual process \(\epsilon(\mathbf{X})\). In Sections 3.1 and 3.2, we present two example model belief structures in the context of statistical modelling with known inputs, demonstrating reasonable generalisation to the random variable input scenario using the ideas presented in Section 2. ### _Linear Regression with Uncorrelated Random Error_ In the common known input case, the standard linear regression model with uncorrelated and homoscedastic random error can be obtained by specifying a prior covariance structure over \(\epsilon(\mathbf{x})\) of \[\mathrm{Cov}[\epsilon(\mathbf{x}),\epsilon(\mathbf{x}^{\prime})]=\mathbb{I}_{\mathbf{x}=\mathbf{x}^{\prime}}\sigma^{2}, \tag{19}\] for all \(\mathbf{x},\mathbf{x}^{\prime}\), where \(\mathbb{I}\) is the indicator function taking value 1 if the statement is true and 0 otherwise, and \(\sigma^{2}\) is a common scalar variance parameter. This specification can be simply extended to the random variable case as follows: \[\mathrm{Cov}[\epsilon(\mathbf{X}),\epsilon(\mathbf{X}^{\prime})]=\mathbb{I}_{\mathbf{X}=\mathbf{X}^{\prime}}\sigma^{2}, \tag{20}\] where the covariance between two random variable inputs is zero unless they are known to be the same. Note that two random variables being the same is not equivalent to two different random variables having the same second-order belief specification. ### _Linear Regression with Correlated Error - An Example_ In this section, we develop an uncertain input Bayes linear approach for exchangeable regressions with correlated errors, building on the example of extracting aluminium by electrolysis over time presented in Goldstein & Wooff (1998). Using the covariance structure presented in that paper for known inputs, we present a generalisation to the random variable input scenario using the ideas presented in Section 2. The model is of the form \[y(t)=\beta_{0}+\beta_{1}\,t+\epsilon(t), \tag{21}\] and is exchangeable over \(\boldsymbol{\beta}\). Note that this regression model is consistent with the general form presented in Equation (15), with \(p=2\) variables being an intercept and single controllable parameter \(t\).
In Goldstein & Wooff (1998), it is assumed that \(t=1,\ldots,13\), with a structured error model as follows: \[\epsilon(t) = A(t)+J(t)+H(t), \tag{22}\] \[J(t) = J(t-1)+Q(t),\] \[H(t) = \psi\,H(t-1)+R(t),\quad t\geq 2,\] with \(\psi\in(0,1)\), and where, for generality, we denote \(\mathrm{Var}[A(t)]=\sigma_{A}^{2}\), \(\mathrm{Var}[Q(t)]=\mathrm{Var}[J(1)]=\sigma_{Q}^{2}\), \(\mathrm{Var}[R(t)]=\sigma_{R}^{2}\), and \(\mathrm{Var}[H(1)]=\sigma_{1}^{2}\). These terms express discrepancies from the linear trend as the sum of a pure measurement error \(A(t)\), a stochastic development of the discrepancy as a random walk with drift \(J(t)\), and an autoregressive term expressing the measurement of the suspended particles in the chemical analysis \(H(t)\). For further details, see Goldstein & Wooff (1998). Whilst the original model assumes discrete timesteps \(t\), we will generalise the regression structure to continuous \(t\)-values, subject to \(t\geq 3\) for consistency issues, before addressing the uncertain input \(T\) scenario. We assume that the three terms in Equation (22) are uncorrelated, leading to \[\mbox{Cov}[\epsilon(t),\epsilon(t^{\prime})] = \mbox{Cov}[A(t),A(t^{\prime})]+\mbox{Cov}[J(t),J(t^{\prime})]+\mbox{Cov}[H(t),H(t^{\prime})]. \tag{23}\] The three terms on the right hand side of Equation (23) have the following structure, where we define \(d=|t-t^{\prime}|\), and \(t_{m}=\min(t,t^{\prime})\), with greater exposition presented in the supplementary material: \[\mbox{Cov}[A(t),A(t^{\prime})] = \mathbb{I}_{t=t^{\prime}}\sigma_{A}^{2}, \tag{24}\] \[\mbox{Cov}[J(t),J(t^{\prime})] = t_{m}\,\sigma_{Q}^{2}, \tag{25}\] \[\mbox{Cov}[H(t),H(t^{\prime})] = \psi^{d}\,\left(\psi^{2(t_{m}-1)}\,\sigma_{1}^{2}+\frac{1-\psi^{2(t_{m}-2)}}{1-\psi^{2}}\,\sigma_{R}^{2}\right). \tag{26}\] Hence we have a prior residual covariance belief structure across all \(t\) of \[\mbox{Cov}[\epsilon(t),\epsilon(t^{\prime})] = \mathbb{I}_{t=t^{\prime}}\sigma_{A}^{2}+t_{m}\,\sigma_{Q}^{2}+\psi^{d}\,\left(\psi^{2(t_{m}-1)}\,\sigma_{1}^{2}+\frac{1-\psi^{2(t_{m}-2)}}{1-\psi^{2}}\,\sigma_{R}^{2}\right). \tag{27}\] We now proceed to consider suitable extensions to the uncertain input scenario. In this case, we need an expectation and covariance structure for any possible specification of the set \(\{\mbox{E}[T],\mbox{E}[T^{\prime}],\mbox{Cov}[T,T^{\prime}]\}\). We consider that the statements of expectation in the error structure simply extend to being \(\mbox{E}[A(T)]=0\), \(\mbox{E}[J(T)]=0\) and \(\mbox{E}[H(T)]=0\); logical derivation of these statements can be shown using the law of total expectation. Regarding covariance structure, Equation (24) simply extends to the following: \[\mbox{Cov}[A(T),A(T^{\prime})]=\mathbb{I}_{T=T^{\prime}}\sigma_{A}^{2}, \tag{28}\] this being similar to the uncorrelated error term presented in Section 3.1. Equation (25) extends as follows, where the random variables \(T_{m}=\min(T,T^{\prime})\) and \(D=|T-T^{\prime}|\) correspond to known quantities \(t_{m}\) and \(d\) above: \[\mbox{Cov}[J(T),J(T^{\prime})] = \sigma_{Q}^{2}\,\mbox{E}[T_{m}]. \tag{29}\] The tricky specification here is eliciting \(\mbox{E}[T_{m}]\) from \(\mbox{E}[T],\mbox{E}[T^{\prime}]\) and \(\mbox{Cov}[T,T^{\prime}]\), since a function which takes the minimum of two random quantities is non-linear. If we know which of our random variables is bigger (without loss of generality, that \(T<T^{\prime}\), say), then \(\mbox{E}[T_{m}]=\mbox{E}[T]\).
This is not an unrealistic situation; for example, we may have measurements at two times about which we are uncertain, but know that one of them certainly happened after the other. More generally, we propose a reasonable covariance structure as follows. Let \(\mathbb{I}=\mathbb{I}_{T<T^{\prime}}\) and \(p=\mbox{E}[\mathbb{I}]\). We then condition on this event using the law of total expectation to give: \[\mbox{E}[T_{m}] = p\,\mbox{E}_{\mathbb{I}=1}[T_{m}]+(1-p)\,\mbox{E}_{\mathbb{I}=0}[T_{m}], \tag{30}\] where \[\mbox{E}_{\mathbb{I}=1}[T_{m}] = \mbox{E}[T]+\frac{\mbox{Cov}[T,\mathbb{I}]}{p}, \tag{31}\] \[\mbox{E}_{\mathbb{I}=0}[T_{m}] = \mbox{E}[T^{\prime}]-\frac{\mbox{Cov}[T^{\prime},\mathbb{I}]}{1-p}, \tag{32}\] \[{\rm Cov}[T,\mathbb{I}] = {\rm Corr}\left[T,\mathbb{I}\right]\,\sqrt{{\rm Var}[T]\,p(1-p)}, \tag{33}\] \[{\rm Cov}[T^{\prime},\mathbb{I}] = {\rm Corr}\left[T^{\prime},\mathbb{I}\right]\,\sqrt{{\rm Var}[T^{\prime}]\,p(1-p)}, \tag{34}\] so that: \[{\rm E}[T_{m}] = p\,{\rm E}[T]+(1-p)\,{\rm E}[T^{\prime}] \tag{35}\] \[+\sqrt{p(1-p)}\left({\rm Corr}\left[T,\mathbb{I}\right]\,\sqrt{{\rm Var}[T]}-{\rm Corr}\left[T^{\prime},\mathbb{I}\right]\,\sqrt{{\rm Var}[T^{\prime}]}\right).\] We may be able to specify values for all quantities required in Equation (35), in which case we have our expression for \({\rm E}[T_{m}]\). If specification of \({\rm Corr}\left[T,\mathbb{I}\right]\) and \({\rm Corr}\left[T^{\prime},\mathbb{I}\right]\) is proving difficult, then let us first note that it is logical to assume that \(-1<{\rm Corr}\left[T,\mathbb{I}\right]<0\) (since \(\mathbb{I}=1\) necessarily corresponds, on average in some sense, to smaller \(T\), as in this case \(T<T^{\prime}\)), and \(0<{\rm Corr}\left[T^{\prime},\mathbb{I}\right]<1\), so that \[{\rm E}[T_{m}]>p\,{\rm E}[T]+(1-p)\,{\rm E}[T^{\prime}]-\sqrt{p(1-p)}\left(\sqrt{{\rm Var}[T]}+\sqrt{{\rm Var}[T^{\prime}]}\right). \tag{36}\] Again, if we feel able to specify a value for \(p\), then we can use it in Equation (36). Alternatively, in order to be conservative with respect to variance resolution, we can use elementary calculus to minimise the expression on the right hand side of Equation (36) over \(p\in[0,1]\) to get that \[{\rm E}[T_{m}]\geq\frac{1}{2}\left({\rm E}[T]+{\rm E}[T^{\prime}]-\sqrt{\left({\rm E}[T^{\prime}]-{\rm E}[T]\right)^{2}+\left(\sqrt{{\rm Var}[T]}+\sqrt{{\rm Var}[T^{\prime}]}\right)^{2}}\right). \tag{37}\] We thus propose, for \(T\neq T^{\prime}\), the covariance structure: \[{\rm Cov}[J(T),J(T^{\prime})]=\frac{\sigma_{Q}^{2}}{2}\left({\rm E}[T]+{\rm E}[T^{\prime}]-\sqrt{\left({\rm E}[T^{\prime}]-{\rm E}[T]\right)^{2}+\left(\sqrt{{\rm Var}[T]}+\sqrt{{\rm Var}[T^{\prime}]}\right)^{2}}\right). \tag{38}\] From Equation (25) with \(t=t^{\prime}\), it follows that \({\rm Var}[J(T)]=\sigma_{Q}^{2}\,{\rm E}[T]\), since \(T_{m}=T\). We thus make use of an indicator function in the expression for \({\rm Cov}[J(T),J(T^{\prime})]\) to ensure it is appropriate when it is known that \(T=T^{\prime}\): \[{\rm Cov}[J(T),J(T^{\prime})]\] \[= \frac{\sigma_{Q}^{2}}{2}\left({\rm E}[T]+{\rm E}[T^{\prime}]-\sqrt{\left({\rm E}[T^{\prime}]-{\rm E}[T]\right)^{2}+(1-\mathbb{I}_{T=T^{\prime}})\left(\sqrt{{\rm Var}[T]}+\sqrt{{\rm Var}[T^{\prime}]}\right)^{2}}\right) \tag{39}\] Note that: 1.
When \({\rm Var}[T]={\rm Var}[T^{\prime}]=0\), the expression on the right hand side of Inequality (37) yields \[\frac{1}{2}\left(t+t^{\prime}-\sqrt{(t-t^{\prime})^{2}}\right)=\frac{1}{2}\left(t+t^{\prime}-|t-t^{\prime}|\right)=\min(t,t^{\prime}),\] thus resolving to the known input scenario. 2. If it is known that \(T<T^{\prime}\), then, as mentioned earlier, it would be logical to have that \({\rm E}[T_{m}]={\rm E}[T]\). Since \[{\rm E}[T]=\frac{1}{2}\left({\rm E}[T]+{\rm E}[T^{\prime}]-\sqrt{\left({\rm E}[T^{\prime}]-{\rm E}[T]\right)^{2}}\right), \tag{40}\] it is clear that \({\rm E}[T_{m}]\) satisfies Inequality (37) in this case. We extend Equation (26) by noticing that \[{\rm Cov}[H(T),H(T^{\prime})] \geq \sigma_{1}^{2}\exp(\log\psi\,{\rm E}[T_{M}+T_{m}-2])+\sigma_{R}^{2}\exp(\log\psi\,{\rm E}[T_{M}-T_{m}]), \tag{41}\] where \(t_{M}=\max(t,t^{\prime})\), and correspondingly \(T_{M}=\max(T,T^{\prime})\). We note that \({\rm E}[T_{M}+T_{m}]={\rm E}[T]+{\rm E}[T^{\prime}]\), and thus seek an upper bound for \({\rm E}[T_{M}-T_{m}]\) in order to find a lower bound for Expression (41) (since \(\log\psi<0\) if \(\psi\in(0,1)\)). Similar to Equation (30), we use the law of total expectation on \({\rm E}[T_{M}]\) to get that \[{\rm E}[T_{M}] = p\,{\rm E}_{\mathbb{I}=1}[T_{M}]+(1-p)\,{\rm E}_{\mathbb{I}=0}[T_{M}], \tag{42}\] where: \[{\rm E}_{\mathbb{I}=1}[T_{M}] = {\rm E}[T^{\prime}]+\frac{{\rm Cov}[T^{\prime},\mathbb{I}]}{p}, \tag{43}\] \[{\rm E}_{\mathbb{I}=0}[T_{M}] = {\rm E}[T]-\frac{{\rm Cov}[T,\mathbb{I}]}{1-p}. \tag{44}\] Combining Equations (30) and (42), we get that \[{\rm E}[T_{M}-T_{m}] = (1-2p)\left({\rm E}[T]-{\rm E}[T^{\prime}]\right) \tag{45}\] \[+2\sqrt{p(1-p)}\left({\rm Corr}\left[T^{\prime},\mathbb{I}\right]\sqrt{{\rm Var}[T^{\prime}]}-{\rm Corr}\left[T,\mathbb{I}\right]\sqrt{{\rm Var}[T]}\right).\] Again, if we don't feel able to specify \({\rm Corr}\left[T^{\prime},\mathbb{I}\right]\) or \({\rm Corr}\left[T,\mathbb{I}\right]\), then using the assumptions that \(-1<{\rm Corr}\left[T,\mathbb{I}\right]<0\) and \(0<{\rm Corr}\left[T^{\prime},\mathbb{I}\right]<1\) we have that \[{\rm E}[T_{M}-T_{m}]<(1-2p)\left({\rm E}[T]-{\rm E}[T^{\prime}]\right)+2\sqrt{p(1-p)}\left(\sqrt{{\rm Var}[T^{\prime}]}+\sqrt{{\rm Var}[T]}\right). \tag{46}\] If we have specified a value for \(p\), again we can use this in Expression (46), otherwise we can maximise over \(p\in[0,1]\) to obtain \[{\rm E}[T_{M}-T_{m}] \leq \sqrt{\left({\rm E}[T]-{\rm E}[T^{\prime}]\right)^{2}+\left(\sqrt{{\rm Var}[T]}+\sqrt{{\rm Var}[T^{\prime}]}\right)^{2}}. \tag{47}\] For \(T\neq T^{\prime}\), we therefore propose \[{\rm Cov}[H(T),H(T^{\prime})] \tag{48}\] \[= \sigma_{1}^{2}\exp(\log\psi\left({\rm E}[T]+{\rm E}[T^{\prime}]-2\right))\] \[\quad+\,\sigma_{R}^{2}\exp\left(\log\psi\,\sqrt{\left({\rm E}[T]-{\rm E}[T^{\prime}]\right)^{2}+\left(\sqrt{{\rm Var}[T]}+\sqrt{{\rm Var}[T^{\prime}]}\right)^{2}}\right).\] When \(T=T^{\prime}\), the above expression needs to be commensurate with a reasonable expression for \({\rm Var}[H(T)]\). Unlike \({\rm Var}[J(T)]\), we find that \({\rm Var}[H(T)]\) itself needs approximating. However, for a variance term we need to find an upper bound in order to be conservative with regard to prior variance specification (i.e. overestimate rather than underestimate).
We have that \[{\rm Var}[H(T)] \leq {\mathbb{I}}_{\sigma_{1}^{2}\leq K}K+(1-{\mathbb{I}}_{\sigma_{1}^{2}\leq K})(\psi^{4}\sigma_{1}^{2}+\sigma_{R}^{2}), \tag{49}\] where \(K=\frac{\sigma_{R}^{2}}{1-\psi^{2}}\) (see the supplementary material for justification). We can now set \[\mathrm{Cov}[H(T),H(T^{\prime})] \tag{50}\] \[= \mathbb{I}_{T=T^{\prime}}L+(1-\mathbb{I}_{T=T^{\prime}})\bigg{(}\sigma_{1}^{2}\exp(\log\psi\left(\mathrm{E}[T]+\mathrm{E}[T^{\prime}]-2\right))\] \[\qquad+\sigma_{R}^{2}\exp\Bigg{(}\log\psi\,\sqrt{\left(\mathrm{E}[T]-\mathrm{E}[T^{\prime}]\right)^{2}+\left(\sqrt{\mathrm{Var}[T]}+\sqrt{\mathrm{Var}[T^{\prime}]}\right)^{2}}\Bigg{)}\,\bigg{)},\] where \[L=\mathbb{I}_{\sigma_{1}^{2}\leq K}K+(1-\mathbb{I}_{\sigma_{1}^{2}\leq K})(\psi^{4}\sigma_{1}^{2}+\sigma_{R}^{2}).\] Note that Expression (50) underestimates \(\mathrm{Cov}[H(t),H(t^{\prime})]\), as given by Equation (26), when \(\mathrm{Var}[T]=\mathrm{Var}[T^{\prime}]=0\). We do not view this as a problem, as we consider interest to lie in the case of statistical modelling with random variable input. If it is assumed that all \(t,t^{\prime}\) are known, we would use Equation (26) throughout. Finally, to conclude our prior specification in the case of uncertain inputs for the model presented in this section, note that, with \(\mathbf{T}=(1,T)^{T}\), we have \[\mathrm{E}[\mathbf{T}^{T}\boldsymbol{\Delta}\mathbf{T}^{\prime}] = \delta_{00}+\delta_{01}(\mathrm{E}[T^{\prime}]+\mathrm{E}[T])+\delta_{11}\mathrm{E}[T]\mathrm{E}[T^{\prime}]+\mathrm{Cov}[T,T^{\prime}], \tag{51}\] \[\mathbf{\Gamma}^{T}\mathrm{Cov}[\mathbf{T},\mathbf{T}^{\prime}]\mathbf{\Gamma} = \gamma_{1}^{2}\mathrm{Cov}[T,T^{\prime}], \tag{52}\] so that full prior specification for \(y(T)\) can therefore be given by \[\mathrm{E}[y(T)]=\gamma_{0}+\mathrm{E}[T]\gamma_{1}, \tag{53}\] and \[\mathrm{Cov}[y(T),y(T^{\prime})] \tag{54}\] \[= \delta_{00}+\delta_{01}(\mathrm{E}[T^{\prime}]+\mathrm{E}[T])+\delta_{11}\mathrm{E}[T]\mathrm{E}[T^{\prime}]+\mathrm{Cov}[T,T^{\prime}]\] \[\quad+\,\gamma_{1}^{2}\mathrm{Cov}[T,T^{\prime}]\] \[\quad+\,\mathbb{I}_{T=T^{\prime}}\sigma_{A}^{2}\] \[\quad+\,\frac{\sigma_{Q}^{2}}{2}\left(\mathrm{E}[T]+\mathrm{E}[T^{\prime}]-\sqrt{\left(\mathrm{E}[T^{\prime}]-\mathrm{E}[T]\right)^{2}+\left(1-\mathbb{I}_{T=T^{\prime}}\right)\left(\sqrt{\mathrm{Var}[T]}+\sqrt{\mathrm{Var}[T^{\prime}]}\right)^{2}}\right)\] \[\quad+\,\mathbb{I}_{T=T^{\prime}}L+(1-\mathbb{I}_{T=T^{\prime}})\bigg{(}\sigma_{1}^{2}\exp(\log\psi\left(\mathrm{E}[T]+\mathrm{E}[T^{\prime}]-2\right))\] \[\qquad\quad+\sigma_{R}^{2}\exp\Bigg{(}\log\psi\,\sqrt{\left(\mathrm{E}[T]-\mathrm{E}[T^{\prime}]\right)^{2}+\left(\sqrt{\mathrm{Var}[T]}+\sqrt{\mathrm{Var}[T^{\prime}]}\right)^{2}}\Bigg{)}\,\bigg{)}.\] Given these prior specifications, we can use the Bayes linear update Equations (9) and (10) to adjust belief specifications for any second-order specification \(\{\mathrm{E}[T],\mathrm{Var}[T]\}\) for \(T\), given knowledge of system behaviour \(y(\mathcal{T})\) for training set \(\mathcal{T}=(T^{(1)},\ldots,T^{(n)})^{T}\). Here, beliefs across \(\mathcal{T}\) are specified up to second order.
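To make the resulting specification concrete, the following sketch (ours, for illustration only; argument names are placeholders) evaluates the prior moments of Equations (53) and (54) for a pair of uncertain times from their second-order specification:

```python
import numpy as np

def prior_moments_y(ET, ET2, VarT, VarT2, CovTT2, same,
                    gamma, delta, sigA2, sigQ2, sigR2, sig12, psi):
    """Prior E[y(T)] and Cov[y(T),y(T')] for the aluminium example
    (Equations (53) and (54)). `same` indicates that T and T' are known to be
    the same random quantity; gamma = (gamma0, gamma1) and
    delta = ((d00, d01), (d01, d11)) are the prior moments of beta."""
    g0, g1 = gamma
    d00, d01, d11 = delta[0][0], delta[0][1], delta[1][1]
    Ey = g0 + g1 * ET                                       # Equation (53)

    ind = 1.0 if same else 0.0
    # Regression contribution: E[T^T Delta T'] + Gamma^T Cov[T,T'] Gamma
    cov = d00 + d01 * (ET + ET2) + d11 * ET * ET2 + CovTT2 + g1**2 * CovTT2
    # Measurement error contribution A
    cov += ind * sigA2
    # Random-walk contribution J
    root = np.sqrt((ET2 - ET)**2 + (1 - ind) * (np.sqrt(VarT) + np.sqrt(VarT2))**2)
    cov += 0.5 * sigQ2 * (ET + ET2 - root)
    # Autoregressive contribution H, with the indicator handling of Equation (50)
    K = sigR2 / (1 - psi**2)
    L = K if sig12 <= K else psi**4 * sig12 + sigR2
    if same:
        cov += L
    else:
        dist = np.sqrt((ET - ET2)**2 + (np.sqrt(VarT) + np.sqrt(VarT2))**2)
        cov += sig12 * psi**(ET + ET2 - 2) + sigR2 * psi**dist
    return Ey, cov
```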
## 4 Bayes Linear Stochastic Processes We here consider modelling the relationship between input \(\mathbf{x}\in\mathbb{R}^{p}\) and output \(\mathbf{y}(\mathbf{x})\in\mathbb{R}^{q}\) using a second-order stochastic process, which provides an intuitive setting for the discussion of many of our novel extensions in Uncertain Input modelling required for Bayes linear emulation in Section 5. More precisely, we assume \[\mathbf{y}(\mathbf{x})=\mathbf{u}(\mathbf{x}), \tag{55}\] with a prior assumption that \(\mathrm{E}[\mathbf{u}(\mathbf{x})]=0\), and assuming that the between-outputs covariance and spatial correlation across the input space is separable, thus given by \[\mathrm{Cov}[\mathbf{u}(\mathbf{x}),\mathbf{u}(\mathbf{x}^{\prime})]=c( \mathbf{x},\mathbf{x}^{\prime})\;\boldsymbol{\varSigma}, \tag{56}\] for two inputs \(\mathbf{x}\) and \(\mathbf{x}^{\prime}\). Here, \(\boldsymbol{\varSigma}\) is a \(q\times q\) output covariance matrix and \(c(\mathbf{x},\mathbf{x}^{\prime})\) is a stationary correlation function of \(\mathbf{x}\) and \(\mathbf{x}^{\prime}\)(Koehler & Owen, 1996; Kennedy & O'Hagan, 2001); for example, the Gaussian correlation function for modelling a deterministic relationship (for example, that of a computer experiment) is given by \[c(\mathbf{x},\mathbf{x}^{\prime})=\exp\left\{-\sum_{r=1}^{p}\left(\frac{x_{(r )}-x^{\prime}_{(r)}}{\theta_{r}}\right)^{2}\right\}, \tag{57}\] which depends on the specification of the correlation length parameters \(\theta_{r},r=1,...,p\). The second-order stochastic process in Equation (55) can be viewed similar to a Gaussian process in the fully Bayesian paradigm with corresponding second-order specification. Gaussian process modelling with uncertain inputs has been considered (Dallaire et al., 2009; McHutchon & Rasmussen, 2011; Ye et al., 2022; Wang et al., 2022), however, the uncertain input location noise is often assumed to be homoscedastic and unknown, whereas we assume specification of \(\{\mathrm{E}[\mathbf{X}],\mathrm{E}[\mathbf{X}^{\prime}],\mathrm{Cov}[\mathbf{ X},\mathbf{X}^{\prime}]\}\), but for the modelling structure to hold for any such specification. In a Bayes linear setting with known inputs, the assumed covariance structure, for example as presented as separable in Equation (56) and for a specific example form in Equation (57), makes it straightforward to adjust our second-order prior belief specification about \(\mathbf{y}(\mathbf{x})\) across \(\mathcal{X}\) by \(\mathbf{Y}\) to obtain posterior quantities, using the Bayes linear update Equations (5) and (6). Extension of Bayes linear stochastic process modelling to the random variable input case with second-order specification requires extension of the prior belief specifications for \(\mathbf{u}\), such as given by Equations (56) and (57). We propose \(\mathrm{E}[\mathbf{u}(\mathbf{X})]=0\) and \[\mathrm{Cov}[\mathbf{u}(\mathbf{X}),\mathbf{u}(\mathbf{X}^{\prime})]=c( \mathbf{X},\mathbf{X}^{\prime})\;\boldsymbol{\varSigma} \tag{58}\] where \[c(\mathbf{X},\mathbf{X}^{\prime})=c(\mathrm{E}[\mathbf{X}],\mathrm{E}[ \mathbf{X}^{\prime}],\mathrm{Cov}[\mathbf{X},\mathbf{X}^{\prime}]) \tag{59}\] which implies that the correlation between \(\mathbf{u}(\mathbf{X})\) and \(\mathbf{u}(\mathbf{X}^{\prime})\) for two uncertain inputs \(\mathbf{X}\) and \(\mathbf{X}^{\prime}\) is a function of the second order belief specification about and between the two input variables. 
As an example, we propose the following extension to the Gaussian correlation function, given by Equation (57): \[\begin{split} c(\mathbf{X},\mathbf{X}^{\prime})&=\exp\left\{-\mathrm{E}[(\mathbf{X}-\mathbf{X}^{\prime})^{T}\boldsymbol{\varTheta}^{-2}(\mathbf{X}-\mathbf{X}^{\prime})]\right\}\\ &=\exp\left\{-\sum_{r=1}^{p}\left(\frac{\left(\mathrm{E}[X_{(r)}-X^{\prime}_{(r)}]\right)^{2}+\mathrm{Var}[X_{(r)}-X^{\prime}_{(r)}]}{\theta_{r}^{2}}\right)\right\}\,,\end{split} \tag{60}\] with positive-definite diagonal matrix \(\boldsymbol{\varTheta}^{-2}\) having entries \((\Theta^{-2})_{rr}=1/\theta_{r}^{2}\), and the second line obtained from standard results on the expected value of a quadratic form (Harville, 2018, pp. 200-201). This choice satisfies the desirable property that it reduces to a standard form of correlation function if \((\mathbf{X},\mathbf{X}^{\prime})=(\mathbf{x},\mathbf{x}^{\prime})\) are known. We also derive two further important results for this new correlation function in the form of two lemmas, proofs of which can be found in the supplementary material.
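As an illustration (not part of the original development), Equation (60) can be evaluated directly from the componentwise second-order specification of the two inputs, using \(\mathrm{Var}[X_{(r)}-X^{\prime}_{(r)}]=\mathrm{Var}[X_{(r)}]+\mathrm{Var}[X^{\prime}_{(r)}]-2\,\mathrm{Cov}[X_{(r)},X^{\prime}_{(r)}]\):

```python
import numpy as np

def corr_uncertain_gaussian(EX, EX2, VarX, VarX2, CovXX2, theta):
    """Uncertain-input Gaussian correlation c(X, X') of Equation (60).
    EX, EX2: E[X], E[X']; VarX, VarX2: componentwise variances;
    CovXX2: componentwise Cov[X_(r), X'_(r)]; theta: correlation lengths."""
    EX, EX2, VarX, VarX2, CovXX2, theta = map(
        np.asarray, (EX, EX2, VarX, VarX2, CovXX2, theta))
    mean_diff_sq = (EX - EX2)**2
    var_diff = VarX + VarX2 - 2.0 * CovXX2     # Var[X_(r) - X'_(r)]
    return np.exp(-np.sum((mean_diff_sq + var_diff) / theta**2))

# With all variances and covariances zero (known inputs), this recovers the
# standard Gaussian correlation of Equation (57).
```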
The difficulties encountered as a result of the overall modelling uncertainties are exacerbated in the case of Uncertain Inputs, as discussed in this section. We begin with a review of Bayes linear emulation. ### Bayes Linear Emulation We assume system behaviour of interest \({\bf y}\) is modelled by a generic deterministic simulator \({\bf f}\). The simulator has input vector \({\bf x}=(x_{(1)},\ldots,x_{(p)})\in\mathbb{X}\subseteq\mathbb{R}^{p}\), and outputs vector \({\bf f}({\bf x})=(f_{1}({\bf x}),\ldots,f_{q}({\bf x}))\in{\bf f}(\mathbb{X}) \subseteq\mathbb{R}^{q}\). We represent our beliefs about the behaviour of \({\bf f}({\bf x})\) in the following form (Goldstein & Rougier 2004): \[{\bf f}({\bf x})={\bf g}({\bf x})^{T}\mathbf{B}+{\bf u}({\bf x})\,, \tag{62}\] where \({\bf g}({\bf x})\) is an \(m\)-vector of known basis regression functions, \(B\) is an \(m\times q\) matrix of unknown regression coefficients, and \({\bf u}({\bf x})\) is a \(q\)-dimensional second-order weakly-stationary stochastic process. Let \(\mathbf{\beta}={\rm vec}(\mathbf{B})\) be an \(mq\)-vector resulting from stacking the columns of \(B\), with a generic prior specification \({\rm E}[\mathbf{\beta}]=\mathbf{\Gamma}\) and \({\rm Var}[\mathbf{\beta}]=\mathbf{\Delta}\). We also make the common assumptions that \({\rm E}[{\bf u}({\bf x})]={\bf 0}\), \({\rm Cov}[\mathbf{\beta},{\bf u}({\bf x})]=\mathbf{0}\), and covariance between \({\bf u}({\bf x})\) and \({\bf u}({\bf x}^{\prime})\) is of the separable form given by Equation (56). Consistent with previous notation, suppose \(\mathbf{F}=\mathbf{f}(\mathcal{X})=(\mathbf{f}_{1}(\mathcal{X})^{T},\ldots, \mathbf{f}_{q}(\mathcal{X})^{T})^{T}\) is an \(nq\)-vector with \(\mathbf{f}_{k}(\mathcal{X})\) being \(n\)-vectors of simulator output \(k=1,\ldots,q\) run at each row of the \(n\times p\) design matrix \(\mathcal{X}=(\mathbf{x}^{(1)},\ldots,\mathbf{x}^{(n)})^{T}\). We can adjust our second-order prior belief specification about \(\mathbf{f}(\mathbf{x})\) across \(\mathcal{X}\) by \(\mathbf{F}\) using the Bayes linear update Equations (63) and (64), now with \(\mathbf{y}\) replaced with \(\mathbf{f}\), to obtain posterior quantities: \[\mathrm{E}_{\mathbf{F}}[\mathbf{f}(\mathbf{x})]=\mathrm{E}[ \mathbf{f}(\mathbf{x})]+\mathrm{Cov}[\mathbf{f}(\mathbf{x}),\mathbf{F}] \mathrm{Var}[\mathbf{F}]^{-1}(\mathbf{F}-\mathrm{E}[\mathbf{F}]), \tag{63}\] \[\mathrm{Var}_{\mathbf{F}}[\mathbf{f}(\mathbf{x})]=\mathrm{Var}[ \mathbf{f}(\mathbf{x})]-\mathrm{Cov}[\mathbf{f}(\mathbf{x}),\mathbf{F}] \mathrm{Var}[\mathbf{F}]^{-1}\mathrm{Cov}[\mathbf{F},\mathbf{f}(\mathbf{x}) ]\,. \tag{64}\] ### Bayes Linear Emulation with Uncertain Inputs Uncertain Input Bayes Linear Emulation (UIBLE) is a computationally efficient extension to Bayes linear emulation in the case of uncertain inputs. We consider an emulator setup for \(\mathbf{f}\) similar to that discussed in Section 5.1. 
Following Equation (62), we choose to decompose the vector of training runs \(\mathbf{F}\) as follows: \[\mathbf{F}=\mathrm{vec}(\boldsymbol{G}\boldsymbol{B})+\mathbf{U}=\boldsymbol{W }\boldsymbol{\beta}+\mathbf{U},\] where \(\boldsymbol{G}=(\mathbf{g}(\mathbf{x}^{(1)}),\ldots,\mathbf{g}(\mathbf{x}^{(n )}))^{T}\) is an \(n\times m\) matrix of regressors at the known design points in \(\mathcal{X}\), \(\boldsymbol{W}=\boldsymbol{I}_{q}\otimes\boldsymbol{G}\), \(\boldsymbol{I}_{q}\) is a \(q\times q\) identity matrix, \(\otimes\) represents the kronecker product, \(\mathbf{U}=\mathbf{u}(\mathcal{X})\) is an \(nq\) vector of residuals, and recall \(\boldsymbol{\beta}=\mathrm{vec}(\boldsymbol{B})\) with prior specification \(\mathrm{E}[\boldsymbol{\beta}]=\boldsymbol{\Gamma}\) and \(\mathrm{Var}[\boldsymbol{\beta}]=\boldsymbol{\Delta}\). We wish to make inference about \(\mathbf{f}(\mathbf{X})\), where \(\mathbf{X}\in\mathbb{R}^{p}\) is an uncertain (random variable) input to \(\mathbf{f}\). Following Equation (62), \(\mathbf{f}(\mathbf{X})\) can be written as \[\mathbf{f}(\mathbf{X})=\mathbf{g}(\mathbf{X})^{T}\boldsymbol{B}+\mathbf{u}( \mathbf{X})=\boldsymbol{w}(\mathbf{X})\boldsymbol{\beta}+\mathbf{u}(\mathbf{ X}),\] where \(\boldsymbol{w}(\mathbf{X})=\boldsymbol{I}_{q}\otimes\mathbf{g}(\mathbf{X})^{T}\). We assume \(\mathrm{E}[\mathbf{u}(\mathbf{X})]=\mathbf{0}\) and \(\mathrm{Cov}[\boldsymbol{\beta},\mathbf{u}(\mathbf{X})]=\mathbf{0}\), these assumptions being a reasonable extension to the prior specification that may be made in the case of known inputs (Jackson 2018). We also utilise the covariance structure for \(\mathbf{u}(\mathbf{X})\) as given by Equations (58) and (59). We therefore have that \(\mathrm{E}[\mathbf{U}]=\mathbf{0}\), \(\mathrm{Var}[\mathbf{U}]=\boldsymbol{\Omega}=\boldsymbol{\Sigma}\otimes \boldsymbol{C}\) and \(\mathrm{Cov}[\boldsymbol{\beta},\mathbf{U}]=\boldsymbol{\theta}\), where we define \[\boldsymbol{C}=\left(\begin{array}{cccc}c(\mathbf{x}_{1},\mathbf{x}_{1})&c( \mathbf{x}_{1},\mathbf{x}_{2})&\cdots&c(\mathbf{x}_{1},\mathbf{x}_{n})\\ c(\mathbf{x}_{2},\mathbf{x}_{1})&c(\mathbf{x}_{2},\mathbf{x}_{2})&\cdots&c( \mathbf{x}_{2},\mathbf{x}_{n})\\ \vdots&\vdots&\ddots&\vdots\\ c(\mathbf{x}_{n},\mathbf{x}_{1})&c(\mathbf{x}_{n},\mathbf{x}_{2})&\cdots&c( \mathbf{x}_{n},\mathbf{x}_{n})\end{array}\right).\] We also define \(\boldsymbol{v}(\mathbf{X})=\boldsymbol{\Sigma}\otimes\mathbf{c}(\mathbf{X})\) and \(\mathbf{c}(\mathbf{X})=(c(\mathbf{X},\mathbf{x}_{1}),\ldots,c(\mathbf{X}, \mathbf{x}_{n}))\). We now proceed to state the adjusted belief formulae for \(\mathbf{f}(\mathbf{X})\) by \(\mathbf{F}\) in the form of two lemmas, proofs of which can be found in the supplementary material. **Lemma 5.2.1**: _The expected value of \(\mathbf{f}(\mathbf{X})\), adjusted by \(\mathbf{F}\), is given by:_ \[\mathrm{E}_{\mathbf{F}}[\mathbf{f}(\mathbf{X})]=\mathrm{E}[\boldsymbol{w}( \mathbf{X})]\,\mathrm{E}_{\mathbf{F}}[\boldsymbol{\beta}]+\boldsymbol{v}( \mathbf{X})\,\boldsymbol{\Omega}^{-1}\left(\mathbf{F}-\boldsymbol{W}\,\mathrm{E} _{\mathbf{F}}[\boldsymbol{\beta}]\right). 
\tag{65}\] **Lemma 5.2.2**: _The variance of \(\mathbf{f}(\mathbf{X})\), adjusted by \(\mathbf{F}\), is given by:_ \[\mathrm{Var}_{\mathbf{F}}[\mathbf{f}(\mathbf{X})] = \mathrm{E}[\boldsymbol{w}(\mathbf{X})\,\mathrm{Var}_{\mathbf{F}}[\boldsymbol{\beta}]\,\boldsymbol{w}(\mathbf{X})^{T}]+\mathrm{E}_{\mathbf{F}}[\boldsymbol{\beta}^{T}]\,\mathrm{Var}[\boldsymbol{w}(\mathbf{X})]\,\mathrm{E}_{\mathbf{F}}[\boldsymbol{\beta}]+\,\boldsymbol{\Sigma} \tag{66}\] \[\quad-\,\boldsymbol{v}(\mathbf{X})\,\boldsymbol{\Omega}^{-1}\,\boldsymbol{v}(\mathbf{X})^{T}+\boldsymbol{v}(\mathbf{X})\,\boldsymbol{\Omega}^{-1}\,\boldsymbol{W}\,\mathrm{Var}_{\mathbf{F}}[\boldsymbol{\beta}]\,\boldsymbol{W}^{T}\,\boldsymbol{\Omega}^{-1}\,\boldsymbol{v}(\mathbf{X})^{T}\] \[\quad-\,\mathrm{E}[\boldsymbol{w}(\mathbf{X})]\,\mathrm{Var}_{\mathbf{F}}[\boldsymbol{\beta}]\,\boldsymbol{W}\,\boldsymbol{\Omega}^{-1}\,\boldsymbol{v}(\mathbf{X})^{T}\] \[\quad-\,(\mathrm{E}[\boldsymbol{w}(\mathbf{X})]\,\mathrm{Var}_{\mathbf{F}}[\boldsymbol{\beta}]\,\boldsymbol{W}\,\boldsymbol{\Omega}^{-1}\,\boldsymbol{v}(\mathbf{X})^{T})^{T}\,.\] Specification of \(\mathrm{E}[\mathbf{w}(\mathbf{X})]\) and \(\mathrm{Var}[\mathbf{w}(\mathbf{X})]\) is straightforward for first-order linear regression functions. It is also possible for further functions of the input components, but these transformed input components will require a sensible second-order specification. Expressions for \(\mathrm{E}_{\mathbf{F}}[\mathbf{\beta}]\) and \(\mathrm{Var}_{\mathbf{F}}[\mathbf{\beta}]\) (extended derivations for which are provided in the supplementary material) are given by \[\mathrm{E}_{\mathbf{F}}[\mathbf{\beta}] = (\mathbf{W}^{T}\,\mathbf{\Omega}^{-1}\,\mathbf{W}+\mathbf{\Delta}^{-1})^{-1}(\mathbf{W}^{T}\,\mathbf{\Omega}^{-1}\,\mathbf{W}\,\hat{\mathbf{\beta}}_{GLS}+\mathbf{\Delta}^{-1}\mathbf{\Gamma}), \tag{67}\] \[\mathrm{Var}_{\mathbf{F}}[\mathbf{\beta}] = (\mathbf{W}^{T}\,\mathbf{\Omega}^{-1}\,\mathbf{W}+\mathbf{\Delta}^{-1})^{-1}, \tag{68}\] where \[\hat{\mathbf{\beta}}_{GLS}=\mathbf{I}_{q}\otimes(\mathbf{G}^{T}\,\mathbf{C}^{-1}\,\mathbf{G})^{-1}\mathbf{G}^{T}\,\mathbf{C}^{-1}\,\mathbf{F}. \tag{69}\] As for the common known input case, vague priors on \(\mathbf{\beta}\), which we define to mean that the eigenvalues of \(\mathbf{\Delta}\) tend to \(\infty\) so that the prior has negligible effect on the posterior, result in \(\mathrm{E}_{\mathbf{F}}[\mathbf{\beta}]=\hat{\mathbf{\beta}}_{GLS}\) and \(\mathrm{Var}_{\mathbf{F}}[\mathbf{\beta}]=(\mathbf{W}^{T}\,\mathbf{\Omega}^{-1}\,\mathbf{W})^{-1}\). Assuming an appropriate correlation structure across any possible specification of the set \(\{\mathrm{E}[\mathbf{X}],\mathrm{E}[\mathbf{X}^{\prime}],\mathrm{Cov}[\mathbf{X},\mathbf{X}^{\prime}]\}\), the results of Lemmas 5.2.1 and 5.2.2 can be used to provide a second-order approximation of the output (\(\mathrm{E}_{\mathbf{F}}[\mathbf{f}(\mathbf{X})]\) and \(\mathrm{Var}_{\mathbf{F}}[\mathbf{f}(\mathbf{X})]\)) of any simulator at random variable input \(\mathbf{X}\) for which a second-order belief specification \(\{\mathrm{E}[\mathbf{X}],\mathrm{Var}[\mathbf{X}]\}\) is itself provided. ### _Emulation of Simulator Networks_ An application of UIBLE is that of the Bayes linear emulation of simulator networks. Complex systems can often be most appropriately modelled as a network of simpler _component_ simulators, that together form a _composite_ simulator of the entire system of interest.
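Before turning to the epidemiological example, a minimal sketch (ours, for illustration only) shows how the adjusted expectation of Lemma 5.2.1 might be assembled for a single output (\(q=1\)) under a vague prior on \(\boldsymbol{\beta}\), so that \(\mathrm{E}_{\mathbf{F}}[\boldsymbol{\beta}]\) reduces to the generalised least squares estimate of Equation (69). The training design is taken as known, only the prediction input is uncertain, and `corr_uncertain_gaussian` is the illustrative correlation function sketched in Section 4; the adjusted variance follows analogously from Lemma 5.2.2.

```python
import numpy as np

def uible_adjusted_expectation(EX_new, VarX_new, x_train, F, g, Eg_new, theta):
    """Adjusted expectation of Lemma 5.2.1, single output, vague prior on beta.
    x_train: known n x p training design; F: n-vector of training outputs;
    g: regressor function for known inputs; Eg_new: E[g(X)] for the uncertain
    prediction input X with componentwise moments (EX_new, VarX_new)."""
    zero = np.zeros_like(x_train[0])
    # Correlations over the known training design (Equation (57)).
    C = np.array([[corr_uncertain_gaussian(xi, xj, zero, zero, zero, theta)
                   for xj in x_train] for xi in x_train])
    # Correlations between the uncertain input X and each known training input.
    c_vec = np.array([corr_uncertain_gaussian(EX_new, xj, VarX_new, zero, zero, theta)
                      for xj in x_train])
    G = np.array([g(x) for x in x_train])                   # n x m regressor matrix
    beta_gls = np.linalg.solve(G.T @ np.linalg.solve(C, G),
                               G.T @ np.linalg.solve(C, F)) # Equation (69), q = 1
    resid = F - G @ beta_gls
    # Equation (65); for q = 1 the sigma^2 factors in v(X) and Omega cancel.
    return Eg_new @ beta_gls + c_vec @ np.linalg.solve(C, resid)
```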
One simple, but important, example from epidemiology, that will be considered further in Section 5.4, combines an atmospheric Anthrax dispersion simulator (Legrand et al., 2009), labelled \(d(\cdot)\), with a dose-response (DR) simulator (Groer, 1978), labelled \(\rho(\cdot)\), in a simple _chain_ network; see Figure 1. The composite dispersion dose-response (DDR) simulator \(h=\rho(d(\mathbf{z}))\) models the overall process, where \(\mathbf{z}\) can be viewed as the input to \(d\) or \(h\). Utilising simulator networks for uncertainty quantification is challenging, largely due to the variety of sources of uncertainty for each individual component simulator (Goldstein et al., 2013) and the necessity of propagating that uncertainty through the network. The issues surrounding computational expense, discussed in the introductory paragraph to Section 5, are also exacerbated, requiring use of emulation. A key question in such scenarios is whether combining emulators for the component simulators within a network can result in more powerful approximations than emulating the network as a single composite simulator. The output of a Bayes linear emulator, or UIBLE, as given by Equations (63) and (64), or (65) and (66) respectively, is a second order belief specification across the output components of \(\mathbf{f}(\mathbf{x})\), which can subsequently be used as the second order belief specification of the input to a subsequent simulator or emulator. As a result, UIBLE can be used to approximate arbitrarily large networks of simulators, where the random input \(\mathbf{X}\) to one simulator is taken to have a second-order belief specification arising from a previous emulator. Emulation of simulator networks has been addressed elsewhere in the literature. Kyzyurova et al. (2018) proposed coupling two simulators by linking independently developed Gaussian process emulators of the simulators. Their motivation arose from potentially having _separate_ training runs for two simulators \(\mathbf{f}^{1}=\textit{bent}\) and \(\mathbf{f}^{2}=\textit{puff}\), where _bent_ simulates volcanic ash plumes arising from a vent and _puff_ simulates ash dispersion. As a result, direct emulation (similar to that discussed in Section 5.1) of the composite simulator defined by the chain is not possible. For specific Gaussian process forms, they derived closed form expressions for the overall mean and variance arising from linking the two emulators and applied these quantities within a normal distribution approximation for the composite emulator. Subsequently, Ming and Guillas (2021) extended the work of emulating coupled simulators to much larger networks, also deriving closed-form mean and variance expressions when using a class of Matern correlation functions. The availability in this literature of second-order posterior statistics for the emulation of simulator networks, but the lack of a closed-form distribution, naturally suggests the use of Bayes linear approaches in the same context, hence the application of UIBLE. Emulation, particularly Gaussian process emulation, of simulator networks can also be compared to emulation using deep Gaussian processes (Damianou and Lawrence 2013, Dunlop et al. 2018). Deep Gaussian processes arise from belief networks about simulator behaviour based on Gaussian process mappings, such that layers of Gaussian process latent variables exist between simulator input and output, these being marginalised variationally (see, for example, Titsias and Lawrence 2010).
Whilst the two approaches are similar, the intermediate variables in a simulator network represent physical system properties, which aids the construction and modelling of the emulators for each of the component processes. Direct use of a deep Gaussian process for the entire network will not exploit this additional information. Linking deep Gaussian processes of component simulators can make use of the advantages of both observable and latent variables (Ming et al. 2022); however, extension of such strategies to the Bayes linear paradigm, where one is usually concerned with belief specification over observable quantities, is an area for future research.

### Application to a DDR Chain of Simulators

In this section, we apply UIBLE to an important example from epidemiology which combines an atmospheric Anthrax dispersion simulator (Legrand et al. 2009), labelled \(d(\cdot)\), with a dose-response (DR) simulator (Groer 1978), labelled \(\rho(\cdot)\), in a simple _chain_ network; see Figure 1. The dispersion simulator models the spread of a released biological agent across a given spatial domain, with input parameters of interest corresponding to the physical quantities wind speed (\(z_{WS}\)), wind direction (\(z_{WD}\)) and source mass (\(z_{SM}\)). Simulator outputs \(d(\mathbf{z})\) represent dose at each location across the domain; however, we consider a single spatial location output for illustrative purposes. Due to the behaviour of \(d(\cdot)\), we chose to emulate a transformation of the output, namely \(f^{1}(\mathbf{x}_{1})=\log(d(\mathbf{x}_{1})+1)\), treating this transformed function \(f^{1}(\cdot)\) as the first simulator of the network. The DR simulator takes dose, \(x\), as input, and outputs casualties, \(\rho(x)\), as a proportion of the population at a particular spatial location. However, to be consistent with \(f^{1}(\cdot)\), we consider the second simulator to be \(f^{2}(x_{2})=\rho(\exp(x_{2})-1)\), so that \(h(\mathbf{z})=f^{2}(f^{1}(\mathbf{z}))=\rho(d(\mathbf{z}))\). We also note that whilst \(d(\cdot)\) is computationally expensive, the dose-response model \(\rho(\cdot)\) is not; however, we emulate both simulators to demonstrate the efficacy of our methods. Our methods are also applicable and effective when only a subset of the simulators in a network require emulation.

Figure 1: Graphical representation of the DDR network of simulators \(h(\mathbf{z})=\rho(d(\mathbf{z}))\).

As \(f^{2}(\cdot)\) here is straightforward to emulate, our application also effectively demonstrates the use of the methods for this special case. The simplicity of this second simulator additionally highlights a key benefit of the emulation of simulator networks methodology in general, since if the composite simulator is emulated as a single model, neither the computational efficiency of running the second simulator nor the simplicity of the behaviour of this simulator can be exploited. The composite simulator \(h=f^{2}\circ f^{1}=\rho\circ d\) takes wind speed, wind direction and source mass as input \(\mathbf{z}\), and directly outputs a proportion of casualties \(h(\mathbf{z})\). An expanded version of the DAG in Figure 1 shows the links between the original simulators \(d\) and \(\rho\), their inputs, outputs and the corresponding physical quantities. The primary interest of decision makers is the impact of release conditions on the proportion of casualties. This assessment requires linking the two component simulators, each of which implements modelling from two different groups of experts.
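To make the idea of linking component emulators concrete, the sketch below pushes a second-order belief specification through a two-link chain. It does not reproduce the UIBLE adjustment of Equations (65) and (66); instead, the propagation through the second emulator uses a crude three-point moment-matching rule as a stand-in, and the two "emulators" are invented analytic placeholders rather than fitted emulators of \(d(\cdot)\) and \(\rho(\cdot)\). The point is purely structural: the (expectation, variance) pair produced by the first emulator becomes the uncertain-input specification for the second.

```python
import numpy as np

# Invented analytic placeholders for the two component emulators in the chain.
# Each returns an (adjusted expectation, adjusted variance) pair at a known input.
def emulator_f1(z):
    """Toy belief about f1(z) = log(dose + 1) at known release condition z."""
    return 2.0 + 0.8 * np.tanh(z), 0.05 + 0.02 * z**2

def emulator_f2(x):
    """Toy belief about f2(x) = proportion of casualties given log-dose x."""
    return 1.0 / (1.0 + np.exp(-(x - 2.5))), 0.01

def propagate(emulator, mean_x, var_x):
    """Crude stand-in for an uncertain-input emulator: evaluate the emulator at
    three sigma points of the uncertain input and moment-match the results."""
    pts = mean_x + np.array([-1.0, 0.0, 1.0]) * np.sqrt(var_x)
    wts = np.array([1.0, 4.0, 1.0]) / 6.0
    means = np.array([emulator(p)[0] for p in pts])
    vars_ = np.array([emulator(p)[1] for p in pts])
    m = wts @ means
    v = wts @ vars_ + wts @ (means - m) ** 2   # emulator variance + input-induced spread
    return m, v

z = 0.4                                   # known release condition (scaled)
m1, v1 = emulator_f1(z)                   # belief about the intermediate log-dose
m2, v2 = propagate(emulator_f2, m1, v1)   # belief about casualties, input now uncertain
print(f"intermediate log-dose: {m1:.3f} +/- {np.sqrt(v1):.3f}")
print(f"proportion casualties: {m2:.3f} +/- {np.sqrt(v2):.3f}")
```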
To address the question of whether emulating individual simulators in the network is preferable to a single emulator of the composite simulator, we will compare direct Bayes linear emulation of \(h(\cdot)\) with application of UIBLE to \(f^{2}(\cdot)\) given standard Bayes linear emulation of \(f^{1}(\cdot)\). We proceeded to construct Bayes linear emulators for each of the component simulators \(f^{1}\) and \(f^{2}\), as well as the composite simulator \(h\). Ranges of interest of the inputs to simulator \(f^{1}\) (and thus \(h\)) are \((z_{WD},z_{WS},z_{SM})\in[37,63]^{\circ}\times[1,150]\mathrm{ms}^{-1}\times[0.001,1]\mathrm{kg}\), each of which were scaled to \([-1,1]\) for the purposes of our analysis. We constructed a training point design for \(f^{1}\) and \(h\) using a maximin Latin hypercube (McKay et al., 1979; Currin et al., 1991) of size 50 across the three input dimensions. In contrast, simulator \(f^{2}\) is one-dimensional, thus the need for fewer training points, so we take a random sample of 20 points from a uniform distribution. This idea of using independent space-filling designs for the construction of each of the two emulators is similar to that proposed in Kyzyurova et al. (2018). More sophisticated design strategies are discussed in Section 6, however, yield much scope for future research. In order to explore the effects of design size, we subsequently repeated the analysis using 30 training points for each simulator, this following the ad-hoc \(10d\) rule-of-thumb (Loeppky et al., 2009). It is correct to use 30 (as opposed to 10) training points even for \(f^{2}\), since it is the power of the component-wise emulation of simulator networks that permits separation of the 1-dimensional emulator from the 3-dimensional composite simulator. For each of the emulators for \(f^{1},f^{2}\) and \(h\), we assumed a Gaussian correlation function, as given by Equation (57), along with a first-order polynomial mean function. We represent the scalar variance parameter and correlation length vectors as \(\sigma_{1}^{2},\sigma_{2}^{2},\sigma_{h}^{2}\) and \(\boldsymbol{\theta}_{1},\theta_{2},\boldsymbol{\theta}_{h}\) respectively. We fit these parameters using maximum likelihood for each emulator, this permitting a fair comparison between the emulation methods presented. It should be noted that whilst it can be argued that use of maximum likelihood lies outside of the Bayes linear paradigm in which we present the methodology of this article, it provides a useful tool for obtaining valid emulators. Alternative approaches, such as making use of cross-validation, might also be used. Given the component emulators for \(f^{1}\) and \(f^{2}\), we can then combine them using UIBLE to yield chained emulators for \(h\). Figures 2 and 3 show plots, for the case of 50 and 30 training points respectively, of adjusted expectation (red points) \(\pm 3\) standard deviations for each of a set of diagnostic points (indexed along the x-axis according to increasing size of the simulator output), with the simulator output value given as a green point if falling within the \(\pm 3\) standard deviation error bar, and black otherwise. The input designs for these diagnostic runs were constructed in the same manner as the training run designs. 
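For concreteness, the following sketch shows the two ingredients just described: a maximin Latin hypercube training design found by simple random search, and a maximum-likelihood fit of the variance and correlation-length parameters of a Gaussian correlation function. The simulator \(f\) below is an invented placeholder, the mean function is reduced to a constant for brevity, and nothing here is tied to the dispersion or dose-response simulators.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

def latin_hypercube(n, d):
    """One random Latin hypercube of n points on [-1, 1]^d."""
    strata = np.column_stack([rng.permutation(n) for _ in range(d)])
    return 2.0 * (strata + rng.uniform(size=(n, d))) / n - 1.0

def maximin_lhs(n, d, tries=200):
    """Keep the candidate hypercube with the largest minimum pairwise distance."""
    best, best_score = None, -np.inf
    for _ in range(tries):
        x = latin_hypercube(n, d)
        dists = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
        score = dists[np.triu_indices(n, k=1)].min()
        if score > best_score:
            best, best_score = x, score
    return best

def gauss_corr(x1, x2, theta):
    """Gaussian correlation function with per-dimension correlation lengths theta."""
    diff = (x1[:, None, :] - x2[None, :, :]) / theta
    return np.exp(-np.sum(diff**2, axis=-1))

def neg_log_lik(log_params, x, y):
    """Negative log likelihood of a constant-mean, Gaussian-correlation emulator."""
    sigma2, theta = np.exp(log_params[0]), np.exp(log_params[1:])
    k = sigma2 * gauss_corr(x, x, theta) + 1e-8 * np.eye(len(x))
    resid = y - y.mean()
    _, logdet = np.linalg.slogdet(k)
    return 0.5 * (logdet + resid @ np.linalg.solve(k, resid) + len(y) * np.log(2 * np.pi))

f = lambda x: np.exp(-np.sum(x**2, axis=1)) + 0.5 * x[:, 0]   # invented placeholder simulator

x_train = maximin_lhs(n=50, d=3)
y_train = f(x_train)
fit = minimize(neg_log_lik, x0=np.zeros(4), args=(x_train, y_train), method="Nelder-Mead")
print("fitted sigma^2 and correlation lengths:", np.round(np.exp(fit.x), 3))
```

The same fitted parameters would then be plugged into the adjusted expectation and variance expressions to give emulator predictions at new inputs.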
Figure 2: Adjusted expectation (red points) \(\pm 3\) standard deviations for each of the diagnostic points (indexed along the x-axis according to increasing size of the simulator output) when 50 simulator training points were used for \(f^{1}\) and \(h\), and 20 were used for \(f^{2}\), with the simulator output value given as a green point if falling within the \(\pm 3\) standard deviation error bar, and black otherwise.

Figure 3: Adjusted expectation (red points) \(\pm 3\) standard deviations for each of the diagnostic points (indexed along the x-axis according to increasing size of the simulator output) when 30 simulator training points were used for \(f^{1}\), \(f^{2}\) and \(h\), with the simulator output value given as a green point if falling within the \(\pm 3\) standard deviation error bar, and black otherwise.

In addition to the plots, Table 1 shows the Mean Absolute Standardised Prediction Error (MASPE): \[\frac{1}{n}\sum_{k=1}^{n}\frac{|f(\mathbf{x}^{(k)})-\mu_{f}(\mathbf{x}^{(k)})|}{\sqrt{\nu_{f}(\mathbf{x}^{(k)})}}, \tag{70}\] the Root Mean Squared Prediction Error (RMSPE) (Bastos & O'Hagan 2008): \[\sqrt{\frac{1}{n}\sum_{k=1}^{n}(f(\mathbf{x}^{(k)})-\mu_{f}(\mathbf{x}^{(k)}))^{2}}, \tag{71}\] and the Mean Generalised Entropy Score (MGES), as defined by Equation (27) of Gneiting & Raftery (2007): \[-\,\frac{1}{n}\sum_{k=1}^{n}\left\{\frac{\left[f(\mathbf{x}^{(k)})-\mu_{f}(\mathbf{x}^{(k)})\right]^{2}}{\nu_{f}(\mathbf{x}^{(k)})}+\log(\nu_{f}(\mathbf{x}^{(k)}))\right\} \tag{72}\] for the diagnostic runs for each of the four emulators, with \(\mu_{f}\), \(\nu_{f}\) representing appropriate mean and variance estimators corresponding to generic simulator output \(f\). MASPE is a measure of emulator validity; heuristically we expect this value to be roughly 1 (assuming normal errors this value should be \(\sqrt{2/\pi}\)). RMSPE permits comparison of emulator accuracy. MGES is larger (better) for approximations that are both valid and precise. We can see from Figure 2a that the emulator for \(f^{1}\) is fairly accurate, with less accuracy where simulator output lies towards the lower end of the output range. The emulator for \(f^{2}\) (Figure 2b) is very accurate, reflecting the fact that emulator predictions can be taken with almost as much certainty as running the simulator itself. The direct emulator for \(h\) (Figure 2c) yields predictions with underestimated uncertainty. By comparison, the estimated uncertainty for UIBLE is perhaps slightly overestimated, but in general it yields both more appropriate MASPE values and improved MGES values. In addition, the accuracy of the predictions for the UIBLE is, on the whole, improved, this being confirmed by the RMSPE values. Looking now to Figure 3, we can see greater estimated uncertainty and lower levels of accuracy for the direct emulators of \(f^{1}\) and \(h\), as well as the UIBLE, relative to the corresponding predictions made using the emulators trained with 50 simulator runs. What is consistent for both design sizes, however, is that direct emulation underestimates the uncertainty towards the upper end of the input space, where the predictions are in addition more inaccurate than those resulting from UIBLE.
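The three diagnostic summaries of Equations (70)-(72) above are straightforward to compute; a short sketch is given below, with simulated predictions standing in for genuine emulator output.

```python
import numpy as np

def maspe(f, mu, nu):
    """Mean Absolute Standardised Prediction Error, Eq. (70)."""
    return np.mean(np.abs(f - mu) / np.sqrt(nu))

def rmspe(f, mu):
    """Root Mean Squared Prediction Error, Eq. (71)."""
    return np.sqrt(np.mean((f - mu) ** 2))

def mges(f, mu, nu):
    """Mean Generalised Entropy Score, Eq. (72); larger is better."""
    return -np.mean((f - mu) ** 2 / nu + np.log(nu))

# Illustrative diagnostic set: simulator outputs f, emulator means mu and variances nu.
rng = np.random.default_rng(2)
f_true = rng.normal(size=100)
mu_hat = f_true + rng.normal(scale=0.1, size=100)
nu_hat = np.full(100, 0.1**2)

print(f"MASPE = {maspe(f_true, mu_hat, nu_hat):.3f}  (about sqrt(2/pi) ~ 0.80 for well-calibrated normal errors)")
print(f"RMSPE = {rmspe(f_true, mu_hat):.3f}")
print(f"MGES  = {mges(f_true, mu_hat, nu_hat):.3f}")
```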
It is particularly noticeable, in both Figures 2d and 3d, that UIBLE seems to overestimate uncertainty, particularly where the simulator output lies towards the lower end of the output range. This is likely to be a consequence of the uncertainty propagation approach. UIBLE carries uncertainty from both the regression part and the covariance structure. As with standard Bayes linear emulation that uses a single correlation structure across \(\mathbb{X}\), there is some averaging of the uncertainty estimates for simulator prediction across the input space, even if the behaviour at some points is smoother than others. Use of uncertain input adaptations to non-stationary covariance structures could address this, and is an area for future research.

\begin{table}
\begin{tabular}{|c|c|c c|c c|} \hline  & & DE of \(f^{1}\) & DE of \(f^{2}\) & DE & UIBLE \\ \hline 50-50-20 & MASPE & 0.850 & 0.580 & 1.676 & 0.485 \\  & RMSPE & 0.520 & 0.00031 & 0.0308 & 0.0223 \\  & MGES & 0.343 & 15.161 & 3.960 & 6.616 \\ \hline  & MASPE & 0.798 & 0.987 & 2.075 & 0.754 \\ 30-30-30 & RMSPE & 0.616 & 0.00025 & 0.0534 & 0.0379 \\  & MGES & 0.111 & 15.176 & -0.019 & 5.683 \\ \hline \end{tabular}
\end{table}
Table 1: The MASPE, RMSPE and MGES for each of the four approximations and two design sizes discussed in Section 5.4. MASPE and RMSPE are smaller-the-better quantities; MGES is larger-the-better.

To summarise, we feel that the results presented give evidence for utilising UIBLE for linking emulators of component simulators in a network over using a direct emulator of the composite simulator in many cases.

## 6 Conclusion

We have presented novel general Bayes linear methodology for the analysis of statistical models with uncertain inputs. This has been achieved by extending commonly-employed second-order modelling assumptions to be applicable in the context of uncertain inputs. Section A.2 demonstrated the applicability of the methodology within the context of a regression model with correlated error structure in the case of known inputs. By considering adaptations of the constituent components of the prior correlation structure, we developed an appropriate prior covariance structure which could be adjusted under observations (at uncertain inputs) of the modelled electrolysis extraction process over time. An important feature of the presented adapted covariance structures is the extended homogeneity of the error structure across possible specifications of the set \(\{\mathrm{E}[\mathbf{X}],\mathrm{E}[\mathbf{X}^{\prime}],\mathrm{Cov}[\mathbf{X},\mathbf{X}^{\prime}]\}\). This homogeneity may result in conservative approximations in the sense of effectively overestimating variances \(\mathrm{Var}[\mathbf{y}(\mathbf{X})]\), and underestimating covariances \(\mathrm{Cov}[\mathbf{y}(\mathbf{X}),\mathbf{y}(\mathbf{X}^{\prime})]\) for \(\mathbf{X}\neq\mathbf{X}^{\prime}\), relative to the expected values of the corresponding covariances, that is \(\mathrm{E}_{\mathbf{X},\mathbf{X}^{\prime}}[\mathrm{Cov}[\mathbf{y}(\mathbf{X}),\mathbf{y}(\mathbf{X}^{\prime})]]\), were we to assume a probability distribution over \(\mathbf{X}\) and \(\mathbf{X}^{\prime}\) and integrate out. Such approximations can enter either by conservatively optimising over introduced parameters (such as \(p\) in Equation (103) whilst establishing \(\mathrm{Cov}[J(T),J(T^{\prime})]\)), or by making direct statements via inequalities (such as in (117) whilst establishing \(\mathrm{Cov}[H(T),H(T^{\prime})]\)).
Whilst practically making little difference, it is perhaps important to bear in mind that whilst the former approximation resolves to the expression for \(\mathrm{Cov}[J(t),J(t^{\prime})]\) in the case that \(\mathrm{Var}[T]=\mathrm{Var}[T^{\prime}]=0\), the latter approximation does not have such resolution. We justify these covariance structures as useful when we are unwilling or unable to meaningfully specify a probability distribution over the inputs \(\mathbf{X}\) and \(\mathbf{X}^{\prime}\). Whilst perhaps deemed conservative, there are many statistical models for which it can be viewed as preferable to underestimate resolved uncertainty in a Bayes linear paradigm than to overestimate resolved uncertainty by arbitrary specification of too concentrated a probability distribution over the inputs, leading to an underestimation of adjusted uncertainty for future inputs, perhaps leading to real-world decision-making with dire consequences. Having said this, less conservative covariance structures may be available via exploration of heterogeneous covariance structures over the space of possible specifications \(\{\mathrm{E}[\mathbf{X}],\mathrm{E}[\mathbf{X}^{\prime}],\mathrm{Cov}[ \mathbf{X},\mathbf{X}^{\prime}]\}\) for \(\mathbf{X},\mathbf{X}^{\prime}\), this being an area for future research. In Section 5, we developed UIBLE, which is a direct extension of Bayes linear emulation methodology to the case of training emulators with, and predicting for, simulator output from uncertain inputs. UIBLE is a computationally efficient approximation of simulator output given a second-order belief specification regarding the input, with each evaluation being akin to a run of a standard Bayes linear emulator. The modelling assumptions are pragmatic, but the resulting emulator can, and should, be assessed using diagnostic summaries and plots for overall adequacy as anyway necessary for a standard Bayes linear emulator (or indeed any approximating statistical model). The specific application example which we presented was for component-wise emulation of a chain or network of simulators. In this case, the input to the emulator for the dose-response simulator was uncertain as a result of it being the emulated output of the dispersion simulator. Such networks of simulators are common, both explicitly as here, and given the fact that many complex simulators, for example climate simulators, are usually made up of smaller components which arguably form a network. The potential benefits in many scenarios of linking emulators of component-simulators in contrast to direct emulation of the composite simulator is exemplified through the results of Section 5.4, along with the results of Kyzyurova et al. (2018) and Ming and Guillas (2021), albeit with these works focusing on Gaussian process emulation rather than Bayes linear emulation. In some cases, the links between the simulators in a network, and particularly those between the outputs of an earlier simulator and the inputs of a later simulator, may not be directly compatible. In such cases, these differences are a potential source of model discrepancy, which should be taken into account, and an area for future research, however, we have assumed a direct correspondence within this article to aid the illustration of our methodology. There are future directions for UIBLE in the context of the emulation of a network of simulators. 
One example would be the consideration of suitable adaptations to additional covariance structures, such as the Matern correlation function (Matern, 1947; Paulo, 2005), or additional adaptations for correlation functions used for the emulation of stochastic simulators with uncertain inputs. In this work, the two emulators were constructed independently, before being linked together to form an approximation to the composite simulators. However, both design and parameter estimation for all component emulators could be considered simultaneously, and provide much scope for future research. The issue regarding design has been partially addressed in Ming and Guillas (2021), where it is correctly highlighted that ignorance of structural dependence cause unnecessary refinements of component emulators that are insignificant to the global output. They propose a variance-based adaptive design, which selects at each iteration first the component emulator with greatest average variance contribution to the linked emulator, before selecting the single design point for this emulator based on individual contribution to the linked emulator. Whilst shown to be effective in comparison to the non-sequential design strategy of Kyzyurova et al. (2018), it should be noted that the emulator with greatest average variance contribution may not contain the optimal simulator run across all component simulators for reducing the targeted aspects of linked emulator variance, for example, average or maximal variance. A similar design approach based on predictive means in the Bayes linear paradigm, which may get around this issue, is outlined as follows. For each simulator \(f^{i}\), locate input \(\mathbf{x}_{i}\) with (near-)optimal predicted composite emulator variance reduction by constructing the relevant component emulator under the assumption that \(f^{i}(\mathbf{x}_{i})=\mathrm{E}[f^{i}(\mathbf{x}_{i})]\), and hence \(\mathrm{Var}[f^{i}(\mathbf{x}_{i})]=0\), for a large range of possible \(\mathbf{x}_{i}\). Exploration over possible \(\mathbf{x}_{i}\) may involve a large space-filling design, or more likely an efficient optimisation algorithm, for each component emulator. Once the optimal point for each simulator is found, select the simulator for which the component optimal composite emulator variance reduction is greatest. Whilst we postulate this design algorithm, we do not apply it here because it is not the main focus of this work, and thorough exploration into this topic, which makes advances on this suggestion and the algorithm of Ming and Guillas (2021), is a separate research area requiring thorough attention. It should be noted in particular that both these algorithms neglect the computational efficiency of the component simulators, which are likely to vary significantly. A realistic and applicable design algorithm, assuming it to be restricted by overall computational budget, must take this into account. In addition to situations involving networks of simulators, UIBLEs have more general application. For example, the inputs of a simulator may be uncertain as a result of wishing to make predictions about the corresponding system under a specific scenario, but not knowing the direct relationship between the real-world quantities representing the scenario and the corresponding parameter values in the model. 
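The design loop postulated above can be written down schematically as follows. The emulator objects and the predicted composite-variance calculation are invented placeholders (a real implementation would use the actual component emulators and the UIBLE variance), so this is a sketch of the structure of the loop rather than a working design tool.

```python
import numpy as np

rng = np.random.default_rng(3)

class ComponentEmulator:
    """Placeholder with only what the design loop needs."""
    def __init__(self, name, dim):
        self.name, self.dim = name, dim
        self.design = 2.0 * rng.uniform(size=(5, dim)) - 1.0   # current training inputs

    def predicted_composite_variance(self, x):
        """Invented stand-in for the composite-emulator variance remaining if
        f^i(x) were fixed at E[f^i(x)] (so that Var[f^i(x)] = 0)."""
        gap = np.min(np.linalg.norm(self.design - x, axis=1))
        return np.exp(-2.0 * gap)    # smallest where the new run fills the largest gap

def next_run(emulators, n_candidates=500):
    """Choose the component simulator and input with the best predicted reduction."""
    best = None
    for em in emulators:
        cands = 2.0 * rng.uniform(size=(n_candidates, em.dim)) - 1.0
        scores = np.array([em.predicted_composite_variance(c) for c in cands])
        i = int(np.argmin(scores))   # smallest remaining variance = largest reduction
        if best is None or scores[i] < best[2]:
            best = (em, cands[i], scores[i])
    return best

emulators = [ComponentEmulator("dispersion", 3), ComponentEmulator("dose-response", 1)]
em, x_new, _ = next_run(emulators)
print(f"next run: simulator '{em.name}' at input {np.round(x_new, 3)}")
```

A realistic version would additionally weight each candidate run by the computational cost of the corresponding simulator, as noted above.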
Alternatively, UIBLEs would permit efficient sensitivity analyses; several evaluations of an emulator with constant \(\mathrm{E}[\mathbf{X}]\) and varying \(\mathrm{Var}[\mathbf{X}]\) could quickly provide an indication of the influence of individual inputs on simulator output behaviour. More generally, the methodology presented in this article may fit smoothly within the context of Bayes linear Bayes graphical models and Bayes linear kinematics (Goldstein & Shaw 2004). In particular, both approaches involve specification and adjustment of our beliefs about uncertain quantities within a Bayes linear structure based on revised, or updated, beliefs about other quantities within the structure. We view this connection as an area for future research, since the inputs to the statistical model over which we are uncertain may enter into the modelling process in a non-linear way, for example, through use of correlation functions. In addition, we wish the specification of model output to hold for all possible specifications \(\{\mathrm{E}[\mathbf{X}],\mathrm{Var}[\mathbf{X}]\}\) of the input variable \(\mathbf{X}\) that we may have. To conclude, this work opens up many doors for the analysis of statistical modelling with uncertain inputs, particularly within a Bayes linear paradigm. The most appropriate belief structures for different practical applications will be situation-specific, and are themselves scope for future research, but will benefit from the foundations laid out within this article.

_Acknowledgements_ This work was supported by the Chemical and Biological Technologies Department (contract HDTRA1-17-C-0028). We are grateful to Crystalcast project members for invaluable discussions, comments, and provision of the simulators for the dispersion dose-response application. Particular thanks are due to Professor Veronica Bowman and Dr Daniel Silk (Defence Science and Technology Laboratory, UK), and Dr Daria Semochkina (University of Southampton, UK).
2307.04640
Properties of the $η_q$ leading-twist distribution amplitude and its effects to the $B/D^+ \toη^{(\prime)}\ell^+ ν_\ell$ decays
The $\eta^{(\prime)}$-mesons in the quark-flavor basis are mixtures of two mesonic states $|\eta_{q}\rangle=|\bar u u+\bar d d\rangle/\sqrt 2$ and $|\eta_{s}\rangle=|\bar s s\rangle$. In the previous work, we have made a detailed study on the $\eta_{s}$ leading-twist distribution amplitude. As a sequential work, in the present paper, we fix the $\eta_q$ leading-twist distribution amplitude by using the light-cone harmonic oscillator model for its wave function and by using the QCD sum rules within the QCD background field to calculate its moments. The input parameters of $\eta_q$ leading-twist distribution amplitude $\phi_{2;\eta_q}$ at an initial scale $\mu_0\sim 1$ GeV are then fixed by using those moments. The sum rules for the $0_{\rm th}$-order moment can also be used to fix the magnitude of $\eta_q$ decay constant, which gives $f_{\eta_q}=0.141\pm0.005$ GeV. As an application of the present derived $\phi_{2;\eta_q}$, we calculate the transition form factors $B(D)^+ \to\eta^{(\prime)}$ by using the QCD light-cone sum rules up to twist-4 accuracy and by including the next-to-leading order QCD corrections to the twist-2 part, and then fix the related CKM matrix element and the decay width for the semi-leptonic decays $B(D)^+ \to\eta^{(\prime)}\ell^+ \nu_\ell$.
Dan-Dan Hu, Xing-Gang Wu, Hai-Bing Fu, Tao Zhong, Zai-Hui Wu, Long Zeng
2023-07-10T15:36:23Z
http://arxiv.org/abs/2307.04640v2
Properties of the \(\eta_{q}\) leading-twist distribution amplitude and its effects to the \(B/D^{+}\to\eta^{(\prime)}\ell^{+}\nu_{\ell}\) decays

###### Abstract

The \(\eta^{(\prime)}\)-mesons in the quark-flavor basis are mixtures of two mesonic states \(|\eta_{q}\rangle=|\bar{u}u+\bar{d}d\rangle/\sqrt{2}\) and \(|\eta_{s}\rangle=|\bar{s}s\rangle\). In the previous work, we have made a detailed study on the \(\eta_{s}\) leading-twist distribution amplitude. As a sequential work, in the present paper, we fix the \(\eta_{q}\) leading-twist distribution amplitude by using the light-cone harmonic oscillator model for its wave function and by using the QCD sum rules within the QCD background field to calculate its moments. The input parameters of the \(\eta_{q}\) leading-twist distribution amplitude \(\phi_{2;\eta_{q}}\) at an initial scale \(\mu_{0}\sim 1\) GeV are then fixed by using those moments. The sum rules for the \(0_{\rm th}\)-order moment can also be used to fix the magnitude of the \(\eta_{q}\) decay constant, which gives \(f_{\eta_{q}}=0.141\pm 0.005\) GeV. As an application of the present derived \(\phi_{2;\eta_{q}}\), we calculate the transition form factors \(B(D)^{+}\to\eta^{(\prime)}\) by using the QCD light-cone sum rules up to twist-4 accuracy and by including the next-to-leading order QCD corrections to the twist-2 part, and then fix the related CKM matrix element and the decay width for the semi-leptonic decays \(B(D)^{+}\to\eta^{(\prime)}\ell^{+}\nu_{\ell}\).

pacs: 13.25.Hw, 11.55.Hx, 12.38.Aw, 14.40.Be

## I Introduction

The mixing of \(\eta\) and \(\eta^{\prime}\) mesons is essential to disentangle the standard model (SM) hadronic uncertainties from the new physics beyond the SM. It involves the dynamics and structure of the pseudoscalar mesons, which have two mixing modes, \(\eta-\eta^{\prime}\) and \(\eta-\eta^{\prime}-G\), both of which have important theoretical significance. These mixings are caused by the QCD anomalies and are related to the breaking of chiral symmetry. However, since the matrix element of the anomaly operator is mainly nonperturbative, it still has not been calculated reliably. One may turn to phenomenological studies to obtain useful information on the non-perturbative QCD theory [1; 2; 3]. At present, the \(\eta-\eta^{\prime}-G\) mixing mode has been studied in detail in Refs. [4; 5; 6; 7; 8; 9; 10]. As for the \(\eta-\eta^{\prime}\) mixing mode, one can investigate it by using two distinct schemes, namely the singlet-octet (SO) scheme and the quark-flavor (QF) scheme. These two schemes reflect different understandings of the essential physics and they are related by a proper rotation of an ideal mixing angle [3]. Practically, a dramatic simplification can be achieved by adopting the QF scheme [11; 12; 13]; in particular, the decay constants in the quark-flavor basis simply follow the same pattern as the state mixing due to the OZI rule. In the QF scheme, the physical meson states \(|\eta\rangle\) and \(|\eta^{\prime}\rangle\) are related to the QF basis \(|\eta_{q}\rangle=|\bar{u}u+\bar{d}d\rangle/\sqrt{2}\) and \(|\eta_{s}\rangle=|\bar{s}s\rangle\) by an orthogonal transformation [13], \[\begin{pmatrix}|\eta\rangle\\ |\eta^{\prime}\rangle\end{pmatrix} = \begin{pmatrix}\cos\phi&-\sin\phi\\ \sin\phi&\cos\phi\end{pmatrix}\begin{pmatrix}|\eta_{q}\rangle\\ |\eta_{s}\rangle\end{pmatrix}, \tag{1}\] where \(\phi\) is the mixing angle.
In the present paper, we shall adopt the QF scheme to do our analysis and to achieve a better understanding of the mixing mechanism between \(\eta\) and \(\eta^{\prime}\). The \(B(D)\to\eta^{(\prime)}\) transitions are important, since they involve \(b\to u\) and \(c\to d\) transitions and are sensitive to the CKM matrix elements \(|V_{\rm ub}|\) and \(|V_{\rm cd}|\). A more accurate determination of \(|V_{\rm ub}|\) and \(|V_{\rm cd}|\) would improve the stringency of unitarity constraints on the CKM matrix elements and provides an improved test of standard model (SM). Many measurements on \(|V_{\rm ub}|\) and \(|V_{\rm cd}|\) have been done according to various decay channels of \(B(D)\)-mesons [14; 15; 16; 17; 18; 19; 20; 21; 22; 23]. Compared with the non-leptonic \(B(D)\)-meson decays, the semi-leptonic decays \(D^{+}\to\eta^{(\prime)}\ell^{+}\nu_{\ell}\)[24; 25; 26; 27] and \(B^{+}\to\eta^{(\prime)}\ell^{+}\nu_{\ell}\)[28; 29; 30; 31; 14] are much simpler with less non-perturbative effects and can serve as helpful platforms for exploring the differences among various mechanisms. As key components of the \(B(D)\to\eta^{(\prime)}\) semileptonic decays, the \(B(D)\to\eta^{(\prime)}\) transition form factors (TFFs) need to be precisely calculated, whose main contribution comes from the \(|\eta_{q}\rangle\)-component (the \(|\eta_{s}\rangle\)-component gives negligible contribution here, but will have sizable contribution for \(B_{s}\) (\(D_{s}\)) decays [22]). By further assuming \(SU_{\rm F}(3)\) symmetry, the TFFs \(f_{+}^{B(D)\to\eta^{(\prime)}}\) satisfy the following relation [32; 22] \[f_{+}^{B(D)\to\eta} = \cos\phi f_{+}^{B(D)\to\eta_{q}}, \tag{2}\] \[f_{+}^{B(D)\to\eta^{\prime}} = \sin\phi f_{+}^{B(D)\to\eta_{q}}. \tag{3}\] The TFFs of the heavy-to-light transitions at large and intermediate momentum transfers are among the most important applications of the light-cone sum rules (LCSR) approach. Using the LCSR approach, a two-point correlation function will be introduced and expanded near the light cone \(x^{2}\to 0\), whose transition matrix elements are then parameterized as the light meson's light-cone distribution amplitudes (LCDAs) of increasing twists [33; 34; 35; 36]. It is thus important to know the properties of the LCDAs. In the present paper, we will adopt the light cone harmonic oscillator (LCHO) model for the \(\eta_{q}\) leading-twist LCDA \(\phi_{2;\eta_{q}}\). The LCHO model is based on the Brodsky-Huang-Lepage (BHL) prescription [37; 38]1 for the light-cone wavefunction (LCWF), which is composed of the spin-space LCWF and the spatial one. The LCDA can be obtained by integrating over the transverse momentum from the LCWF. The parameters of \(\phi_{2;\eta_{q}}\) at an initial scale will be fixed by using the derived moments of the LCDA, which will then be run to any scale region via proper evolution equation. Its moments will be calculated by using the QCD sum rules within the framework of the background field theory (BFTSR) [39; 40]. The QCD sum rules method suggests to use the non-vanishing vacuum condensates to represent the non-perturbative effects [41]. The QCD background field approach provides a description for those vacuum condensates from the viewpoint of field theory [42; 43; 44; 45]. It assumes that the quark and gluon fields are composed of the background fields and the quantum fluctuations around them. 
And the vacuum expectation values of those background fields describe the non-perturbative effects, while the quantum fluctuations represent the calculable perturbative effects. As a combination, the BFTSR approach provides a clean physical picture for separating the perturbative and non-perturbative properties of the QCD theory and provides a systematic way to derive the QCD sum rules for hadron phenomenology. At the present, the BFTSR approach has been successfully applied for dealing with the LCDAs of various mesons, some recent examples can be found in Refs.[46; 47; 48; 49; 50]. Footnote 1: The BHL-prescription is obtained in this way by connecting the equal-time wavefunction in the rest frame and the wavefunction in the infinite momentum frame, which indicates that the LCWF should be a function of the meson’s off-shell energy. The remaining parts of the paper are organized as follows. In Sec. II, we give the calculation technology for the moments of the \(\eta_{q}\) leading-twist LCDA \(\phi_{2;\eta_{q}}\) by using the BFTSR approach, give a brief introduction of the LCHO model of \(\phi_{2;\eta_{q}}\), and then give the LCSR to the semi-leptonic decay \(B(D)^{+}\to\eta_{q}\ell^{+}\nu_{\ell}\). In Sec. III, we first determine the parameters of \(\phi_{2;\eta_{q}}\). Finally, the TFF, the decay width and the CKM matrix element of the semi-leptonic decay \(B(D)^{+}\to\eta^{(\prime)}\ell^{+}\nu_{\ell}\) will be discussed. We will also compare our results with the experimental data and other theoretical predictions. Sec. IV is reserved for a summary. ## II Calculation technology Determination of the moments \(\langle\xi_{2;\eta_{q}}^{n}\rangle\) of the \(\eta_{q}\) twist-2 LCDA using the BFTSR To determine the distribution amplitude, one can calculate firstly the moment of the distribution amplitude. The \(\eta^{(^{\prime})}\) meson twist-2 LCDA is defined as [9; 10] \[\langle 0|\bar{\Psi}(z){\cal C}_{i}[z,-z]\not{\mu}\gamma_{5} \Psi(-z)|\eta^{(^{\prime})}(q)\rangle\] \[=i(z\cdot q)f_{\eta}\int\limits_{0}^{1}dxe^{i(2x-1)(z\cdot q)} \phi_{2;\eta^{(^{\prime})}}(x,\mu) \tag{4}\] where \(\Psi=(u,d,s)\) represents the triplet of the light-quark fields in the flavour space, \([z,-z]\) is the path-ordered gauge connection which ensures the gauge invariance of the operator, and \(\phi_{2;\eta^{(\prime)}}(x,\mu)\) is the twist-2 LCDA of the \(\eta\) meson with respect to the current whose flavour content is given by \({\cal C}_{i}(i=q,s)\). And we have \({\cal C}_{q}=(\sqrt{2}{\cal C}_{1}+{\cal C}_{8})/\sqrt{3}\) and \({\cal C}_{s}=({\cal C}_{1}-\sqrt{2}{\cal C}_{8})/\sqrt{3}\) with \({\cal C}_{1}={\bf 1}/\sqrt{3}\) and \({\cal C}_{8}={\lambda}_{8}/\sqrt{2}\) which are derived in singlet-octet scheme [9], where \({\lambda}_{8}\) is the standard Gell-Mann matrix and \({\bf 1}\) is \(3\times 3\) unit matrix. The \(\eta^{(^{\prime})}\)-meson twist-2 two-quark LCDAs are symmetric in the QF basis [10]. In line with the implementation of the QF scheme for the \(\eta_{q}\) twist-2 LCDA, an approximation is implicitly adopted, i.e. \(\langle 0|\bar{\Psi}(z){\cal C}_{q}[z,-z]\not{\mu}\gamma_{5}\Psi(-z)|\eta_{q} (q)\rangle=\langle 0|\bar{u}(z)[z,-z]\not{\mu}\gamma_{5}d(-z)|\pi^{-}(q)\rangle\)[9]. That is, the definition of the \(\eta_{q}\) meson is the same as that of the \(\pi^{0}\) meson. 
According to the definition, we have \[\frac{{\cal C}_{q}}{\sqrt{2}}\langle 0|[\bar{u}(0)\not{\mu} \gamma_{5}(iz\cdot\stackrel{{\leftrightarrow}}{{D}})^{n}u(0)+\bar{ d}(0)\not{\mu}\gamma_{5}(iz\cdot\stackrel{{\leftrightarrow}}{{D}})^{n}d(0)]| \eta_{q}(q)\rangle\] \[=i(z\cdot q)^{n+1}f_{\eta_{q}}\langle\xi_{2;\eta_{q}}^{n}\rangle| _{\mu}, \tag{5}\] where \(\mu\) is an initial scale. The \(\eta_{q}\) twist-2 LCDA \(\phi_{2;\eta_{q}}\) and the \(n_{\rm th}\)-order moment satisfy the equation, \[\langle\xi_{2;\eta_{q}}^{n}\rangle|_{\mu}=\int\limits_{0}^{1}dx(2x-1)^{n}\phi _{2;\eta_{q}}(x,\mu). \tag{6}\] Once the insert current is determined, the first step is to construct the correlation function (correlator) \[\Pi_{2;\eta_{q}}^{(n,0)} =i\int d^{4}xe^{iq\cdot x}\langle 0|T\{J_{n}(x),J_{0}^{\dagger}(0) \}|0\rangle\] \[=(z\cdot q)^{n+2}\Pi_{2;\eta_{q}}^{(n,0)}(q^{2}), \tag{7}\] For the QF basis, one may have two independent axial vector currents \(J_{\mu\nu}^{q}\) (\(q=u,d\)) and \(J_{\mu\nu}^{s}\). We have discussed \(J_{\mu\nu}^{s}\) in our previous work [51] for the case of \(\eta_{s}\), and in this paper, we will focus on \(J_{\mu 5}^{q}\) (\(q=u,d\)) for the present case of \(\eta_{q}\). Then the required currents in the correlator can be defined as \(J_{n}(x)=\frac{\mathcal{C}_{s}}{\sqrt{2}}[\bar{u}(x)\dot{\neq}\gamma_{5}(iz\cdot \stackrel{{\leftrightarrow}}{{D}})^{n}u(x)+\bar{d}(x)\dot{\neq} \gamma_{5}(iz\cdot\stackrel{{\leftrightarrow}}{{D}})^{n}d(x)]= \bar{u}(x)\gamma_{5}(iz\cdot\stackrel{{\leftrightarrow}}{{D}})^{ n}d(x)\)[9], where \(z^{2}=0\). It is found that even moments are non-zero and the odd moments of the LCDA are zero because of the \(G\)-parity, then only the \(n=(0,2,4,\ldots)\) will be considered. For the second step, the correlator can be calculated by inserting a complete set of intermediate hadronic states in physical region. Based on the quark-hadron duality, the hadron expression can be obtained \[\text{Im}I^{(n,0)}_{2;\eta_{q},\text{Had}}(q^{2})=\pi\delta(q^{2 }-\tilde{m}_{\eta_{q}}^{2})f_{\eta_{q}}^{2}\langle\xi^{n}_{2;\eta_{q}}\rangle_ {|\mu}\langle\xi^{0}_{2;\eta_{q}}\rangle_{|\mu}\] \[\qquad+\pi\frac{3}{4\pi^{2}(n+1)(n+3)}\theta(q^{2}-s_{\eta_{q}}) \tag{8}\] Because of the SU(3) flavour symmetric, here \(\tilde{m}_{\eta_{q}}\) is the \(\eta_{q}\) effective mass [52], \(f_{\eta_{q}}\) is the decay constant of \(\eta_{q}\) and \(s_{\eta_{q}}\) stands for the continuum threshold. For the third step, one can apply the operator product expansion (OPE) to deal with the correlator in the deep Euclidean region. It is calculable and can be carried out within the framework of BFTSR. Detailed calculation processes can be found in Ref. [51]. The fourth step is to match the hadron expression corresponding to the correlator and the results obtained by OPE using the dispersion relation. 
After applying the Borel transformation for both sides so as to suppress the unwanted contributions from the even higher-order dimensional condensates, the sum rules for the moments of the \(\eta_{q}\) leading-twist LCDA \(\phi_{2;\eta_{q}}(x,\mu)\) can be finally obtained, which takes the following form \[\langle\xi^{n}_{2;\eta_{q}}\rangle_{|\mu}\langle\xi^{0}_{2;\eta_ {q}}\rangle_{|\mu} =\frac{M^{2}}{f_{\eta_{q}}^{2}}e^{\tilde{m}_{\eta_{q}}^{2}/M^{2}} \bigg{\{}\frac{3}{4\pi^{2}(n+1)(n+3)}(1-e^{-s_{\eta_{q}}/M^{2}})+\frac{(m_{u} +m_{d})\langle\bar{q}q\rangle}{M^{4}}+\frac{\langle\alpha_{s}G^{2}\rangle}{12 \pi M^{4}}\frac{1+n\theta(n-2)}{n+1}\] \[-\frac{(m_{u}+m_{d})\langle g_{s}\bar{q}\sigma TGq\rangle}{M^{6} }\,\frac{8n+1}{18}\,+\,\frac{\langle g_{s}\bar{q}q\rangle^{2}}{M^{6}}\,\frac{ 4(2n+1)}{18}\,-\,\frac{\langle g_{s}^{3}fG^{3}\rangle}{M^{6}}\,\frac{n\theta( n-2)}{48\pi^{2}}\,+\,\frac{\langle g_{s}^{2}\bar{q}q\rangle^{2}}{M^{6}}\,\frac{2+ \kappa^{2}}{486\pi^{2}}\] \[-25(2n+1)\bigg{[}\psi\bigg{(}\frac{n+1}{2}\bigg{)}-\psi\bigg{(} \frac{n}{2}\bigg{)}+\ln 4\bigg{]}\bigg{]}\bigg{\}}\bigg{\}}. \tag{9}\] It has been shown that due to the anomalous dimension of the \(n_{\text{th}}\)-order moment grows with increment of \(n\), the contribution of the much higher moments at the large momentum transfer shall be highly suppressed [53]. Thus one may only need to calculate the first few ones. Specifically, the sum rule of the \(0_{\text{th}}\)-order moment is \[(\langle\xi^{0}_{2;\eta_{q}}\rangle_{|\mu})^{2} =\frac{M^{2}}{f_{\eta_{q}}^{2}}e^{\tilde{m}_{\eta_{q}}^{2}/M^{2}} \bigg{\{}\frac{1}{4\pi^{2}}\bigg{(}1-e^{-s_{\eta_{q}}/M^{2}}\bigg{)}\] \[+\frac{(m_{u}+m_{d})\langle\bar{q}q\rangle}{M^{4}}-\frac{(m_{u}+m_ {d})\langle g_{s}\bar{q}\sigma TGq\rangle}{18M^{6}}\] \[+\frac{\langle\alpha_{s}G^{2}\rangle}{12\pi M^{4}}+\frac{4\langle g _{s}\bar{q}q\rangle^{2}}{18M^{6}}+\frac{\langle g_{s}^{2}\bar{q}q\rangle^{2}} {M^{6}}\,\frac{2+\kappa^{2}}{486\pi^{2}}\] \[\times\bigg{[}-50\bigg{(}-\ln\frac{M^{2}}{\mu^{2}}\bigg{)}+105 \bigg{]}\bigg{\}}. \tag{10}\] Due to the particularity of quark composition of \(\eta\)-meson, we take the \(\eta_{q}\) mass appeared in Eqs. (9) and (10) as its effective mass 370 MeV [52]. We use the equation \(\langle\xi^{n}_{2;\eta_{q}}\rangle_{|\mu}=\langle\xi^{n}_{2;\eta_{q}}\rangle_ {|\mu}\langle\xi^{0}_{2;\eta_{q}}\rangle_{|\mu}/\sqrt{(\langle\xi^{0}_{2;\eta_ {q}}\rangle_{|\mu})^{2}}\) to calculate the moment [54]. The decay constant is an important input for the \(B(D)\to\eta^{(\prime)}\) TFFs, which has been calculated under different methods such as the LCSR [55], the QCD sum rules (QCD SR) [56; 57], the light-front quark model (LFQM) [58; 59; 60; 61], the lattice QCD (LQCD) [62; 63; 64; 65], the Bethe-Salpeter (BS) model [66; 67; 68], the relativistic quark model (RQM) [69; 70; 71], the non-relativistic quark model (NRQM) [72], and etc.. As for the decay constant \(f_{\eta_{q}}\), those studies shows that \(f_{\eta_{q}}\) is within a broader range \([0.130,0.168]\,\)GeV. At present, the sum rule of the \(\eta_{q}\) decay constant can be inversely obtained by using Eq.(10). The \(\langle\xi^{0}_{2;\eta_{q}}\rangle_{|\mu}\) should be normalized in a suitable Borel window, which will be treated as an important criteria for determining the \(\eta_{q}\) decay constant. ### The LCHO model for \(\eta_{q}\) twist-2 LCDA The meson's LCDA can be derived from its light-cone wave-function (LCWF) by integrating its transverse components. 
It is helpful to construct the \(\eta_{q}\) leading-twist LCWF and then get its LCDA [73; 74]. Practically, the \(\eta_{q}\) wave-function can be constructed by using the BHL prescription, and the LCHO model takes the form [54]: \[\psi_{2;\eta_{q}}(x,{\bf k}_{\perp})=\chi_{2;\eta_{q}}(x,{\bf k}_{\perp})\psi^{R }_{2;\eta}(x,{\bf k}_{\perp}), \tag{11}\] where \({\bf k}_{\perp}\) is the \(\eta_{q}\) transverse momentum, \(\chi_{2;\eta_{q}}(x,{\bf k}_{\perp})\) stands for the spin-space WF that comes from the Wigner-Melosh rotation and the spatial WF \(\psi^{R}_{2;\eta_{q}}(x,{\bf k}_{\perp})\) comes from the approximate bound-state solution in the quark model for \(\eta_{q}\), which the detailed expressions can be found in Ref [54]. Using the following relationship between the \(\eta_{q}\) twist-2 LCDA and LCWF, \[\phi_{2;\eta_{q}}(x,\mu)=\frac{2\sqrt{6}}{f_{\eta_{q}}}\int_{0}^{|{\bf k}_{ \perp}|^{2}\leq\mu^{2}}\frac{d^{2}{\bf k}_{\perp}}{16\pi^{3}}\psi_{2;\eta_{q}} (x,{\bf k}_{\perp}), \tag{12}\] and by integrating over the transverse momentum \({\bf k}_{\perp}\), one can get the twist-2 LCDA \(\phi_{2;\eta_{q}}(x,\mu)\), which can be read off, \[\phi_{2;\eta_{q}}(x,\mu)=\frac{\sqrt{3}A_{2;\eta_{q}}m_{q}\beta_{ 2;\eta_{q}}}{2\sqrt{2}\pi^{3/2}f_{\eta_{q}}}\sqrt{x\bar{x}}\varphi_{2;\eta_{q} }(x)\] \[\times\left\{{\rm Erf}\left[\sqrt{\frac{m_{q}^{2}+\mu^{2}}{8\beta _{2;\eta_{q}}^{2}x\bar{x}}}\right]-{\rm Erf}\left[\sqrt{\frac{m_{q}^{2}}{8\beta _{2;\eta_{q}}^{2}x\bar{x}}}\right]\right\}. \tag{13}\] where \(q=(u,d)\), \(m_{q}\) is the constituent quark mass. The main difference of the model parameters is the constituent quark mass, i.e. \(m_{u}=m_{d}=250\) MeV in the spin-averaged meson mass scheme [75], \(m_{u}=m_{d}=330\) GeV in the invariant meson mass scheme [76; 77] and \(m_{u}=m_{d}=300\) MeV for the simplest in Refs [78; 79]. In principle, the hadron function determines all properties of hadrons. From the relation between wavefunction and measurability, we can obtain some constraints on the general properties of hadronic function. We will constraint the parameters \(A_{2;\eta_{q}}\) and \(\beta_{2;\eta_{q}}\) according to the following two constraints. Both the pseudoscalar and vector mesons one constraint on the wavefunction is from the leptonic decay processes. The WF normalization condition provided from the process \(\eta_{q}\to\mu\nu\) \[\int_{0}^{1}dx\int\frac{d^{2}{\bf k}_{\perp}}{16\pi^{3}}\psi_{2;\eta_{q}}(x,{ \bf k}_{\perp})=\frac{f_{\eta_{q}}}{2\sqrt{6}}. \tag{14}\] The second constraint is the most natural one: the probability of finding the \(q\bar{q}\) Fock state in a meson should be not larger than 1, \[P_{\eta_{q}} = \int_{0}^{1}dx\int\frac{d^{2}{\bf k}_{\perp}}{16\pi^{3}}|\psi_{2; \eta_{q}}(x,{\bf k}_{\perp})|^{2} \tag{15}\] \[= \frac{A_{2;\eta_{q}}^{2}m_{q}^{2}}{32\pi^{2}}[\varphi_{2;\eta_{q} }(x)]^{2}\Gamma\bigg{[}0,\frac{m_{q}^{2}}{4\beta_{2;\eta_{q}}^{2}x\bar{x}} \bigg{]}.\] Since pionic twist-2 wavefunction conforms to the probability \(P_{\pi}\approx 0.3\)[74], we adopt \(P_{\eta_{q}}\approx 0.3\) to carry out the following calculation. 
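To make the structure of Eq. (13) concrete, the short script below implements \(\phi_{2;\eta_{q}}(x,\mu)\) with the Gegenbauer factor of Eq. (17) truncated at \(n=4\), fixes the overall constant by the normalisation \(\int_{0}^{1}\phi_{2;\eta_{q}}(x,\mu_{0})dx=1\) (the DA-level counterpart of Eq. (14)), and then evaluates the moments of Eq. (6) and the Gegenbauer moments of Eq. (18). The numerical values of \(m_{q}\), \(\beta_{2;\eta_{q}}\), \(B_{2}\) and \(B_{4}\) used here are illustrative placeholders of the expected order of magnitude, not the fitted parameters of Table 2.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import erf, eval_gegenbauer

# Illustrative parameter choices (placeholders, not the fitted values of Table 2).
m_q  = 0.30           # constituent light-quark mass in GeV
beta = 0.65           # harmonic parameter beta_{2;eta_q} in GeV
B2, B4 = 0.15, 0.06   # longitudinal expansion coefficients of Eq. (17)
mu0  = 1.0            # initial scale in GeV

def phi_unnorm(x):
    """x-dependence of Eq. (13): sqrt(x xbar) * varphi(x) * {Erf[...] - Erf[...]}."""
    xb = 1.0 - x
    varphi = 1.0 + B2 * eval_gegenbauer(2, 1.5, 2*x - 1) + B4 * eval_gegenbauer(4, 1.5, 2*x - 1)
    arg_hi = np.sqrt((m_q**2 + mu0**2) / (8.0 * beta**2 * x * xb))
    arg_lo = np.sqrt(m_q**2 / (8.0 * beta**2 * x * xb))
    return np.sqrt(x * xb) * varphi * (erf(arg_hi) - erf(arg_lo))

norm = quad(phi_unnorm, 0.0, 1.0)[0]
phi = lambda x: phi_unnorm(x) / norm   # enforce the normalisation of the twist-2 DA

# xi-moments of Eq. (6) and Gegenbauer moments of Eq. (18).
xi = lambda n: quad(lambda x: (2*x - 1)**n * phi(x), 0.0, 1.0)[0]
def a_n(n):
    num = quad(lambda x: phi(x) * eval_gegenbauer(n, 1.5, 2*x - 1), 0.0, 1.0)[0]
    den = quad(lambda x: 6*x*(1 - x) * eval_gegenbauer(n, 1.5, 2*x - 1)**2, 0.0, 1.0)[0]
    return num / den

print("<xi^2>, <xi^4> :", round(xi(2), 3), round(xi(4), 3))
print("a_2, a_4       :", round(a_n(2), 3), round(a_n(4), 3))
print("asymptotic DA  : <xi^2> = 1/5, <xi^4> = 3/35 (i.e. a_2 = a_4 = 0 in Eq. (19))")
```

The last line recalls the asymptotic-form limit, which provides a simple cross-check of the moment routines before illustrative model parameters are varied.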
Equivalently, one can replace the constraint (15) by the quark transverse momentum \(\langle{\bf k}_{\perp}^{2}\rangle_{\eta_{q}}\), which is measurable and is defined as [74] \[\langle{\bf k}_{\perp}^{2}\rangle_{\eta_{q}}=\int_{0}^{1}dx\int \frac{d^{2}{\bf k}_{\perp}}{16\pi^{3}}|{\bf k}_{\perp}^{2}|\psi_{2;\eta_{q}}^{ R}(x,{\bf k}_{\perp})^{2}/P_{\eta_{q}}\] \[= \int_{0}^{1}dx\frac{4\exp\left[-\frac{m_{q}^{2}}{4x\bar{x}\beta_{ 2;\eta_{q}}^{2}}\right]x\bar{x}\beta_{2;\eta_{q}}^{2}}{\Gamma\left[0,\frac{m_ {q}^{2}}{4x\bar{x}\beta_{2;\eta_{q}}^{2}}\right]}-m_{q}^{2} \tag{16}\] where the incomplete gamma function \(\Gamma[s,x]=\int_{0}^{x}t^{(s-1)}e^{-t}dt\). The function \(\varphi_{2;\eta_{q}}(x)\) determines the dominant longitudinal behavior of \(\phi_{2;\eta_{q}}(x,\mu^{2})\), which can be expanded as a Gegenbauler series as \[\varphi_{2;\eta_{q}}(x)=\left[1+\sum_{n}B_{n}\times C_{n}^{3/2}(2x-1)\right], \tag{17}\] For self-consistency, it has been found that the parameters \(B_{n}\) are close to their corresponding Gegenbauer moment, i.e. \(B_{n}\sim a_{n}\), especially for the first few ones [79; 80; 81]. The \(\eta_{q}\) meson Gegenbauer moments can be calculated by the following way \[a_{2;\eta_{q}}^{n}(\mu)=\frac{\int_{0}^{1}dx\phi_{2;\eta_{q}}(x,\mu)C_{n}^{3/ 2}(2x-1)}{\int_{0}^{1}dx6x(1-x)[C_{n}^{3/2}(2x-1)]^{2}} \tag{18}\] The Gegenbauer moments \(a_{2;\eta_{q}}^{n}(\mu)\) and the DA moments \(\langle\xi_{2;\eta_{q}}^{n}\rangle_{|\mu}\) satisfy the following relations \[\langle\xi_{2;\eta_{q}}^{2}\rangle_{|\mu} = \frac{1}{5}+\frac{12}{35}a_{2;\eta_{q}}^{2}(\mu)\] \[\langle\xi_{2;\eta_{q}}^{4}\rangle_{|\mu} = \frac{3}{35}+\frac{8}{35}a_{2;\eta_{q}}^{2}(\mu)+\frac{8}{77}a_{2; \eta_{q}}^{4}(\mu) \tag{19}\] \[\ldots\] By using the sum rules (9) of \(\langle\xi_{2;\eta_{q}}^{n}\rangle_{|\mu}\), one can determine the values of \(a_{2;\eta_{q}}^{n}(\mu)\), which then can be used to fix the values of \(B_{n}\). In the following we will adopt the given two Gegenbauer moments \(a_{2;\eta_{q}}^{2,4}\) to fix the parameters \(B_{2,4}\). ### The \(B(d)^{+}\to\eta_{q}\ell^{+}\nu_{\ell}\) TFFs using the LCSR The LCSR approach is an effective tool in determining the non-perturbative properties of hadronic states. Here and after, we use the symbol "\(H\)" to indicate the \(B(D)\)-meson for convenience. Following the LCSR approach, one should first construct a correlator with the weak current and a current with the quantum numbers of the \(H\) meson that are sandwiched between the vacuum and \(\eta_{q}\) state. More explicitly, for \(H\to\eta_{q}\), we need to calculate the correlator \[\Pi_{\mu}(p,q) = i\int d^{4}xe^{iqx}\langle\eta_{q}(p)|T\{\bar{u}(x)\gamma_{\mu}Q( x),j_{H}(0)\}|0\rangle \tag{20}\] \[= \Pi[q^{2},(p+q)^{2}]p_{\mu}+\tilde{\Pi}[q^{2},(p+q)^{2}]q_{\mu}.\] where \(j_{H}=(m_{Q}\bar{Q}i\gamma_{5}d)\) with \(Q=(b,c)\)-quark for \((B,D)\) meson, respectively. The LCSR calculation for the \(B(D)^{+}\to\eta_{q}\) TFFs is similar to the case of \(B_{s}(D_{s})\to\eta_{s}\), which has been done in Ref.[51]. In the following, we will give the main procedures for self-consistency, and the interesting reader may turn to Ref.[51] for more detail. The dual property of the correlator (20) is used to connect the two different representations in different momentum transfer regions. In the time-like region, one can insert a complete set of the intermediate hadronic states in the correlator and obtain its hadronic representation by isolating out the pole term of the lowest meson state, i.e. 
\[\Pi^{\rm had}_{\mu}(p,q)=\frac{\langle\eta_{q}(p)|\bar{u}\gamma_{ \mu}Q|H(p+q)\rangle\langle H(p+q)|\bar{Q}i\gamma_{5}q|0\rangle}{m_{H}^{2}-(p+ q)^{2}}\] \[+\sum_{\cal H}\frac{\langle\eta_{q}(p)|\bar{u}\gamma_{\mu}Q|H^{ \cal H}(p+q)\rangle\langle H^{\cal H}(p+q)|\bar{Q}i\gamma_{5}q|0\rangle}{m_{H }^{2}-(p+q)^{2}}\] \[=\Pi^{\rm had}[q^{2},(p+q)^{2}]p_{\mu}+\tilde{\Pi}^{\rm had}[q^{2},(p+q)^{2}]q_{\mu}, \tag{21}\] where the superscript "had" and "\({\cal H}\)" stand for the hadronic expression of the correlator and the continuum states of heavy meson, respectively. Here, the decay constant of \(B(D)\)-meson is defined via the equation, \(\langle H|\bar{Q}i\gamma_{5}q|0\rangle=m_{H}^{2}f_{H}/m_{Q}\), and by using the hadronic dispersion relations in the virtuality \((p+q)^{2}\) of the current in the \(B(D)\) channel, we can relate the correlator to the \(H\to\eta_{q}\) matrix element [9] \[\langle\eta_{q}(p)|\bar{u}\gamma_{\mu}Q|H(p+q)\rangle=2p_{\mu}f_{+ }^{H\to\eta_{q}}(q^{2})\\ +q_{\mu}\Big{(}f_{+}^{H\to\eta_{q}}(q^{2})+f_{-}^{H\to\eta_{q}}(q ^{2})\Big{)}. \tag{22}\] Due to chiral suppression, only the first term contributes to the semileptonic decay of \(H\to\eta_{q}\) with massless leptons in the final state. Then, the hadronic expression for the invariant amplitude can be written as \[\Pi[q^{2},(p+q)^{2}] =\frac{2m_{H}^{2}f_{H}f_{+}^{H\to\eta_{q}}(q^{2})}{[m_{H}^{2}-(p+ q)^{2}]}p_{\mu}\] \[+\int_{s_{0}}^{\infty}ds\frac{\rho^{\cal H}(q^{2},s)}{s-(p+q)^{2}}, \tag{23}\] where \(s_{0}\) is continuum threshold parameter, \(\rho^{\cal H}\) is the hadronic spectral density. In the space-like region, the correlator can be calculated by using the operator production expansion (OPE). The OPE near the light cone \(x^{2}\approx 0\) leads to a convolution of perturbatively calculable hard-scattering amplitudes and universal soft LCDAs. Since the contributions of the three-particle part is small [51], we only calculate the two-particle part here, and the corresponding matrix element is [82] \[\langle\eta_{q}(p)|\bar{u}^{i}_{\alpha}(x)d^{j}_{\beta}(0)|0 \rangle=\frac{i\delta^{ij}}{12}f_{\eta_{q}}\int\limits_{0}^{1}due^{iup\cdot x }\bigg{\{}[\not{p}\gamma_{5}]_{\beta\alpha}\phi_{2;\eta_{q}}\] \[(u)-[\gamma_{5}]_{\beta\alpha}\mu_{\eta_{q}}\phi^{p}_{3;\eta_{q}}( u)+\frac{1}{6}[\sigma_{\nu\tau}\gamma_{5}]_{\beta\alpha}p_{\nu}x_{\tau}\mu_{ \eta_{q}}\phi^{\sigma}_{3;\eta_{q}}(u)\] \[+\frac{1}{16}[\not{p}\gamma_{5}]_{\beta\alpha}x^{2}\phi_{4,\eta_ {q}}(u)-\frac{i}{2}[\not{p}\gamma_{5}]_{\beta\alpha}\int\limits_{0}^{u}\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\! 
where \(\bar{u}=(1-u)\), \(\mu_{\eta_{q}}=m_{\eta}^{2}/(m_{u}+m_{d})\), \(s(u)=(m_{Q}^{2}-\bar{u}q^{2}+u\bar{u}m_{\eta}^{2})/u\) and \(u_{0}=\big{(}q^{2}-s_{0}+m_{\eta}^{2}+\sqrt{(q^{2}-s_{0}+m_{\eta}^{2})^{2}-4m_{ \eta}^{2}(q^{2}-m_{Q}^{2})}\big{)}/2m_{\eta}^{2}\). The invariant amplitude \(F_{1}(q^{2},M^{2},s_{0})\) has been given in Ref. [51], which can be written as a factorized form of the convolutions. As will be shown below, the high-twist terms will have quite small contributions to compare with the leading-twist terms, thus we will not discuss the uncertainties caused by the different choices of the high-twist LCDAs. For convenience, we take the \(\eta_{q}\) twist-3 LCDAs \(\phi_{3;\eta_{q}}^{p}(u),\phi_{3;\eta_{q}}^{s}(u)\), and the twist-4 LCDAs \(\psi_{4;\eta_{q}}(u),\phi_{4;\eta_{q}}(u)\), together with their parameters from Ref. [10]. Using the resultant \(B(D)\to\eta^{(\prime)}\) TFFs, one can extract the CKM matrix element \(|V_{\rm cd}|\) or \(|V_{\rm ub}|\) by comparing with the predictions with the experimental data, i.e. via the following equation [83] \[\frac{\mathcal{B}(H\to\eta^{(\prime)}\ell\nu_{\ell})}{\tau(H)}=\int_{0}^{q_{ \rm max}^{2}}\frac{d\Gamma}{dq^{2}}(H\to\eta^{(\prime)}\ell\nu_{\ell}), \tag{28}\] where \(\tau(H)\) is the \(H\)-meson lifetime, and the maximum of the squared momentum transfer \(q_{\rm max}^{2}=(m_{H}-m_{\eta^{(\prime)}})^{2}\). ## III Numerical analysis ### Input parameters We adopt the following parameters to do the numerical calculation. According to the Particle Data Group (PDG) [23], we take the charm-quark mass \(m_{c}(\bar{m}_{c})=1.27\pm 0.02\), \(b\)-quark mass \(m_{b}(\bar{m}_{b})=4.18^{+0.03}_{-0.02}\) GeV; the \(\eta\), \(\eta^{\prime}\), \(D\) and \(B\)-meson masses are \(m_{\eta}=0.5478\) GeV, \(m_{\eta^{\prime}}=0.9578\) GeV, \(m_{D^{+}}=1.870\) GeV and \(m_{B^{+}}=5.279\) GeV, respectively; the lifetimes of \(D^{+}\) and \(B^{+}\) mesons are \(\tau(B^{+})=1.638\pm 0.004\) ps and \(\tau(D^{+})=1.033\pm 0.005\) ps, respectively; the current-quark-masses for the light \(u\) and \(d\)-quarks are \(m_{u}=2.16^{+0.49}_{-0.26}\) MeV and \(m_{d}=4.67^{+0.48}_{-0.17}\) MeV at the scale \(\mu=2\) GeV. As for the decay constants \(f_{B}\) and \(f_{D}\), we take \(f_{B}=0.215^{+0.007}_{-0.007}\) GeV [10] and \(f_{D}=0.142\pm 0.006\)[83]. The renormalization scale is set as the typical momentum flow \(\mu_{B}=\sqrt{m_{B}^{2}-\bar{m}_{b}^{2}}\approx 3\) GeV for \(B\)-meson decay or \(\mu_{B}\approx 1.4\) GeV for \(D\)-meson decay. We also need to know the values of the non-perturbative vacuum condensates up to dimension-six, which include the double-quark condensates \(\langle q\bar{q}\rangle\) and \(\langle g_{s}\bar{q}q\rangle^{2}\), the quark-gluon condensate \(\langle g_{s}\bar{q}\sigma TGq\rangle\), the four-quark condensate \(\langle g_{s}^{2}\bar{q}q\rangle^{2}\), the double-gluon condensate \(\langle\alpha_{s}G^{2}\rangle\) and the triple-gluon condensate \(\langle g_{s}^{3}fG^{3}\rangle\), and etc. 
We take their values as [84; 85; 86], \[\langle q\bar{q}\rangle = (-2.417^{+0.227}_{-0.114})\times 10^{-2}~{}{\rm GeV}^{3},\] \[\langle g_{s}\bar{q}q\rangle^{2} = (2.082^{+0.734}_{-0.697})\times 10^{-3}~{}{\rm GeV}^{6},\] \[\langle g_{s}\bar{q}\sigma TGq\rangle = (-1.934^{+0.188}_{-0.103})\times 10^{-2}~{}{\rm GeV}^{5},\] \[\langle g_{s}^{2}\bar{q}q\rangle^{2} = (7.420^{+2.614}_{-2.483})\times 10^{-3}~{}{\rm GeV}^{6},\] \[\langle\alpha_{s}G^{2}\rangle = 0.038\pm 0.011~{}{\rm GeV}^{4},\] \[\langle g_{s}^{3}fG^{3}\rangle \approx 0.045~{}{\rm GeV}^{6}. \tag{29}\] The ratio \(\kappa=\langle s\bar{s}\rangle/\langle q\bar{q}\rangle=0.74\pm 0.03\) is given in Ref. [85]. In order to make the calculation more accurate, all the vacuum condensates and current quark masses need to be run from their initial values at the scale \(\mu_{0}\) to the required scale by using the renormalization group equations (RGE) [54].

### The \(\eta_{q}\) decay constant and the moments \(\langle\xi_{2;\eta_{q}}^{n}\rangle\)

The continuum threshold parameter (\(s_{0}\)) and the Borel parameter \(M^{2}\) are two important parameters for the sum rules analysis. When calculating the decay constant \(f_{\eta_{q}}\), one may set its continuum threshold to be close to the squared mass of the \(\eta^{\prime}\) meson, i.e. \(s_{0}=0.95\pm 0.1~{\rm GeV}^{2}\) [56]. To determine the allowable \(M^{2}\) range, i.e. the Borel window, for the \(\eta_{q}\) decay constant, we adopt the following criteria:

* The continuum contribution is less than 30%;
* The contributions of the six-dimensional condensates are no more than 5%;
* The value of \(f_{\eta_{q}}\) is stable in the Borel window;
* The \(\langle\xi_{2;\eta_{q}}^{0}\rangle|_{\mu_{0}}\) is normalized in the Borel window, i.e. \(\langle\xi_{2;\eta_{q}}^{0}\rangle|_{\mu_{0}}=1\).

We put the curves for the decay constant \(f_{\eta_{q}}\) versus the Borel parameter \(M^{2}\) in Fig. 1, where the shaded band indicates the uncertainties from the errors of all the mentioned input parameters.

Figure 1: (Color online) The \(\eta_{q}\) decay constant \(f_{\eta_{q}}\) versus the Borel parameter \(M^{2}\), where the shaded band indicates the uncertainties from the input parameters.

The decay constant is flat in the allowable Borel window, which confirms the third criterion. Using the above four criteria and the chosen continuum threshold parameter, we put the numerical results of \(f_{\eta_{q}}\) in Table 1. As a comparison, we also present several predictions using the QCDSR and LQCD approaches. Our predictions are in good agreement with the QCDSR 2000 [56] and the LQCD 2021 [65] predictions within errors. The reason why we are slightly different from QCDSR 2000 is that their calculation only includes the contributions up to five-dimensional operators, while our present one includes the dimension-six vacuum condensate terms. Using the determined \(f_{\eta_{q}}\), we then determine the moments of its twist-2 LCDA. Similarly, several important conditions need to be satisfied before the moments of the \(\eta_{q}\) LCDA can be determined [51]. Furthermore, in order to search for a suitable Borel window for the moments, one can take criteria similar to those adopted for the traditional sum rules, i.e. keeping the dimension-six condensate's contribution to be no more than 5% and the continuum contribution to be no more than 40%. To determine the first two LCDA moments \(\langle\xi^{n}_{2;\eta_{q}}\rangle|_{\mu_{0}}\) with \(n=(2,4)\), we set the continuum contributions to be less than 35% and 40%, respectively.
We find that the allowable Borel windows for the two moments \(\langle\xi^{2,4}_{2;\eta_{q}}\rangle|_{\mu_{0}}\) are \(M^{2}\in[1.782,2.232]~{\rm GeV}^{2}\) and \(M^{2}\in[2.740,3.258]~{\rm GeV}^{2}\), respectively. Numerical results for the first two moments \(\langle\xi^{2,4}_{2;\eta_{q}}\rangle|_{\mu_{0}}\) can then be obtained, which at the initial scale \(\mu_{0}\) are \[\langle\xi^{2}_{2;\eta_{q}}\rangle|_{\mu_{0}}=0.253\pm 0.014, \tag{30}\] \[\langle\xi^{4}_{2;\eta_{q}}\rangle|_{\mu_{0}}=0.127\pm 0.010. \tag{31}\] Table 2 shows that the parameters \(B_{2}\) and \(B_{4}\) and the quark transverse momentum \(\langle{\bf k}_{\perp}^{2}\rangle_{\eta_{q}}\) increase with increasing constituent quark mass, while the harmonic parameter \(\beta_{2;\eta_{q}}\) decreases gradually. Experimentally, the average quark transverse momentum of the pion, \(\langle{\bf k}_{\perp}^{2}\rangle_{\pi}\), is approximately of order (300 MeV)\({}^{2}\)[87]. So it is reasonable to require \(\sqrt{\langle{\bf k}_{\perp}^{2}\rangle_{\eta_{q}}}\) to be of the order of a few hundred MeV [74]. For the case of \(m_{q}=300\pm 50\) MeV, we numerically obtain \(\langle{\bf k}_{\perp}^{2}\rangle_{\eta_{q}}=0.123^{+0.003}_{-0.002}~{\rm GeV}^{2}\approx(351^{+4}_{-3}~{\rm MeV})^{2}\), which is reasonable and in some sense indicates the inner consistency of all the LCHO model parameters. Moreover, by using the RGE, one can get \(\phi_{2;\eta_{q}}(x,\mu)\) at any scale \(\mu\)[54]. Fig. 3 shows the LCDA \(\phi_{2;\eta_{q}}\) at several typical scales with \(m_{q}=300\ {\rm MeV}\). At a low scale it shows a double-humped behavior; as the scale \(\mu\) increases, the shape of \(\phi_{2;\eta_{q}}\) becomes narrower, and when \(\mu\to\infty\) it tends to the single-peaked asymptotic form for light mesons, \(\phi_{\eta_{q}}^{\rm as}(x,\mu)|_{\mu\to\infty}=6x(1-x)\). We compare the LCHO model of the twist-2 LCDA \(\phi_{2;\eta_{q}}\) with other theoretical predictions in Fig. 4, which gives the results for \(\mu=\mu_{0}=1\) GeV, where the asymptotic form [88], the CZ form [89] and the behaviors given by the LCSR 2007 [9] and LCSR 2015 [10] are presented. For the LCSR 2007 result, its double-peaked behavior is caused by keeping only the first term of its Gegenbauer expansion together with the approximation \(a_{2;\eta_{q}}^{2}(\mu_{0})=0.25\)[9]. The LCDA used in LCSR 2015 [10] is close in behavior to our present one. It is obtained by using the approximation that the twist-2 LCDA \(\phi_{2;\eta_{q}}\) has the same behavior as the pion twist-2 LCDA \(\phi_{2;\pi}\), e.g. \(a_{2;\eta_{q}}^{2}(\mu_{0})=a_{2;\pi}^{2}(\mu_{0})=0.17\) and \(a_{2;\eta_{q}}^{4}(\mu_{0})=a_{2;\pi}^{4}(\mu_{0})=0.06\), which are consistent with our Gegenbauer moments within errors 2. Footnote 2: Since the twist-2 parts dominate the TFFs, this consistency also explains why our following LCSR predictions for the TFFs are close in shape to those of Ref. [10].

### The TFFs and observables for the semileptonic decay \(B(D)^{+}\to\eta^{(\prime)}\ell^{+}\nu_{\ell}\)

One of the most important applications of the \(\eta_{q}\)-meson LCDAs is the semileptonic decay \(H^{+}\to\eta^{(\prime)}\ell^{+}\nu_{\ell}\), whose main contribution in the QF scheme comes from the \(|\eta_{q}\rangle\)-component. Here \(H^{+}\) stands for \(B^{+}\) or \(D^{+}\). To derive the required \(H^{+}\to\eta^{(\prime)}\) TFFs, we take the mixing angle \(\phi=(41.2^{+0.05}_{-0.06})^{\circ}\)[51].
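For orientation, the projection onto the physical states implied by this mixing can be sketched numerically. The snippet below assumes the standard QF-scheme relations \(f_{+}^{H\to\eta}=\cos\phi\,f_{+}^{H\to\eta_{q}}\) and \(f_{+}^{H\to\eta^{\prime}}=\sin\phi\,f_{+}^{H\to\eta_{q}}\), which is our reading of Eqs. (2, 3) (not reproduced in this section); the input value for \(f_{+}^{B\to\eta_{q}}(0)\) is purely illustrative and is not a number quoted in the paper.

```python
import math

# Quark-flavor (QF) scheme: for B+ and D+ decays only the |eta_q> component
# contributes, so the physical eta and eta' TFFs follow from the eta_q TFF
# and the mixing angle phi (assumed relations, see the lead-in text).
PHI_DEG = 41.2  # central value of the mixing angle used in the text

def project_eta_tffs(f_eta_q, phi_deg=PHI_DEG):
    """Project the H -> eta_q TFF onto the physical H -> eta and H -> eta' TFFs."""
    phi = math.radians(phi_deg)
    return math.cos(phi) * f_eta_q, math.sin(phi) * f_eta_q

# Illustrative input (not a value quoted in the paper): a hypothetical
# f_+^{B -> eta_q}(0), chosen only to demonstrate the projection.
f_B_eta, f_B_etap = project_eta_tffs(f_eta_q=0.193)
print(f"f_+^(B->eta)(0)  ~ {f_B_eta:.3f}")
print(f"f_+^(B->eta')(0) ~ {f_B_etap:.3f}")
print(f"ratio = tan(phi) ~ {f_B_etap / f_B_eta:.3f}")
```

The ratio of the two projected TFFs is simply \(\tan\phi\approx 0.88\), close to the ratio of the central values quoted in Table 3 below (\(0.128/0.145\approx 0.88\) for the \(B\) channel).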
The continuum threshold \(s_{0}^{H\to\eta^{(\prime)}}\) and the Borel parameter \(M^{2}\) are two important parameters for the LCSR of the TFFs. As is the usual choice when treating heavy-to-light TFFs, we set the continuum threshold near the squared mass of the first excited state of the \(D\) or \(B\) meson, accordingly. To fix the Borel window for the TFFs, we require the contribution of the continuum states to be less than 30%. The determined values agree with Refs. [10; 90], and we adopt the following values in our discussion \[s_{0}^{D\to\eta}=7.0\pm 0.5\ {\rm GeV}^{2}, M_{D\to\eta}^{2}=3.0\pm 0.5\ {\rm GeV}^{2}.\] \[s_{0}^{D\to\eta^{\prime}}=7.0\pm 0.5\ {\rm GeV}^{2}, M_{D\to\eta^{\prime}}^{2}=3.0\pm 0.5\ {\rm GeV}^{2}.\] \[s_{0}^{B\to\eta}=37.0\pm 1.0\ {\rm GeV}^{2}, M_{B\to\eta}^{2}=18.0\pm 2.0\ {\rm GeV}^{2}.\] \[s_{0}^{B\to\eta^{\prime}}=37.0\pm 1.0\ {\rm GeV}^{2}, M_{B\to\eta^{\prime}}^{2}=18.0\pm 2.0\ {\rm GeV}^{2}.\]

\begin{table} \begin{tabular}{l l l} \hline & \(f_{+}^{B\to\eta}(0)\) & \(f_{+}^{B\to\eta^{\prime}}(0)\) \\ \hline This work (LCSR) & \(0.145^{+0.009}_{-0.010}\) & \(0.128^{+0.008}_{-0.009}\) \\ LCSR 2007 [9] & \(0.229\pm 0.035\) & \(0.188\pm 0.028\) \\ LCSR 2015 [10] & \(0.168^{+0.041}_{-0.047}\) & \(0.130^{+0.036}_{-0.032}\) \\ pQCD [92] & \(0.147\) & \(0.121\) \\ CLF [93] & \(0.220\pm 0.018\) & \(0.180\pm 0.016\) \\ LCSR 2013 [91] & \(0.238\pm 0.046\) & \(0.198\pm 0.039\) \\ \hline & \(f_{+}^{D\to\eta}(0)\) & \(f_{+}^{D\to\eta^{\prime}}(0)\) \\ \hline This work (LCSR) & \(0.329^{+0.021}_{-0.015}\) & \(0.294^{+0.021}_{-0.015}\) \\ LCSR 2015 [10] & \(0.429^{+0.165}_{-0.141}\) & \(0.292^{+0.113}_{-0.104}\) \\ BES-III 2020 [27] & \(0.39\pm 0.04\pm 0.01\) & - \\ LFQM [94] & \(0.39\) & \(0.32\) \\ CCQM [95] & \(0.36(5)\) & \(0.36(5)\) \\ LCSR 2013 [91] & \(0.552(51)\) & \(0.458(105)\) \\ \hline \end{tabular} \end{table} Table 3: Typical theoretical predictions for the TFFs \(f_{+}^{H\to\eta^{(\prime)}}(0)\) at the large recoil point \(q^{2}=0\).

Figure 4: (Color online) The \(\eta_{q}\) meson twist-2 LCDA \(\phi_{2;\eta_{q}}(x,\mu_{0})\). As a comparison, the asymptotic and CZ forms [88; 89] and the one derived using the LCSR approach [9; 10] are also presented.

Using Eqs.(2, 3) together with the LCSR (27) for the TFF \(f_{+}^{H\to\eta_{q}}(q^{2})\), we then get the results for \(f_{+}^{H\to\eta^{(\prime)}}(q^{2})\), where \(H\) represents \(B\) or \(D\), respectively. Fig. 5 shows how the total TFFs \(f_{+}^{H\to\eta^{(\prime)}}(q^{2})\) change with increasing \(q^{2}\), in which the twist-2 contribution (up to NLO QCD corrections), the twist-3 and the twist-4 contributions are presented separately. Fig. 5 shows that the twist-2 terms dominate the TFFs. We also find that the NLO QCD corrections to the twist-2 terms are sizable and should be taken into consideration for a sound prediction. For example, at the large recoil point, the twist-2 NLO terms give about 15.8% (17.6%) and 6.4% (7.2%) contributions to the total TFFs \(f_{+}^{D\to\eta^{(\prime)}}(0)\) and \(f_{+}^{B\to\eta^{(\prime)}}(0)\), respectively. Table 3 gives our present LCSR predictions for the TFFs \(f_{+}^{D\to\eta^{(\prime)}}(0)\) and \(f_{+}^{B\to\eta^{(\prime)}}(0)\).
As a comparison, we have also presented in Table 3 the results derived from various theoretical approaches and experimental data, including the LCSR approach [9; 10; 91], the pQCD approach [92], the covariant light front (CLF) approach [93], the light front quark model (LFQM) approach [94], the covariant confining quark model (CCQM) approach [95], and the BES-III Collaboration [27]. The uncertainties of the TFFs \(f_{+}^{H\to\eta^{(\prime)}}(0)\) caused by the different input parameters are listed as follows, \[f_{+}^{B\to\eta}(0) = 0.145(^{+0.004}_{-0.004})_{s_{0}}(^{+0.002}_{-0.002})_{M^{2}}(^{+0.007}_{-0.007})_{m_{b}f_{B}}(^{+0.005}_{-0.005})_{f_{\eta_{q}}}(^{+0.0001}_{-0.0001})_{\phi}=0.145^{+0.009}_{-0.010}, \tag{32}\] \[f_{+}^{B\to\eta^{\prime}}(0) = 0.128(^{+0.003}_{-0.003})_{s_{0}}(^{+0.002}_{-0.002})_{M^{2}}(^{+0.006}_{-0.006})_{m_{b}f_{B}}(^{+0.005}_{-0.005})_{f_{\eta_{q}}}(^{+0.0002}_{-0.0001})_{\phi}=0.128^{+0.008}_{-0.009}, \tag{33}\] \[f_{+}^{D\to\eta}(0) = 0.329(^{+0.003}_{-0.004})_{s_{0}}(^{+0.009}_{-0.005})_{M^{2}}(^{+0.016}_{-0.009})_{m_{c}f_{D}}(^{+0.010}_{-0.010})_{f_{\eta_{q}}}(^{+0.0002}_{-0.0003})_{\phi}=0.329^{+0.021}_{-0.015}, \tag{34}\] \[f_{+}^{D\to\eta^{\prime}}(0) = 0.294(^{+0.003}_{-0.004})_{s_{0}}(^{+0.009}_{-0.005})_{M^{2}}(^{+0.017}_{-0.011})_{m_{c}f_{D}}(^{+0.009}_{-0.009})_{f_{\eta_{q}}}(^{+0.0002}_{-0.0003})_{\phi}=0.294^{+0.021}_{-0.015}. \tag{35}\] Here the second equality in each case gives the total uncertainty obtained by adding the individual errors in quadrature. The physically allowable ranges of the above four heavy-to-light TFFs are \(m_{\ell}^{2}\leq q^{2}\leq(m_{D^{+}}-m_{\eta})^{2}\approx 1.75\) GeV\({}^{2}\), \(m_{\ell}^{2}\leq q^{2}\leq(m_{D^{+}}-m_{\eta^{\prime}})^{2}\approx 0.84\) GeV\({}^{2}\), \(m_{\ell}^{2}\leq q^{2}\leq(m_{B^{+}}-m_{\eta})^{2}\approx 22.40\) GeV\({}^{2}\) and \(m_{\ell}^{2}\leq q^{2}\leq(m_{B^{+}}-m_{\eta^{\prime}})^{2}\approx 18.67\) GeV\({}^{2}\), respectively.

Figure 5: (Color online) LCSR predictions for the TFFs \(f_{+}^{H\to\eta^{(\prime)}}(q^{2})\) with \(H=B^{+}\) or \(D^{+}\) in the allowable \(q^{2}\) range, where the contributions from the twist-2, 3, 4 LCDAs are given separately. The twist-2 terms are given up to NLO QCD corrections.

Figure 6: (Color online) The TFFs \(f_{+}^{H\to\eta^{(\prime)}}(q^{2})\) in the whole \(q^{2}\)-region, where the solid line is the central value and the shaded band shows its uncertainty. The darker part of the shaded band is the LCSR prediction, and the remaining part is the extrapolated result. As a comparison, predictions using different theoretical approaches and the experimental data, such as CCQM [95], LFQM [94], LCSR [10], pQCD [92] and the BESIII collaboration [27], are also presented.

The LCSR approach is applicable in the low and intermediate \(q^{2}\) region, which however can be extended to the whole \(q^{2}\) region via proper extrapolation approaches. In the present paper, we adopt the converging simplified series expansion (SSE) proposed in Refs. [96; 97] to do the extrapolation, which suggests a simple parameterization of the heavy-to-light TFFs, e.g. \[f_{+}^{H\to\eta^{(\prime)}}(q^{2})=\frac{1}{1-q^{2}/m_{R^{*}}^{2}}\sum_{k}b_{k}z^{k}(t,t_{0}) \tag{36}\] where \(m_{R^{*}}=m_{B^{*}}=5.325\) GeV (\(m_{D^{*}}=2.010\) GeV) [10] are the vector meson resonance masses, and \(z(t,t_{0})\) is the function \[z(t,t_{0})=\frac{\sqrt{t_{+}-t}-\sqrt{t_{+}-t_{0}}}{\sqrt{t_{+}-t}+\sqrt{t_{+}-t_{0}}}. \tag{37}\] Here \(t_{\pm}=(m_{H^{+}}\pm m_{\eta^{(\prime)}})^{2}\) and \(t_{0}=t_{+}(1-\sqrt{1-t_{-}/t_{+}})\) is the chosen auxiliary parameter.
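To see why a short \(z\)-series suffices, it is instructive to check how the physical \(q^{2}\) range maps onto \(z\). The sketch below evaluates Eq. (37) at the end points of the semileptonic region using PDG-like meson masses (the masses are our inputs, not numbers quoted above); the resulting \(|z|\) stays well below unity, so a two-term expansion such as that in Table 4 is already a good approximation.

```python
import math

# PDG-like meson masses in GeV (our inputs; used only to illustrate Eq. (37)).
M_B, M_D = 5.279, 1.870
M_ETA, M_ETAP = 0.548, 0.958

def z_of_t(t, m_H, m_P):
    """Conformal variable z(t, t0) of Eq. (37), with t0 = t_+ (1 - sqrt(1 - t_-/t_+))."""
    t_plus = (m_H + m_P) ** 2
    t_minus = (m_H - m_P) ** 2
    t0 = t_plus * (1.0 - math.sqrt(1.0 - t_minus / t_plus))
    a = math.sqrt(t_plus - t)
    b = math.sqrt(t_plus - t0)
    return (a - b) / (a + b)

for label, m_H, m_P in [("B -> eta ", M_B, M_ETA), ("B -> eta'", M_B, M_ETAP),
                        ("D -> eta ", M_D, M_ETA), ("D -> eta'", M_D, M_ETAP)]:
    t_max = (m_H - m_P) ** 2                  # upper end of the physical region
    print(f"{label}: z(q^2=0) = {z_of_t(0.0, m_H, m_P):+.3f}, "
          f"z(q^2_max) = {z_of_t(t_max, m_H, m_P):+.3f}")
```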
The free parameters \(b_{k}\) can be fixed by requiring \(\Delta<1\%\), where \(\Delta\) measures the quality of the extrapolation and is defined as \[\Delta=\frac{\sum_{t}|F_{i}(t)-F_{i}^{\rm fit}(t)|}{\sum_{t}|F_{i}(t)|}\times 100, \tag{38}\] where \(t\in[0,\frac{1}{40},\cdots,\frac{40}{40}]\times 13.0\,(1.0)\) GeV\({}^{2}\) for the \(\eta\)-meson channel of the \(B\) (\(D\)) decay and \(t\in[0,\frac{1}{40},\cdots,\frac{40}{40}]\times 11.2\,(0.5)\) GeV\({}^{2}\) for the \(\eta^{\prime}\)-meson channel. The two coefficients \(b_{1,2}\), obtained with all input parameters set to their central values, are listed in Table 4. The quality parameters \(\Delta\) of the extrapolations are all below \(\sim 0.8\%\). The extrapolated TFFs in the whole \(q^{2}\)-region are given in Fig. 6, where some typical theoretical and experimental results are presented as a comparison, such as CCQM [95], LFQM [94], LCSR 2015 [10], pQCD [92] and BESIII 2020 [27]. The solid lines in Fig. 6 denote the central values of the LCSR predictions, and the shaded areas are the theoretical uncertainties from all the mentioned error sources. The darker shaded bands represent the direct LCSR predictions, while the remaining parts show their extrapolation to the whole physically allowable \(q^{2}\)-region. Fig. 6 indicates that: 1) Our present LCSR prediction of \(f_{+}^{D\to\eta}(q^{2})\) is in good agreement with the BESIII data [27]; 2) Our present LCSR prediction of \(f_{+}^{D\to\eta^{\prime}}(q^{2})\) is consistent with the LFQM prediction [94] and the LCSR 2015 prediction [10] within errors; 3) Our present LCSR predictions of \(f_{+}^{B\to\eta^{(\prime)}}(q^{2})\) are close to the LCSR 2015 prediction [10], and their values at \(q^{2}=0\) are consistent with the pQCD prediction [92] within errors. Fig. 7 shows the differential decay widths for \(B(D)^{+}\to\eta^{(\prime)}\ell^{+}\nu_{\ell}\) with the CKM matrix elements factored out. As a comparison, the predictions using different theoretical approaches and the experimental data, such as CCQM [95], LFQM [94], LCSR [10] and the BESIII collaboration [26; 27], are also presented. The differential decay width \(d\Gamma/|V_{\rm cd}|dq^{2}(D^{+}\to\eta\ell^{+}\nu_{\ell})\) agrees with the BESIII 2018 [26] and BESIII 2020 [27] data within errors. By matching the branching fractions and the decay lifetimes given by the PDG with the decay widths predicted by Eq.(28), one may derive the CKM matrix elements \(|V_{ub}|\) and \(|V_{cd}|\). We put our results in Table 5, where the errors come from all the mentioned error sources and the PDG errors for the branching fractions and the decay lifetimes. Some typical measured values of \(|V_{ub}|\) and \(|V_{cd}|\) are also given in Table 5. The predicted \(|V_{cd}|\) is within the error range of the BESIII 2020 measurement. Using the fixed CKM matrix elements, our final predictions for the branching fractions are: \(\mathcal{B}(D\to\eta e\nu_{e})=(1.11\pm 0.07)\times 10^{-3}\), \(\mathcal{B}(D\to\eta\mu\nu_{\mu})=(1.04\pm 0.11)\times 10^{-3}\), \(\mathcal{B}(D\to\eta^{\prime}e\nu_{e})=(2.0\pm 0.4)\times 10^{-4}\), \(\mathcal{B}(B\to\eta\ell\nu_{\ell})=(3.9\pm 0.5)\times 10^{-5}\), and \(\mathcal{B}(B\to\eta^{\prime}\ell\nu_{\ell})=(2.3\pm 0.8)\times 10^{-5}\), respectively.
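The matching described in the last paragraph can be written in one line: with the reduced width \(\Gamma_{\rm red}\equiv\Gamma/|V|^{2}\) obtained by integrating Eq. (28) over \(q^{2}\), the CKM element follows from \(|V|=\sqrt{\mathcal{B}\,\hbar/(\tau\,\Gamma_{\rm red})}\). A minimal sketch is given below; the numerical inputs are placeholders standing in for the PDG branching fraction and lifetime and for a predicted reduced width, not values taken from Table 5.

```python
import math

HBAR_GEV_S = 6.582e-25  # hbar in GeV*s

def ckm_from_branching(branching, lifetime_s, gamma_reduced_gev):
    """|V| from B = |V|^2 * Gamma_red * tau / hbar (matching a PDG input to Eq. (28))."""
    gamma_exp = branching * HBAR_GEV_S / lifetime_s   # measured decay width in GeV
    return math.sqrt(gamma_exp / gamma_reduced_gev)

# Placeholder inputs (illustrative only; not the paper's Table 5 numbers):
# a PDG-like B(D+ -> eta e+ nu_e), the D+ lifetime, and a hypothetical reduced width.
V_cd = ckm_from_branching(branching=1.11e-3,
                          lifetime_s=1.04e-12,
                          gamma_reduced_gev=1.4e-14)
print(f"|V_cd| ~ {V_cd:.3f}")
```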
\begin{table} \begin{tabular}{c c c c c} \hline & \(f_{+}^{D\to\eta}(q^{2})\) & \(f_{+}^{D\to\eta^{\prime}}(q^{2})\) & \(f_{+}^{B\to\eta}(q^{2})\) & \(f_{+}^{B\to\eta^{\prime}}(q^{2})\) \\ \hline \(b_{1}\) & \(-0.033\) & \(-0.680\) & \(-0.392\) & \(-0.397\) \\ \(b_{2}\) & \(37.901\) & \(23.961\) & \(-0.108\) & \(-0.308\) \\ \(\Delta\) & \(0.761\%\) & \(0.026\%\) & \(0.341\%\) & \(0.062\%\) \\ \hline \end{tabular} \end{table} Table 4: Fitting parameters \(b_{1}\) and \(b_{2}\) for the TFFs \(f_{+}^{H\to\eta^{(\prime)}}(q^{2})\), where all input parameters are set to their central values. \(\Delta\) is the measure of the quality of the extrapolation.

Figure 7: (Color online) Differential decay widths for \(B(D)^{+}\to\eta^{(\prime)}\ell^{+}\nu_{\ell}\) in the whole \(q^{2}\)-region, where the solid line is the central value and the shaded band shows its uncertainty. As a comparison, the predictions using different theoretical approaches and the experimental data, such as CCQM [95], LFQM [94], LCSR [10] and the BESIII collaboration [26; 27], are also presented.

## IV Summary

In this paper, we have suggested an LCHO model (13) for the \(\eta_{q}\)-meson leading-twist LCDA \(\phi_{2;\eta_{q}}(x,\mu)\), whose moments have been calculated by using QCD sum rules within the QCD background field approach. Compared with the conventional Gegenbauer expansion of the LCDA, the LCHO model usually has better end-point behavior due to the BHL prescription, which is helpful for suppressing the end-point singularity in heavy-to-light meson decays. The QCD sum rule for the zeroth-order moment can be used to fix the \(\eta_{q}\) decay constant, and we obtain \(f_{\eta_{q}}=0.141\pm 0.005\) GeV. As an explicit application of \(\phi_{2;\eta_{q}}\), we then calculate the \(B(D)^{+}\to\eta^{(\prime)}\) TFFs under the QF scheme for the \(\eta-\eta^{\prime}\) mixing, using QCD light-cone sum rules up to twist-4 accuracy and including the next-to-leading-order QCD corrections to the dominant twist-2 part. Our LCSR predictions for the TFFs are consistent with most theoretical predictions and with the recent BESIII data within errors. By applying those TFFs, we obtain the decay widths of \(B(D)^{+}\to\eta^{(\prime)}\ell^{+}\nu_{\ell}\). The magnitudes of the CKM matrix elements \(|V_{\rm ub}|\) and \(|V_{\rm cd}|\) have also been extracted by inverting the PDG values for the branching fractions and the decay lifetimes. The future, more precise data from the high-luminosity Belle II experiment [99] and a super tau-charm factory [100] will be helpful for testing all these results.

## V Acknowledgments

This work was supported in part by the Chongqing Graduate Research and Innovation Foundation under Grant No. CYB23011 and No.ydstd1912, by the National Natural Science Foundation of China under Grant No.12175025, No.12265010, No.12265009 and No.12147102, the Project of Guizhou Provincial Department of Science and Technology under Grant No.ZK[2021]024 and No.ZK[2023]142, and the Project of Guizhou Provincial Department of Education under Grant No.KY[2021]030, and the Key Laboratory for Particle Physics of Guizhou Minzu University No.GZMUZK[2022]PT01.
2307.15995
Pathloss-based non-Line-of-Sight Identification in an Indoor Environment: An Experimental Study
This paper reports the findings of an experimental study on the problem of line-of-sight (LOS)/non-line-of-sight (NLOS) classification in an indoor environment. Specifically, we deploy a pair of NI 2901 USRP software-defined radios (SDR) in a large hall. The transmit SDR emits an unmodulated tone of frequency 10 KHz, on a center frequency of 2.4 GHz, using three different signal-to-noise ratios (SNR). The receive SDR constructs a dataset of pathloss measurements from the received signal as it moves across 15 equi-spaced positions on a 1D grid (for both LOS and NLOS scenarios). We utilize our custom dataset to estimate the pathloss parameters (i.e., pathloss exponent) using the least-squares method, and later, utilize the parameterized pathloss model to construct a binary hypothesis test for NLOS identification. Further, noting that the pathloss measurements slightly deviate from Gaussian distribution, we feed our custom dataset to four machine learning (ML) algorithms, i.e., linear support vector machine (SVM) and radial basis function SVM (RBF-SVM), linear discriminant analysis (LDA), quadratic discriminant analysis (QDA), and logistic regression (LR). It turns out that the performance of the ML algorithms is only slightly superior to the Neyman-Pearson-based binary hypothesis test (BHT). That is, the RBF-SVM classifier (the best performing ML classifier) and the BHT achieve a maximum accuracy of 88.24% and 87.46% for low SNR, 83.91% and 81.21% for medium SNR, and 87.38% and 86.65% for high SNR.
Muhammad Asim, Muhammad Ozair Iqbal, Waqas Aman, Muhammad Mahboob Ur Rahman, Qammer H. Abbasi
2023-07-29T14:40:27Z
http://arxiv.org/abs/2307.15995v1
# Pathloss-based non-Line-of-Sight Identification in an Indoor Environment: An Experimental Study ###### Abstract This paper reports the findings of an experimental study on the problem of line-of-sight (LOS)/non-line-of-sight (NLOS) classification in an indoor environment. Specifically, we deploy a pair of NI 2901 USRP software-defined radios (SDR) in a large hall. The transmit SDR emits an unmodulated tone of frequency 10 KHz, on a center frequency of 2.4 GHz, using three different signal-to-noise ratios (SNR). The receive SDR constructs a dataset of pathloss measurements from the received signal as it moves across 15 equi-spaced positions on a 1D grid (for both LOS and NLOS scenarios). We utilize our custom dataset to estimate the pathloss parameters (i.e., pathloss exponent) using the least-squares method, and later, utilize the parameterized pathloss model to construct a binary hypothesis test for NLOS identification. Further, noting that the pathloss measurements slightly deviate from Gaussian distribution, we feed our custom dataset to four machine learning (ML) algorithms, i.e., linear support vector machine (SVM) and radial basis function SVM (RBF-SVM), linear discriminant analysis (LDA), quadratic discriminant analysis (QDA), and logistic regression (LR). It turns out that the performance of the ML algorithms is only slightly superior to the Neyman-Pearson-based binary hypothesis test (BHT). That is, the RBF-SVM classifier (the best performing ML classifier) and the BHT achieve a maximum accuracy of 88.24% and 87.46% for low SNR, 83.91% and 81.21% for medium SNR, and 87.38% and 86.65% for high SNR. line-of-sight (LOS), non-line-of-sight (NLOS), classification, least-squares, binary hypothesis test, machine learning, support vector machine. ## I Introduction The upcoming 6G cellular standard aims to provide an immersive and personalized user experience by enabling a wide range of novel location-based applications, including augmented reality (AR), virtual reality (VR), and mixed reality (MR). To this end, precise indoor localization is the prerequisite to realize such applications, which will allow seamless integration of virtual and physical environments, enable precise positioning of virtual objects, and deliver context-aware services to the users [1]. Indoor localization is a challenging task due to the lack of global positioning system (GPS) signals indoors, due to the presence of obstacles/blockages, multipath, and random signal variations due to rich scattering in indoor environments. To date, numerous indoor propagation models and various methods for indoor localization have been reported in literature to examine and to undo the impact of non-idealities (e.g., multi-path, blockages) [2]. Some popular methods for indoor localization include the following: fingerprinting (scene analysis) based, time of arrival (ToA) based, angle of arrival (AoA) based, phase of arrival (PoA) based, time of flight (ToF) based, time difference of arrival (TDoA) based, and received signal strength (RSS) based, Ricean k-factor based [2]. This work focuses on the challenge posed by the blockages to the indoor localization systems. Specifically, blockages turn a link into a non-line-of-sight (NLOS) link, which in turn makes the distance/AoA estimates obtained by the indoor localization algorithms biased. Thus, NLOS conditions when exist, degrade the accuracy of the indoor positioning systems due to the ranging errors. Therefore, accurate NLOS prediction/classification is the need of the hour. 
NLOS prediction helps indoor positioning systems identify and mitigate the effects of NLOS conditions, and thus could lead to a boost in the accuracy of the indoor position estimates [3]. Other than indoor localization, NLOS identification could also help solve many other important problems, e.g., it could help discover blocked THz links indoors, which might prompt a THz access point to provide service to the associated users by means of a reconfigurable intelligent surface (RIS) panel, therefore, improving the coverage of the indoor THz link [4]. NLOS identification, thus, provides valuable insights for the design of blockage-aware user association algorithms and handover management algorithms. The problem of NLOS identification has recently caught attention by the research community, and a number of works have been reported in the literature, to date. Thus, the discussion of the selected related works is in order. [5] utilizes a WiFi system to collect channel frequency response (CFR) and channel impulse response (CIR) samples, extracts a number of statistical features (e.g., mean, variance, skew, kurtosis, etc.) from the fine-grained channel state information (CSI) and feeds them to a support vector machine that does the NLOS identification. Authors of [6] consider an ultra-wideband system and use a semi-supervised learning approach, i.e., they utilize the expectation maximization algorithm to learn the parameters of their Gaussian mixture model for NLOS identification. The work [7] extracts a number of features (e.g., AoA, ToA, RSS, etc.) from the incoming received signal and utilizes various methods (e.g., Neyman-Pearson method) from the statistical decision theory in order to identify the NLOS conditions. The authors in [8] collect RSS samples using an indoor WiFi system and extract multiple statistical features from the RSS time series in order to feed them to a least squares support vector machine and to a hypothesis test which do NLOS identification. They further do NLOS mitigation by designing various distance estimation algorithms under both line-of-sight (LOS) and NLOS conditions. Recently, a few researchers have proposed machine learning (ML) and deep learning methods for NLOS identification. For example, the authors in [9] implement a recurrent neural network (RNN) model that utilizes the CSI measurements collected in an indoor office environment, in order to identify the NLOS condition. [10] studies the problem of ultra-wideband based wireless ranging, and utilizes a support vector machine, a random forest classifier and a multi-layer perceptron to solve the three-class classification problem with the following classes: LOS, NLOS, and multipath. The authors in [11] propose feature-based Gaussian distribution method and generalized Gaussian distribution method for NLOS detection under the constraint of an imbalanced dataset (with very few examples from the NLOS class). The authors in [12] propose a novel algorithm for LOS/NLOS classification based on a multi-layer perceptron that utilizes both manually extracted features as well as the features obtained from a convolutional neural network (CNN) using raw CIR inputs. Last but not the least, the authors in [13] study the problem of localization in a millimeter-wave wireless communication system, and train and test a two-stage unsupervised ML model on CSI data in order to classify LOS/NLOS. On the prototyping front, there are a handful of works that report experimental results on indoor localization [14, 15, 16]. 
For example, the authors in [14] use a Bluetooth low energy (BLE) module to do indoor localization via different approaches, i.e., trilateration, dead reckoning, and the fusion method. Further, an experimental study that investigates the relation between the accuracy and energy consumption of a WiFi fingerprinting-based indoor localization system is presented in [15]. Finally, ML-assisted indoor localization is discussed in [16], where support vector regression (SVR) is performed on CFR measurements obtained via a BLE module in order to accomplish indoor localization in a multipath environment. **Contributions.** This is an experimental study where we conduct an extensive data collection campaign via a pair of NI 2901 USRP software-defined radios in order to collect pathloss measurements in the 5G FR1 band in an indoor setting. We first apply a least-squares method to the pathloss measurements in order to parameterize the pathloss model, which is later utilized to construct a Neyman-Pearson-based binary hypothesis test. Further, noting that the pathloss measurements slightly deviate from the Gaussian distribution, we apply the following four machine learning algorithms to the experimental data collected: linear support vector machine (SVM) and radial basis function SVM (RBF-SVM), linear discriminant analysis (LDA), quadratic discriminant analysis (QDA), and logistic regression (LR). It turns out that the performance of the best-performing ML algorithm (i.e., RBF-SVM) is only slightly superior to its counterpart from statistical decision theory, i.e., the binary hypothesis test. **Outline.** The rest of this paper is organized as follows. Section II describes the experimental setup and the data collection process. Section III presents the two proposed methods for NLOS identification in detail. Section IV provides some selected results. Section V concludes the paper.

## II Experimental Setup & Data Collection

We performed our data collection experiments in the 5G FR1 band by deploying a pair of NI 2901 USRP software-defined radios (SDR) in one of the research labs at the Information Technology University (ITU), Lahore, Pakistan. The detailed layout of the room where we conducted our experiments is shown in Fig. 1. As can be seen in Fig. 1, the receiver was placed at \(P=15\) different positions on a linear grid with an inter-position spacing of 60 cm. The minimum transmit-receive spacing is 125 cm, as per the (10-wavelength) requirement for the receiver to be in the far field of the transmitter, while the maximum transmit-receive spacing is 900 cm. Directional (horn) antennas with a maximum gain of 20 dB each were used at both ends (this helped reduce the impact of multipath for the LOS measurements). The center frequency \(f_{c}\) was set to 2.4 GHz (i.e., the ISM band), while the sampling rate of both the transmit and the receive SDR was set to 200K samples/s. For both LOS and NLOS scenarios, measurements were taken for three different signal-to-noise ratio (SNR) conditions by setting the normalized amplitude \(A_{t}\) of the transmitted signal to the following values: 0.4, 0.5, 0.6. The transmit node sent an unmodulated tone of frequency 10 KHz. The channel was considered to be time-slotted with a slot length of 10 ms (large enough so that all the multipath components could be lumped together in one slot). The received signal directly provided the instant RSS measurements. Subsequently, the instant RSS samples within a timeslot were averaged to get a more stable and reliable RSS estimate.
Averaging also helped us get rid of the small-scale fading occurring on a relatively fast time-scale. The averaged RSS measurements were then translated into pathloss measurements using the Friis equation, assuming that the antenna gains on both ends as well as the transmit power are known.

Fig. 1: The experimental setup (not to scale). The receiver (blue circle) is placed at 15 different positions on a 1D grid. The transmitter is either in LOS of the receiver (green triangle), or in NLOS condition (red triangle).

That is, pathloss \(=\frac{P_{t}}{P_{r}}\) where \(P_{t}\) is the known transmit signal power, while \(P_{r}=(\text{RSS})^{2}\) is the received signal power. The pathloss measurements were then used to construct a least-squares (LS) problem where the pathloss exponent \(\alpha\) was computed for both LOS and NLOS scenarios, for each of the three link conditions. A total of \(N=5000\) measurements were obtained for each of the 15 receiver positions for both LOS and NLOS scenarios (in order to construct a balanced dataset), for three different SNR values. _Feasibility of pathloss as core feature for NLOS identification._ Fig. 2 plots the pathloss measurements that we obtained by moving the SDR receiver on a 1D grid during our data collection campaign, for both LOS and NLOS scenarios. Fig. 2 attests to the fact that the pathloss (exponent \(\alpha\)) is higher for the NLOS scenario, compared to the LOS scenario (as is well-known in the literature).

## III The Proposed Methods

We first describe our binary hypothesis test for NLOS identification in detail. We then discuss the essentials of the four machine learning classifiers that we have implemented for NLOS identification.

### _NLOS Identification via Binary Hypothesis Testing_

The binary hypothesis testing method for NLOS identification requires the measurements of pathloss conditioned on the two hypotheses, i.e., LOS and NLOS. Therefore, we first present a least-squares method for the estimation of the pathloss parameters. We then design a binary hypothesis test for NLOS identification and compute the two error probabilities (i.e., false alarm rate and missed-detection rate).

#### III-A1 Least-Squares Estimation of the Pathloss Parameters

The Friis equation is: \(P_{r}=P_{t}G_{t}G_{r}\left(\frac{\lambda}{4\pi d}\right)^{\alpha}\) where \(P_{r}\) is the received power, \(P_{t}\) is the transmit power, \(G_{t}\) is the transmit antenna gain, \(G_{r}\) is the receive antenna gain, \(\lambda=\frac{c}{f_{c}}\) is the wavelength, \(c\) is the speed of light, \(f_{c}\) is the center frequency, \(d\) is the separation between the transmitting node and the receiving node, and \(\alpha\) is the pathloss exponent. Re-arranging the Friis equation, we obtain the following distance-dependent pathloss model: \[PL(d)=\frac{P_{t}}{P_{r}}=\frac{1}{G_{t}G_{r}}\left(\frac{4\pi d}{\lambda}\right)^{\alpha} \tag{1}\] Equivalently, in dB scale, we have: \[PL_{dB}(d)=\mathcal{A}+\alpha 10\log_{10}B(d) \tag{2}\] where \(\mathcal{A}=-10\log_{10}(G_{t}G_{r})\) and \(B(d)=\frac{4\pi d}{\lambda}\). As mentioned earlier, in this work, we collect noisy measurements of the instant RSS and square them to translate them into instant \(P_{r}\) measurements, which are further translated into pathloss measurements by multiplying \(1/P_{r}\) with the (known) \(P_{t}\).
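A compact illustration of this measurement pipeline is given below: one 10 ms slot of raw IQ samples (at the 200K samples/s rate of Section II) is turned into a slot-averaged RSS and then into a linear-scale pathloss value. The synthetic input and all variable names are ours; the sketch only mirrors the conventions described above.

```python
import numpy as np

FS = 200_000                          # USRP sampling rate (samples/s)
SLOT = 0.01                           # slot length (10 ms)
SAMPLES_PER_SLOT = int(FS * SLOT)     # 2000 samples per slot

def slot_pathloss(iq_samples, p_t):
    """Slot-averaged pathloss P_t/P_r from raw complex baseband samples.

    Per-sample RSS is |iq|; the RSS is averaged over the slot and the received
    power is taken as P_r = (mean RSS)^2, as described in Secs. II and III-A1.
    """
    rss_avg = np.abs(iq_samples).mean()   # slot-averaged RSS
    p_r = rss_avg ** 2                    # received power estimate
    return p_t / p_r                      # linear-scale pathloss

# Synthetic example (ours): a noisy 10 kHz tone observed over one slot.
t = np.arange(SAMPLES_PER_SLOT) / FS
iq = 0.05 * np.exp(2j * np.pi * 10_000 * t) + 0.005 * (
    np.random.randn(SAMPLES_PER_SLOT) + 1j * np.random.randn(SAMPLES_PER_SLOT))
pl = slot_pathloss(iq, p_t=1.0)
print(f"pathloss = {pl:.1f}  ({10 * np.log10(pl):.1f} dB)")
```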
Then, the least-squares (LS) estimate of \(\mathcal{A}\) and \(\alpha\) is: \(\hat{\mathbf{\Theta}}=\big{(}\mathbf{X}^{T}\mathbf{X}\big{)}^{-1}\mathbf{X}^{T}\mathbf{y}\), where \(\mathbf{y}\in\mathbb{R}_{+}^{(N\times P)\times 1}\) represents the measurement vector containing the pathloss values, \(\mathbf{\Theta}=[\mathcal{A},\alpha]^{T}\in\mathbb{R}^{2\times 1}\) is the vector of unknowns, \(\mathbf{X}=[\mathbf{1}_{(N\times P)\times 1},\mathbf{x}]\in\mathbb{R}^{(N\times P)\times 2}\) is the system matrix, \(\mathbf{x}=[\mathbf{x}_{N}^{(1)},...,\mathbf{x}_{N}^{(P)}]^{T}\) with \(\mathbf{x}_{N}^{(p)}\) denoting the value \(10\log_{10}B(d_{p})\) repeated \(N\) times, \(P\) is the number of receiver positions, and \(N\) is the number of measurements obtained at each receiver position. Table I summarizes the vector of unknowns \(\mathbf{\Theta}\) estimated via the LS method for both LOS and NLOS scenarios, for three different SNRs.

#### III-A2 Binary Hypothesis Test

With the pathloss parameters in hand, we have the following binary hypothesis test (BHT) for NLOS identification (assuming Gaussian measurement noise): \[\begin{cases}H_{0}(\text{LOS}):&z=\mathcal{A}_{los}+\alpha_{los}10\log_{10}B+n\\ H_{1}(\text{NLOS}):&z=\mathcal{A}_{nlos}+\alpha_{nlos}10\log_{10}B+n\end{cases} \tag{3}\] where \(z\) is the pathloss measurement and \(n\sim N(0,\sigma^{2})\) is the measurement error. Let \(m_{0}=\mathcal{A}_{los}+\alpha_{los}10\log_{10}B\) and \(m_{1}=\mathcal{A}_{nlos}+\alpha_{nlos}10\log_{10}B\). Then, \(z|H_{0}\sim N(m_{0},\sigma^{2})\) and \(z|H_{1}\sim N(m_{1},\sigma^{2})\). This translates to the following log-likelihood ratio test (LLRT): \[z\overset{H_{1}}{\underset{H_{0}}{\gtrless}}\delta=\left(\frac{\sigma^{2}\ln\eta}{m_{1}-m_{0}}+\frac{m_{0}+m_{1}}{2}\right) \tag{4}\] where \(\eta=\pi(0)/\pi(1)\) is the ratio of the prior probabilities of the two hypotheses. Then, the probability of false alarm is given as: PFA \(=Pr(z>\delta|H_{0})=Q(\frac{\delta-m_{0}}{\sigma})\), where \(Q(x)=\frac{1}{\sqrt{2\pi}}\int_{x}^{\infty}e^{-t^{2}/2}dt\) is the standard \(Q\)-function. Next, the probability of detection is given as: PD \(=1-\) PMD \(=1-Pr(z<\delta|H_{1})=Q(\frac{\delta-m_{1}}{\sigma})\).

Fig. 2: Pathloss measurements obtained via our experimental setup consisting of two NI 2901 USRP SDRs when the receive SDR moves across 15 equispaced positions on a 1D grid.

### _NLOS Identification via Machine Learning Classifiers_

We note that the pathloss measurements collected in a real-time setup via an SDR pair slightly deviate from the Gaussian distribution (see Fig. 3). Therefore, the binary hypothesis test defined above, which assumes a Gaussian distribution for the measurement error, may not work very well. However, it is well-known that machine learning algorithms can cope with this situation (model mismatch) by learning the distribution from the training data. Therefore, we implement the following machine learning algorithms in Python: linear support vector machine (SVM) and radial basis function SVM (RBF-SVM), linear discriminant analysis (LDA), quadratic discriminant analysis (QDA), and logistic regression (LR). We train and test the four ML classifiers on our custom dataset with a train-validation-test split of 70-15-15 (%).
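Before turning to the results, the two statistical ingredients above, namely the LS fit of \((\mathcal{A},\alpha)\) in Eq. (2) and the LLRT of Eq. (4) with its error probabilities, can be condensed into a short sketch. The numbers below (distances, pathloss exponent, noise level, priors) are synthetic stand-ins of ours, not values from Table I or from the measured dataset.

```python
import numpy as np
from scipy.stats import norm

WAVELENGTH = 3e8 / 2.4e9              # ~0.125 m at f_c = 2.4 GHz

def fit_pathloss(pl_db, d):
    """LS fit of PL_dB = A + alpha * 10*log10(4*pi*d/lambda), i.e. Eq. (2)."""
    x = 10 * np.log10(4 * np.pi * d / WAVELENGTH)
    X = np.column_stack([np.ones_like(x), x])          # design matrix [1, x]
    theta, *_ = np.linalg.lstsq(X, pl_db, rcond=None)  # theta = [A, alpha]
    return theta

def bht(z, m0, m1, sigma, prior_ratio=1.0):
    """LLRT of Eq. (4): decide NLOS (H1) if z exceeds delta (assumes m1 > m0)."""
    delta = sigma**2 * np.log(prior_ratio) / (m1 - m0) + 0.5 * (m0 + m1)
    pfa = norm.sf((delta - m0) / sigma)   # Q((delta - m0)/sigma)
    pd = norm.sf((delta - m1) / sigma)    # Q((delta - m1)/sigma) = 1 - PMD
    return z > delta, delta, pfa, pd

# Synthetic demo with assumed numbers: LOS-like pathloss (dB) at a few distances.
d = np.array([1.25, 3.05, 4.85, 6.65, 9.0])            # metres
pl_los = -40 + 2.0 * 10 * np.log10(4 * np.pi * d / WAVELENGTH) + np.random.randn(5)
A_hat, alpha_hat = fit_pathloss(pl_los, d)
print(f"A = {A_hat:.1f} dB, alpha = {alpha_hat:.2f}")

decision, delta, pfa, pd = bht(z=97.0, m0=90.0, m1=100.0, sigma=3.0)
print(f"decide NLOS: {decision}, delta = {delta:.1f} dB, PFA = {pfa:.3f}, PD = {pd:.3f}")
```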
## IV Results

Receiver operating characteristic (ROC) curves are one popular metric to evaluate the performance of ML and statistical classifiers. An ROC curve plots the correct decision rate against the error rate, i.e., the true positive rate (deciding NLOS correctly) vs. the false positive rate (deciding NLOS while it was LOS). Fig. 4 shows the ROC curves for all four ML classifiers as well as the BHT (a statistical classifier), for three different link conditions. We make the following observations. 1) At low false alarm rates, the BHT performs the best among all the classifiers; however, beyond a certain false positive rate, the ML classifiers outperform the BHT. 2) To our surprise, an increase in SNR does not lead to a monotonic increase in the accuracy of all the proposed NLOS identification methods. This is probably due to residual effects of multipath, small-scale fading and additive noise, and it calls for more measurements at each receiver position and longer time-slot intervals, so that we get more stable pathloss measurements due to increased averaging. Table II evaluates the NLOS identification performance of the BHT and the four ML classifiers based on the following three performance metrics: (a) probability of false alarm (PFA), (b) probability of missed detection (PMD), and (c) accuracy, where accuracy \(=(1-(\text{PMD}+\text{PFA}))\times 100\).

Fig. 4: Receiver operating characteristic (ROC) curves. AUC stands for area under the curve.

Fig. 3: Pathloss histogram does not fit well to a normal distribution (for both LOS and NLOS scenarios).

We make the following observations. 1) The performance of the best-performing ML algorithm (i.e., the RBF-SVM classifier) is only slightly superior to that of the Neyman-Pearson-based BHT. That is, the RBF-SVM classifier and the BHT achieve a maximum accuracy of 88.24% and 87.46% for low SNR, 83.91% and 81.21% for medium SNR, and 87.38% and 86.65% for high SNR, respectively. 2) Some ML classifiers (e.g., QDA) perform worst for one error type (i.e., false alarm rate) but perform best for the other error type (i.e., missed detection rate), and vice versa (e.g., LR). However, the BHT and the RBF-SVM are efficient in the sense that they minimize both error types simultaneously. 3) Again, to our surprise, an increase in SNR does not lead to a monotonic increase in the accuracy of all the proposed NLOS identification methods (due to insufficient averaging while obtaining the pathloss measurements).

## V Conclusion

This paper conducted an experimental study on the problem of LOS/NLOS classification in an indoor environment. We used a pair of NI 2901 USRP SDRs in a large hall (with the receive SDR moving on a 1D grid) in order to construct a dataset of pathloss measurements (for both LOS and NLOS scenarios). We utilized our custom dataset to estimate the pathloss parameters (i.e., the pathloss exponent) using the least-squares method, and later utilized the parameterized pathloss model to construct a binary hypothesis test for NLOS identification. Further, noting that the pathloss measurements slightly deviate from the Gaussian distribution, we passed our custom dataset to four ML algorithms, i.e., linear and radial basis function SVM, LDA, QDA, and LR. We observed that the best-performing ML algorithm (i.e., RBF-SVM) marginally outperformed the Neyman-Pearson-based binary hypothesis test. As for future work, we note that ML-based techniques are environment-specific, i.e., if the environment changes, the ML algorithms need to be retrained. Hence, one promising future direction is to design reinforcement/online learning methods for NLOS identification.